id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
889,722 | https://en.wikipedia.org/wiki/Coronagraph | A coronagraph is a telescopic attachment designed to block out the direct light from a star or other bright object so that nearby objects – which otherwise would be hidden in the object's bright glare – can be resolved. Most coronagraphs are intended to view the corona of the Sun, but a new class of conceptually similar instruments (called stellar coronagraphs to distinguish them from solar coronagraphs) is being used to find extrasolar planets and circumstellar disks around nearby stars, as well as the host galaxies of quasars and other similar objects with active galactic nuclei (AGN).
Invention
The coronagraph was introduced in 1931 by the French astronomer Bernard Lyot; since then, coronagraphs have been used at many solar observatories. Coronagraphs operating within Earth's atmosphere suffer from scattered light in the sky itself, due primarily to Rayleigh scattering of sunlight in the upper atmosphere. At view angles close to the Sun, the sky is much brighter than the background corona even at high altitude sites on clear, dry days. Ground-based coronagraphs, such as the High Altitude Observatory's Mark IV Coronagraph on top of Mauna Loa, use polarization to distinguish sky brightness from the image of the corona: both coronal light and sky brightness are scattered sunlight and have similar spectral properties, but the coronal light is Thomson-scattered at nearly a right angle and therefore undergoes scattering polarization, while the superimposed light from the sky near the Sun is scattered at only a glancing angle and hence remains nearly unpolarized.
Design
Coronagraph instruments are extreme examples of stray light rejection and precise photometry because the total brightness from the solar corona is less than one-millionth the brightness of the Sun. The apparent surface brightness is even fainter because, in addition to delivering less total light, the corona has a much greater apparent size than the Sun itself.
During a total solar eclipse, the Moon acts as an occluding disk and any camera in the eclipse path may be operated as a coronagraph until the eclipse is over. More common is an arrangement where the sky is imaged onto an intermediate focal plane containing an opaque spot; this focal plane is reimaged onto a detector. Another arrangement is to image the sky onto a mirror with a small hole: the desired light is reflected and eventually reimaged, but the unwanted light from the star goes through the hole and does not reach the detector. Either way, the instrument design must take into account scattering and diffraction to make sure that as little unwanted light as possible reaches the final detector. Lyot's key invention was an arrangement of lenses with stops, known as Lyot stops, and baffles such that light scattered by diffraction was focused on the stops and baffles, where it could be absorbed, while light needed for a useful image missed them.
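The following is a minimal numerical sketch of this Lyot scheme (occult the stellar image in an intermediate focal plane, then clip the edge-diffracted starlight with an undersized stop in a relayed pupil plane), using idealized Fraunhofer (FFT) propagation. The grid size, occulting-spot radius, and stop diameter are arbitrary illustrative assumptions, not the parameters of any real instrument.

```python
import numpy as np

n = 512                                           # grid size (assumed)
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)

def ft(a):   # centred forward propagation (pupil plane -> focal plane)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

def ift(a):  # centred inverse propagation (focal plane -> pupil plane)
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

pupil = (np.hypot(xx, yy) <= 0.5).astype(float)   # circular telescope aperture
star_no_corona = np.sum(np.abs(ft(pupil))**2)     # reference: no coronagraph

# 1) Intermediate focal plane: opaque spot over the stellar image core.
focal = ft(pupil)
u = np.arange(n) - n // 2
uu, vv = np.meshgrid(u, u)
occulter = (np.hypot(uu, vv) > 8).astype(float)   # spot radius in pixels (assumed)

# 2) Relayed pupil plane: the diffracted starlight now piles up near the
#    aperture edge, where an undersized Lyot stop absorbs it.
relayed = ift(focal * occulter)
lyot_stop = (np.hypot(xx, yy) <= 0.4).astype(float)

# 3) Final focal plane: what is left of the on-axis star. An off-axis planet
#    image would largely miss the occulting spot and survive the stops.
final = ft(relayed * lyot_stop)
print(f"on-axis starlight reaching the detector: "
      f"{np.sum(np.abs(final)**2) / star_no_corona:.2e}")
```

The printed number is only illustrative; real designs trade the occulter size and Lyot-stop diameter against how close to the star a companion can still be detected.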
As examples, imaging instruments on the Hubble Space Telescope and James Webb Space Telescope offer coronagraphic capability.
Band-limited coronagraph
A band-limited coronagraph uses a special kind of mask called a band-limited mask. This mask is designed to block light and also manage diffraction effects caused by removal of the light. The band-limited coronagraph has served as the baseline design for the canceled Terrestrial Planet Finder coronagraph. Band-limited masks will also be available on the James Webb Space Telescope.
Phase-mask coronagraph
A phase-mask coronagraph (such as the so-called four-quadrant phase-mask coronagraph) uses a transparent mask to shift the phase of the stellar light in order to create a self-destructive interference, rather than a simple opaque disc to block it.
Optical vortex coronagraph
An optical vortex coronagraph uses a phase-mask in which the phase shift varies azimuthally around the center. Several varieties of optical vortex coronagraphs exist:
the scalar optical vortex coronagraph based on a phase ramp directly etched in a dielectric material, like fused silica.
the vector(ial) vortex coronagraph employs a mask that rotates the angle of polarization of photons, and ramping this angle of rotation has the same effect as ramping a phase-shift. A mask of this kind can be synthesized by various technologies, ranging from liquid crystal polymers (the same technology as in 3D television) to micro-structured surfaces (using microfabrication technologies from the microelectronics industry). Such a vector vortex coronagraph made out of liquid crystal polymers is currently in use at the 200-inch Hale Telescope at the Palomar Observatory. It has recently been operated with adaptive optics to image extrasolar planets.
This works with stars other than the Sun because they are so far away that their light is, for this purpose, a spatially coherent plane wave. Using interference, the coronagraph masks out the light along the center axis of the telescope but lets the light from off-axis objects through.
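A minimal numerical sketch of the idea is given below: a transparent focal-plane mask whose phase ramps azimuthally (here with charge 2) relocates an on-axis plane wave outside the geometric pupil in the next pupil plane, where a simple stop can then remove it. The grid size, aperture radius, and charge are illustrative assumptions; the small non-zero residual printed by the script reflects grid resolution, not the ideal result.

```python
import numpy as np

n, charge = 1024, 2                                # grid size and vortex charge (assumed)
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)

def ft(a):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

def ift(a):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

pupil = (np.hypot(xx, yy) <= 0.3).astype(float)    # on-axis star fills the aperture

theta = np.arctan2(yy, xx)                         # azimuth of the centred grid
vortex = np.exp(1j * charge * theta)               # transparent mask, azimuthal phase ramp

relayed = ift(ft(pupil) * vortex)                  # field in the next pupil plane
inside = np.hypot(xx, yy) <= 0.3
fraction_inside = np.sum(np.abs(relayed[inside])**2) / np.sum(np.abs(relayed)**2)
print(f"on-axis starlight remaining inside the geometric pupil: {fraction_inside:.1%}")
```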
Satellite-based coronagraphs
Coronagraphs in outer space are much more effective than the same instruments would be if located on the ground. This is because the complete absence of atmospheric scattering eliminates the largest source of glare present in a terrestrial coronagraph. Several space missions such as NASA-ESA's SOHO, and NASA's SPARTAN, Solar Maximum Mission, and Skylab have used coronagraphs to study the outer reaches of the solar corona. The Hubble Space Telescope (HST) is able to perform coronagraphy using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), and the James Webb Space Telescope (JWST) is able to perform coronagraphy using the Near Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI).
While space-based coronagraphs such as LASCO avoid the sky brightness problem, they face design challenges in stray light management under the stringent size and weight requirements of space flight. Any sharp edge (such as the edge of an occulting disk or optical aperture) causes Fresnel diffraction of incoming light around the edge, which means that the smaller instruments that one would want on a satellite unavoidably leak more light than larger ones would. The LASCO C-3 coronagraph uses both an external occulter (which casts shadow on the instrument) and an internal occulter (which blocks stray light that is Fresnel-diffracted around the external occulter) to reduce this leakage, and a complicated system of baffles to eliminate stray light scattering off the internal surfaces of the instrument itself.
Aditya-L1
Aditya-L1 is a coronagraphy spacecraft developed by the Indian Space Research Organisation (ISRO) and various Indian research institutes. The spacecraft aims to study the solar atmosphere and its impact on the Earth's environment. It will be positioned approximately 1.5 million km from Earth in a halo orbit around the L1 Lagrangian point between Earth and the Sun.
The primary payload, Visible Emission Line Coronagraph (VELC), will send 1,440 images of the sun daily to ground stations. The VELC payload has been developed by the Indian Institute of Astrophysics (IIA) and will continuously observe the Sun's corona from the L1 point.
Extrasolar planets
The coronagraph has recently been adapted to the challenging task of finding planets around nearby stars. While stellar and solar coronagraphs are similar in concept, they are quite different in practice because the object to be occulted differs by a factor of a million in linear apparent size. (The Sun has an apparent size of about 1900 arcseconds, while a typical nearby star might have an apparent size of between 0.0005 and 0.002 arcseconds.) Earth-like exoplanet detection requires extremely high contrast, since an Earth-like planet is on the order of ten billion times fainter than its host star in visible light. Achieving such contrast requires extreme optothermal stability.
A stellar coronagraph concept was studied for flight on the canceled Terrestrial Planet Finder mission. On ground-based telescopes, a stellar coronagraph can be combined with adaptive optics to search for planets around nearby stars.
In November 2008, NASA announced that a planet was directly observed orbiting the nearby star Fomalhaut. The planet could be seen clearly on images taken by Hubble's Advanced Camera for Surveys' coronagraph in 2004 and 2006. The dark area hidden by the coronagraph mask can be seen on the images, though a bright dot has been added to show where the star would have been.
Until 2010, telescopes could only directly image exoplanets under exceptional circumstances. Specifically, it is easier to obtain images when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation. However, in 2010 a team from NASA's Jet Propulsion Laboratory demonstrated that a vector vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a portion of the Hale Telescope.
See also
List of solar telescopes
New Worlds Mission – A proposed external coronagraph
PROBA-3 – a coronagraphy demonstration mission using high-precision formation flying
References
External links
Overview of Technologies for Direct Optical Imaging of Exoplanets, Marie Levine, Rémi Soummer, 2009
"Sun Gazer's Telescope." Popular Mechanics, February 1952, pp. 140–141. Cut-away drawing of first Coronagraph type used in 1952.
Optical Vectorial Vortex Coronagraphs using Liquid Crystal Polymers: theory, manufacturing and laboratory demonstration Optics Infobase
The Vector Vortex Coronagraph: Laboratory Results and First Light at Palomar Observatory IopScience
Annular Groove Phase Mask Coronagraph IopScience
This link shows an HST image of a dust disk surrounding a bright star with the star hidden by the coronagraph.
Optical telescope components
Optical devices
French inventions | Coronagraph | Materials_science,Technology,Engineering | 1,993 |
29,298,900 | https://en.wikipedia.org/wiki/Ecadotril | Ecadotril is an inhibitor of neutral endopeptidase (NEP, EC 3.4.24.11), an enzyme that belongs to peptidase family M13 and is inhibited by phosphoramidon. Ecadotril is the (S)-enantiomer of racecadotril. NEP-like enzymes include the endothelin-converting enzymes. The peptidase M13 family is believed to activate or inactivate oligopeptide (pro)-hormones such as opioid peptides; neprilysin is another member of this group. As in other metallopeptidases (for example clan MA) and in the aspartic peptidases, the nucleophile is an activated water molecule. The peptidase domain of members of this family, which also includes a bacterial member, resembles that of thermolysin, and the predicted active-site residues for this family, as in thermolysin, occur in the motif HEXXH. Thermolysin has been studied in complex with the inhibitor (S)-thiorphan; the thiorphan isomers are thiol-containing inhibitors of endopeptidase EC 3.4.24.11 (also called "enkephalinase").
See also
RB-101
Candoxatril
References
Thioesters
Carboxylate esters
Drugs acting on the gastrointestinal system and metabolism | Ecadotril | Chemistry | 302 |
37,538,706 | https://en.wikipedia.org/wiki/Alpha%20Pi%20Mu | Alpha Pi Mu (ΑΠΜ) is an American honor society for Industrial and Systems Engineering students. All of its chapters are based in the United States, apart from one at a university in Puerto Rico, an unincorporated territory of the United States.
History
The founder of Alpha Pi Mu was James T. French, who in 1949 was a senior industrial engineering student at Georgia Tech. He recruited nine men that were members of the Georgia Tech chapter of Tau Beta Pi as the first members of Alpha Pi Mu. Alpha Pi Mu is the only nationally accepted industrial engineering honor society. The Georgia Tech engineers who led the initial developmental work wanted an organization to provide a common ground "on which their outstanding young engineers could exchange ideas," and to provide experiences which could help their future professional development.
The Alpha Pi Mu Honor Society aims to: confer recognition upon students of Industrial and Systems Engineering who have shown exceptional academic interest and abilities in their field; encourage the advancement and quality of Industrial and Systems Engineering education; and unify the student body of the Industrial Engineering department in presenting its needs and ideals to the faculty.
Alpha Pi Mu became a member of the Association of College Honor Societies in 1952. The society has 68 chapters that have initiated more than 47,095 members as of 2014.
Symbols
French chose the Greek letter name Alpha Pi Mu to represent the three major areas of Industrial Engineering at the time: Administration, Production, and Methods. The society's motto is "Humble service to humanity is the goal of the true engineer".
The society's colors are purple and light yellow. Its stole is white with an embroidered logo.
Activities
The society provides $1,000 scholarships to outstanding members. Its members also participate in activities that improve their campus and the profession.
Membership
Students of Industrial and Systems Engineering programs who rank in the upper one-third of the senior Industrial and Systems Engineering class and the upper one-fifth of the junior Industrial and Systems Engineering class are considered for membership on the basis of leadership, ethics, sociability, character, and breadth of interest. Graduate students and alumni may be elected to membership if they meet the requirements. Faculty members and professional industrial and systems engineers may be elected to faculty and honorary membership, respectively, if they have proven themselves outstanding professionals in the field. Around 940 members are initiated each year.
Chapters
The Society has established 68 chapters since 1949.
References
External links
Association of College Honor Societies
Engineering honor societies
Student organizations established in 1949
1949 establishments in Georgia (U.S. state) | Alpha Pi Mu | Engineering | 496 |
159,441 | https://en.wikipedia.org/wiki/Type%20metal | In printing, type metal refers to the metal alloys used in traditional typefounding and hot metal typesetting. Historically, type metal was an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting. The proportions used are in the range: lead 50‒86%, antimony 11‒30% and tin 3‒20%. Antimony and tin are added to lead for durability while reducing the difference between the coefficients of expansion of the matrix and the alloy. Apart from durability, the general requirements for type metal are that it should produce a true and sharp cast, and retain correct dimensions and form after cooling down. It should also be easy to cast at a reasonably low melting temperature, iron should not dissolve in the molten metal, and the mould and nozzles should stay clean and easy to maintain. Today, Monotype machines can utilize a wide range of different alloys. Mechanical linecasting equipment uses alloys that are close to eutectic.
History
Although the knowledge of casting soft metals in moulds was well established before Johannes Gutenberg's time, his discovery of an alloy that was hard, durable, and would take a clear impression from the mould represents a fundamental aspect of his solution to the problem of printing with movable type. This alloy did not shrink as much as lead alone when cooled. Gutenberg's other contributions were the creation of inks that would adhere to metal type and a method of softening handmade printing paper so that it would take the impression well.
Required characteristics
Cheap, plentifully available as galena and easily workable, lead has many of the ideal characteristics, but on its own it lacks the necessary hardness and does not make castings with sharp details because molten lead shrinks and sags when it cools to a solid.
After much experimentation it was found that adding pewterer's tin, obtained from cassiterite, improved the ability of the cast type to withstand the wear and tear of the printing process, making it tougher but not more brittle.
Despite patiently trying different proportions of both metals, solving the second part of the type metal problem proved very difficult without the addition of yet a third metal, antimony.
Alchemists had shown that when stibnite, an antimony sulfide ore, was heated with scrap iron, metallic antimony was produced. The typefounder would typically introduce powdered stibnite and horseshoe nails into his crucible to melt lead, tin and antimony into type metal. Both the iron and the sulfides would be rejected in the process.
The addition of antimony conferred the much needed improvements in the properties of hardness, wear resistance and especially, the sharpness of reproduction of the type design, given that it has the curious property of diminishing the shrinkage of the alloy upon solidification.
Composition of type metal
Type metal is an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting.
The proportions used are in the range: lead 50‒86%, antimony 11‒30% and tin 3‒20%. The basic characteristics of these metals are as follows:
Lead
Type metal is an alloy of lead (Pb). Pure lead is a relatively cheap metal, is soft and thus easy to work, and is easy to cast since it melts at about 327 °C. However, it shrinks when it solidifies, making letters that are not sharp enough for printing. In addition, pure lead letters will quickly deform during use, a direct result of the easy workability of lead.
Lead is exceptionally soft, malleable, and ductile but with little tensile strength.
Lead oxide is a poison that primarily damages brain function. Metallic lead is more stable and less toxic than its oxidized form. Metallic lead cannot be absorbed through contact with the skin, so it may be handled, carefully, with far less risk than lead oxide.
Tin
Tin (Sn) promotes the fluidity of the molten alloy and makes the type tough, giving the alloy resistance to wear. It is harder, stiffer and tougher than lead.
Antimony
Antimony (Sb) is a metalloid element which melts at about 630 °C. Antimony has a crystalline appearance while being both brittle and fusible.
When alloyed with lead to produce type metal, antimony gives it the hardness it needs to resist deformation during printing, and gives it sharper castings from the mould to produce clear, easily read printed text on the page.
Typical type metal proportions
The actual compositions differed over time, and different machines were adjusted to different alloys depending on the intended use of the type. Printers sometimes had their own preferences about the quality of particular alloys. The Lanston Monotype Corporation in the United Kingdom listed a whole range of alloys in its manuals.
Alloys for mechanical composition
Most mechanical typesetting is divided basically into two different competing technologies: line casting (Linotype and Intertype) and single character casting (Monotype).
The manuals for the Monotype composition caster (1952 and later editions) mention at least five different alloys to be used for casting, depending on the purpose of the type and the work to be done with it.
Although in general Monotype cast type characters can be visually identified as having a square nick (as opposed to the round nicks used on foundry type), there is no easy way to identify the alloy aside from an expensive chemical assay in a laboratory.
Apart from this, the two Monotype companies in the United States and the UK also made moulds with 'round' nicks. Typefounders and printers could and did order specially designed moulds to their own specifications: height, size, kind of nick, and even the number of nicks could be changed.
Type produced with these special moulds can only be identified if the foundry or printer is known.
In Switzerland, the company "Metallum Pratteln AG" in Basel had yet another list of type-metal alloys. If needed, any alloy could be produced to customer specifications.
Dross
Every time type metal is remelted, some tin and antimony oxidise. These oxides collect on the surface of the melt and must be removed: after stirring the molten metal, a grey powder, the dross, forms on the surface and needs to be skimmed off. Dross contains recoverable amounts of tin and antimony, and regeneration metal was melted into the crucible to replace the tin and antimony lost in this way.
Dross must be processed at specialized companies, in order to extract the pure metals in conditions that would prevent environmental pollution and remain economically feasible.
Behaviour of binary alloys
A pure metal melts and solidifies at a single, specific temperature. This is not the case with alloys, where melting and solidification take place over a range of temperatures. The melting temperature of these mixtures is considerably lower than that of the pure components.
Antimony/Lead mixture examples
The addition of a small amount of antimony (5% to 6%) to lead significantly alters the alloy's behavior compared to pure lead: although the melting point of pure antimony is 630 °C, this mixture is completely molten and a homogeneous fluid even at temperatures as low as 371 °C. As the mixture cools, the alloy remains liquid well below 327 °C, the melting point of pure lead. Once the temperature reaches 291 °C, lead crystals start to form, increasing the cohesion of the liquid alloy. At 252 °C, the mixture starts to solidify fully, during which the temperature remains constant. Only when the mixture has fully solidified does the temperature start to decrease again.
Using a 10% antimony, 90% lead mixture delays lead crystal formation until approximately 260 °C.
Using a 12% antimony, 88% lead mixture prevents primary crystal formation entirely: this is the eutectic composition. The alloy has a single, sharp melting point at 252 °C.
Increasing the antimony content beyond 12% will lead to predominantly antimony crystallization.
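As a toy illustration of the figures quoted above, the sketch below interpolates the temperature at which lead crystals first appear for antimony contents between 0% and the 12% eutectic. The interpolation is piecewise linear through the quoted points only; a real lead-antimony liquidus is curved and should be read from a measured phase diagram.

```python
import numpy as np

# (wt% antimony, approx. temperature in degC at which lead crystals appear),
# taken from the figures above; 0% Sb is pure lead (m.p. about 327 degC).
sb_pct   = [0.0, 5.5, 10.0, 12.0]
liquidus = [327.0, 291.0, 260.0, 252.0]

def crystals_start(antimony_pct: float) -> float:
    """Estimate the onset of lead crystallisation by linear interpolation."""
    return float(np.interp(antimony_pct, sb_pct, liquidus))

for pct in (3, 6, 9, 11):
    print(f"{pct:>2}% Sb: lead crystals from roughly {crystals_start(pct):.0f} degC")
```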
Ternary mixtures
Adding tin to this binary system complicates the behaviour even further. Some tin enters the eutectic: a mixture of 4% tin, 12% antimony, and 84% lead solidifies at 240 °C.
Depending on which metals are present in excess of the eutectic composition, crystals of those metals form first, depleting the liquid until the eutectic 4/12 (tin/antimony) mixture remains.
The 12/20 alloy contains many mixed crystals of tin and antimony; these crystals give the alloy its hardness and its resistance to wear.
The antimony content cannot be raised without also adding some tin, because otherwise the fluidity of the mixture diminishes dramatically wherever the temperature drops in the channels of the machine, and nozzles can be blocked by antimony crystals.
Metals used on typecasting machines
Eutectic alloys are used on Linotype machines and Ludlow casters to prevent blockage of the mould and to ensure continuous, trouble-free casting.
Alloys used on Monotype machines tend to contain a higher content of tin, to obtain tougher characters able to resist the pressure of printing. This meant an extra investment, but Monotype was an expensive system throughout.
Present usage of type metal
The fierce competition between the different mechanical typecasting systems, such as Linotype and Monotype, gave rise to some lasting myths about type metal. Linotype users looked down on Monotype and vice versa.
Monotype machines, however, can utilize a wide range of different alloys; maintaining constant, high production meant strict standardization of the type metal within a company, so as to avoid any interruption of production. Assays were done at regular intervals to monitor the alloy used, since every time the metal is recycled roughly half a per cent of the tin content is lost through oxidation. These oxides are removed with the dross when the surface of the molten metal is cleaned.
Nowadays this "battle" has lost its importance, at least for Monotype. The quality of the produced type is far more important. Alloys with a high-content of antimony, and subsequently a high content of tin, can be cast at a higher temperature, and at a lower speed and with more cooling at a Monotype composition or supercaster.
Although care was taken to avoid mixing different types of type metal in shops with different type casting systems, in actual practice this often occurred. Since a Monotype composition caster can cope with a variety of different metal alloys, occasional mixing of Linotype alloy with discarded typefounders alloy has proven its usefulness.
Mechanical linecasting equipment uses alloys that are close to eutectic.
Contamination of type metals
Copper
Copper has been used for hardening type metal; this metal easily forms mixed crystals with tin when the alloy cools down. These crystals will grow just below the exit opening of the nozzle in Monotype machines, resulting in a total blockage after some time. These nozzles are very difficult to clean, because the hard crystals will resist drilling.
Zinc
Brass spaces contain zinc, which is extremely counterproductive in type metal. Even a tiny amount (less than 1%) will form a dusty film on the surface of the molten metal that is difficult to remove. Characters cast from type metal contaminated in this way are of inferior quality; the only solution is to discard the metal and replace it with fresh alloy.
Brass and zinc should therefore be removed before remelting. The same applies to aluminium, although this metal will float on top of the melt, and will be easily discovered and removed, before it is dissolved into the lead.
Magnesium
Magnesium plates are very dangerous in molten lead, because magnesium ignites easily and will burn in the molten metal.
Iron
Iron hardly dissolves into type metal, even though the molten metal is always in contact with the cast-iron surface of the melting pot.
Historic references to type metals
Joseph Moxon, in his Mechanick Exercises, mentions a mix of equal amounts of "antimony" and iron nails.
The "antimony" here was in fact stibnite, antimony-sulfide (Sb2S3). The iron was burned away in this process, reducing the antimony and at the same time removing the unwanted sulfur. In this way ferro-sulfide was formed, that would evaporate with all the fumes.
The mixture of stibnite and nails was heated red hot in an open-air furnace until everything was molten and the reaction complete. The resulting metal can contain up to 9% iron. Further purification can be done by mixing the hot melt with kitchen salt (NaCl). After this, red-hot lead from another melting pot is added and stirred in thoroughly.
Some tin was added to the alloy for casting small characters and narrow spaces, to better fill the narrow areas of the mould. The good properties of tin were well known, but its use was sometimes minimized to save expense.
Much of this toxic work was done by children.
As a supposed antidote to the inhaled toxic metal fumes, the workers were given a mixture of red wine and salad oil.
References
Alloys
Printing | Type metal | Chemistry | 2,741 |
11,958,485 | https://en.wikipedia.org/wiki/Britton%E2%80%93Robinson%20buffer | The Britton–Robinson buffer (BRB or PEM) is a "universal" pH buffer used for the pH range from 2 to 12. It has been used historically as an alternative to the McIlvaine buffer, which has a smaller pH range of effectiveness (from 2 to 8).
Universal buffers consist of mixtures of acids of diminishing strength (increasing pKa), so that the change in pH is approximately proportional to the amount of alkali added. It consists of a mixture of 0.04 M boric acid, 0.04 M phosphoric acid and 0.04 M acetic acid that has been titrated to the desired pH with 0.2 M sodium hydroxide. Britton and Robinson also proposed a second formulation that gave an essentially linear pH response to added alkali from pH 2.5 to pH 9.2 (and buffers to pH 12). This mixture consists of 0.0286 M citric acid, 0.0286 M monopotassium phosphate, 0.0286 M boric acid, 0.0286 M veronal and 0.0286 M hydrochloric acid titrated with 0.2 M sodium hydroxide.
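As a small worked example of the first formulation, the sketch below computes the mass of each acid needed for a litre of the 0.04 M acid-mixture stock. The molar masses are standard values; in practice phosphoric and acetic acid are usually dispensed as concentrated liquids, so the masses would be converted to volumes using the concentration and density of the stock on hand.

```python
# Grams of each acid for a Britton-Robinson acid-mixture stock (0.04 M each).
MOLAR_MASS = {               # g/mol, standard values
    "boric acid (H3BO3)":      61.83,
    "phosphoric acid (H3PO4)": 98.00,
    "acetic acid (CH3COOH)":   60.05,
}

def stock_recipe(volume_l: float = 1.0, conc_molar: float = 0.04) -> dict:
    """Mass of each acid (g) for the requested volume of stock solution."""
    return {acid: conc_molar * volume_l * mm for acid, mm in MOLAR_MASS.items()}

for acid, grams in stock_recipe(1.0).items():
    print(f"{acid}: {grams:.2f} g per litre")
# The stock is then titrated to the desired pH with 0.2 M NaOH, as described above.
```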
The buffer was invented in 1931 by the English chemist Hubert Thomas Stanley "Kevin" Britton (1892–1960) and the New Zealand chemist Robert Anthony Robinson (1904–1979).
See also
Buffer solution
Good's buffers
References
Acid–base chemistry
Buffer solutions
Chemical tests | Britton–Robinson buffer | Chemistry | 307 |
14,236,540 | https://en.wikipedia.org/wiki/Solid%20sweep | The sweep Sw of a solid S is defined as the solid created when a motion M is applied to S. The solid S should be considered a set of points in the Euclidean space R3. The solid Sw generated by sweeping S over M then contains all the points over which the points of S have moved during the motion M. Solid sweeping is employed in different fields, including the modelling of fillets and rounds, interference detection and the simulation of numerically controlled machining.
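One conventional way to state this definition formally, assuming the motion M is parametrized over a time interval [0, 1] and M(t) denotes the transformation of space at time t, is:

```latex
\mathrm{Sw}(S, M) \;=\; \bigcup_{t \in [0,1]} M(t)(S)
\;=\; \{\, M(t)(p) : p \in S,\ t \in [0,1] \,\} \;\subset\; \mathbb{R}^{3}.
```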
References
Euclidean solid geometry | Solid sweep | Physics,Mathematics | 113 |
67,704,475 | https://en.wikipedia.org/wiki/Richard%20A.%20Andersen%20%28chemist%29 | Richard "Dick" A. Andersen (November 16, 1942 – June 16, 2019) was a professor of chemistry at the University of California, Berkeley, and faculty senior scientist at the chemical sciences division of Lawrence Berkeley National Laboratory.
Early life and career
Born in Oklahoma in 1942, Richard Allan Andersen was raised and educated in the small town of Yankton, South Dakota. He obtained his bachelor's degree in 1965 from the University of South Dakota. Andersen pursued graduate studies at the University of Wyoming, working under the supervision of Professor Geoffrey Coates; Andersen was Coates' last student. In 1973, Andersen earned his Ph.D. for work on several fundamental organometallic and alkoxide compounds of beryllium.
Andersen then spent a year as a postdoctoral researcher at the Oslo Centre for Industrial Research. On the day it was announced that Geoffrey Wilkinson and Ernst Otto Fischer would share the 1973 Nobel Prize in Chemistry, Andersen received an offer to conduct his postdoctoral research in Wilkinson's laboratory at Imperial College London. Andersen took up this post a few months later, in 1974. In June 1976 he joined the faculty at the University of California, Berkeley's department of chemistry. He remained a professor in the department until his death in 2019.
Andersen was also active in teaching throughout his career, and was well-known for teaching from the primary inorganic chemistry literature, as well as his hands-on approach to teaching undergraduate laboratory courses.
Research
Andersen began his independent research career at UC Berkeley in 1976. Initially his research focused on ligand substitution patterns in quadruply-bonded Mo2 complexes. He also studied actinide coordination complexes bearing the sterically bulky amido ligand –N(SiMe3)2, including the uranium(III) compound U[N(SiMe3)2]3, which was later found to have pyramidal geometry.
Awards and honors
Andersen was awarded many visiting professorships around the world, including appointments in Sevilla, Lyon, Montpellier, New South Wales, and Zurich. He was also an Alexander von Humboldt Professor in various locations in Germany (1994). Andersen was a member of the Royal Society of Chemistry, the American Chemical Society, and Sigma Xi.
References
1942 births
2019 deaths
American chemists
Academics from Oklahoma
Inorganic chemists
20th-century American chemists
21st-century American chemists
UC Berkeley College of Chemistry faculty
Chemists from Oklahoma
Chemists from South Dakota | Richard A. Andersen (chemist) | Chemistry | 482 |
47,800,230 | https://en.wikipedia.org/wiki/Valdemar%20Poulsen%20Gold%20Medal | The Valdemar Poulsen Gold Medal, named after radio pioneer Valdemar Poulsen, was awarded each year for outstanding research in the field of radio techniques and related fields. The award was presented on November 23, the anniversary of Poulsen's birth. The award was discontinued in 1993.
Recipients
1939 Valdemar Poulsen
1946 Robert Watson-Watt
1947 Ernst Alexanderson
1948 Edward Victor Appleton
1953 Balthasar van der Pol
1956 Harald T. Friis
1958 Hidetsugu Yagi
1960 Charles P. Ginsburg
1963 John R. Pierce
1969 Jay Wright Forrester
1973 J. B. Gunn
1976 Andrew Bobeck
References
Science and technology awards | Valdemar Poulsen Gold Medal | Technology | 139 |
3,008,376 | https://en.wikipedia.org/wiki/Frangipane | Frangipane ( ) is a sweet almond-flavoured custard, typical in French pastry, used in a variety of ways, including cakes and such pastries as the Bakewell tart, conversation tart, Jésuite and pithivier. A French spelling from a 1674 cookbook is franchipane, with the earliest modern spelling coming from a 1732 confectioners' dictionary. Originally designated as a custard tart flavoured by almonds or pistachios, it came later to designate a filling that could be used in a variety of confections and baked goods.
It is traditionally made by combining two parts of almond cream (crème d’amande) with one part pastry cream (crème pâtissière). Almond cream is made from butter, sugar, eggs, almond meal, bread flour, and rum; and pastry cream is made from whole milk, vanilla bean, cornstarch, sugar, egg yolks or whole eggs, and butter. There are many variations on both of these creams as well as on the proportion of almond cream to pastry cream in frangipane.
On Epiphany, the French cut the king cake – a round cake made of frangipane layers – into slices to be distributed by a child known as le petit roi (the little king), who is usually hiding under the dining table. The cake is decorated with stars, a crown, flowers and a special bean hidden inside the cake. Whoever gets the piece of the frangipane cake with the bean is crowned "king" or "queen" for the following year.
Etymology
The word frangipane is a French term used to name products with an almond flavour. The word comes ultimately from the last name of Marquis Muzio Frangipani or Cesare Frangipani. The word first denoted the frangipani plant, from which was produced the perfume originally said to flavor frangipane. Other sources say that the name as applied to the almond custard was an homage by 16th-century Parisian chefs in name only to Frangipani, who created a jasmine-based perfume with a smell like the flowers to perfume leather gloves.
See also
List of almond dishes
List of custard desserts
List of pastries
References
Notes
Bibliography
"Frangipane." Oxford Companion to Food (1999), 316.
Almond dishes
Food ingredients
Custard desserts | Frangipane | Technology | 499 |
19,827,765 | https://en.wikipedia.org/wiki/HATU | HATU (Hexafluorophosphate Azabenzotriazole Tetramethyl Uronium) is a reagent used in peptide coupling chemistry to generate an active ester from a carboxylic acid. HATU is used along with Hünig's base (N,N-diisopropylethylamine), or triethylamine to form amide bonds. Typically DMF is used as solvent, although other polar aprotic solvents can also be used.
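As a rough illustration of how such a coupling is set up at the bench, the sketch below computes reagent quantities for a given amount of carboxylic acid. The molecular weights and the DIPEA density are standard values, but the 1.1 and 2.0 equivalents are typical starting points chosen for illustration, not conditions prescribed by this article.

```python
# Illustrative bench calculation for an amide coupling with HATU and DIPEA.
MW_HATU = 380.2        # g/mol
MW_DIPEA = 129.2       # g/mol
DENSITY_DIPEA = 0.742  # g/mL

def coupling_amounts(acid_mmol: float, hatu_equiv: float = 1.1,
                     base_equiv: float = 2.0) -> dict:
    """Reagent quantities for a typical HATU/DIPEA coupling (illustrative only)."""
    hatu_mg = acid_mmol * hatu_equiv * MW_HATU
    dipea_mg = acid_mmol * base_equiv * MW_DIPEA
    return {
        "HATU (mg)": round(hatu_mg, 1),
        "DIPEA (mg)": round(dipea_mg, 1),
        "DIPEA (uL)": round(dipea_mg / DENSITY_DIPEA, 1),  # mg / (g/mL) = uL
    }

print(coupling_amounts(1.0))   # amounts for 1.0 mmol of carboxylic acid
```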
History
HATU was first reported by Louis A. Carpino in 1993 as an efficient means of preparing active esters derived from 1-hydroxy-7-azabenzotriazole (HOAt). HATU is commonly prepared from HOAt and TCFH under basic conditions and can exist as either the uronium salt (O-form) or the less reactive iminium salt (N-form). HATU was initially reported as the O-form using the original preparation reported by Carpino; however, X-ray crystallographic and NMR studies revealed the true structure of HATU to be the less reactive guanidinium isomer. It is, however, possible to obtain the uronium isomer by preparing HATU using KOAt in place of HOAt and working up the reaction mixture quickly to prevent isomerisation.
Reactions
HATU is commonly encountered in amine acylation reactions (i.e., amide formation). Such reactions are typically performed in two distinct reaction steps: (1) reaction of a carboxylic acid with HATU to form the OAt-active ester; then (2) addition of the nucleophile (amine) to the active ester solution to afford the acylated product.
The reaction mechanism of carboxylic acid activation by HATU and subsequent N-acylation is summarised in the figure below. The mechanism is shown using the more commonly encountered and commercially available iminium isomer; a similar mechanism, however, is likely to apply to the uronium form. In the first step, the carboxylate anion (formed by deprotonation by an organic base [not shown]) attacks HATU to form the unstable O-acyl(tetramethyl)isouronium salt. The OAt anion rapidly attacks the isouronium salt, affording the OAt-active ester and liberating a stoichiometric quantity of tetramethylurea. Addition of a nucleophile, such as an amine, to the OAt-active ester results in acylation.
The high coupling efficiencies and fast reaction rates associated with HATU coupling are thought to arise from a neighbouring group effect brought about by the pyridine nitrogen atom, which stabilises the incoming amine through a hydrogen-bonded 7-membered cyclic transition state.
Because of the extraordinary coupling efficiency of HATU, it has often been used for intramolecular amidation (coupling of a carboxylic acid and an amine of the same molecule). For example, the formation of cyclo-tetrapeptides through the head-to-tail reaction of linear tetrapeptides assisted by HATU has been reported.
Safety
HATU has been shown to induce allergic reactions. In vivo dermal sensitization studies according to OECD 429 confirmed HATU is a moderate skin sensitizer, showing a response at 1.2 wt% in the Local Lymph Node Assay (LLNA) placing it in Globally Harmonized System of Classification and Labelling of Chemicals (GHS) Dermal Sensitization Category 1A. Thermal hazard analysis by differential scanning calorimetry (DSC) shows HATU is potentially explosive.
References
Peptide coupling reagents
Hexafluorophosphates
Triazolopyridines | HATU | Chemistry,Biology | 815 |
7,695,077 | https://en.wikipedia.org/wiki/Guardhouse | A guardhouse (also known as a watch house, guard building, guard booth, guard shack, security booth, security building, or sentry building) is a building used to house personnel and security equipment. Guardhouses have historically been dormitories for sentries or guards, and places where sentries not posted to sentry posts wait "on call", but are more recently staffed by a contracted security company. Some guardhouses also function as jails.
Modern guardhouses
In 21st-century commercial, industrial, institutional, governmental, or residential facilities, guardhouses are generally placed at the entrance as checkpoints for securing, monitoring and maintaining access control into the secured facility. In the case of small to mid-sized facilities, generally, the entire physical security envelope is controlled from the guardhouse.
One of the general orders of a sentry in the United States Navy and Marine Corps is to "Repeat all calls more distant from the guardhouse than my own." Guardhouses thus serve as central communications hubs for outlying sentry posts, being where the Corporal of the Guard is stationed. When sentries are relieved by their replacements, the sentry stationed at the guardhouse, designated "No. 1", is conventionally relieved first.
Modern guardhouses are manufactured with welded, galvanized steel construction, insulated, include heat and light, have 360 degree visibility, and can also be bullet resistant. These guardhouses keep security guards comfortable as well as secure. The first modern guardhouse was manufactured by Par-Kut International in 1954. In the 21st century, guardhouses have provided more options such as exterior floodlights, reflective bullet resistant glass, gun ports, elevated platforms, highly mobile trailer mounting, anti fatigue floor mats, dimmable interior light, and a built in bathroom.
Historical guardhouses
In the Fortress of Louisbourg in the 18th century, guardhouses were where sentries were stationed to eat and sleep between periods of sentry duty at the 21 sentry posts around the town. The town had five guardhouses (the Dauphine Gate, the townside entrance to the King's Bastion, the Queen's Gate, the Maurepas Gate, and the Pièce de la Grave), and whilst not sleeping sentries would be "on call" from those guardhouses at need.
In the Guardhouse at Fort Scott National Historic Site, typical furnishings for guard quarters included benches, tables, shelves, a platform bed for the men resting between assignments, arms racks, a fireplace or stove, and leather buckets (used for firefighting - another duty of guards). Prison cells were unfurnished, containing simply a slop bucket and iron rings on walls for the attachment of shackles.
See also
Gatehouse
Neue Wache
Kōban
The Guardhouse, a mountain in Montana
Guard tower
Sentry box
Blockhouse
References
Further reading
- a description of the guardhouse at Colonial Williamsburg
- a description of the guardhouse at Arsenal Technical High School, tracing its history from being a combined guardhouse and jail for the U.S. Army barracks through being a school "beanery" to its current function as a school security office
- an archaeological report on the guardhouse at the Macquarie Street entrance of the Lancer Barracks in Parramatta, New South Wales
- an extract from the Fort Concho Medical History Record of May 1871, describing in detail the now nonexistent guardhouse
Buildings and structures by type | Guardhouse | Engineering | 690 |
4,773,639 | https://en.wikipedia.org/wiki/IG%20Bergbau%2C%20Chemie%2C%20Energie | The IG Bergbau, Chemie, Energie (IG BCE) is a trade union in Germany. It is one of eight industrial affiliations of the German Confederation of Trade Unions (DGB).
History and structure
The IG BCE was created in 1997 from the merger of the Chemical, Paper and Ceramic Union, Leather Union, and Union of Mining and Energy. It covers workers in the following industries: mining (especially of coal), chemicals, natural gas, glass, rubber, ceramics, plastics, leather, petrol (and related products), paper, recycling, and water. With some 645,000 members (as of 2016) IG BCE represents about one tenth of all DGB members and is the third biggest union within that confederation. There are some 1,100 locals and 900 groups of shop stewards organized in 42 regional districts, which cooperate in eight state chapters: Baden-Württemberg, Bavaria, Hesse/Thuringia, North, Northeast, North Rhine, Rhineland-Palatinate/Saarland and Westphalia.
In 2015, IG BCE successfully negotiated a pay rise for 550,000 employees with Germany's chemical employers association BAVC.
Political positions
IG BCE has been playing a key role in Germany's energy transition. In 2014, the union proposed that Germany's utilities should pool their struggling hard coal plants into a joint entity, referring to the hard coal plants with total capacity of between 28 and 30 gigawatts (GW), most of which are owned by E.ON, RWE, EnBW, Vattenfall and STEAG. By 2015, the union proposed gradually phasing out old coal-fired power stations and building combined heat and power (CHP) stations fired with gas. Its plan included taking at least 2.7 gigawatts of brown coal-fired capacity gradually out of the market rather than risking sudden closures. On the initiative of IG BCE, thousands of coal miners and workers in coal-fired plants marched in Berlin in April 2015 to protest a proposed levy on the oldest, most polluting power stations, saying it could lead to losses of up to 100,000 jobs and the decline of the industry in Germany.
In 2016, IG BCE, with the support of the BDI industry group, again raised concerns about plans for Germany to end its use of brown coal amid calls for it to set out a timetable for ending coal-fired power production.
Presidents
1997: Hubertus Schmoldt
2009: Michael Vassiliadis
Notable members
Barbara Hendricks – former Federal Minister for the Environment, Nature Conservation, Building and Nuclear Safety
Ulla Schmidt – former Federal Minister of Health
Martin Schulz – former President of the European Parliament
Peer Steinbrück – former Minister-President of North Rhine-Westphalia
References
Citations
Sources
External links
IG BCE English site
1997 establishments in Germany
German Trade Union Confederation
Chemical industry trade unions
Energy industry in Germany
Energy industry trade unions
Mining trade unions
Trade unions established in 1997 | IG Bergbau, Chemie, Energie | Chemistry | 607 |
63,452,262 | https://en.wikipedia.org/wiki/NGC%20812 | NGC 812 is a spiral galaxy located in the Andromeda constellation, an estimated 175 million light-years from the Milky Way. NGC 812 was discovered on December 11, 1876 by astronomer Édouard Stephan.
Two supernovae have been observed in NGC 812: SN 2010jj (type IIn, mag. 17) and SN 2020udy (type Iax [02cx-like], mag. 19.6).
See also
List of NGC objects (1–1000)
References
0812
Spiral galaxies
Andromeda (constellation)
008066
Astronomical objects discovered in 1876
Discoveries by Édouard Stephan | NGC 812 | Astronomy | 129 |
28,690,297 | https://en.wikipedia.org/wiki/Bubbler%20cylinder | A bubbler cylinder is a component of a metalorganic chemical vapor deposition (MOCVD) unit. It is a device used for conveying electronic-grade metalorganic compounds from a liquid or solid precursor into a usable vapor.
Apparatus
The container of the bubbler is similar in construction to a gas washing bottle and is used for the protected storage of metalorganic compounds, excluding air (oxygen and moisture). The bubbler has a supply pipe and a sampling tube. The inlet tube ends just above the bottom of the cylinder. Through this tube an inert gas is introduced, which bubbles through a liquid chemical; a solid chemical will sublime into the gas stream. The mixture of the metered inert gas and the vaporized chemical leaves the cylinder into a downstream reaction vessel. The temperature is controlled by a thermostat so that a defined, constant vapor pressure can be achieved. The supply of the often expensive and sensitive chemical is controlled by the regulated flow of inert gas and the temperature of the bubbler, which sets the vapor pressure of the chemical.
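As a sketch of the relationship between carrier flow, bubbler pressure, and temperature, the snippet below estimates the molar pick-up rate for a trimethylgallium bubbler, assuming the carrier gas leaves fully saturated. The Antoine-type constants used for TMGa are commonly quoted values and the operating numbers are arbitrary illustrations; a real installation should take the vapor-pressure data from the precursor supplier's datasheet.

```python
def tmga_vapor_pressure_torr(temp_c: float) -> float:
    """Commonly quoted Antoine-type fit for trimethylgallium (check the datasheet)."""
    return 10 ** (8.07 - 1703.0 / (temp_c + 273.15))

def precursor_pickup_umol_per_min(carrier_sccm: float, bubbler_torr: float,
                                  temp_c: float) -> float:
    """Molar pick-up rate, assuming the carrier gas saturates in the bubbler."""
    p_vap = tmga_vapor_pressure_torr(temp_c)
    carrier_mol_min = carrier_sccm / 22414.0   # 1 sccm ~ 1/22414 mol/min of gas
    return 1e6 * carrier_mol_min * p_vap / (bubbler_torr - p_vap)

# Example: 10 sccm of carrier gas through a TMGa bubbler at 0 degC and 760 Torr.
print(f"{precursor_pickup_umol_per_min(10.0, 760.0, 0.0):.1f} umol/min of TMGa")
```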
The tube between the bubbler and the reactor has to be held at a higher temperature than the bubbler, otherwise the precursor would condense in the tube and uncontrolled droplets would be carried into the reaction vessel. If this happens with a solid precursor, it could plug the line.
Application
For example, during the production of high-brightness light-emitting diodes, gallium and other group III elements are epitaxially deposited onto a single crystal silicon substrate. The gallium is introduced into the MOVPE reactor chamber as a vapor. This vapor is generated by bubbling an inert carrier gas (such as nitrogen or argon) through a cylinder with a dip tube containing a metalorganic precursor, such as trimethylgallium. The inert carrier gas and the metalorganic vapor are then introduced into the MOVPE (or MOCVD) reactor chamber during the production of high-brightness LEDs.
External links
Description of a Bubbler System
Liquid injection system based on Mass Flow Controller
Semiconductor fabrication equipment | Bubbler cylinder | Engineering | 429 |
58,507,757 | https://en.wikipedia.org/wiki/Aspergillus%20sloanii | Aspergillus sloanii is a species of fungus in the genus Aspergillus. It is from the Aspergillus section. The species was first described in 2014. It has been reported to produce auroglaucin, bisanthrons, dihydroauroglaucin, echinulins, flavoglaucin, physcion, tetracyclic, and tetrahydroauroglaucin.
Growth and morphology
A. sloanii has been cultivated on both Czapek yeast extract agar (CYA) plates and yeast extract sucrose agar (YES) plates.
References
sloanii
Fungi described in 2014
Fungus species | Aspergillus sloanii | Biology | 156 |
33,879 | https://en.wikipedia.org/wiki/Windows%20XP | Windows XP is a major release of Microsoft's Windows NT operating system. It was released to manufacturing on August 24, 2001, and later to retail on October 25, 2001. It is a direct successor to Windows 2000 for high-end and business users and Windows Me for home users.
Development of Windows XP began in the late 1990s under the codename "Neptune", built on the Windows NT kernel and explicitly intended for mainstream consumer use. An updated version of Windows 2000 was also initially planned for the business market. However, in January 2000, both projects were scrapped in favor of a single OS codenamed "Whistler", which would serve as a single platform for both consumer and business markets. As a result, Windows XP is the first consumer edition of Windows not based on the Windows 95 kernel or MS-DOS. Windows XP removed support for PC-98, i486, and SGI Visual Workstation 320 and 540, and will only run on 32-bit x86 CPUs and devices that use BIOS firmware.
Upon its release, Windows XP received critical acclaim, with reviewers noting increased performance and stability (especially compared to Windows Me), a more intuitive user interface, improved hardware support, and expanded multimedia capabilities. Windows XP and Windows Server 2003 were succeeded by Windows Vista and Windows Server 2008, released in 2007 and 2008, respectively.
Mainstream support for Windows XP ended on April 14, 2009, and extended support ended on April 8, 2014. Windows Embedded POSReady 2009, based on Windows XP Professional, received security updates until April 2019. The final security update for Service Pack 3 was released on May 14, 2019. Unofficial methods were made available to apply the updates to other editions of Windows XP. Microsoft has discouraged this practice, citing compatibility issues.
Globally, under 0.6% of Windows PCs and 0.1% of all devices across all platforms continued to run Windows XP.
Development
In the late 1990s, initial development of what would become Windows XP was focused on two individual products: "Odyssey", which was reportedly intended to succeed the future Windows 2000, and "Neptune", which was reportedly a consumer-oriented operating system using the Windows NT architecture, succeeding the MS-DOS-based Windows 98.
However, the projects proved to be too ambitious. In January 2000, shortly prior to the official release of Windows 2000, technology writer Paul Thurrott reported that Microsoft had shelved both Neptune and Odyssey in favor of a new product codenamed "Whistler", named after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort. The goal of Whistler was to unify both the consumer and business-oriented Windows lines under a single, Windows NT platform. Thurrott stated that Neptune had become "a black hole when all the features that were cut from Windows Me were simply re-tagged as Neptune features. And since Neptune and Odyssey would be based on the same code-base anyway, it made sense to combine them into a single project".
At PDC on July 13, 2000, Microsoft announced that Whistler would be released during the second half of 2001, and also unveiled the first preview build, 2250, which featured an early implementation of Windows XP's visual styles system and interface changes to Windows Explorer and the Control Panel.
Microsoft released the first public beta build of Whistler, build 2296, on October 31, 2000. Subsequent builds gradually introduced features that users of the release version of Windows XP would recognize, such as Internet Explorer 6.0, the Microsoft Product Activation system, and the Bliss desktop background.
Whistler was officially unveiled during a media event on February 5, 2001, under the name Windows XP, where XP stands for "eXPerience".
Release
In June 2001, Microsoft indicated that it was planning to spend at least US$1 billion on marketing and promoting Windows XP, in conjunction with Intel and other PC makers. The theme of the campaign, "Yes You Can", was designed to emphasize the platform's overall capabilities. Microsoft had originally planned to use the slogan "Prepare to Fly", but it was replaced because of sensitivity issues in the wake of the September 11 attacks.
On August 24, 2001, Windows XP build 2600 was released to manufacturing (RTM). During a ceremonial media event at Microsoft Redmond Campus, copies of the RTM build were given to representatives of several major PC manufacturers in briefcases, who then flew off on decorated helicopters. While PC manufacturers would be able to release devices running XP beginning on September 24, 2001, XP was expected to reach general retail availability on October 25, 2001. On the same day, Microsoft also announced the final retail pricing of XP's two main editions, "Home" (as a replacement for Windows Me for home computing) and "Professional" (as a replacement for Windows 2000 for high-end users).
New and updated features
User interface
While retaining some similarities to previous versions, Windows XP's interface was overhauled with a new visual appearance, with an increased use of alpha compositing effects, drop shadows, and "visual styles", which completely changed the appearance of the operating system. The number of effects enabled are determined by the operating system based on the computer's processing power, and can be enabled or disabled on a case-by-case basis. XP also added ClearType, a new subpixel rendering system designed to improve the appearance of fonts on liquid-crystal displays. A new set of system icons was also introduced. The default wallpaper, Bliss, is a photo of a landscape in the Napa Valley outside Napa, California, with rolling green hills and a blue sky with stratocumulus and cirrus clouds.
The Start menu received its first major overhaul in XP, switching to a two-column layout with the ability to list, pin, and display frequently used applications, recently opened documents, and the traditional cascading "All Programs" menu. The taskbar can now group windows opened by a single application into one taskbar button, with a popup menu listing the individual windows. The notification area also hides "inactive" icons by default. A "common tasks" list was added, and Windows Explorer's sidebar was updated to use a new task-based design with lists of common actions; the tasks displayed are contextually relevant to the type of content in a folder (e.g. a folder with music displays offers to play all the files in the folder, or burn them to a CD).
Fast user switching allows additional users to log into a Windows XP machine without existing users having to close their programs and log out. Although only one user at the time can use the console (i.e., monitor, keyboard, and mouse), previous users can resume their session once they regain control of the console. Service Pack 2 and Service Pack 3 also introduced new features to Windows XP post-release, including the Windows Security Center, Bluetooth support, Data Execution Prevention, Windows Firewall, and support for SDHC cards that are larger than 4 GB and smaller than 32 GB.
Infrastructure
Windows XP uses prefetching to improve startup and application launch times. It also became possible to revert the installation of an updated device driver, should the updated driver produce undesirable results.
A copy protection system known as Windows Product Activation was introduced with Windows XP and its server counterpart, Windows Server 2003. All non-enterprise (Volume Licensing) Windows licenses must be tied to a unique ID generated using information from the computer hardware, transmitted either via the internet or a telephone hotline. If Windows is not activated within 30 days of installation, the OS will cease to function until it is activated. Windows also periodically verifies the hardware to check for changes. If significant hardware changes are detected, the activation is voided, and Windows must be re-activated.
Networking and internet functionality
Windows XP was originally bundled with Internet Explorer 6, Outlook Express 6, Windows Messenger, and MSN Explorer. New networking features were also added, including Internet Connection Firewall, Internet Connection Sharing integration with UPnP, NAT traversal APIs, Quality of Service features, IPv6 and Teredo tunneling, Background Intelligent Transfer Service, extended fax features, network bridging, peer to peer networking, support for most DSL modems, IEEE 802.11 (Wi-Fi) connections with auto configuration and roaming, TAPI 3.1, and networking over FireWire. Remote Assistance and Remote Desktop were also added, which allow users to connect to a computer running Windows XP from across a network or the Internet and access their applications, files, printers, and devices or request help. Improvements were also made to IntelliMirror features such as Offline Files, roaming user profiles, and folder redirection.
Backward compatibility
To enable running software that targets or locks out specific versions of Windows, "Compatibility mode" was added. It allows a selected earlier version of Windows, as far back as Windows 95, to be presented to software. This feature was first introduced in Windows 2000 Service Pack 2, released five months before the release of Windows XP, and was backported from prerelease Windows XP builds. Unlike in Windows XP, however, it was hidden in the operating system: it was not enabled by default, had to be manually activated through the Register Server utility, and was only available to administrator users. Windows XP has this feature activated out of the box and also grants it to regular users.
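As a minimal sketch of what a compatibility layer does in practice: on Windows, a per-process compatibility layer can be requested through the __COMPAT_LAYER environment variable used by the application-compatibility infrastructure. The program path below is hypothetical, and this illustrates the effect of Compatibility mode rather than the Register Server activation route described for Windows 2000.

import os
import subprocess

# Hypothetical legacy installer; "WIN95" is one of the compatibility-layer names.
legacy_app = r"C:\OldGames\setup.exe"

env = os.environ.copy()
env["__COMPAT_LAYER"] = "WIN95"  # ask Windows to present Windows 95 behaviour to this process

subprocess.run([legacy_app], env=env)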
Other features
Improved application compatibility and shims compared to Windows 2000.
DirectX 8.1, upgradeable to DirectX 9.0c.
A number of new features in Windows Explorer including task panes, thumbnails, and the option to view photos as a slideshow.
Improved imaging features such as Windows Picture and Fax Viewer.
Faster start-up (because of improved Prefetch functions), logon, logoff, hibernation, and application launch sequences.
Numerous improvements to increase the system reliability such as improved System Restore, Automated System Recovery, and driver reliability improvements through Device Driver Rollback.
Hardware support improvements such as FireWire 800, and improvements to multi-monitor support under the name "DualView".
Fast user switching.
The ClearType font rendering mechanism, which is designed to improve text readability on liquid-crystal display (LCD) and similar monitors, especially laptops.
Side-by-side assemblies and registration-free COM.
General improvements to international support such as more locales, languages and scripts, MUI support in Terminal Services, improved Input Method Editors, and National Language Support.
Removed features
Some of the programs and features that were part of the previous versions of Windows did not make it to Windows XP. Various MS-DOS commands available in its Windows 9x predecessor were removed, as were the POSIX and OS/2 subsystems.
In networking, NetBEUI, NWLink and NetDDE were deprecated and not installed by default. Plug-and-play–incompatible communication devices (like modems and network interface cards) were no longer supported.
Service Pack 2 and Service Pack 3 also removed features from Windows XP, including support for TCP half-open connections and the address bar on the taskbar.
Editions
Windows XP was released in two major editions at launch: Home Edition and Professional Edition. Both editions were made available as pre-loaded software on new computers and as boxed copies at retail. Boxed copies were sold as "Upgrade" or "Full" licenses; the "Upgrade" versions were slightly cheaper but required an existing version of Windows to install, whereas the "Full" version could be installed on systems without an operating system or existing version of Windows. The two editions of XP were aimed at different markets: Home Edition is explicitly intended for consumer use and disables or removes certain advanced and enterprise-oriented features present in Professional, such as the ability to join a Windows domain, Internet Information Services, and the Multilingual User Interface. Windows 98 or Me can be upgraded to either edition, but Windows NT 4.0 and Windows 2000 can only be upgraded to Professional. Windows' software license agreement for pre-loaded licenses allows the software to be "returned" to the OEM for a refund if the user does not wish to use it. Despite the refusal of some manufacturers to honor the entitlement, it has been enforced by courts in some countries.
Two specialized variants of XP were introduced in 2002 for certain types of hardware, exclusively through OEM channels as pre-loaded software. Windows XP Media Center Edition was initially designed for high-end home theater PCs with TV tuners (marketed under the term "Media Center PC"), offering expanded multimedia functionality, an electronic program guide, and digital video recorder (DVR) support through the Windows Media Center application. Microsoft also unveiled Windows XP Tablet PC Edition, which contains additional pen input features, and is optimized for mobile devices meeting its Tablet PC specifications. Two different 64-bit editions of XP were made available. The first, Windows XP 64-Bit Edition, was intended for IA-64 (Itanium) systems; as IA-64 usage declined on workstations in favor of AMD's x86-64 architecture, the Itanium edition was discontinued in January 2005. A new 64-bit edition supporting the x86-64 architecture, called Windows XP Professional x64 Edition, was released in April 2005.
Microsoft also targeted emerging markets with the 2004 introduction of Windows XP Starter Edition, a special variant of Home Edition intended for low-cost PCs. The OS is primarily aimed at first-time computer owners, containing heavy localization (including wallpapers and screen savers incorporating images of local landmarks), and a "My Support" area which contains video tutorials on basic computing tasks. It also removes certain "complex" features, and does not allow users to run more than three applications at a time. After a pilot program in India and Thailand, Starter was released in other emerging markets throughout 2005. In 2006, Microsoft also unveiled the FlexGo initiative, which would also target emerging markets with subsidized PCs on a pre-paid, subscription basis.
As a result of unfair competition lawsuits in Europe and South Korea, which both alleged that Microsoft had improperly leveraged its status in the PC market to favor its own bundled software, Microsoft was ordered to release special editions of XP in these markets that excluded certain applications. In March 2004, after the European Commission fined Microsoft €497 million (US$603 million), Microsoft was ordered to release "N" editions of XP that excluded Windows Media Player, encouraging users to pick and download their own media player software. As it was sold at the same price as the edition with Windows Media Player included, certain OEMs (such as Dell, who offered it for a short period, along with Hewlett-Packard, Lenovo and Fujitsu Siemens) chose not to offer it. Consumer interest was minuscule, with roughly 1,500 units shipped to OEMs, and no reported sales to consumers. In December 2005, the Korean Fair Trade Commission ordered Microsoft to make available editions of Windows XP and Windows Server 2003 that do not contain Windows Media Player or Windows Messenger. The "K" and "KN" editions of Windows XP were released in August 2006, and are only available in English and Korean, and also contain links to third-party instant messenger and media player software.
Service packs
A service pack is a cumulative update package that is a superset of all updates, and even service packs, that have been released before it. Three service packs have been released for Windows XP. Service Pack 3 is slightly different in that it requires at least Service Pack 1 to already be installed when updating a live OS. However, Service Pack 3 can still be slipstreamed into a Windows installation disc; SP1 is not reported as a prerequisite for doing so.
Service Pack 2 unified the boot screens of all Windows XP editions with a new design that no longer displays the SKU; as a result, the Home Edition boot screen uses a blue progress bar instead of the previous green one. The copyright years on the boot screen were also removed.
Service Pack 1
Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contained over 300 minor, post-RTM bug fixes, along with all security patches released since the original release of XP. SP1 also added USB 2.0 support, the Microsoft Java Virtual Machine, .NET Framework support, and support for technologies used by the then-upcoming Media Center and Tablet PC editions of XP. The most significant change in SP1 was the addition of Set Program Access and Defaults, a settings page which allows programs to be set as default for certain types of activities (such as media players or web browsers) and for access to bundled Microsoft programs (such as Internet Explorer or Windows Media Player) to be disabled. This feature was added to comply with the settlement of United States v. Microsoft Corp., which required Microsoft to offer the ability for OEMs to bundle third-party competitors to software it bundles with Windows (such as Internet Explorer and Windows Media Player), and give them the same level of prominence as those normally bundled with the OS.
On February 3, 2003, Microsoft released Service Pack 1a (SP1a). It was the same as SP1, except the Microsoft Java Virtual Machine was excluded.
Service Pack 2
Service Pack 2 (SP2) for Windows XP Home edition and Professional edition was released on August 25, 2004. Headline features included WPA encryption compatibility for Wi-Fi and usability improvements to the Wi-Fi networking user interface, partial Bluetooth support, and various improvements to security systems.
Headed by former computer hacker Window Snyder, the service pack's security improvements (codenamed "Springboard", as these features were intended to underpin additional changes in Longhorn) included a major revision to the included firewall (renamed Windows Firewall, and now enabled by default), and an update to Data Execution Prevention, which gained hardware support via the NX bit and can stop some forms of buffer overflow attacks. Raw socket support was removed (which supposedly limits the damage done by zombie machines) and the Windows Messenger service (which had been abused to cause pop-up advertisements to be displayed as system messages without a web browser or any additional software) was disabled by default. Additionally, security-related improvements were made to e-mail and web browsing. Service Pack 2 also added Security Center, an interface that provides a general overview of the system's security status, including the state of the firewall and automatic updates. Third-party firewall and antivirus software can also be monitored from Security Center.
In August 2006, Microsoft released updated installation media for Windows XP and Windows Server 2003 SP2 (SP2b), in order to incorporate a patch requiring ActiveX controls in Internet Explorer to be manually activated before a user may interact with them. This was done so that the browser would not violate a patent owned by Eolas. Microsoft has since licensed the patent, and released a patch reverting the change in April 2008. In September 2007, another minor revision known as SP2c was released for XP Professional, extending the number of available product keys for the operating system to "support the continued availability of Windows XP Professional through the scheduled system builder channel end-of-life (EOL) date of January 31, 2009."
Windows XP Service Pack 2 was later included in Windows Embedded for Point of Service and Windows Fundamentals for Legacy PCs.
Service Pack 3
The third and final Service Pack, SP3, was released through different channels between April 21 and June 10, 2008, about a year after the release of Windows Vista, and about a year before the release of Windows 7. Service Pack 3 was not available for Windows XP x64 Edition, which was based on the Windows Server 2003 kernel and, as a result, used its service packs rather than the ones for the other editions.
It began being automatically pushed out to Automatic Updates users on July 10, 2008. A feature set overview, which detailed new features available separately as stand-alone updates to Windows XP as well as backported features from Windows Vista, was posted by Microsoft. A total of 1,174 fixes are included in SP3. Service Pack 3 could be installed on systems with Internet Explorer versions up to and including version 8, but Internet Explorer 7 was not included as part of SP3, and neither was Internet Explorer 8, which was instead included in Windows 7, released one year after XP SP3.
Service Pack 3 included security enhancements over and above those of SP2, including APIs allowing developers to enable Data Execution Prevention for their code, independent of system-wide compatibility enforcement settings, the Security Support Provider Interface, improvements to WPA2 security, and an updated version of the Microsoft Enhanced Cryptographic Provider Module that is FIPS 140-2 certified.
In incorporating all previously released updates not included in SP2, Service Pack 3 included many other key features. Windows Imaging Component allowed camera vendors to integrate their own proprietary image codecs with the operating system's features, such as thumbnails and slideshows. In enterprise features, Remote Desktop Protocol 6.1 included support for ClearType and 32-bit color depth over RDP, while improvements made to Windows Management Instrumentation in Windows Vista to reduce the possibility of corruption of the WMI repository were backported to XP SP3.
In addition, SP3 contains updates to the operating system components of Windows XP Media Center Edition (MCE) and Windows XP Tablet PC Edition, and security updates for .NET Framework version 1.0, which is included in these editions. However, it does not include update rollups for the Windows Media Center application in Windows XP MCE 2005. SP3 also omits security updates for Windows Media Player 10, although the player is included in Windows XP MCE 2005. The Address Bar DeskBand on the Taskbar is no longer included because of antitrust violation concerns.
Unofficial SP3 ZIP download packages were released on a now-defunct website called The Hotfix from 2005 to 2007. The owner of the website, Ethan C. Allen, was a former Microsoft employee in Software Quality Assurance and would comb through the Microsoft Knowledge Base articles daily and download new hotfixes Microsoft would put online within the articles. The articles would have a "kbwinxppresp3fix" and/or "kbwinxpsp3fix" tag, thus allowing Allen to easily find and determine which fixes were planned for the official SP3 release to come. Microsoft publicly stated at the time that the SP3 pack was unofficial and advised users to not install it. Allen also released a Vista SP1 package in 2007, for which Allen received a cease-and-desist email from Microsoft.
Windows XP Service Pack 3 was later included in Windows Embedded Standard 2009 and Windows Embedded POSReady 2009.
System requirements
System requirements for Windows XP (32-bit editions) are as follows:
Processor: 233 MHz minimum; 300 MHz or faster recommended
Memory: 64 MB of RAM minimum; 128 MB or more recommended
Storage: 1.5 GB of available hard disk space
Video: Super VGA (800 × 600) or higher-resolution video adapter and monitor
Drives: CD-ROM or DVD drive
Input: keyboard and mouse or compatible pointing device
Physical memory limits
The maximum amount of RAM that Windows XP can support varies depending on the product edition and the processor architecture. All 32-bit editions of XP support up to 4 GB, except the Windows XP Starter edition, which supports up to 512 MB of RAM. The 64-bit editions support up to 128 GB.
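The 4 GB figure for the 32-bit editions reflects the size of a 32-bit address space: 2^32 bytes = 4,294,967,296 bytes = 4 GiB.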
Processor limits
Windows XP Professional supports up to two physical processors;
Windows XP Home Edition supports only one.
However, XP supports a greater number of logical processors:
32-bit editions support up to 32 logical processors, and 64-bit editions support up to 64 logical processors.
Upgradeability
Several Windows XP components can be upgraded to later versions, including new versions introduced in later releases of Windows, and newer versions of other major Microsoft applications are also available for XP. The latest versions available for Windows XP include:
ActiveSync 4.5
DirectX 9.0c (June 7, 2010, Redistributable)
Internet Explorer 8
Windows Media Format Runtime and Windows Media Player 11
Microsoft Virtual PC 2007 SP1
.NET Framework 4.0
Visual Studio 2010
Windows Script Host 5.7
Windows Installer 4.5
Microsoft NetMeeting 3.02
Windows Sidebar
Windows Defender
Office 2010 SP2
The Windows Services for UNIX subsystem can be installed to allow certain Unix-based applications to run on the operating system.
Support lifecycle
Support for the original release of Windows XP (without a service pack) ended on August 30, 2005. Both Windows XP Service Pack 1 and 1a were retired on October 10, 2006, and both Windows 2000 and Windows XP SP2 reached their end of support on July 13, 2010, about 24 months after the launch of Windows XP Service Pack 3. The company stopped general licensing of Windows XP to OEMs and terminated retail sales of the operating system on June 30, 2008, 17 months after the release of Windows Vista. However, an exception was announced on April 3, 2008, for OEMs producing what it defined as "ultra low-cost personal computers", particularly netbooks, until one year after the availability of Windows 7 on October 22, 2009. Analysts felt that the move was primarily intended to compete against Linux-based netbooks, although Microsoft's Kevin Hutz stated that the decision was due to apparent market demand for low-end computers with Windows.
Variants of Windows XP for embedded systems have different support policies: Windows XP Embedded SP3 and Windows Embedded for Point of Service SP3 were supported until January and April 2016, respectively. Windows Embedded Standard 2009, which was succeeded by Windows Embedded Standard 7, and Windows Embedded POSReady 2009, which was succeeded by Windows Embedded POSReady 7, were supported until January and April 2019, respectively. These updates, while intended for the embedded editions, could also be installed on standard Windows XP with a registry hack, which enabled unofficial patches until April 2019. However, Microsoft advised Windows XP users against installing these fixes, citing compatibility issues.
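The registry hack in question is widely reported to consist of marking the system as a POSReady installation so that Windows Update offered the embedded-edition updates. A sketch of that change, written with Python's winreg module, follows; the key path is as commonly reported rather than an officially supported setting, so treat it as an assumption, not verified guidance.

# Sketch of the widely reported "POSReady" hack for 32-bit Windows XP SP3 (run with administrative rights).
# The key path below is as commonly reported, not an officially supported setting.
import winreg

key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\WPA\PosReady")
winreg.SetValueEx(key, "Installed", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)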
End of support
On April 14, 2009, the main editions of Windows XP exited mainstream support and entered the extended support phase; Microsoft continued to provide security updates every month for Windows XP, but free technical support, warranty claims, and design changes were no longer offered. Extended support for the main editions ended on April 8, 2014, over 12 years after the release of Windows XP; normally Microsoft products have a support life cycle of only 10 years. Beyond the final security updates released on April 8 for the main editions, no more security patches or support information are provided for XP free of charge; "critical patches" are still created, but made available only to customers subscribing to a paid "Custom Support" plan. As it is a Windows component, all versions of Internet Explorer for Windows XP also became unsupported.
In January 2014, it was estimated that more than 95% of the 3 million automated teller machines in the world were still running Windows XP (which largely replaced IBM's OS/2 as the predominant operating system on ATMs); ATMs have an average lifecycle of between seven and ten years, but some have had lifecycles as long as 15. Plans were being made by several ATM vendors and their customers to migrate to Windows 7-based systems over the course of 2014, while vendors have also considered the possibility of using Linux-based platforms in the future to give them more flexibility for support lifecycles, and the ATM Industry Association (ATMIA) has since endorsed Windows 10 as a further replacement. However, ATMs typically run the embedded variant of Windows XP, which was supported through January 2016. As of May 2017, around 60% of the 220,000 ATMs in India still run Windows XP.
Furthermore, at least 49% of all computers in China still ran XP at the beginning of 2014. These holdouts were influenced by several factors: prices of genuine copies of later versions of Windows in the country were high, Ni Guangnan of the Chinese Academy of Sciences warned that Windows 8 could allegedly expose users to surveillance by the United States government, and the Chinese government banned the purchase of Windows 8 products for government use in May 2014 in protest of Microsoft's inability to provide "guaranteed" support. The government also had concerns that the impending end of support could affect its anti-piracy initiatives with Microsoft, as users would simply pirate newer versions rather than purchasing them legally. As such, government officials formally requested that Microsoft extend the support period for XP for these reasons. While Microsoft did not comply with their requests, a number of major Chinese software developers, such as Lenovo, Kingsoft and Tencent, offered to provide free support and resources for Chinese users migrating from XP. Several governments, in particular those of the Netherlands and the United Kingdom, elected to negotiate "Custom Support" plans with Microsoft for their continued, internal use of Windows XP; the British government's deal lasted for a year, and also covered support for Office 2003 (which reached end-of-life the same day) and cost £5.5 million.
On March 8, 2014, Microsoft deployed an update for XP that, on the 8th of each month, displays a pop-up notification to remind users about the end of support; however, these notifications may be disabled by the user. Microsoft also partnered with Laplink to provide a special "express" version of its PCmover software to help users migrate files and settings from XP to a computer with a newer version of Windows.
Despite the approaching end of support of the main editions, there were still notable holdouts that had not migrated past XP; many users elected to remain on XP because of the poor reception of Windows Vista, sales of newer PCs with newer versions of Windows declined because of the Great Recession and the effects of Vista, and deployments of new versions of Windows in enterprise environments required a large amount of planning, including testing applications for compatibility (especially those dependent on Internet Explorer 6, which is not compatible with newer versions of Windows). Major security software vendors (including Microsoft itself) planned to continue offering support and definitions for Windows XP past the end of support to varying extents, as did the developers of the Google Chrome, Mozilla Firefox, and Opera web browsers; despite these measures, critics similarly argued that users should eventually migrate from XP to a supported platform.
The United States' Computer Emergency Readiness Team released an alert in March 2014 advising users of the impending end of support, and informing them that using XP after April 8 may prevent them from meeting US government information security requirements.
Microsoft continued to provide Security Essentials virus definitions and updates for its Malicious Software Removal Tool (MSRT) for XP until July 14, 2015. As the end of extended support approached, Microsoft began to increasingly urge XP customers to migrate to newer versions such as Windows 7 or 8 in the interest of security, suggesting that attackers could reverse engineer security patches for newer versions of Windows and use them to target equivalent vulnerabilities in XP. Windows XP is remotely exploitable through numerous security holes that were discovered after Microsoft stopped supporting it.
Similarly, specialized devices that run XP, particularly medical devices, must have any revisions to their software—even security updates for the underlying operating system—approved by relevant regulators before they can be released. For this reason, manufacturers often did not allow any updates to devices' operating systems, leaving them open to security exploits and malware.
Despite the end of support of the main version, Microsoft has released three emergency security updates for the operating system to patch major security vulnerabilities:
A patch released in May 2014 to address recently discovered vulnerabilities in Internet Explorer 6 through 11 on all versions of Windows.
A patch released in May 2017 to address a vulnerability that was being leveraged by the WannaCry ransomware attack.
A patch released in May 2019 to address a critical code execution vulnerability in Remote Desktop Services which can be exploited in a similar way as the WannaCry vulnerability.
Researchers reported in August 2019 that Windows 10 users may be at risk for "critical" system compromise because of design flaws of hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol and allows for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe) that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is currently available.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020).
In 2020, Microsoft announced that it would disable the Windows Update service for SHA-1 endpoints; since Windows XP did not get an update for SHA-2, Windows Update Services are no longer available on the OS as of late July 2020. However, as of March 2024, many of the old updates for Windows XP are still available on the Microsoft Update Catalog. A third-party tool named Legacy Update allows previously-released updates for Windows XP to be installed from the Update Catalog.
Third-party support
In February 2016, Opera announced that version 36 of its web browser would be the last version of the web browser to support Windows XP and Windows Vista. Google Chrome ended support for Windows XP and Windows Vista in April 2016. Firefox 52 ESR (Extended Support Release), which was released in March 2017, was the last version to support Windows XP and Windows Vista. Support for Firefox 52 ESR ended in June 2018.
Blizzard Entertainment ended support for World of Warcraft, StarCraft II, Diablo III, Hearthstone, and Heroes of the Storm on Windows XP and Vista in October 2017. Steam ended support for Windows XP and Vista on January 1, 2019.
Supermium, a fork of the Chromium project that Google Chrome is based on, is maintained for Windows XP and later unsupported versions of Windows as of 2024. MyPal, a fork of Firefox 68, is also being actively maintained for Windows XP.
Reception
On release, Windows XP received critical acclaim. CNET described the operating system as being "worth the hype", considering the new interface to be "spiffier" and more intuitive than previous versions, but feeling that it may "annoy" experienced users with its "hand-holding". XP's expanded multimedia support and CD burning functionality were also noted, along with its streamlined networking tools. The performance improvements of XP in comparison to 2000 and Me were also praised, along with its increased number of built-in device drivers in comparison to 2000. The software compatibility tools were also praised, although it was noted that some programs, particularly older MS-DOS software, may not work correctly on XP because of its differing architecture. They panned Windows XP's new licensing model and product activation system, considering it to be a "slightly annoying roadblock", but acknowledged Microsoft's intent for the changes. PC Magazine provided similar praise, although noting that a number of its online features were designed to promote Microsoft-owned services, and that aside from quicker boot times, XP's overall performance showed little difference over Windows 2000. Windows XP's default theme, Luna, was criticized by some users for its childish look.
Despite extended support for the main Windows XP ending in 2014, many users – including some enterprises – were reluctant to move away from an operating system they viewed as a stable known quantity despite the many security and functionality improvements in subsequent releases of Windows. Windows XP's longevity was viewed as testament to its stability and Microsoft's successful attempts to keep it up to date, but also as an indictment of its direct successor's perceived failings.
Market share
According to web analytics data generated by Net Applications, Windows XP was the most widely used operating system until August 2012, when Windows 7 overtook it (Windows 7 was itself later overtaken by Windows 10), while StatCounter indicates this happened almost a year earlier. In January 2014, Net Applications reported a market share of 29.23% of "desktop operating systems" for XP (when XP was introduced there was not a separate mobile category to track), while W3Schools reported a share of 11.0%.
In most regions or continents, Windows XP market share on PCs, as a fraction of the total Windows share, has since fallen below 1% (0.5% in Africa). XP retained a double-digit market share into the 2020s in a few countries, such as Armenia, where it was still over 50% in 2021.
Source code leak
On September 23, 2020, source code for Windows XP with Service Pack 1 and Windows Server 2003 was leaked onto the imageboard 4chan by an unknown user. Anonymous users managed to compile the code, and a Twitter user posted videos of the process on YouTube, proving that the code was genuine. The videos were later removed on copyright grounds by Microsoft. The leak was incomplete, as it was missing Winlogon and some other components. The original leak was spread using magnet links and torrent files whose payload originally included Server 2003 and XP source code and which was later updated with additional files, among which were previous leaks of Microsoft products, its patents, media about conspiracy theories on Bill Gates by anti-vaccination movements, and an assortment of PDF files on different topics.
Microsoft issued a statement stating that it was investigating the leaks.
See also
BlueKeep (security vulnerability)
Comparison of operating systems
History of operating systems
List of operating systems
References
Further reading
External links
Windows XP End of Support
2001 software
Products and services discontinued in 2014
Products and services discontinued in 2019
XP
IA-32 operating systems
Obsolete technologies
Products introduced in 2001
Microsoft Windows | Windows XP | Technology | 7,765 |
37,147,913 | https://en.wikipedia.org/wiki/Tuber%20polyspermum | Tuber polyspermum is a species of truffle in the family Tuberaceae. Found in China, it was described as new to science in 2011. Fruit bodies of the truffle are small and brown, measuring up to in diameter.
Taxonomy
The species was first described scientifically in the journal Mycotaxon in 2011. The type collection—made by local farmers in Kunming, China (Yunnan Province) in 2002—was found in soil under Yunnan Pine (Pinus yunnanensis). Molecular phylogenetic analysis of ribosomal DNA sequences suggests that the species occupies a distinct clade in the genus Tuber. The specific epithet polyspermum refers to the large quantity of spores present in the fruit bodies.
Description
The truffles are brown to grayish-brown in color, measure in diameter, and usually have an umbilicate (navel-shaped) depression at the base. The peridium (outer skin) is 150–200 μm thick and comprises two distinct layers of tissue. The outer tissue layer, 50–100 μm thick, is made of somewhat angular to roughly spherical light brown cells that are typically 7.5–15 wide. The inner layer (100–150 μm thick) is a type of tissue known as a textura intricata, consisting of irregularly interwoven hyphae. These hyphae are thin-walled, hyaline (translucent), and 2.5–5 μm thick.
The internal spore-bearing tissue of the truffle, the gleba, is brown to purple-brown in mature specimens. It has a few large white veins running through it, and contains many spores. The asci (spore-bearing cells) are spherical (or nearly so), usually contain between one and four spores (although rarely there are five spores), and measure 65–85 by 45–60 μm. The roughly elliptical spores are initially hyaline, but become brown to yellowish-brown in age. They measure 25–37.5 by 20–25 and feature a mesh-like surface ornamentation with ridges up to 2.5–3 μm high.
Tuber polyspermum very closely resembles the North American species T. lyonii in both macro- and microscopic characteristics; the only differences noted by the authors were the umbilicate depression and the darker gleba of T. polyspermum. This difference alone would not have been enough to justify creating a new species, but the molecular analysis revealed that the North American and Chinese species were clearly distinct.
References
External links
Fungi described in 2011
Fungi of China
polyspermum
Truffles (fungi)
Fungus species | Tuber polyspermum | Biology | 546 |
5,748,852 | https://en.wikipedia.org/wiki/Iron%28II%29%20bromide | Iron(II) bromide refers to inorganic compounds with the chemical formula FeBr2(H2O)x. The anhydrous compound (x = 0) is a yellow or brownish-colored paramagnetic solid. The tetrahydrate (x = 4) is also known and is a pale-colored solid. These compounds are common precursors to other iron compounds.
Structure
Like most metal halides, FeBr2 adopts a polymeric structure consisting of isolated metal centers cross-linked with halides. It crystallizes with the CdI2 structure, featuring close-packed layers of bromide ions, between which are located Fe(II) ions in octahedral holes. The packing of the halides is slightly different from that for FeCl2, which adopts the CdCl2 motif. The tetrahydrates FeX2(H2O)4 (X = Cl, Br) have similar structures, with octahedral metal centers and mutually trans halides.
Synthesis and reactions
FeBr2 is synthesized by adding iron powder to a methanol solution of concentrated hydrobromic acid. This yields the methanol solvate [Fe(MeOH)6]Br2 together with hydrogen gas. Heating the methanol complex in a vacuum then gives pure FeBr2.
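The synthesis described above corresponds to the following overall equations (a schematic summary of the text, with the stoichiometry of six methanol ligands taken from the formula of the solvate and reaction conditions simplified):

Fe + 2 HBr + 6 CH3OH → [Fe(CH3OH)6]Br2 + H2

[Fe(CH3OH)6]Br2 → (heating in vacuum) → FeBr2 + 6 CH3OH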
FeBr2 reacts with two equivalents of tetraethylammonium bromide to give [(C2H5)4N]2FeBr4. FeBr2 reacts with bromide and bromine to form the intensely colored, mixed-valence species [FeBr3Br9]−.
Magnetism
FeBr2 exhibits strong metamagnetism at 4.2 K and has long been studied as a prototypical metamagnetic compound.
References
Bromides
Iron(II) compounds
Metal halides | Iron(II) bromide | Chemistry | 373 |
25,161,777 | https://en.wikipedia.org/wiki/HD%2044219%20b | HD 44219 b is an extrasolar planet which orbits the G-type main sequence star HD 44219, located approximately 164 light years away in the constellation Monoceros. The planet has at least three-fifths the mass of Jupiter and takes 1.29 years to orbit the star at a semimajor axis of 1.18 AU. Unlike most other known exoplanets, its eccentricity is not known; as is typical for planets detected by radial velocity, its inclination is also unknown. This planet was detected by HARPS on October 19, 2009, together with 29 other planets.
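As a rough consistency check (assuming a circular orbit and neglecting the planet's mass), Kepler's third law in solar units relates the quoted period and semimajor axis to the host star's mass: M ≈ a^3 / P^2 = (1.18 AU)^3 / (1.29 yr)^2 ≈ 0.99 solar masses, consistent with a Sun-like, G-type host star.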
References
Exoplanets discovered in 2009
Exoplanets detected by radial velocity
Giant planets
Monoceros | HD 44219 b | Astronomy | 143 |
35,197,358 | https://en.wikipedia.org/wiki/Slut-shaming | Slut-shaming is the practice of criticizing people who violate expectations of behavior and appearance regarding issues related to sexuality. It may also be used in reference to gay men, who may face disapproval for promiscuous sexual behaviors. Gender-based violence primarily affecting women can be a result of slut-shaming. The term is commonly used to reclaim the word slut and empower women to have agency over their own sexuality.
Examples of slut-shaming include criticism or punishment for: violating dress code policies by dressing in sexually provocative ways; requesting access to birth control; having premarital, extramarital, casual, or promiscuous sex; or engaging in prostitution or other sex work. It can also include being victim-blamed for being raped or otherwise sexually assaulted.
Definitions and characteristics
Slut-shaming involves criticizing women for their transgression of accepted codes of sexual conduct, i.e., admonishing them for behavior, attire or desires that are more sexual than society finds acceptable. Author Jessalynn Keller stated, "The phrase [slut-shaming] became popularized alongside the SlutWalk marches and functions similarly to the 'War on Women,' producing affective connections while additionally working to reclaim the word 'slut' as a source of power and agency for girls and women."
Slut-shaming may be practiced by men and women. Women who slut-shame other women continuously apply unfavorable sexual double standards. The term is also used to describe victim blaming for rape and other sexual assault. This blaming is done by stating the crime was caused (either in part or in full) by the woman wearing revealing clothing or acting in a sexually provocative manner, before refusing consent to sex, thereby absolving the perpetrator of guilt. Sexually lenient individuals can be at risk of social isolation.
Kennair et al. (2023) found no signs of a sexual double standard in short-term or long-term mating contexts, nor in choosing friends, except that, contrary to expectations, women's self-stimulation was more acceptable than men's.
The action of slut-shaming can be a form of social punishment and is an aspect of sexism, as well as female intrasexual competition. Slut-shaming is a form of intrasexual competition because the term "slut" reduces the value of a woman. Being termed a "slut" is against a woman's gender norms.
Researchers from Cornell University found that sentiments similar to slut-shaming appeared in a nonsexual, same-sex friendship context as well. The researchers had college women read a vignette describing an imaginary female peer, "Joan", then rate their feelings about her personality. To one group of women, Joan was described as having two lifetime sexual partners; to another group, she had had 20 partners. The study found that women—even women who were more promiscuous themselves—rated the Joan with 20 partners as "less competent, emotionally stable, warm, and dominant" than the Joan with two.
History
There is no documented date of origin for the term slut-shaming; nor the act of it. Rather, although the act of slut-shaming has existed for centuries, discussion of it has grown out of social and cultural relations and the trespassing of boundaries of what is considered normative and acceptable behavior. While the origin is unknown, by the late 2000s, the term was popular enough to merit usage in newspaper articles. American writer Katha Pollitt, in an article published in March 2008, used the term the following way: “the abstinence only, father-knows-best, slut-shaming crabbed misogyny of the Republican right.”
Before the late 2000s, the term “slutty” was sometimes used to refer to behavior considered to be unacceptable such as the 2000s trend of showing off one’s thong underwear to create a whale tail. In 2005, Suzy Menkes, a veteran journalist covering the fashion industry, considered the act of revealing thongs above one's clothing to be “slutty chic” and hoped that the act would lose popularity.
Literary characters who were killed or died by suicide as a result of their sexual choices include Ophelia of Hamlet; Little Em'ly of David Copperfield (1850); Hester Prynne of The Scarlet Letter (1850); Madame Bovary (1856); Anna Karenina (1878); Daisy Miller (1878); Tess of the d'Urbervilles (1891); Lily Bart of House of Mirth (1905); and Charity Royall in Summer (1917).
In 1892, Canadian writer E. Pauline Johnson criticized the 1887 novel An Algonquin Maiden for killing its protagonist and having male characters posthumously label her a "squaw," a racial and sexual slur that displayed "glaring accusations against her virtue," which Johnson felt was undeserved.
Beginning in the 1960s, second-wave feminism contributed significantly to the definition and act of slut-shaming. Tracing back to the Industrial Revolution and the Second World War, men made up a majority of the labor force while women were socialized and taught to embrace the cult of domesticity and homemaking. Author Emily Poole argues that the sexual revolution of the 1960s and 1970s increased the rate of both birth control use and premarital sex. Moreover, feminist writers during the 1960s and 1970s such as Betty Friedan, Gloria Steinem, and Kate Millett encouraged women to be more open about their sexuality in public settings.
Slut-shaming has correlation to an individual's socio-economic status, which is characterized by wealth, education, and occupation. In the 18th century, "slut" was a common term used by men and upper-class women to degrade lower-class female servants. The context behind upper-class women and men calling their servants a "slut" includes when the servants were being sexually assaulted by their male employers.
The Internet
Slut-shaming is prevalent on social media platforms, including the most commonly used: YouTube, Instagram, Twitter, and Facebook. Slut-shaming has occurred on Facebook in controversial exchanges between users that have resulted in convictions to menace, harass and cause offense.
In 2014, the Pew Research Center reported that the most common targets of harassment on the Internet are often young women, citing that 50% of young female respondents had been called offensive names or otherwise shamed online. In particular, women 18 to 24 years of age experienced severe harassment at high rates: 26% reported being stalked online, while 25% were the targets of online sexual harassment.
In the Women Studies International Forum, researcher Jessica Megarry used the Twitter hashtag campaign #mencallmethings as a case study of online sexual harassment. Women used the hashtag to report harassment they received from men, including insults related to appearance, name calling, rape threats, and death threats.
Media
In 2011, the SlutWalk protest march originated in Toronto in response to an incident when a Toronto Police officer told a group of students that they could avoid sexual assault by not dressing like "'sluts'".
Amber Rose's second annual walk in Los Angeles in 2016 had "several hundred" participants. A similar event occurred in Washington DC in 2014.
The Slut Walk movement has embraced the slut-shame label and has engaged in an act of resignification. Ringrose et al. call the Slut Walk a "collective movement" where the focus goes back to the perpetrator and no longer rests on the victim. This act of resignification comes from the work of feminist scholar Judith Butler. In her 1997 work, she argued that labels do not just name and marginalize individuals to categories, but also open up an opportunity for resistance.
Krystal Ball characterized the comments of Rush Limbaugh during the Rush Limbaugh–Sandra Fluke controversy as follows: "If you are a woman who stands up for your rights, you are a slut and your parents should be ashamed of you and we should all have the right to view your sex tapes online. This type of despicable behavior is part and parcel of a time-worn tradition of Slut-Shaming. When women step out line [sic], they are demeaned and degraded into silence. If you say Herman Cain sexually harassed you, you are a slut. If you say Supreme Court Justice Clarence Thomas sexually harassed you, you are a slut."
Slut-shaming has been used as a form of bullying on social media, with some people using revenge pornography tactics to spread intimate photos without consent. In 2012, a teenager from California, Audrie Pott, was sexually assaulted by three boys at a party. She committed suicide eight days after photos of her being assaulted were distributed among her peer group.
James Miller, editor-in-chief of the Ludwig von Mises Institute of Canada, wrote a controversial article defending slut shaming. The article was later taken down, but still received criticism from some libertarians, such as Gina Luttrell of Thoughts on Liberty, an all-female libertarian blog.
Comedians Krystyna Hutchinson and Corinne Fischer of Sorry About Last Night host a podcast entitled Guys We Fucked, The Anti-slut shaming podcast. This podcast has over 200,000 listeners on each episode that is on SoundCloud. The podcast exists to de-stigmatize discussing sex so that slut-shaming becomes less of an issue. Hutchinson told The Huffington Post: "We want to make people feel more comfortable in their own skin. We just got a message from a girl from New Delhi, India, about how she loves the podcast because it makes her feel like it's OK to be comfortable with your sexuality and enjoy sex. And that made me so happy".
Activism
Activism against slut-shaming takes place worldwide. Participants have covered their bodies in messages reading "Don't Tell Me How to Dress" and "I am not a slut but I like having consensual sex" and march under a giant banner with the word slut on it. Activism has occurred in Vancouver, New York City, Rio, Jerusalem, Hong Kong, and others.
In 2008, hundreds of South African women protested at the local taxi rank wearing miniskirts and t-shirts that read, "Pissed-Off Women" after a taxi driver and multiple hawkers confronted a young girl about wearing a short denim miniskirt and penetrated her with their fingers, calling her "slut" repeatedly. Protesters wanted to make their message clear; they wanted men to stop harassing women, no matter how short their skirts were and that no matter how short it may be, it is never an invitation.
After the gang rape of an unconscious 16-year-old girl in Steubenville, Ohio, August 2012, football players spread videos of the assault to other classmates, some of whom posted the videos to Twitter and Instagram. The pictures and video were later removed by authorities; however, that did not stop people from hash-tagging "Whore status" or "I have no sympathy for whores" in their tweets. Members of the collective Anonymous reported names of the rapists and classmates who spread the footage to local authorities. They took to the streets and internet requesting help from the community to bring justice to the Jane Doe who was raped.
Members of The Arts Effect All-Girl Theater Company have developed a play, Slut: The Play, in which they address the damaging impact of slut-shaming and slut culture. The creators state that their play "is a call to action – a reminder" that slut-shaming is happening every day, almost everywhere. Slut is inspired by real-life experiences of 14- to 17-year-old girls from New York, New Jersey, Connecticut, and Pennsylvania. The play was shown at the 2013 New York Fringe Festival.
Leora Tanenbaum, author of Slut! Growing Up Female with a Bad Reputation, also issued a statement on the production and on slut-shaming in general.
After experiencing slut-shaming firsthand, Olivia Melville, Paloma Brierly Newton and approximately a dozen other Australian women founded the organization, Sexual Violence Won't Be Silenced, on August 25, 2015. The association seeks to raise awareness of cyber-bullying and online sexual violence. The founders also launched a petition to the Australian government, requesting that they better train and educate law enforcement officers on how to prevent and punish violent harassment on social media.
Among gay and bisexual men
Gay and bisexual men are also victimized through slut-shaming because of their sexual activity. Research has found that LGBT students are more likely to be bullied and called sluts than heterosexual students. Researchers discussed how these negative experiences of victimization by peers, friends and strangers can lead to physical harm, social shaming, and loss of friendships. Unlike heterosexual people, LGBT people are more likely to learn about safe sex practices from friends. Gay and bisexual men are at the highest risk of HIV. Slut-shaming has been cited as an obstacle to men who have sex with men accessing ways to prevent infection by HIV, such as PrEP. Most of the education that young gay and bisexual men receive about safe sex practices is learned from friends, the Internet, hearsay, or trial and error.
Criticism of non-heterosexual men's sexual activity may or may not be delivered in a humorous context. Judgmentalism occurs when someone remarks on gay men's sexual risk behavior or on their having multiple sex partners, implying that their behavior is "slutty" and dirty.
Street harassment includes cat-calling, victim blaming, and slut-shaming. Compared with the terms applied to women, such judgmentalism toward men is less pejorative, and slut-shaming may even carry a positive connotation for men depending on the context and the relationship.
Among Black women
Though slut-shaming affects women from different racial, cultural, and economic backgrounds, black women are disproportionately affected by it. This can be attributed to both misogynoir and historical myths, which have worked together to dictate much of the public perception of black women. Because of these biases, black women face greater prejudice based on an often false perception of their sexual activity. Women from low-income backgrounds are also at greater risk of being slut-shamed, and black women experience financial disenfranchisement in comparison to their white peers, which in turn adds to the unbalanced nature of the slut-shaming they experience.
Myths about black women were established and cultivated during the time of slavery and onward to further oppress black women and justify committing acts of rape and sexual assault against them. One of the myths popularized was the myth of the hyper-sexual black woman or the Jezebel. This myth, also known as, the myth of promiscuity, popularized the idea that black women were inherently more sexually charged and deviant than their white counterparts. Black women were determined by the Western world to have a wild, promiscuous nature and immoral, loose, and impure practices and values. This myth was then used as a justification for violating black women with no consequence.
Furthermore, black women were regularly forced into sexualized positions; for instance, during slave auctions they were forcefully stripped of all clothes and required to be paraded around nude for the masses. These same involuntary actions would then be spun and used by white society in order to shame black women and reinforce the ideas created by the myth of promiscuity. This forceful sexualization of black women only furthered the ideologies prescribed to them by white society in a process of dehumanization and shaming that would be continued throughout history in new, inventive ways.
Today the myth of promiscuity or the “Jezebel” still permeates the black female experience in a myriad of ways. For instance, hip hop's portrayal of black women is hypersexualized and deeply stereotypes them both in character and in physicality. The industry not only perpetuated the already existing ideas of the myth of promiscuity but additionally, it directly correlates the physical characteristics often found in black women such as a large behind which is often addressed in hip hop music, with having a sexually promiscuous nature. Although these ideas were not created by the hip hop/rap industry and have been around since the time of enslavement, they are popularized and reinforced by the overwhelming sexualization of specifically black women within the music being created today. Hip hop contributes to the overexposure to slut-shaming experienced by black women, manifest both verbally through lyrics used and visually through imagery in music videos and album covers. The imagery that accompanies overtly sexual lyrics is often of the stereotypical normative black female body often adorned in minimal clothing. This imagery of black femininity is then streamlined into the media to be absorbed by the public, therein altering the public's perception of both the standard black female body and the behaviors of black women in general. The effect of this reinforces the public perception that black women are inherently hypersexual beings.
Yet another contributor to the high rates at which black women encounter slut-shaming is income inequality. Slut-shaming does not permeate high-status circles of women at nearly as high a rate as it does communities of low-income women. Because individuals in powerful positions within the existing ruling class have the ability to dictate which activities, attire, and body standards are deemed respectable, they can remove themselves from experiencing slut-shaming much more readily than their marginalized, low-income, BIPOC counterparts. On the flip side, black women are more likely to face poverty because they are statistically more likely to experience unemployment, receive lower pay when employed, and have less chance of rising in position compared with their white female counterparts, let alone their white male counterparts. These systems of misogynoir therefore place black women in a financially disadvantaged state, and because those on lower incomes are more likely to experience slut-shaming, black women are further positioned to experience slut-shaming at a greater rate. Furthermore, white and upper-class women are often participants in the shaming of fellow women, especially black women, as this practice allows them to maintain the existing hierarchical dynamics wherein white, high-class women are a symbol of ideals of "true womanhood" and purity while black women are ascribed characteristics of deviance and sexual immorality.
Although the myth of promiscuity was constructed by white society and spread through all social orders including, political, economic, and educational, today those same ideals can be seen distinctly within the interior social structures of the black community. Black women, in order to gain any social standing, had to do everything in their power to remove themselves from the idea of being promiscuous. For black women distancing themselves from all forms of perceived sensuality allowed them to a rise in social positioning. In turn, they forced themselves into a strict modesty culture in order to assimilate and rise in standing within white social structures. Those same women who conformed to the modesty standards set by white society, and did so in order to shirk their preconceived sexual nature that was also ascribed to them by white society seem to have little to no sympathy for black women within the community that chose not to assimilate into this system of forced modesty. Therein the process of slut-shaming is even perpetuated internally from black female individuals to one another.
Among lesbian and bisexual women
Along with gay and bisexual men, lesbian and bisexual women are also key victims of slut-shaming and bullying. Bisexual women are mainly bullied by other women because of their attraction to more than one gender, which is perceived as keeping their "options open". At the same time, bisexual and lesbian women are fetishised as a result of the porn industry, and because of this fetishisation some people see them only as a "porn category"; lesbian porn is one of the most searched-for categories in the industry. Lesbian women have struggled with straight men trying to "convert" them to a different sexuality, as that group of men views them as a porn category or a sexual object, partly because some groups understand lesbianism only through what they see in the media. Women in the porn industry are putting on a show to entertain, so they are presented as feminine, promiscuous seductresses, which does not reflect reality. Relatedly, because lesbianism is considered to exist outside of the male gaze, some men are frustrated that they are unable to pursue women in these groups. Other people view members of the LGBTQ+ community as predatory; in particular, some women assume that lesbians will actively pursue them because of their orientation and stereotype them as predatory. Toward bisexual and lesbian women, people of both genders vent frustration rooted in homophobia, misleading media portrayals, and these women's nonconformity to societal stereotypes. This does not apply only to lesbian women: gay men are also stereotyped as predatory by some groups of heterosexual males.
See also
Fahisha (Islam)
Female intrasexual competition
Free the nipple
Haya (Islam)
Honor killing
Madonna–whore complex
Post-assault treatment of sexual assault victims
Sexual bullying
Victim blaming
References
External links
Harassment and bullying
Feminism and sexuality
Feminist terminology
Feminist theory
Misogyny
Sexuality and society
Prejudice and discrimination
Birth control
Victimology
Causal fallacies
Sexual harassment
Shame | Slut-shaming | Biology | 4,547 |
16,710,409 | https://en.wikipedia.org/wiki/Sandcrete | Sandcrete is a yellow-white building material made from a binder (typically Portland cement), sand in a ratio of circa 1:8, and water. Sometimes other ingredients, such as pozzolanas and rice husk ash, may be added to reduce the amount of expensive Portland cement. Sandcrete is similar to, but weaker than, mortar, for which the ratio is circa 1:5. Soil cement and landcrete are similar materials that use other types of soil; hydraform blocks are compressed, stabilized earth blocks.
Sandcrete is usually used as hollow rectangular blocks similar to concrete masonry units, with hollows that run from top to bottom and occupy around one third of the volume of the block. The blocks are joined together with mortar.
Strength and usage
The final compressive strength of sandcrete can be as high as 4.6 N/mm², which is much less than concrete's 40 N/mm². Sandcrete is unsuitable for load-bearing columns, and is mainly used for walls, or for foundations if no suitable alternative is available. As material for walls, its strength is less than that of fired clay bricks, but sandcrete is considerably cheaper.
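As a rough, hedged illustration of that gap, the sketch below estimates the crushing load of a single hollow block from a quoted compressive strength. The 450 mm × 150 mm bearing face and the one-third hollow fraction are illustrative assumptions only (the hollow fraction echoes the volume figure mentioned above), not values taken from any source.

```python
# Rough estimate of the axial load a single hollow block can carry.
# Block face dimensions and the hollow fraction are illustrative assumptions;
# the strength figures are the ones quoted in the text above.

def block_capacity_kn(length_mm, width_mm, hollow_fraction, strength_n_mm2):
    """Crushing load of a hollow block, in kilonewtons."""
    gross_area = length_mm * width_mm                # bearing face of the block
    net_area = gross_area * (1 - hollow_fraction)    # hollows run top to bottom,
                                                     # so area fraction = volume fraction
    return strength_n_mm2 * net_area / 1000.0        # N -> kN

# Hypothetical 450 mm x 150 mm block with hollows taking ~1/3 of the section.
print(block_capacity_kn(450, 150, 1/3, 4.6))    # sandcrete upper bound: ~207 kN
print(block_capacity_kn(450, 150, 1/3, 40.0))   # typical concrete: ~1800 kN
```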
Sandcrete is the main building material for walls of single-storey buildings (such as houses and schools) in countries such as Ghana and Nigeria.
Research has shown that partially replacing Portland cement with organic ash gives better results than simply using less Portland cement.
Coarse aggregate
Addition of coarse aggregates has been tried, since this is a cheap way to increase compressive strength, but since the cement content of sandcrete is small, so is the amount of water that is added to the sand/cement mix to cure it. Adding more solid materials makes the mix much less fluid, making it difficult to cast into blocks.
References
Soil-based building materials
Cement
Masonry | Sandcrete | Engineering | 384 |
2,379,726 | https://en.wikipedia.org/wiki/Calcification | Calcification is the accumulation of calcium salts in a body tissue. It normally occurs in the formation of bone, but calcium can be deposited abnormally in soft tissue, causing it to harden. Calcifications may be classified on whether there is mineral balance or not, and the location of the calcification. Calcification may also refer to the processes of normal mineral deposition in biological systems, such as the formation of stromatolites or mollusc shells (see Biomineralization).
Signs and symptoms
Calcification can manifest itself in many ways in the body depending on the location.
In the pulpal structure of a tooth, calcification often presents asymptomatically, and is diagnosed as an incidental finding during radiographic interpretation. Individual teeth with calcified pulp will typically respond negatively to vitality testing; teeth with calcified pulp often lack sensation of pain, pressure, and temperature.
Causes of soft tissue calcification
Calcification of soft tissue (arteries, cartilage, heart valves, etc.) can be caused by vitamin K2 deficiency or by poor calcium absorption due to a high calcium/vitamin D ratio. This can occur with or without a mineral imbalance.
A common misconception is that calcification is caused by excess amount of calcium in diet. Dietary calcium intake is not associated with accumulation of calcium in soft tissue, and calcification occurs irrespective of the amount of calcium intake.
Intake of excessive vitamin D can cause vitamin D poisoning and excessive intake of calcium from the intestine which, when accompanied by a deficiency of vitamin K (perhaps induced by an anticoagulant), can result in calcification of arteries and other soft tissue. Such metastatic soft tissue calcification is mainly in tissues containing "calcium catchers" such as elastic fibres or mucopolysaccharides. These tissues especially include the lungs (pumice lung) and the aorta.
Mineral balance
Dystrophic calcification, without a systemic mineral imbalance.
Metastatic calcification, a systemic elevation of calcium levels in the blood and all tissues.
Forms
Calcification can be pathological or a standard part of the aging process. Nearly all adults show calcification of the pineal gland.
Location
Extraskeletal calcification, e.g. calciphylaxis
Brain, e.g. primary familial brain calcification (Fahr's syndrome)
Choroid plexus usually in the lateral ventricles
Tumor calcification
Arthritic bone spurs
Kidney stones
Gall stones
Heterotopic bone
Tonsil stones
Pulp stone
Breast disease
In a number of breast pathologies, calcium is often deposited at sites of cell death or in association with secretions or hyalinized stroma, resulting in pathologic calcification. For example, small, irregular, linear calcifications may be seen via mammography in ductal carcinoma in situ, where they produce visible radio-opacities.
Arteriosclerotic calcification
One of the principal causes of arterial stiffening with age is vascular calcification. Vascular calcification is the deposition of mineral in the form of calcium phosphate salts in the smooth muscle-rich medial layer of large arteries including the aorta. DNA damage, especially oxidative DNA damage, causes accelerated vascular calcification. Vascular calcification could also be linked to the chronic leakage of blood lysates into the vessel wall since red blood cells have been shown to contain a high concentration of calcium.
Diagnosis
For the diagnosis of vascular calcification, ultrasound and radiography of the affected area are sufficient.
Treatment
Treatment of a high calcium/vitamin D ratio may most easily be accomplished by intake of more vitamin D, provided vitamin K levels are normal. Intake of too much vitamin D would be evident as anorexia (loss of appetite) or soft tissue calcification.
See also
Calcinosis
Marine biogenic calcification
Monckeberg's arteriosclerosis
Pineal gland
References
Calcium
Histopathology
Biomineralization | Calcification | Chemistry | 853 |
3,312,150 | https://en.wikipedia.org/wiki/Pilot%20chute | A pilot chute is a small auxiliary parachute used to deploy the main or reserve parachute. The pilot chute is connected by a bridle to the deployment bag containing the parachute. Pilot chutes are a critical component of all modern skydiving and BASE jumping gear. Pilot chutes are also used as a component of spacecraft such as NASA's Orion.
Deployment methods
Spring-loaded
The spring-loaded pilot chute is used in conjunction with a ripcord. When the user pulls the ripcord, the container opens, allowing the pilot chute, which is packed compressed around a large spring, to jump out. Spring-loaded pilot chutes are mainly used to deploy reserve parachutes. They are often also used to deploy the main parachute on skydiving students' parachute equipment, and are commonly used for drogue parachutes on cars or on aircraft such as the B-52 bomber.
Pull-out
The pull-out and throw-out pilot chutes are identical in construction; the difference is in their connection to the handle and the bridle, and in the way they are packed.
With the pull-out system, the pilot chute is packed inside the container. The activation handle is attached to a lanyard, which in turn is attached to the closing pin. The lanyard is also attached to the base of the pilot chute, at the point of connection to the bridle. When the user pulls the handle, the closing pin is pulled, opening the container. Continuing the pull, the user pulls the pilot chute out of the container and into the airstream, at which point the pilot chute inflates and pulls the main parachute out of the container.
Throw-out
The throw-out pilot chute is the most popular type in use today. The pilot chute is packed in a pouch at the bottom of the container. The handle is attached to the apex of the pilot chute. When the user grabs the handle and throws the pilot chute into the airstream, the bridle extends, pulling the closing pin and opening the container; as the pilot chute continues in the airstream it extracts the deployment bag containing the main parachute from the container. The pull-out pilot chute and the throw-out pilot chute were both invented by Bill Booth.
Drogues
Drogues used on tandem-systems are basically large throw-out pilot chutes, but the bridle is anchored on the container with a release system. When the user throws the drogue, the drogue inflates and the bridle extends. The deployed drogue slows down the free-fall speed of the tandem pair. When the user wants to open the parachute, they pull a ripcord, releasing the bridle and allowing the drogue to open the main container.
Types
Collapsible
With the advent of smaller higher performance canopies, the drag induced by trailing a pilot chute behind a canopy has become a significant concern. To reduce this drag some pilot chute designs of the Pull-out and Throw-out variety are collapsible. Once deployment of the parachute has occurred a kill line running up the center of the pilot chute bridle becomes loaded. This kill line pulls down on the apex of the pilot chute collapsing it and greatly reducing its drag on the canopy.
Some designs replace the kill line with a fixed length of shock cord, which stretches when the pilot chute is moving quickly, allowing it to inflate. When the pilot slows down (after opening a canopy, for example) the shock cord retracts, killing the pilot chute. While this avoids the possibility of pilot-in-tow malfunction due to an un-cocked pilot, it has the disadvantage of requiring significant airspeed to operate. This could cause a delayed deployment if used for a BASE or balloon jump, or any other jump with a low speed deployment. This type may also begin to re-inflate behind a highly loaded, fast moving canopy, negating the usefulness of a collapsible pilot chute.
Vented
Pilot chutes for BASE jumping gear are typically larger than skydiving pilot chutes, and often include air vents on the surface. Research on the development of early round parachutes showed that vents can increase stability and reduce oscillation of the parachute. BASE jumpers often use pilot chutes with either apex vents, or ring vents.
See also
Index of aviation articles
Ballistic parachute
Cirrus Airframe Parachute System
Drogue parachute
Paragliding
Parachute
Parachute landing fall
Parachuting
Rogallo wing
References
Parachuting
Aerospace engineering | Pilot chute | Engineering | 954 |
217,387 | https://en.wikipedia.org/wiki/Mite | Mites are small arachnids (eight-legged arthropods). Mites span two large orders of arachnids, the Acariformes and the Parasitiformes, which were historically grouped together in the subclass Acari. However, most recent genetic analyses do not recover the two as each other's closest relative within Arachnida, rendering the group non-monophyletic. Most mites are tiny, less than in length, and have a simple, unsegmented body plan. The small size of most species makes them easily overlooked; some species live in water, many live in soil as decomposers, others live on plants, sometimes creating galls, while others are predators or parasites. This last type includes the commercially destructive Varroa parasite of honey bees, as well as scabies mites of humans. Most species are harmless to humans, but a few are associated with allergies or may transmit diseases.
The scientific discipline devoted to the study of mites is called acarology.
Evolution and taxonomy
The mites are not a defined taxon; the name is used for two distinct groups of arachnids, the Acariformes and the Parasitiformes. The phylogeny of the Acari has been relatively little studied, but molecular information from ribosomal DNA is being extensively used to understand relationships between groups. The 18S rRNA gene provides information on relationships among phyla and superphyla, while the ITS2, 18S ribosomal RNA and 28S ribosomal RNA genes provide clues at deeper levels.
Taxonomy
Superorder Parasitiformes – ticks and a variety of mites
Opilioacarida – a small order of large mites that superficially resemble harvestmen (Opiliones), hence their name
Holothyrida - small group of predatory mites native to former Gondwana landmasses
Ixodida – ticks
Mesostigmata – a large order of predatory and parasitic mites
Trigynaspida - large, diverse order
Monogynaspida - diverse order of parasitic and predatory mites
Sejida - small order of mites containing five families
Superorder Acariformes – the most diverse group of mites
Endeostigmata (probably paraphyletic)
Eriophyoidea – gall mites and relatives
Trombidiformes – plant parasitic mites (spider mites, peacock mites, red-legged earth mites, etc.), snout mites, chiggers, hair follicle mites, velvet mites, water mites, etc.
Sphaerolichida - small order of mites containing two families
Prostigmata - large order of sucking mites
Sarcoptiformes
Oribatida – oribatid mites, beetle mites, armored mites (formerly known as Cryptostigmata)
Astigmatina – stored product, fur, feather, dust, and human itch mites, etc.
Fossil record
The mite fossil record is sparse, due to their small size and low preservation potential. The oldest fossils of acariform mites are from the Rhynie Chert, Scotland, which dates to the early Devonian, around 410 million years ago while the earliest fossils of Parasitiformes are known from amber specimens dating to the mid-Cretaceous, around 100 million years ago. Most fossil acarids are no older than the Tertiary (up to 65 mya).
Phylogeny
Members of the superorders Opilioacariformes and Acariformes (sometimes known as Actinotrichida) are mites, as well as some of the Parasitiformes (sometimes known as Anactinotrichida). Recent genetic research has suggested that Acari is polyphyletic (of multiple origins). A study using molecular data from the mitochondria and nucleus recovered Acariformes as sister to the Solifugae (camel spiders) and Parasitiformes as sister to the Pseudoscorpionida, with other arachnid orders separating these two groupings on the phylogenetic tree, as shown below.
However, a few phylogenomic studies have found strong support for monophyly of Acari and a sister relationship between Acariformes and Parasitiformes, although this finding has been questioned, with other studies suggesting that this likely represents a long branch attraction artefact.
Anatomy
External
Mites are tiny members of the class Arachnida; most are minute even as adults, though some species are larger. The body plan has two regions, a cephalothorax (with no separate head) or prosoma, and an opisthosoma or abdomen. Segmentation has almost entirely been lost and the prosoma and opisthosoma are fused, only the positioning of the limbs indicating the location of the segments.
At the front of the body is the gnathosoma or capitulum. This is not a head and does not contain the eyes or the brain, but is a retractable feeding apparatus consisting of the chelicerae, the pedipalps and the oral cavity. It is covered above by an extension of the body carapace and is connected to the body by a flexible section of cuticle.
Two-segmented chelicerae are the ancestral condition in Acariformes, but in more derived groups they are single-segmented. Three-segmented chelicerae are the ancestral condition in Parasitiformes, but they have been reduced to just two segments in more derived groups. The pedipalps differ between taxa depending on diet; in some species the appendages resemble legs while in others they are modified into chelicerae-like structures. The oral cavity connects posteriorly to the mouth and pharynx.
Most mites have four pairs of legs (two pairs in Eriophyoidea), each with six segments, which may be modified for swimming or other purposes. The dorsal surface of the body is clad in hardened tergites and the ventral surface by hardened sclerites; sometimes these form transverse ridges. The gonopore (genital opening) is located on the ventral surface between the fourth pair of legs. Some species have one to five median or lateral eyes but many species are blind, and slit and pit sense organs are common. Both body and limbs bear setae (bristles) which may be simple, flattened, club-shaped or sensory. Mites are usually some shade of brown, but some species are red, orange, black or green, or some combination of these colours.
Many mites have stigmata (openings used in respiration). In some mites, the stigmata are associated with peritremes: paired, tubular, elaborated extensions of the tracheal system. The higher taxa of mites are defined by these structures:
Oribatida, formerly known as Cryptostigmata (crypto- = hidden), and Endeostigmata (endeo- = internal) lack primary stigmata and peritremes but may have secondary respiratory systems. For example, oribatids in the suborder Brachypylina have stigmata on the ventral plate of the body that are difficult to see (thus the former name Cryptostigmata).
Astigmata (a- = without) lack stigmata and respire through their cuticle.
Prostigmata (pro- = before/in front) have stigmata at the front of the body, usually on the lateral margins or between the chelicerae. These are associated with peritremes that may be on the prodorsum near the cheliceral bases, or be horn-like and emergent, or form a line or network on the dorsum of the gnathosomal capsule.
Opilioacaridae have four pairs of dorsolateral stigmata that are added sequentially during development.
The other three orders of Parasitiformes, Holothyrida, Ixodida, and Mesostigmata (meso- = middle), have just one pair of stigmata in the region of the fourth pair of legs. They also have peritremes: in Ixodida these consist of paired encircling plates around the stigmata, while the peritremes in Mesostigmata and Holothyrida are grooves extending from the stigmata anteriorly (sometimes also posteriorly).
Internal
Mite digestive systems have salivary glands that open into the preoral space rather than the foregut. Most species carry two to six pairs of salivary glands that empty at various points into the subcheliceral space. A few mite species lack an anus: they do not defecate during their short lives. The circulatory system consists of a network of sinuses; most mites lack a heart, movement of fluid being driven by the contraction of body muscles. Ticks and some of the larger species of mites, however, have a dorsal, longitudinal heart. Gas exchange is carried out across the body surface, but many species additionally have between one and four pairs of tracheae. The excretory system includes a nephridium and one or two pairs of Malpighian tubules. Several families of mites, such as Tetranychidae, Eriophyidae, Camerobiidae, Cunaxidae, Trombidiidae, Trombiculidae, Erythraeidae and Bdellidae, have silk glands used to produce silk for various purposes. Additionally, water mites (Hydrachnidia) produce long thin threads that may be silk.
Reproduction and life cycle
The sexes are separate in mites; males have a pair of testes in the mid-region of the body, each connected to the gonopore by a vas deferens, and in some species there is a chitinous penis; females have a single ovary connected to the gonopore by an oviduct, as well as a seminal receptacle for the storage of sperm. In most mites, sperm is transferred to the female indirectly; the male either deposits a spermatophore on a surface from which it is picked up by the female, or he uses his chelicerae or third pair of legs to insert it into the female's gonopore. In some of the Acariformes, insemination is direct using the male's penis. The spermatozoa in all mites are aflagellate.
The eggs are laid in the substrate, or wherever the mite happens to live. They take up to six weeks to hatch, according to species, with the next stage being the six-legged larvae. After three moults, the larvae become nymphs, with eight legs, and after a further three moults, they become adults. Longevity varies between species, but the lifespan of mites is short compared to many other arachnids.
Ecology
Niches
Mites occupy a wide range of ecological niches. For example, Oribatida mites are important decomposers in many habitats. They eat a wide variety of material including living and dead plant and fungal material, lichens and carrion; some are predatory, though no oribatid mites are parasitic. Mites are among the most diverse and successful of all invertebrate groups. They have exploited a wide array of habitats, and because of their small size go largely unnoticed. They are found in freshwater (e.g. the water mites or Hydrachnidia) and saltwater (most Halacaridae), in the soil, in forests, pastures, agricultural crops, ornamental plants, thermal springs and caves. They inhabit organic debris of all kinds and are extremely numerous in leaf litter. They feed on animals, plants and fungi and some are parasites of plants and animals. Some 48,200 species of mites have been described, but there may be a million or more species as yet undescribed. The tropical species Archegozetes longisetosus is one of the strongest animals in the world, relative to its mass (100 μg): It lifts up to 1,182 times its own weight, over five times more than would be expected of such a minute animal. A mite also holds a speed record: for its length, Paratarsotomus macropalpis is the fastest animal on Earth.
The mites living in soil consist of a range of taxa. Oribatida and Prostigmata are more numerous in soil than Mesostigmata, and have more soil-dwelling species. When soil is affected by an ecological disturbance such as agriculture, most mites (Astigmata, Mesostigmata and Prostigmata) recolonise it within a few months, whereas Oribatida take multiple years.
Parasitism
Many mites are parasitic on plants and animals. One family of mites, Pyroglyphidae, or nest mites, lives primarily in the nests of birds and other animals. These mites are largely parasitic and consume blood, skin and keratin. Dust mites, which feed mostly on dead skin and hair shed from humans instead of consuming them from the organism directly, evolved from these parasitic ancestors. Ticks are a prominent group of mites that are parasitic on vertebrates, mostly mammals and birds, feeding on blood with specialised mouthparts.
Parasitic mites sometimes infest insects. Varroa destructor attaches to the body of honey bees, and Acarapis woodi (family Tarsonemidae) lives in their tracheae. Hundreds of species are associated with other bees, mostly poorly described. They attach to bees in a variety of ways. For example, Trigona corvina workers have been found with mites attached to the outer face of their hind tibiae. Some are thought to be parasites, while others are beneficial symbionts. Mites also parasitize some ant species, such as Eciton burchellii. Most larvae of Parasitengona are ectoparasites of arthropods, while later life stages in this group tend to shift to being predators.
Plant pests include the so-called spider mites (family Tetranychidae), thread-footed mites (family Tarsonemidae), and the gall mites (family Eriophyidae). Among the species that attack animals are members of the sarcoptic mange mites (family Sarcoptidae), which burrow under the skin. Demodex mites (family Demodecidae) are parasites that live in or near the hair follicles of mammals, including humans.
Dispersal
Being unable to fly, mites need some other means of dispersal. On a small scale, walking is used to access other suitable locations in the immediate vicinity. Some species mount to a high point and adopt a dispersal posture and get carried away by the wind, while others waft a thread of silk aloft to balloon to a new position.
Parasitic mites use their hosts to disperse, and spread from host to host by direct contact. Another strategy is phoresy; the mite, often equipped with suitable claspers or suckers, grips onto an insect or other animal, and gets transported to another place. A phoretic mite is just a hitch-hiker and does not feed during the time it is carried by its temporary host. These travelling mites are mostly species that reproduce rapidly and are quick to colonise new habitats.
Relationship with humans
Mites are tiny and apart from those that are of economic concern to humans, little studied. The majority are beneficial, living in the soil or aqueous environments and assisting in the decomposition of decaying organic material, as part of the carbon cycle.
Two species live on humans, namely Demodex folliculorum and Demodex brevis; both are frequently referred to as eyelash mites.
Medical significance
The majority of mite species are harmless to humans and domestic animals, but a few species can colonize mammals directly, acting as vectors for disease transmission, and causing or contributing to allergenic diseases. Mites which colonize human skin are the cause of several types of itchy skin rashes, such as gamasoidosis, rodent mite dermatitis, grain itch, grocer's itch, and scabies; Sarcoptes scabiei is a parasitic mite responsible for scabies, which is one of the three most common skin disorders in children. Demodex mites, which are common cause of mange in dogs and other domesticated animals, have also been implicated in the human skin disease rosacea, although the mechanism by which demodex contributes to the disease is unclear. Ticks are well known for carrying diseases, such as Lyme disease and Rocky Mountain spotted fever.
Chiggers are known primarily for their itchy bite, but they can also spread disease in some limited circumstances, such as scrub typhus. The house-mouse mite is the only known vector of the disease rickettsialpox. House dust mites, found in warm and humid places such as beds, cause several forms of allergic diseases, including hay fever, asthma and eczema, and are known to aggravate atopic dermatitis.
Among domestic animals, sheep are affected by the mite Psoroptes ovis which lives on the skin, causing hypersensitivity and inflammation. Hay mites are a suspected reservoir for scrapie, a prion disease of sheep.
In beekeeping
The mite Varroa destructor is a serious pest of honey bees, contributing to colony collapse disorder in commercial hives. This organism is an obligate external parasite, able to reproduce only in bee colonies. It directly weakens its host by sucking up the bee's fat, and can spread RNA viruses including deformed wing virus. Heavy infestation causes the death of a colony, generally over the winter. Since 2006, more than 10 million beehives have been lost.
Biological pest control
Various mites prey on other invertebrates and can be used to control their populations. Phytoseiidae, especially members of Amblyseius, Metaseiulus, and Phytoseiulus, are used to control pests such as spider mites. Among the Laelapidae, Gaeolaelaps aculeifer and Stratiolaelaps scimitus are used to control fungus gnats, poultry red mites and various soil pests.
In culture
Mites were first observed under the microscope by the English polymath Robert Hooke. In his 1665 book Micrographia, he stated that far from being spontaneously generated from dirt, they were "very prettily shap'd Insects". In 1898, Arthur Conan Doyle wrote a satirical poem, "A Parable", with the conceit of some cheese mites disputing the origin of the round cheddar cheese in which they all lived. The world's first science documentary featured cheese mites, seen under the microscope; the short film was shown in London's Alhambra music hall in 1903, causing a boom in the sales of simple microscopes.
See also
Chigger bite
Copra itch
Gamasoidosis
Grain itch
Grocer's itch
List of mites associated with cutaneous reactions
References
External links
Bitingmites.org: What's biting you?
Mites and Ticks chapter in United States Environmental Protection Agency and University of Florida/Institute of Food and Agricultural Sciences National Public Health Pesticide Applicator Training Manual
Acari
Paraphyletic groups
Arthropod common names | Mite | Biology | 4,119 |
78,607,818 | https://en.wikipedia.org/wiki/Redmi%209T | The Redmi 9T is a series of Android smartphones from Redmi. It was introduced on January 8, 2021, together with the Redmi Note 9T. In India, the 9T was introduced on December 17, 2020, as the Redmi 9 Power. Also in China, on November 26, 2020, along with the Redmi Note 9 5G and Redmi Note 9 Pro 5G, the Redmi Note 9 4G was introduced, which is identical to the Redmi 9T and 9 Power except for the lack of a macro module.
Also, on November 24, 2020, the company POCO, whose global office had just separated from Xiaomi, introduced the POCO M3, which is similar to the Redmi 9T but has a different design and lacks NFC in all markets as well as an ultrawide-angle module.
Design
The screen is made of Corning Gorilla Glass 3. The smartphone body is made of plastic and has a "wavy" texture on the Redmi 9T/9 Power and Redmi Note 9 4G, while the POCO M3 has a leather-like texture.
The only design difference between the Redmi 9T/9 Power and the Redmi Note 9 4G is that the Redmi Note 9 4G has the inscription "AI" instead of the fourth camera module. In the POCO M3, the upper part has a black glossy insert with the brand logo, which extends almost the entire width of the back panel.
At the bottom are a USB-C port, a speaker, a microphone, and a 3.5mm audio jack. At the top are a second microphone and an IR blaster. On the left side is a slot for two SIM cards and a microSD memory card of up to 512GB. On the right side are the volume control buttons and the power button, which has a built-in fingerprint scanner.
There are several color options, depending on the model:
9T: Carbon Gray, Twilight Blue, Sunrise Orange, Ocean Green
9 Power (India): Mighty Black, Fiery Red, Electric Green, Blazing Blue
Note 9 4G: Gray, Green, Blue, Orange
Poco M3: Cool Blue, Poco Yellow, Power Black
Technical specifications
Processor
All smartphones are powered with a Qualcomm Snapdragon 662 processor and an Adreno 610 graphics processor.
Battery
The battery has a capacity of 6000mAh, supports 18W fast charging, and 2.5W reverse wired charging. Additionally, a 22.5W charging block is included in the box.
Camera
The Redmi 9T and 9 Power feature a quad-camera setup: a 48MP wide main sensor with phase detection autofocus, an 8MP ultrawide sensor, a 2MP macro sensor, and a 2MP depth sensor. The Redmi Note 9 4G has the same cameras but lacks the macro module, while the POCO M3 lacks the ultrawide module.
Both models feature an 8MP wide-angle front-facing camera.
Both the main and front cameras can record video in 1080p resolution at 30 frames per second.
Main camera modes
Document
Night Mode
AI Scene Detection
Google Lens
AI Beautify
Portrait Mode
Cinematic Video
Portrait Mode Background Blur
Panorama
RAW Mode
Front camera modes
Selfie Timer
Cinematic Video
AI Beautify
Built-in Filters
Palm Shutter
AI Portrait Mode
Panorama Selfie
Display
It has an IPS LCD screen, 6.53 inches, Full HD+ (2340 x 1080) with an aspect ratio of 19.5:9 and a waterdrop notch for the front camera.
Sound
Smartphones have received stereo speakers, located on the upper and lower ends.
Storage
The Redmi 9T was available in 4/64GB, 4/128GB, and 6/128GB configurations. In Ukraine, only the 4/64GB and 4/128GB versions were sold.
The Redmi 9 Power was available in 4/64GB and 4/128GB configurations.
The Redmi Note 9 4G was available in 4/128GB, 6/128GB, 8/128GB, and 8/256GB configurations.
The POCO M3 was available in 4/64GB, 6/64GB, 4/128GB, and 6/128GB configurations. In Ukraine, only the 4/64GB and 4/128GB versions were sold.
Software
The Redmi 9T, 9 Power, and Note 9 4G were initially launched with MIUI 12, while the POCO M3 came with MIUI 12 for POCO. Both interfaces were based on Android 10. Subsequently, the Redmi 9T, 9 Power, and Note 9 4G were updated to MIUI 14, and the POCO M3 received MIUI 14 for POCO. These updated interfaces are based on Android 12.
References
External links
9T
Mobile phones with multiple rear cameras
Mobile phones with infrared transmitter
Mobile phones introduced in 2020
Phablets
Discontinued smartphones | Redmi 9T | Technology | 1,037 |
62,325,867 | https://en.wikipedia.org/wiki/Palmitate%20mediated%20localization | Palmitate mediated localization is a biological process that traffics a palmitoylated protein to ordered lipid domains.
Biological function
One function is thought to be the clustering of proteins, which increases the efficiency of protein-protein interactions and facilitates biological processes. In the opposite scenario, palmitate mediated localization sequesters proteins away from a non-localized molecule. In theory, disruption of palmitate mediated localization then allows a transient interaction of two molecules through lipid mixing. In the case of an enzyme, palmitate can sequester an enzyme away from its substrate. Disruption of palmitate mediated localization then activates the enzyme by substrate presentation.
Mechanism of sequestration
Palmitate mediated localization is integral to spatial biology; in particular, lipid partitioning and the formation of lipid rafts. Sequestration of palmitoylated proteins is regulated by cholesterol. Depletion of cholesterol with methyl-beta cyclodextrin disrupts palmitate mediated localization.
References
Biological processes | Palmitate mediated localization | Biology | 204 |
2,541,868 | https://en.wikipedia.org/wiki/Extracellular%20digestion | Extracellular digestion is a process in which saprobionts feed by secreting enzymes through the cell membrane onto the food. The enzymes catalyze the digestion of the food into molecules small enough to be taken up by diffusion, transport, osmotrophy or phagocytosis. Since digestion occurs outside the cell, it is said to be extracellular. It takes place either in the lumen of the digestive system, in a gastric cavity or other digestive organ, or completely outside the body. During extracellular digestion, food is broken down outside the cell either mechanically or with acid and special molecules called enzymes. The newly broken-down nutrients can then be absorbed by nearby cells. Humans use extracellular digestion when they eat: their teeth grind the food up, enzymes and acid in the stomach liquefy it, and additional enzymes in the small intestine break the food down into parts their cells can use.
Extracellular digestion is a form of digestion found in all saprobiontic annelids, crustaceans, arthropods, lichens and chordates, including vertebrates.
In fungi
Fungi are heterotrophic organisms. Heterotrophic nutrition means that fungi utilize extracellular sources of organic energy, organic material or organic matter, for their maintenance, growth and reproduction. Energy is derived from the breakdown of the chemical bond between carbon and either carbon or other components of compounds such as a phosphate ion. The extracellular sources of energy may be simple sugars, polypeptides or more complex carbohydrate.
Fungi can only absorb small molecules through their walls. For fungi to gain their energy needs, they find and absorb organic molecules appropriate to their needs, either immediately or following some form of enzyme diminution outside the thallus. The small molecules are then absorbed, used directly or reconstituted (transformed) into organic molecules within the cell.
When a skeletonized leaf is seen in the litter, it is because recalcitrant materials remain and digestion is continuing. The fungi that utilize a variety of energy sources usually absorb the simplest compounds first, then the more complex. For instance, the formation of cellulase is repressed by high concentrations of glucose in the cytoplasm. On depletion of primary sources of glucose, enzymes to degrade more complex molecules, such as cellulose and starch, are then released. Thus soluble sugars and amino acids are removed first from a leaf released from a tree. Starch is then broken down and absorbed. Subsequently, pectin and cellulose are digested. Finally, waxes are degraded and lignin oxidized. The staggering of energy acquisition results in the efficient utilization of available energy.
Detection of digestive enzymes in fungi
The regulation of nutrient acquisition appears to be controlled by general phenomena. Only a small group of enzymes, mostly hydrolases, can be detected in the culture filtrate of well-fed fungi. This suggests that specific inducers control the manufacture and release of enzymes for degradation. The most common complex carbohydrate available in the environment is cellulose. In the absence of glucose, detection of cellulose, for instance, induces the expression of cellulases. As a consequence, fungi specifically target the breakdown of the cellulose in their environment, and do not waste energy on the unnecessary formation of enzymes for degradation of molecules that may not be present. Fungi have an efficient process to gain energy.
Because of the huge range of potential food sources, fungi have evolved enzymes suitable for the environments in which they are usually found. The range of enzymes, though wide in many species, is not sufficient for survival in all environments. Fungi require other competitive attributes to ensure continued survival.
The opposite is also true. Some fungi have highly specific metabolic capabilities which enable occupation of specific habitats, utilizing molecules which are unavailable to other fungi. Further, utilization of a common and abundant substrate has led many fungi to evolve a range of highly specific degradative enzymes. Among the fungi are species that are generalist in their nutrient requirements, some that have specific nutrient requirements, and many that are in between.
Excretion of digestive enzymes
Enzymes are manufactured close to the hyphal tip. Some are packaged in vesicles associated with the Golgi and then delivered to the hyphal tip. The contents are released at the tip. Some enzymes are actively excreted through the plasma membrane, where they diffuse through or act in the cell wall. Note that the enzymes released from the hyphal tip require an aqueous environment for release and subsequent degradative activity.
Absorption of digested products
The molecules absorbed through the plasma membrane tend to be smaller than 5,000 Da, so only simple sugars, amino acids, fatty acids and other small molecules can be taken up following digestion. The molecules are taken up in solution. In some cases, the molecules are processed by enzymes located within the cell wall. For instance, sucrose invertases have been localized in the walls of yeasts. Glucose appears to be the sugar preferred by most fungi. Uptake of other sugars is repressed when glucose is available. Similarly, ammonium, glutamine and asparagine regulate the uptake of nitrogen compounds, and cysteine of sulphur compounds.
Joint intracellular and extracellular digestion in cnidarians
In hydra and other cnidarians, the food is caught by the tentacles and ingested through the mouth into the single large digestive cavity, the gastrovascular cavity. Enzymes are secreted from the cells bordering this cavity and poured on the food for extracellular digestion. Small particles of the partially digested food are engulfed into the vacuoles of the digestive cells for intracellular digestion. Any undigested and un-absorbed food is finally thrown out of the mouth.
Invertebrate digestive systems are bags and tubes
Single-celled organisms as well as sponges digest their food intracellularly. Other multi-cellular organisms digest their food extracellularly, within a digestive cavity. In this case the digestive enzymes are released into a cavity that is continuous with the animal's external environment. In cnidarians and in flatworms such as planarians, the digestive cavity, called a gastrovascular cavity, has only one opening that serves as both mouth and anus. There is no specialization within this type of digestive system because every cell is exposed to all stages of food digestion.
Specialization occurs when the digestive tract or alimentary canal has a separate mouth and anus so that transport of food is one-way. The most primitive digestive tract is seen in nematodes (phylum Nematoda), where it is simply a tubular gut lined by an epithelial membrane. Earthworms (phylum Annelida) have a digestive tract specialized in different regions for the ingestion, storage, fragmentation, digestion and absorption of food. All more complex animal groups, including all vertebrates, show similar specializations.
The ingested food may be stored in a specialized region of the digestive tract or subjected to physical fragmentation. This fragmentation may occur through the chewing action of teeth (in the mouth of many vertebrates) or the grinding action of pebbles (in the gizzard of earthworms and birds).
Chemical digestion then occurs, breaking down the larger food molecules of polysaccharides and disaccharides, fats, and proteins into their smallest sub-units.
Chemical digestion involves hydrolysis reactions that liberate the sub unit molecules—primarily monosaccharides, amino acids and fatty acids—from the food. These products of chemical digestion pass through the epithelial lining of the gut into the blood, in a process known as absorption. Any molecules in the food that are not absorbed cannot be used by the animal. These waste products are excreted, or defecated from the anus.
Extracellular digestion in other animals
Annelids
The echiuran gut is long and highly convoluted, and there is no gut in pogonophoran adults. Among other annelids, the gut is linear and unsegmented, with a mouth opening on the peristomium and an anus opening at the posterior end of the animal (pygidium). Food is moved through the gut by cilia and/or by muscular contractions. Digestion is primarily extracellular, although some species show an intracellular component as well.
Arthropods
The arthropod digestive system is divisible into three areas: the fore gut, mid gut, and hind gut. All free-living species exhibit a distinct and separate mouth and anus, and in all species, food must be moved through the digestive tract by muscular activity rather than cilia activity since the lumen of the fore gut and hind gut is lined with cuticle. Digestion is generally extracellular. Nutrients are distributed to the tissues through the hemal system.
Molluscs
Most molluscs have a complete digestive system with a separate mouth and anus. The mouth leads into a short esophagus which leads to a stomach. Associated with the stomach are one or more digestive glands or digestive caeca. Digestive enzymes are secreted into the lumen of these glands. Additional extracellular digestion takes place in the stomach. In cephalopods, digestion is entirely extracellular. In most other molluscs, the terminal stages of digestion are completed intracellularly, within the tissue of the digestive glands. The absorbed nutrients enter the circulatory system for distribution throughout the body or are stored in the digestive glands for later use. Undigested waste passes through the intestine and out through the anus. Other aspects of food collection and processing have already been discussed where appropriate for each group.
Humans
The initial components of the gastrointestinal tract are the mouth and the pharynx, which is the common passage of the oral and nasal cavities. The pharynx leads to the esophagus, a muscular tube that delivers food to the stomach, where some preliminary digestion occurs; here, the digestion is extracellular.
From the stomach, food passes to the small intestine, where a battery of digestive enzymes continue the digestive process. The products of digestion are absorbed across the wall of the intestine into the bloodstream. What remains is emptied into the large intestine, where some of the remaining water and minerals are absorbed; here the digestion is intracellular.
See also
Saprotrophic nutrition
References
Eating behaviors | Extracellular digestion | Biology | 2,212 |
194,808 | https://en.wikipedia.org/wiki/Superminicomputer | A superminicomputer, colloquially supermini, is a high-end minicomputer. The term is used to distinguish the emerging 32-bit architecture midrange computers introduced in the mid to late 1970s from the classical 16-bit systems that preceded them. The development of these computers was driven by the need of applications to address larger memory. The term midicomputer had been used earlier to refer to these systems. Virtual memory was often an additional criterion considered for inclusion in this class of system. The computational speed of these machines was significantly greater than that of the 16-bit minicomputers and approached the performance of small mainframe computers. The name has at times been described as a "frivolous" term created by "marketeers" that lacks a specific definition. Describing a class of system has historically been seen as problematic: "In the computer kingdom, taxonomic classification of equipment is more of a black art than a science." There is some disagreement about which systems should be included in this class. The origin of the name is uncertain.
As technology improved rapidly the distinction between minicomputer and superminicomputer performance blurred. Companies that sold mainframe computers began to offer machines in the same price and performance range as superminicomputers. By the mid-1980s microprocessors with the hardware architecture of superminicomputers were used to produce scientific and engineering workstations. The minicomputer industry then declined through the early 1990s. The term is now considered obsolete but still remains of interest for students/researchers of computer history.
Notable companies
Notable manufacturers of superminicomputers in 1980 included: Digital Equipment Corporation, Perkin-Elmer, and Prime Computer. Other makers of systems included SEL/Gould and Data General. Four years later there were about a dozen companies producing a significant number of superminicomputers.
Perkin-Elmer spun off their Data Systems Group in 1985 to form Concurrent Computer Corporation which continued making these systems. Nixdorf Computer, Norsk Data, and Toshiba also produced systems.
Significant superminicomputers
Interdata 7/32, 1974
Digital Equipment Corporation VAX-11/780, 1978
Prime Computer 750, 1979
Data General Eclipse MV/8000, 1980
IBM 4361, 1983
IBM 9370, 1987
External links
References
Super
Classes of computers | Superminicomputer | Technology | 481 |
2,025,859 | https://en.wikipedia.org/wiki/Myology | Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), and smooth and cardiac muscle, which propel, expel or control the flow of fluids and contained substances.
See also
References
External links
British Myology Society
Physiology | Myology | Biology | 89 |
369,981 | https://en.wikipedia.org/wiki/Glossary%20of%20tensor%20theory | This is a glossary of tensor theory. For expositions of tensor theory from different points of view, see:
Tensor
Tensor (intrinsic definition)
Application of tensor theory in engineering science
For some history of the abstract theory see also multilinear algebra.
Classical notation
Ricci calculus
The earliest foundation of tensor theory – tensor index notation.
Order of a tensor
The components of a tensor with respect to a basis form an indexed array. The order of a tensor is the number of indices needed. Some texts may refer to the tensor order using the term degree or rank.
Rank of a tensor
The rank of a tensor is the minimum number of rank-one tensors that must be summed to obtain the tensor. A rank-one tensor is one that can be expressed as an outer product of nonzero vectors, with as many vectors as are needed to obtain the correct order.
Dyadic tensor
A dyadic tensor is a tensor of order two, and may be represented as a square matrix. In contrast, a dyad is specifically a dyadic tensor of rank one.
Einstein notation
This notation is based on the understanding that whenever a multidimensional array contains a repeated index letter, the default interpretation is that the product is summed over all permitted values of the index. For example, if a_ij is a matrix, then under this convention a_ii is its trace. The Einstein convention is widely used in physics and engineering texts, to the extent that if summation is not to be applied, it is normal to note that explicitly.
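As an illustration, NumPy's einsum follows exactly this convention, summing over any repeated index letter; the sketch below uses an arbitrary 3 × 3 matrix and vector.

```python
# Einstein summation in practice: numpy.einsum sums over repeated index letters.
import numpy as np

a = np.arange(9.0).reshape(3, 3)   # a_ij, an arbitrary example matrix
v = np.array([1.0, 2.0, 3.0])      # v_j, an arbitrary example vector

trace = np.einsum('ii', a)         # a_ii: the repeated index i is summed -> trace
mat_vec = np.einsum('ij,j', a, v)  # a_ij v_j: summed over j -> matrix-vector product

print(trace, np.trace(a))          # both print 12.0
print(mat_vec, a @ v)              # identical vectors
```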
Kronecker delta
Levi-Civita symbol
Covariant tensor
Contravariant tensor
The classical interpretation is by components. For example, in the differential form a_i dx^i the components a_i are a covariant vector. That means all indices are lower; contravariant means all indices are upper.
Mixed tensor
This refers to any tensor that has both lower and upper indices.
Cartesian tensor
Cartesian tensors are widely used in various branches of continuum mechanics, such as fluid mechanics and elasticity. In classical continuum mechanics, the space of interest is usually 3-dimensional Euclidean space, as is the tangent space at each point. If we restrict the local coordinates to be Cartesian coordinates with the same scale centered at the point of interest, the metric tensor is the Kronecker delta. This means that there is no need to distinguish covariant and contravariant components, and furthermore there is no need to distinguish tensors and tensor densities. All Cartesian-tensor indices are written as subscripts. Cartesian tensors achieve considerable computational simplification at the cost of generality and of some theoretical insight.
Contraction of a tensor
Raising and lowering indices
Symmetric tensor
Antisymmetric tensor
Multiple cross products
Algebraic notation
This avoids the initial use of components, and is distinguished by the explicit use of the tensor product symbol.
Tensor product
If v and w are vectors in vector spaces V and W respectively, then v ⊗ w is a tensor in V ⊗ W.
That is, the ⊗ operation is a binary operation, but it takes values into a fresh space (it is in a strong sense external). The ⊗ operation is a bilinear map; but no other conditions are applied to it.
Pure tensor
A pure tensor of V ⊗ W is one that is of the form v ⊗ w.
It could be written dyadically a^i b^j, or more accurately a^i b^j e_i ⊗ f_j, where the e_i are a basis for V and the f_j a basis for W. Therefore, unless V and W have the same dimension, the array of components need not be square. Such pure tensors are not generic: if both V and W have dimension greater than 1, there will be tensors that are not pure, and there will be non-linear conditions for a tensor to satisfy, to be pure. For more see Segre embedding.
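For order-two tensors this can be checked numerically: a pure tensor corresponds to a rank-one matrix of components. The sketch below uses arbitrary example vectors.

```python
# A pure (order-two) tensor v (x) w has a rank-one component matrix v_i w_j,
# so numpy's matrix rank distinguishes pure tensors from sums of pure tensors.
import numpy as np

v = np.array([1.0, 2.0])          # a vector in V
w = np.array([3.0, 4.0, 5.0])     # a vector in W (different dimension is allowed)

pure = np.outer(v, w)             # components of the pure tensor v (x) w
mixed = pure + np.outer(np.array([1.0, 0.0]), np.array([0.0, 1.0, 0.0]))

print(np.linalg.matrix_rank(pure))    # 1: expressible as a single outer product
print(np.linalg.matrix_rank(mixed))   # 2: a sum of two pure tensors, not pure
```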
Tensor algebra
In the tensor algebra T(V) of a vector space V, the operation becomes a normal (internal) binary operation. A consequence is that T(V) has infinite dimension unless V has dimension 0. The free algebra on a set X is for practical purposes the same as the tensor algebra on the vector space with X as basis.
Hodge star operator
Exterior power
The wedge product is the anti-symmetric form of the ⊗ operation. The quotient space of T(V) on which it becomes an internal operation is the exterior algebra of V; it is a graded algebra, with the graded piece of weight k being called the k-th exterior power of V.
Symmetric power, symmetric algebra
This is the invariant way of constructing polynomial algebras.
Applications
Metric tensor
Strain tensor
Stress–energy tensor
Tensor field theory
Jacobian matrix
Tensor field
Tensor density
Lie derivative
Tensor derivative
Differential geometry
Abstract algebra
Tensor product of fields
This is an operation on fields, that does not always produce a field.
Tensor product of R-algebras
Clifford module
A representation of a Clifford algebra which gives a realisation of a Clifford algebra as a matrix algebra.
Tor functors
These are the derived functors of the tensor product, and feature strongly in homological algebra. The name comes from the torsion subgroup in abelian group theory.
Symbolic method of invariant theory
Derived category
Grothendieck's six operations
These are highly abstract approaches used in some parts of geometry.
Spinors
See:
Spin group
Spin-c group
Spinor
Pin group
Pinors
Spinor field
Killing spinor
Spin manifold
References
Books
Tensor theory
Wikipedia glossaries using description lists | Glossary of tensor theory | Engineering | 1,112 |
44,523,891 | https://en.wikipedia.org/wiki/Women%20in%20Science%20Hall%20of%20Fame%20%28U.S.%20State%20Department%29 | Women in Science Hall of Fame was established in 2010 by the U.S. State Department Environment, Science, Technology, and Health Hub for the Middle East and North Africa to recognize the exceptional women scientists in this region of the world.
Annual awards were made 2011-2015 and coordinated by the U.S. Embassy in Amman, Jordan.
References
Women's halls of fame
United States Department of State
Awards established in 2010
2010 establishments in Jordan
Science and technology halls of fame
Awards disestablished in 2016
Jordan–United States relations
Halls of fame in Jordan
Women in Jordan
Women in science and technology | Women in Science Hall of Fame (U.S. State Department) | Technology | 120 |
25,896,411 | https://en.wikipedia.org/wiki/Divergence%20%28statistics%29 | In information geometry, a divergence is a kind of statistical distance: a binary function which establishes the separation from one probability distribution to another on a statistical manifold.
The simplest divergence is squared Euclidean distance (SED), and divergences can be viewed as generalizations of SED. The other most important divergence is relative entropy (also called Kullback–Leibler divergence), which is central to information theory. There are numerous other specific divergences and classes of divergences, notably f-divergences and Bregman divergences.
Definition
Given a differentiable manifold M of dimension n, a divergence on M is a C²-function D : M × M → [0, ∞) satisfying:
D(p, q) ≥ 0 for all p, q ∈ M (non-negativity),
D(p, q) = 0 if and only if p = q (positivity),
At every point p ∈ M, D(p, p + dp) is a positive-definite quadratic form for infinitesimal displacements dp from p.
In applications to statistics, the manifold is typically the space of parameters of a parametric family of probability distributions.
Condition 3 means that D defines an inner product on the tangent space T_pM for every p ∈ M. Since D is C² on M, this defines a Riemannian metric g on M.
Locally at p ∈ M, we may construct a local coordinate chart with coordinates x; then the divergence is D(x(p), x(p) + dx) = ½ dxᵀ g_p(x) dx + O(|dx|³), where g_p(x) is a matrix of size n × n. It is the Riemannian metric at the point p, expressed in coordinates x.
Dimensional analysis of condition 3 shows that divergence has the dimension of squared distance.
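A hedged numerical illustration of condition 3: for two nearby discrete distributions, the Kullback–Leibler divergence mentioned above agrees to leading order with a quadratic form in the displacement (the Fisher information metric on the probability simplex). The particular distribution and displacement below are arbitrary choices.

```python
# KL divergence of two nearby discrete distributions versus its quadratic
# approximation 1/2 * sum_i dp_i^2 / p_i (the Fisher information metric).
import numpy as np

p = np.array([0.5, 0.3, 0.2])
dp = 1e-3 * np.array([1.0, -2.0, 1.0])   # small displacement; components sum to 0

kl = np.sum(p * np.log(p / (p + dp)))    # exact divergence of p from p + dp
quad = 0.5 * np.sum(dp ** 2 / p)         # positive-definite quadratic form in dp

print(kl, quad)   # both are ~1.0e-5 and agree to leading order
```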
The dual divergence D* is defined as D*(p, q) = D(q, p).
When we wish to contrast D against D*, we refer to D as the primal divergence.
Given any divergence D, its symmetrized version is obtained by averaging it with its dual divergence: D_S(p, q) = ½(D(p, q) + D(q, p)).
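A minimal sketch of these constructions, using the discrete Kullback–Leibler divergence as the example D; the two distributions are arbitrary.

```python
# Primal divergence, dual divergence and symmetrized divergence, illustrated
# with the discrete KL divergence (assumes strictly positive probabilities).
import numpy as np

def kl(p, q):
    """D_KL(p, q) = sum_i p_i log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def dual(D):
    """Dual divergence D*(p, q) = D(q, p)."""
    return lambda p, q: D(q, p)

def symmetrized(D):
    """Symmetrized divergence (D(p, q) + D(q, p)) / 2."""
    return lambda p, q: 0.5 * (D(p, q) + D(q, p))

p, q = [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]
print(kl(p, q), dual(kl)(p, q))                       # asymmetric: values differ
print(symmetrized(kl)(p, q), symmetrized(kl)(q, p))   # symmetric by construction
```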
Difference from other similar concepts
Unlike metrics, divergences are not required to be symmetric, and the asymmetry is important in applications. Accordingly, one often refers asymmetrically to the divergence "of q from p" or "from p to q", rather than "between p and q". Secondly, divergences generalize squared distance, not linear distance, and thus do not satisfy the triangle inequality, but some divergences (such as the Bregman divergence) do satisfy generalizations of the Pythagorean theorem.
In general statistics and probability, "divergence" generally refers to any kind of function D(p, q), where p, q are probability distributions or other objects under consideration, such that conditions 1 and 2 are satisfied. Condition 3 is required for "divergence" as used in information geometry.
As an example, the total variation distance, a commonly used statistical divergence, does not satisfy condition 3.
Notation
Notation for divergences varies significantly between fields, though there are some conventions.
Divergences are generally notated with an uppercase 'D', as in D(x, y), to distinguish them from metric distances, which are notated with a lowercase 'd'. When multiple divergences are in use, they are commonly distinguished with subscripts, as in D_KL for the Kullback–Leibler divergence (KL divergence).
Often a different separator between parameters is used, particularly to emphasize the asymmetry. In information theory, a double bar is commonly used: D(p ∥ q); this is similar to, but distinct from, the notation for conditional probability, P(A | B), and emphasizes interpreting the divergence as a relative measurement, as in relative entropy; this notation is common for the KL divergence. A colon may be used instead, as D(p : q); this emphasizes the relative information supporting the two distributions.
The notation for parameters varies as well. Uppercase P, Q interprets the parameters as probability distributions, while lowercase p, q or x, y interprets them geometrically as points in a space, and μ, ν interprets them as measures.
Geometrical properties
Many properties of divergences can be derived if we restrict S to be a statistical manifold, meaning that it can be parametrized with a finite-dimensional coordinate system θ, so that for a distribution p ∈ S we can write p = p(θ).
For a pair of points p, q ∈ S with coordinates θ_p and θ_q, denote the partial derivatives of D(p, q) as
  D((∂_i)_p, q) = ∂D(p, q)/∂θ_p^i,
  D((∂_i ∂_j)_p, (∂_k)_q) = ∂³D(p, q)/∂θ_p^i ∂θ_p^j ∂θ_q^k, etc.
Now we restrict these functions to the diagonal p = q, and denote
  D[∂_i, ·] : p ↦ D((∂_i)_p, p),   D[∂_i, ∂_j] : p ↦ D((∂_i)_p, (∂_j)_p), etc.
By definition, the function D(p, q) is minimized at p = q, and therefore
  D[∂_i, ·] = D[·, ∂_i] = 0,
  −D[∂_i, ∂_j] = D[∂_i ∂_j, ·] = D[·, ∂_i ∂_j] ≡ g(D)_ij,
where the matrix g(D) is positive semi-definite and defines a unique Riemannian metric on the manifold S.
Divergence D(·, ·) also defines a unique torsion-free affine connection ∇(D) with coefficients
  Γ(D)_ij,k = −D[∂_i ∂_j, ∂_k],
and the dual to this connection ∇* is generated by the dual divergence D*.
Thus, a divergence D(·, ·) generates on a statistical manifold a unique dualistic structure (g(D), ∇(D), ∇(D*)). The converse is also true: every torsion-free dualistic structure on a statistical manifold is induced from some globally defined divergence function (which however need not be unique).
For example, when D is an f-divergence for some function ƒ(·), then it generates the metric g(D) = c·g and the connection ∇(D) = ∇(α), where g is the canonical Fisher information metric, ∇(α) is the α-connection, c = ƒ′′(1), and α = 3 + 2ƒ′′′(1)/ƒ′′(1).
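As a hedged numerical check of the construction above (an illustration, not part of the source), the Python sketch below estimates the quadratic coefficient of the KL divergence along the diagonal for a Bernoulli model and compares it with the Fisher information 1/(θ(1 − θ)):

    import numpy as np

    def kl_bernoulli(a, b):
        # KL divergence between Bernoulli(a) and Bernoulli(b)
        return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

    theta, h = 0.3, 1e-3
    # Second derivative of D(theta, theta') at theta' = theta, by central differences
    g_estimate = (kl_bernoulli(theta, theta + h) - 2 * kl_bernoulli(theta, theta)
                  + kl_bernoulli(theta, theta - h)) / h ** 2
    fisher_information = 1.0 / (theta * (1 - theta))

    print(g_estimate, fisher_information)  # the two values agree closely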
Examples
The two most important divergences are the relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory and statistics, and the squared Euclidean distance (SED). Minimizing these two divergences is the main way that linear inverse problems are solved, via the principle of maximum entropy and least squares, notably in logistic regression and linear regression.
The two most important classes of divergences are the f-divergences and Bregman divergences; however, other types of divergence functions are also encountered in the literature. The only divergence for probabilities over a finite alphabet that is both an f-divergence and a Bregman divergence is the Kullback–Leibler divergence. The squared Euclidean divergence is a Bregman divergence (corresponding to the function F(x) = ‖x‖²) but not an f-divergence.
f-divergences
Given a convex function f : [0, +∞) → (−∞, +∞] such that f(1) = 0, the f-divergence generated by f is defined as
  D_f(p; q) = ∫ q(x) f(p(x)/q(x)) dx.
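A small Python sketch (illustrative only; the distributions are assumed, and the convention matches the definition stated above) shows the discrete case, recovering the KL divergence for f(t) = t log t and the Pearson chi-squared divergence for f(t) = (t − 1)²:

    import numpy as np

    def f_divergence(p, q, f):
        # D_f(p; q) = sum_x q(x) f(p(x)/q(x)) for discrete distributions
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return float(np.sum(q * f(p / q)))

    p = [0.6, 0.3, 0.1]
    q = [0.2, 0.5, 0.3]

    kl_via_f = f_divergence(p, q, lambda t: t * np.log(t))    # f(t) = t log t gives the KL divergence
    chi_squared = f_divergence(p, q, lambda t: (t - 1) ** 2)  # f(t) = (t - 1)^2 gives the Pearson chi-squared divergence
    kl_direct = float(np.sum(np.asarray(p) * np.log(np.asarray(p) / np.asarray(q))))

    print(kl_via_f, kl_direct)  # equal up to floating point
    print(chi_squared)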
Bregman divergences
Bregman divergences correspond to convex functions on convex sets. Given a strictly convex, continuously differentiable function F on a convex set, known as the Bregman generator, the Bregman divergence measures the convexity of F: the error of the linear approximation of F from q as an approximation of the value at p:
  B_F(p; q) = F(p) − F(q) − ⟨∇F(q), p − q⟩.
The dual divergence to a Bregman divergence is the divergence generated by the convex conjugate F* of the Bregman generator of the original divergence. For example, for the squared Euclidean distance, the generator is F(x) = ‖x‖², while for the relative entropy the generator is the negative entropy F(p) = Σ_i p_i log p_i.
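The following Python sketch (an illustration with assumed probability vectors, not from the source) implements the Bregman divergence from a generator and its gradient, checking the two generators mentioned above:

    import numpy as np

    def bregman(p, q, F, gradF):
        # B_F(p; q) = F(p) - F(q) - <grad F(q), p - q>
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return float(F(p) - F(q) - np.dot(gradF(q), p - q))

    p = np.array([0.6, 0.3, 0.1])
    q = np.array([0.2, 0.5, 0.3])

    # Generator ||x||^2 reproduces the squared Euclidean distance
    sed = bregman(p, q, lambda x: np.dot(x, x), lambda x: 2 * x)
    print(sed, float(np.sum((p - q) ** 2)))

    # Negative entropy generator reproduces the KL divergence for probability vectors
    kl = bregman(p, q, lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1)
    print(kl, float(np.sum(p * np.log(p / q))))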
History
The use of the term "divergence" – both what functions it refers to, and what various statistical distances are called – has varied significantly over time, but by c. 2000 had settled on the current usage within information geometry, notably in the textbook Methods of Information Geometry by Amari and Nagaoka (2000).
The term "divergence" for a statistical distance was used informally in various contexts from c. 1910 to c. 1940. Its formal use dates at least to , entitled "On a measure of divergence between two statistical populations defined by their probability distributions", which defined the Bhattacharyya distance, and , entitled "On a Measure of Divergence between Two Multinomial Populations", which defined the Bhattacharyya angle. The term was popularized by its use for the Kullback–Leibler divergence in and its use in the textbook . The term "divergence" was used generally by for statistically distances. Numerous references to earlier uses of statistical distances are given in and .
Kullback & Leibler (1951) actually used "divergence" to refer to the symmetrized divergence (this function had already been defined and used by Harold Jeffreys in 1948), referring to the asymmetric function as "the mean information for discrimination ... per observation", while Kullback (1959) referred to the asymmetric function as the "directed divergence". Ali & Silvey (1966) referred generally to such a function as a "coefficient of divergence", and showed that many existing functions could be expressed as f-divergences, referring to Jeffreys' function as "Jeffreys' measure of divergence" (today "Jeffreys divergence"), and Kullback–Leibler's asymmetric function (in each direction) as "Kullback's and Leibler's measures of discriminatory information" (today "Kullback–Leibler divergence").
The information geometry definition of divergence (the subject of this article) was initially referred to by alternative terms, including "quasi-distance" and "contrast function", though "divergence" was used early on for the α-divergence, and has become standard for the general class.
The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. For example, the term "Bregman distance" is still found, but "Bregman divergence" is now preferred.
Notationally, Kullback & Leibler (1951) denoted their asymmetric function as I(1:2), while some authors denote divergences with a lowercase 'd', as in d(p, q).
See also
Statistical distance
Notes
References
Bibliography
Kullback, Solomon (1959), Information Theory and Statistics, John Wiley & Sons. Republished by Dover Publications in 1968; reprinted in 1978.
Statistical distance
F-divergences | Divergence (statistics) | Physics | 1,918 |
1,252,448 | https://en.wikipedia.org/wiki/Web%20engineering | The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place some unique demands on their usability, performance, security, and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been developments towards addressing these considerations.
Web engineering focuses on the methodologies, techniques, and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information systems, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, data engineering, information science, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web Engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.
As a discipline
Proponents of Web engineering supported the establishment of Web engineering as a discipline at an early stage of the Web. Major arguments for Web engineering as a new discipline are:
Web-based Information Systems (WIS) development process is different and unique.
Web engineering is multi-disciplinary; no single discipline (such as software engineering) can provide a complete theory basis, body of knowledge and practices to guide WIS development.
Issues of evolution and lifecycle management when compared to more 'traditional' applications.
Web-based information systems and applications are pervasive and non-trivial. The Web as a platform will continue to grow and is worth treating specifically.
However, it has been controversial, especially for people in other traditional disciplines such as software engineering, to recognize Web engineering as a new field. The issue is how different and independent Web engineering is, compared with other disciplines.
Main topics of Web engineering include, but are not limited to, the following areas:
Modeling disciplines
Business Processes for Applications on the Web
Process Modelling of Web applications
Requirements Engineering for Web applications
B2B applications
Design disciplines, tools, and methods
UML and the Web
Conceptual Modeling of Web Applications (aka. Web modeling)
Prototyping Methods and Tools
Web design methods
CASE Tools for Web Applications
Web Interface Design
Data Models for Web Information Systems
Implementation disciplines
Integrated Web Application Development Environments
Code Generation for Web Applications
Software Factories for/on the Web
Web 2.0, AJAX, E4X, ASP.NET, PHP and Other New Developments
Web Services Development and Deployment
Testing disciplines
Testing and Evaluation of Web systems and Applications.
Testing Automation, Methods, and Tools.
Applications categories disciplines
Semantic Web applications
Document centric Web sites
Transactional Web applications
Interactive Web applications
Workflow-based Web applications
Collaborative Web applications
Portal-oriented Web applications
Ubiquitous and Mobile Web Applications
Device Independent Web Delivery
Localization and Internationalization of Web Applications
Personalization of Web Applications
Attributes
Web quality
Web Metrics, Cost Estimation, and Measurement
Personalisation and Adaptation of Web applications
Web Quality
Usability of Web Applications
Web accessibility
Performance of Web-based applications
Content-related
Web Content Management
Content Management System (CMS)
Multimedia Authoring Tools and Software
Authoring of adaptive hypermedia
Education
Master of Science: Web Engineering as a branch of study within the MSc program Web Sciences at the Johannes Kepler University Linz, Austria
Diploma in Web Engineering: Web Engineering as a study program at the International Webmasters College (iWMC), Germany
See also
DevOps
Web developer
Web modeling
References
Sources
Robert L. Glass, "Who's Right in the Web Development Debate?" Cutter IT Journal, July 2001, Vol. 14, No.7, pp 6–0.
S. Ceri, P. Fraternali, A. Bongio, M. Brambilla, S. Comai, M. Matera. "Designing Data-Intensive Web Applications". Morgan Kaufmann Publisher, Dec 2002,
Web engineering resources
Organizations
International Society for Web Engineering e.V.: http://www.iswe-ev.de/
Web Engineering Community: http://www.webengineering.org
WISE Society: http://www.wisesociety.org/
ACM SIGWEB: http://www.acm.org/sigweb
World Wide Web Consortium: http://www.w3.org
Books
"Engineering Web Applications", by Sven Casteleyn, Florian Daniel, Peter Dolog and Maristella Matera, Springer, 2009,
"Web Engineering: Modelling and Implementing Web Applications", edited by Gustavo Rossi, Oscar Pastor, Daniel Schwabe and Luis Olsina, Springer Verlag HCIS, 2007,
"Cost Estimation Techniques for Web Projects", Emilia Mendes, IGI Publishing,
"Web Engineering - The Discipline of Systematic Development of Web Applications", edited by Gerti Kappel, Birgit Pröll, Siegfried Reich, and Werner Retschitzegger, John Wiley & Sons, 2006
"Web Engineering", edited by Emilia Mendes and Nile Mosley, Springer-Verlag, 2005
"Web Engineering: Principles and Techniques", edited by Woojong Suh, Idea Group Publishing, 2005
"Form-Oriented Analysis -- A New Methodology to Model Form-Based Applications", by Dirk Draheim, Gerald Weber, Springer, 2005
"Building Web Applications with UML" (2nd edition), by Jim Conallen, Pearson Education, 2003
"Information Architecture for the World Wide Web" (2nd edition), by Peter Morville and Louis Rosenfeld, O'Reilly, 2002
"Web Site Engineering: Beyond Web Page Design", by Thomas A. Powell, David L. Jones and Dominique C. Cutts, Prentice Hall, 1998
"Designing Data-Intensive Web Applications", by S. Ceri, P. Fraternali, A. Bongio, M. Brambilla, S. Comai, M. Matera. Morgan Kaufmann Publisher, Dec 2002,
Conferences
World Wide Web Conference (by IW3C2, since 1994): http://www.iw3c2.org
International Conference on Web Engineering (ICWE) (since 2000)
2018: http://icwe2018.webengineering.org/ (Caceres, Spain)
2017: http://icwe2017.webengineering.org/ (Rome, Italy)
2016: http://icwe2016.webengineering.org/ (Lugano, Switzerland)
2007: http://www.icwe2007.org/
2006: http://www.icwe2006.org
2005: http://www.icwe2005.org
2004: http://www.icwe2004.org
ICWE Conference Proceedings
ICWE2007: LNCS 4607 https://www.springer.com/computer/database+management+&+information+retrieval/book/978-3-540-73596-0
ICWE2005: LNCS 3579 https://www.springer.com/east/home/generic/search/results?SGWID=5-40109-22-58872076-0
ICWE2004: LNCS 3140 https://www.springer.com/east/home/generic/search/results?SGWID=5-40109-22-32445543-0
ICWE2003: LNCS 2722 https://www.springer.com/east/home/generic/search/results?SGWID=5-40109-22-3092664-0
Web Information Systems Engineering Conference (by WISE Society, since 2000): http://www.wisesociety.org/
International Conference on Web Information Systems and Technologies (Webist) (since 2005): http://www.webist.org/
International Workshop on Web Site Evolution (WSE): http://www.websiteevolution.org/
International Conference on Software Engineering: http://www.icse-conferences.org/
Book chapters and articles
Pressman, R.S., 'Applying Web Engineering', Part 3, Chapters 16–20, in Software Engineering: A Practitioner's Perspective, Sixth Edition, McGraw-Hill, New York, 2004. http://www.rspa.com/'
Journals
Journal of Web Engineering: http://www.rintonpress.com/journals/jwe/
International Journal of Web Engineering and Technology: http://www.inderscience.com/browse/index.php?journalID=48
ACM Transactions on Internet Technology: http://toit.acm.org/
World Wide Web (Springer): https://link.springer.com/journal/11280
Web coding journal: http://www.web-code.org/
Special issues
Web Engineering, IEEE MultiMedia, Jan.–Mar. 2001 (Part 1) and April–June 2001 (Part 2). http://csdl2.computer.org/persagen/DLPublication.jsp?pubtype=m&acronym=mu
Usability Engineering, IEEE Software, January–February 2001.
Web Engineering, Cutter IT Journal, 14(7), July 2001.*
Testing E-business Applications, Cutter IT Journal, September 2001.
Engineering Internet Software, IEEE Software, March–April 2002.
Usability and the Web, IEEE Internet Computing, March–April 2002.
Citations
Web development | Web engineering | Engineering | 2,101 |
11,455,576 | https://en.wikipedia.org/wiki/Gibberella%20tricincta | Gibberella tricincta is a fungal plant pathogen. Gibberella tricincta produces the antifungal alkaloid Fungerin.
See also
Gibberellic acid
References
External links
Index Fungorum
USDA ARS Fungal Database
tricincta
Fungal plant pathogens and diseases
Fungi described in 1838
Fungus species | Gibberella tricincta | Biology | 67 |
42,681,567 | https://en.wikipedia.org/wiki/Tolypocladium%20ophioglossoides | Tolypocladium ophioglossoides, also known by two of its better known synonyms Elaphocordyceps ophioglossoides and Cordyceps ophioglossoides and commonly known as the goldenthread cordyceps, is a species of fungus in the family Ophiocordycipitaceae. It is parasitic on fruit bodies of the truffle-like Elaphomyces. The species is considered inedible, but is valued in traditional Chinese medicine.
Taxonomy
This species was first described in 1785 as Sphaeria ophioglossoides by German naturalist Jakob Friedrich Ehrhart.
The specific epithet ophioglossoides, derived from Ancient Greek, means "like a snake's tongue".
Description
T. ophioglossoides falls under the morphological category of earth tongue fungi. Its sporocarps are long, clavate and simple or rarely branched. Rhizomorphs attach the fruiting body to its host.
Similar species
It is similar to species within the genus including T. capitatum. Other earth tongues typically lack distinctive bumps.
Distribution and habitat
Its geographical distribution is throughout the Northern Hemisphere. It fruits in late summer and fall, often under oak or pine trees because Elaphomyces, its host, prefers those tree species.
Uses
The species is considered inedible.
Medicinal
In traditional Chinese medicine, T. ophioglossoides is used as an herbal remedy of hot temperature (sharing phylogenetic branch, genetic material and habitat with other species of that classification) for relieving postmenopausal syndrome in women.
The mycelium of T. ophioglossoides may protect humans from Alzheimer's disease. Production of intracellular polysaccharides in T. ophioglossoides may explain its medicinal antioxidant properties, used to fight menopause symptoms and neurodegenerative disease.
Model organism
T. ophioglossoides has also been used as a model organism to understand genetic mechanisms that drive transitions from parasitism on insects to truffles. In the lab, secondary metabolite core genes are upregulated when T. ophioglossoides is grown on insect cuticles, but downregulated when grown on species of the genus Elaphomyces.
Bioactive compounds
Because of its beneficial medicinal properties, scientists have begun to conduct research on the genes of T. ophioglossoides to understand secondary metabolite synthesis. T. ophioglossoides produces most notably peptaibiotics and balanol.
T. ophioglossoides produces peptaibiotics via nonribosomal peptide synthetases. Peptaibiotics have antibiotic and antifungal properties.
Balanol is a protein kinase inhibitor which inhibits cancer cells from growing in humans and affects other human disease states, including central nervous system diseases, cardiovascular diseases, diabetes, asthma and HIV. T. ophioglossoides has been cultured with genetic modification to produce balanol at higher concentrations.
A novel nontoxic form of arsenic called Arsenocholine-O-sulfate has been found within the body of T. ophioglossoides in significant amounts. The functionality of Arsenocholine-O-Sulfate in T. ophioglossoides is unknown. It is unclear whether T. ophioglossoides takes up Arsenocholine-O-Sulfate as a byproduct of uptaking choline-O-sulfate, a compound used as for sulfate storage and as an osmolyte, whether it takes up AC-O-Sulfate for a biological function, or whether it synthesizes Arsenocholine-O-Sulfate internally.
References
External links
Fungi described in 1787
Fungi of North America
Inedible fungi
Ophiocordycipitaceae
Taxa named by Jakob Friedrich Ehrhart
Fungus species | Tolypocladium ophioglossoides | Biology | 823 |
41,348,442 | https://en.wikipedia.org/wiki/Windows%20Hardware%20Error%20Architecture | Windows Hardware Error Architecture (WHEA) is an operating system hardware error handling mechanism introduced with Windows Vista SP1 and Windows Server 2008 as a successor to Machine Check Architecture (MCA) on previous versions of Windows. The architecture consists of several software components that interact with the hardware and firmware of a given platform to handle and notify regarding hardware error conditions. Collectively, these components provide: a generic means of discovering errors, a common error report format for those errors, a way of preserving error records, and an error event model based upon Event Tracing for Windows (ETW).
WHEA "builds on the PCI Express Advanced Reporting to provide more detailed information about system errors and a common reporting structure."
WHEA allows third-party software to interact with the operating system and react to certain hardware events. For example, when a new CPU is added to a running system—a Windows Server feature known as Dynamic Hardware Partitioning—the hardware error component stack is notified that a new processor was installed.
In contrast, Linux supports the ACPI Platform Error Interface (APEI) which is introduced in ACPI 5.0.
See also
Machine-check exception (MCE)
Reliability, availability and serviceability (RAS)
RAMS (reliability, availability, maintainability and safety)
High availability (HA)
Blue screen of death
References
Windows components
Windows Vista
Windows Server 2008
Computer errors | Windows Hardware Error Architecture | Technology | 278 |
2,025,438 | https://en.wikipedia.org/wiki/Climate%20ensemble | A climate ensemble is a collection of climate model runs whose members differ slightly from one another, for example in model formulation or in initial conditions.
The ensemble average is expected to perform better than individual model runs.
There are at least five different types, to be described below.
Aims
The aim of running an ensemble is usually in order to be able to deal with uncertainties in the system. An ultimate aim may be to produce policy relevant information such as a probability distribution function of different outcomes. This is proving to be very difficult due to a number of problems. These include:
The ensemble has to be wide-ranging to ensure it covers the whole range where the climate models may be good.
Measuring what is a good model is difficult. This may need to consider not only errors in the observation but also in the model.
Any prior assumptions about distribution can influence the probability distribution function produced.
Multi-model ensemble
Multi-model ensembles (MMEs) are widely used in IPCC assessments, and a comprehensive collection of climate models can be accessed in the Coupled Model Intercomparison Project. Members of a multi-model ensemble are developed by different organisations involved in climate change research and can differ substantially in their software design and programming approach, their handling of spatial discretisation and exact formulation of physical, chemical and biological processes. The benefits of using a multi-model ensemble are seen in "the consistently better performance of the multi-model when considering all aspects of the predictions".
Perturbed physics ensemble
Perturbed physics ensembles (PPEs) form the main scientific focus of the Climateprediction.net project. Modern climate models do a good job of simulating many large-scale features of present-day climate. However, these models contain large numbers of adjustable parameters which are known, individually, to have a significant impact on simulated climate. While many of these are well constrained by observations, there are many which are subject to considerable uncertainty. We do not know the extent to which different choices of parameter settings or schemes may provide equally realistic simulations of 20th century climate but different forecasts for the 21st century. The most thorough way to investigate this uncertainty is to run a massive ensemble experiment in which each relevant parameter combination is investigated. A more general approach is coined "perturbed parameter ensemble" (also abbreviated as PPE), as apart from physical parameters other parameters, relating to the carbon cycle, atmospheric chemistry, land use etc., can be perturbed.
Initial condition ensemble
Initial condition ensembles involve the same model in terms of the same atmospheric physics parameters and forcings, but run from a variety of different starting states. Because the climate system is chaotic, tiny changes in things such as temperatures, winds, and humidity in one place can lead to very different paths for the system as a whole. We can work around this by setting off several runs started with slightly different starting conditions, and then looking at the evolution of the group as a whole. This is similar to what is done in weather forecasting.
Having an initial condition ensemble can help to identify natural variability in the system and deal with it.
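As a toy illustration of an initial condition ensemble (an assumed example using the chaotic Lorenz-63 system, which is not a climate model), the Python sketch below perturbs the starting state of each ensemble member by a tiny amount and uses the spread of the members to estimate internal variability:

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz-63 equations
        x, y, z = state
        return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    rng = np.random.default_rng(0)
    n_members = 20
    # Every member starts from almost the same state; the perturbations are tiny
    ensemble = np.tile([1.0, 1.0, 1.0], (n_members, 1)) + 1e-6 * rng.standard_normal((n_members, 3))

    for _ in range(3000):
        ensemble = np.array([lorenz_step(member) for member in ensemble])

    print("ensemble mean of x:", ensemble[:, 0].mean())
    print("ensemble spread (std of x):", ensemble[:, 0].std())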
Forcing ensemble
A model can be subjected to different forcings. These may correspond with different scenarios such as those described in the Special Report on Emissions Scenarios and more recently in the Representative Concentration Pathway.
Grand ensemble
A grand ensemble is an ensemble of ensembles; there have to be at least two nested ensembles, for example an ensemble of different models in which each model is itself run from several initial conditions.
Weather
Weather forecasting uses initial condition ensembles.
Applications
Climate ensembles were used to project future changes in the occurrence of selected pests of crops.
Analysis of climate ensembles
A variety of statistical methods can be used to analyze climate ensembles, such as Principal component analysis, analysis of variance (ANOVA) and Directional component analysis.
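A hedged sketch of such an analysis (the ensemble data here are synthetic, and the standard scikit-learn PCA interface is assumed to be available) applies principal component analysis across ensemble members to extract the dominant patterns of spread:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    members, gridpoints = 30, 200
    pattern = np.sin(np.linspace(0.0, 3.0 * np.pi, gridpoints))     # one shared spatial pattern
    weights = rng.standard_normal(members)[:, None]                 # member-to-member amplitude
    ensemble_fields = weights * pattern + 0.1 * rng.standard_normal((members, gridpoints))

    pca = PCA(n_components=3)
    scores = pca.fit_transform(ensemble_fields)   # loading of each member on each component
    print(pca.explained_variance_ratio_)          # the first component captures most of the spread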
See also
Ensemble forecasting
Ensemble (fluid mechanics)
National Climate Projections
Sensitivity analysis
Uncertainty analysis
References
Climate and weather statistics
Climate modeling | Climate ensemble | Physics | 758 |
49,278,852 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20TabPro%20S | The Samsung Galaxy TabPro S is a 12-inch Windows 10-based 2-in-1 PC produced and marketed by Samsung Electronics. It came in a standard version and a Gold version. The TabPro S marked the first device in the Samsung Galaxy series to run Microsoft Windows, making it a departure from the traditionally Android-powered Galaxy lineup and marked the end of the Samsung Ativ brand. Unveiled at Consumer Electronics Show 2016, alongside Samsung Notebook 9, the TabPro S was released on March 18, 2016.
Features
The tablet has a first-party keyboard attachment included. It is a folio keyboard, which means it can be set in two different positions depending on how the stand is set up. When closed, it has a leather-like texture to protect from normal wear and tear when traveling.
The USB Type-C multi-port adapter provides a USB-A 3.1 port, an HDMI port and a USB-C port. The tablet has a 128 GB solid-state drive (256 GB in the Gold version) and 4 GB of RAM (8 GB in the Gold version).
The Galaxy TabPro Pen (not to be confused with the S Pen, the traditional Samsung stylus) is a digital stylus that works as an input device for this tablet. It features 1024 pressure levels and can be paired seamlessly via Bluetooth. It uses a rechargeable battery instead of a disposable AAAA battery, which allows the stylus to be recharged via a Micro USB 2.0 port; the initial charge lasts up to 30 days.
Samsung Flow is the fingerprint access application. It can also be paired with a phone via Bluetooth and NFC tag. With this, users can unlock the tablet with their phone's fingerprint sensor.
Although the Samsung Galaxy Book was unveiled at MWC 2017, and features several improvements over the TabPro S model, it is not considered a direct successor.
See also
Samsung Galaxy Tab Pro
Samsung Galaxy Book
References
External links
Samsung Galaxy Tab series | Samsung Galaxy TabPro S | Technology | 415 |
2,899,529 | https://en.wikipedia.org/wiki/Psi%20Aurigae | The Bayer designation Psi Aurigae (ψ Aur, ψ Aurigae) is shared by nine star systems in the constellation Auriga and one in Lynx:
ψ1 Aurigae = 46 Aurigae
ψ2 Aurigae = 50 Aurigae
ψ3 Aurigae = 52 Aurigae
ψ4 Aurigae = 55 Aurigae
ψ5 Aurigae = 56 Aurigae
ψ6 Aurigae = 57 Aurigae
ψ7 Aurigae = 58 Aurigae
ψ8 Aurigae = 60/61 Aurigae
ψ9 Aurigae
ψ10 Aurigae = 16 Lyncis
The Psi Aurigae stars mostly belonged to the now obsolete constellation Telescopium Herschelii, which is now part of Auriga.
Other names of the Psi Aurigae stars include:
Βουλήγες in Greek, meaning goads
Dolones in Latin
Almost all of them were members of the asterism 座旗 (Zuò Qí), the Seat Flags, in the Turtle Beak mansion.
References
Auriga
Aurigae, Psi | Psi Aurigae | Astronomy | 249 |
62,793,986 | https://en.wikipedia.org/wiki/Non-fullerene%20acceptor | Non-fullerene acceptors (NFAs) are types of acceptors used in organic solar cells (OSCs). The name refers to fullerenes, another class of acceptor molecules that long served as the main acceptor material for bulk heterojunction organic solar cells. Non-fullerene acceptors are thus defined by not belonging to that class of acceptors.
Early research on non-fullerene acceptors did not show promising results when compared with fullerene-based organic solar cells. However, recent developments in this field have opened a series of new opportunities for NFA-based OSCs. The most important breakthrough was the development of small molecule acceptors (SMAs). These acceptors are showing promise as better alternatives to fullerene acceptors because of their properties, above all their tunability: SMAs can be modified to a much greater extent than fullerene acceptors. There are, however, still many improvements to be made to the design of SMAs before they become profitable to use in OSCs.
Recent research on designing NFA-OSCs showed an efficiency of 15% with a so-called tandem solar cell which made use of non-fullerene acceptors as well as fullerene acceptors. With a good chance that researchers will be able to boost this percentage up to 18%, it is clear that NFA-OSCs have great potential to become a commercially viable photovoltaic technology.
NFA Potential
Advantages
Fullerene acceptors (FAs) have been used extensively in OSCs. This is rationalized by several characteristics of fullerenes. Their three-dimensional character makes them suitable materials for bulk heterojunction structures. Additionally, their electronic configuration (delocalized LUMOs) allows for efficient percolation and high electron mobility. Another consequence is that they are easily coupled to compatible donor polymers.
However, fullerene acceptor organic solar cells (FA-OSCs) encounter a limited efficiency. The energy levels in fullerene compounds are relatively constant and difficult to alter. Moreover, they exhibit weak absorption in the visible spectrum and the near-infrared spectrum and low thermal and photochemical stability. The acceptors need to be purified extensively, adding to the economical and temporal disadvantages of using FAs.
The organic NFAs, in the form of small molecular acceptors (SMAs), can be used to overcome these fullerene deficiencies. They have more structural degrees of freedom, allowing higher electron affinity tunability; they absorb incidental visible-NIR radiation more strongly; they are more stable; they are compatible with donor polymers and they are (in general) easier to synthesize. NF-OSCs with power conversion efficiencies (PCE) of over 13% have been reported, reaching a higher value than its FA-based counterpart.
Disadvantages
One of the downsides of using SMAs is the fact that, under atmospheric conditions, they tend to engage in disordered (anisotropic) states as a result of their planar structures. They are often planar as aromaticity is required for sufficient electron mobility. The lack of order may diminish electron transport and effective extraction routes that lead to induced current. Moreover, the corresponding lack of orientation affects donor-acceptor exciton formation. This makes them less compatible for bulk heterojunction blends than FAs.
Another challenge for research on SMA usage is the vast range of possible donor-acceptor pairs that scientists are challenged to explore.
Physics
The mechanism of current induction in organic solar cells involves a charge transfer. After electromagnetic absorption and exciton formation in the electron donor polymer, the excited electron is moved towards the acceptor conduction band (LUMO) as a result of its lower energy value than the donor LUMO. This process is called a charge separation, and the corresponding energy value satisfies ΔE_CS = E_LUMO(D) − E_LUMO(A), where CS denotes charge separation, A denotes the acceptor and D denotes the donor molecule. Along with the Coulombic potential that needs to be surpassed, the maximum energy obtained from the process is defined as the charge transfer energy, E_CT. The difference between the optical excitation energy (the optical band gap energy, E_opt) and the charge transfer energy is the driving force of the system.
An advantage of NF-OSCs over current fullerene-based OSCs is that the SMAs used are relatively compatible with donors, as a result of their electronic affinity tunability. Their compatibility originates from their LUMO-energy value similarity. The driving force is minimized to solely Coulombic contributions (<0.3 eV) with negligible charge separation loss. This results in low potential spillage, V_loss = E_opt/q − V_OC, which depends explicitly on the value of the driving force, along with radiative and non-radiative losses during the current induction process. Thus, for NF-OSCs, the energy loss qV_loss, with q the electron's charge, is minimized, leading to a higher useful energy output. The result is a high open-circuit voltage of the solar cell compared to fullerene counterparts, with reports of values as high as 1.1 V.
However, the diminished charge separation energy cost negatively influences the tendency of excited electrons in the donor conduction band to transport to the acceptor LUMO as it is less preferred energetically. This gives rise to the fact that electrons induced in the current are more energetic, but fewer electrons are induced. This means that the short-circuit current density and the fill factor (FF) are decreased.
In terms of the PCE, the higher open-circuit voltage is compensated by the lower short-circuit current density and fill factor. Researchers showed that ultrafast charge separation is possible with negligible driving force. In fact, the electrical external quantum efficiency is highest for donor-acceptor blends with lowest driving force.
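The energy bookkeeping described above can be made concrete with a short Python sketch; all numbers below are assumed for illustration and are not measurements from the source:

    # Illustrative (assumed) values for a donor-acceptor blend in an organic solar cell
    E_opt = 1.60          # optical band gap of the donor, eV
    E_CT = 1.45           # charge transfer energy of the blend, eV
    V_oc = 1.10           # open-circuit voltage, V
    J_sc = 15.0e-3        # short-circuit current density, A/cm^2
    FF = 0.65             # fill factor
    P_in = 100.0e-3       # incident solar power density (AM1.5), W/cm^2

    driving_force = E_opt - E_CT    # eV; small values favour a high open-circuit voltage
    voltage_loss = E_opt - V_oc     # eV per electron, i.e. E_opt minus q*V_oc
    pce = V_oc * J_sc * FF / P_in   # power conversion efficiency

    print(driving_force, voltage_loss, round(pce * 100, 1))  # -> 0.15 eV, 0.50 eV, ~10.7 %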
Types
One of the main advantages of non-fullerene acceptors is their ability to be tuned and customized by chemical modification, in contrast to fullerene acceptors. This flexibility also creates a bottleneck, because of the huge number of candidate molecules that could be applied as an SMA. A wide variety of SMAs have been tested as acceptors, but two classes of SMAs have proven to give the best results concerning power conversion efficiency (PCE) and have made the greatest contribution to the recent development of NFA-OSCs.
Rylene diimides
Rylene diimides are, as said, one of the two main subclasses which form a basis for acceptor molecules in modern NFA-OSCs. Rylene diimides are industrial dyes and can be divided into, once again, two subclasses: perylene diimides (PDIs) and naphthalene diimides (NDIs). Rylene diimides consist of a planar rylene framework, and numerous constructions can be made by attaching certain subgroups and by using more PDI molecules in one acceptor.
Rylene diimides are considered good acceptors because of their favourable properties. Rylene diimides usually have high electron mobility values due to intermolecular π-stacking. These values are comparable to those of fullerene acceptors. Furthermore, rylene diimides also absorb strongly in the visible region, have high thermal and oxidative stability, and their electron affinities can be tuned to a great extent by adding side groups and adjusting the 3D structure, which leads to a significantly higher open-circuit voltage (V_OC).
Challenges that must be faced in designing and improving rylene diimide-based OSCs mainly concern the synthesis of PDIs, because the planar structure of the molecule makes it tend to aggregate into a crystal structure. This greatly enlarges the domain size in the bulk heterojunction beyond the preferred 20 nm, which leads to a lower charge transport ability. Researchers have tried to reduce this aggregation by three structural adaptations, all focused on enhancing the mobility of rylene diimide molecules. The first approach is to link two PDI molecules with a single carbon bond, to form a so-called twisted dimer. The second synthesis forms highly twisted 3D structures of PDI molecules, and the third approach forms a fused-ring structure. These derivatives are examples of acceptor molecules which were tested and assessed in OSCs for their performance and PCE. Future research will focus on developing better PDIs resulting in higher PCE values for the OSC.
Fused-ring electron acceptors
Fused-ring electron acceptors (FREAs) are completely different from rylene diimides. They consist of two electron-withdrawing groups on either side of a donor group, which is a π-bridge of fused aromatic rings. FREAs have electron mobilities similar to those of fullerene acceptors and have a wide absorption range. Electron affinities can be tuned by substituting the side chains, the core and the end groups. Current research focuses on designing the best FREA by varying all of these groups. Another development issue is the expensive synthesis of these molecules; finding the most efficient synthetic route is therefore also an important subject concerning these acceptors.
Future development
In current research, rylene diimides (for small band-gap energy donors) and FREAs (for large band-gap energy donors) have shown the most potential for becoming commercially viable solar cell materials for bulk heterojunction blend cells. Wide band gap donors are known to enhance voltage and diminish current density, but in combination with FREAs both values can be relatively high.
There are still a lot of improvements to be made before an NFA-OSC can be commercially profitable. First of all, the PCE should be increased to at least 15%, since this is the minimal value for commercial application. As said, PCEs have already exceeded 13%, so recent development is on the right track. PCEs can be increased by designing even better NFAs; for instance, on the level of electron mobility the best NFAs still lag behind the best FAs. Improvements can also be made in the following aspects: better donor matching, tandem constructions, BHJ morphology and domain purity of the donor and acceptor.
Besides these theoretical research aspects, implementation in a full-scale commercial solar cell also brings a lot of challenges, such as easy and sustainable device fabrication methods and long-term stability of the organic compounds. Studies also show that with upscaling, the PCE in general drops. In all of these areas, NFA-OSCs show great potential, but it will take a lot of research before a solid non-fullerene acceptor organic solar cell can compete with inorganic solar cells.
See also
Rylene dye
References
Organic solar cells | Non-fullerene acceptor | Chemistry,Materials_science | 2,257 |
36,443,533 | https://en.wikipedia.org/wiki/Chlorociboria%20macrospora | Chlorociboria macrospora is a species of fungus in the family Chlorociboriaceae. It is found in New Zealand.
References
External links
Helotiaceae
Fungi described in 2005
Fungi of New Zealand
Fungus species | Chlorociboria macrospora | Biology | 48 |
11,422,403 | https://en.wikipedia.org/wiki/VA%20RNA | The VA (viral associated) RNA is a type of non-coding RNA found in adenovirus. It plays a role in regulating translation. There are two copies of this RNA called VAI or VA RNAI and VAII or VA RNAII. These two VA RNA genes are distinct genes in the adenovirus genome. VA RNAI is the major species with VA RNAII expressed at a lower level. Neither transcript is polyadenylated and both are transcribed by PolIII.
Function
VAI stimulates the translation of both early and late viral genes including E3 and hexon. VAII does not stimulate translation. Transient transfection assays have shown that VAI-RNA increases the stability of ribosome-bound transcripts.
VAI RNA is processed in the cell to create 22 nucleotide long RNAs that can act as siRNA or miRNA. VAI RNA functions as a decoy RNA for the double stranded RNA activated protein kinase R which would otherwise phosphorylate eukaryotic initiation factor 2.
Structure
VA RNA is composed of two stem-loops separated by a central region essential for function.
References
External links
Non-coding RNA | VA RNA | Chemistry | 240 |
56,185,694 | https://en.wikipedia.org/wiki/Aspergillus%20spinosus | Aspergillus spinosus is a species of fungus in the genus Aspergillus. Aspergillus spinosus produces aszonalenins, 2-pyrovoylaminobenzamide, fumigachlorin and pseurotins.
Growth and morphology
A. spinosus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
Further reading
spinosus
Fungi described in 1989
Fungus species | Aspergillus spinosus | Biology | 127 |
24,723,843 | https://en.wikipedia.org/wiki/Preventable%20fraction%20among%20the%20unexposed | In epidemiology, the preventable fraction among the unexposed (PFu) is the proportion of incidents in the unexposed group that could be prevented by exposure. It is calculated as PFu = (Iu − Ie) / Iu = 1 − RR, where Ie is the incidence in the exposed group, Iu is the incidence in the unexposed group, and RR = Ie / Iu is the relative risk. It is a synonym of the relative risk reduction.
It is used when an exposure reduces the risk, as opposed to increasing it, in which case its symmetrical notion is attributable fraction among the exposed.
Numerical example
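A worked example with assumed incidences (illustrative only, not from the source) shows the calculation in a short Python sketch:

    # Assumed illustrative numbers: a protective exposure, e.g. a vaccine,
    # with incidence 0.02 among the exposed and 0.08 among the unexposed.
    incidence_exposed = 0.02    # Ie
    incidence_unexposed = 0.08  # Iu

    relative_risk = incidence_exposed / incidence_unexposed                        # RR = 0.25
    pf_unexposed = (incidence_unexposed - incidence_exposed) / incidence_unexposed # (Iu - Ie) / Iu
    print(relative_risk, pf_unexposed, 1 - relative_risk)                          # -> 0.25 0.75 0.75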
See also
Population Impact Measures
Preventable fraction for the population
References
Epidemiology
Medical statistics | Preventable fraction among the unexposed | Environmental_science | 131 |
60,440,104 | https://en.wikipedia.org/wiki/Toilet%20History%20Museum | The Toilet History Museum is a private museum in Kyiv, Ukraine, that contains the largest collection of toilet-related souvenirs and items in the world, including historic chamber pots, squatting pans, and urinals. The museum was founded in 2006 by a Ukrainian couple who worked in the plumbing business and is currently housed in a building within the Kyiv Fortress. In 2016, the Guinness World Records recognized it as "the largest collection of souvenir toilet bowls in the world".
Background
The museum was created by Nikolay and Marina Bogdanenko, a Ukrainian couple who had previously worked in the plumbing supply business and wanted to teach people about the enduring importance of hygiene. It opened in 2006 with items that the Bogdanenkos had collected from around the world, many while on vacation. In 2013 Bogdanenko published a 521-page book on the history of hygiene and toiletry, World History (of Toilets). Today, the museum draws an estimated 1,000 visitors per month and is housed in the Kyiv Fortress, a building which dates to the 19th century.
Organization
The museum covers the toilet from prehistoric times to the present day and related topics, including the dressing room and clothes worn to clean toilets. Exhibits are arranged sequentially, dividing history into primitive society, antiquity, the Middle Ages, Renaissance, 17th–20th century, modernity, and art water closets. The museum has replicas of some of the first toilet seats and explains the invention of toilet paper more than 2,000 years ago in China. A display shows visitors waste disposal methods from medieval castles and why medieval toilets were called wardrobes. The flushing toilet, first sketched by Leonardo da Vinci, is brought to life in a wood model. A separate room contains a movie theater that shows videos about toilets in alternating languages. There are more than 580 items in the permanent collection, which earned it recognition in the Guinness World Records in 2016.
See also
Haewoojae
Sulabh International Museum of Toilets
External links
Official website
References
Museums in Kyiv
2006 establishments in Ukraine
Museums established in 2006
History museums in Ukraine
World record holders
Toilets | Toilet History Museum | Biology | 427 |
2,616,711 | https://en.wikipedia.org/wiki/Nielsen%E2%80%93Olesen%20string | In theoretical physics, Nielsen–Olesen string is a one-dimensional object or equivalently a classical solution of certain equations of motion. The solution does not depend on the direction along the string; the dependence on the other two, transverse dimensions is identical as in the case of a Nielsen–Olesen vortex.
References
Quantum field theory | Nielsen–Olesen string | Physics | 68 |
866,358 | https://en.wikipedia.org/wiki/Edmund%20Scientific%20Corporation | Edmund Scientific Corporation, based in Barrington, New Jersey, was founded in 1942 as a retailer of surplus optical parts like lenses. It later branched out into complete systems like telescopes and microscopes, and in the 1960s, a wide variety of science toys and kits. Through the 1970s and 80s they were best known for their mail order sales and associated catalogs, although they also maintained a retail presence at their factory store.
In 1984, the company split into Edmund Scientific and Edmund Industrial Optics, the latter taking over their optical manufacturing. Later known simply as Edmund Optics, the commercial side of the company continued to expand and now has a multinational presence. In 2001, the two companies were purchased by Boreal Science, which was in turn purchased by VWR International. Many of the science toys and kits are currently offered by the online retailer Scientifics Direct.
Among the company's best-known products were the Astroscan reflector telescope and their inexpensive bimetallic jumping disks.
History
Origins
In 1942, amateur photographer Norman W. Edmund (1916–2012) found it hard to find lenses he needed for his hobby. He found that the military was happy to sell off less-than-perfect optics for next to nothing and began using these. Buying in bulk, he began to sell his own surplus through advertisements in photography magazines. It was so successful he founded the Edmund Salvage Corporation in 1942. Working from a card table in his home, the company soon had so much stock that they had to rent space in more than 30 separate garages.
Post-war
The business continued in the post-war era and owned so much stock that when the Korean War started the military came to him for the optics needed to repair war-era systems. One official told him, "Gee, you have more optics than the Army!" In 1948 they completed a new building and warehouse in Barrington and opened a retail store at the front. Among its displays was a complete periscope from a WWII Japanese submarine. The core of the company in this era remained surplus lenses. These were single-element lenses, shipped in coin envelopes, with the approximate diameter and focal length stenciled on them. Reflecting their salvage and surplus origins, available diameters and focal lengths did not fall into regular progressions.
In addition to optics, the company soon branched out into various kits and plans for optics-related systems like telescopes and microscopes. It soon changed its name to Edmund Scientific and made its name with ads in publications like Scientific American. Its advertisements caught the attention of hobbyists, amateur astronomers, high school students, and cash-strapped researchers. The company also began publishing a series of pamphlets on telescopes in a do-it-yourself fashion that was popular in contemporary magazines like Popular Mechanics. These were later collected into book form in 1967, "All About Telescopes", which contained many plans for telescope systems that became a best seller and was republished repeatedly into the 1980s.
Heyday
Following Sputnik, Edmund was able to capitalize on a growing national interest in science and astronomy. They expanded their business into a full line of telescopes and telescope kits as well as equipment, parts, and supplies for other scientific fields such as physics, optics, chemistry, microscopy, electronics, and meteorology. They continued to grow as a supplier to teachers and schools with demonstration devices and kits which covered most fields of science.
Edmund's catered to the 1960s generation by expanding and highlighting their line of projectors, color wheels, black lights, filters, and other optical devices which could be used by rock bands and in psychedelic light shows. Other items catering to the counterculture were eventually added to the catalog covering the fields of Biofeedback, ESP, Kirlian photography, Pyramid power, and alternative energy.
In 1971, in the Whole Earth Catalog of items "relevant to independent education", Stewart Brand noted: "Edmund is the best source we know of for low-cost scientific gadgetry (including math and optics gear). [In this category,] many of the items we found independently... turned up in the Edmund catalog, so we were obliged to recommend that in this area we've been precluded."
The company became briefly famous in 1973 when Comet Kohoutek approached Earth and the company sold out of telescopes, a fact that made national news. Neil deGrasse Tyson would later comment that "The Edmund Scientific catalog was a geek's paradise. At a time when no one had access to lasers, they had them for sale."
Some sources claim that certain of the original polyhedral dice used in the Dungeons & Dragons role-playing game system were obtained from Edmund Scientific.
Restructuring
Norman W. Edmund retired in 1975 and left the company to his son, Robert. The company continued on as before into the 1980s, but the original business model began to wane. Robert split the company into Edmund Scientifics and Edmund Optics. Edmund Scientifics marketed to consumers and specialized in science-themed toys, vaguely high-tech household gadgets, and "science gifts." Edmund Optics did not have a public showroom like Edmund Scientifics, although the two organizations shared the same building. The large back room of Edmund Scientifics still sold military surplus from World War II and other wars well into the 1980s and into the mid-1990s. Some of the items in the surplus room were from German and other non-American militaries. None of these items were in the mail-order catalogs. They also sold other surplus wares of interest to hobbyists, including specialized motors and other miscellaneous electronics, parts from toys, and other household items.
Acquisition
In 2000 Edmund Scientific was purchased by Science Kit and Boreal Laboratories, a western New York based science supply company. Science Kit and Boreal Laboratories is part of a group of companies that provide science supplies to elementary, middle, and high schools, as well as colleges and universities. This group falls under the unofficial umbrella "VWR Education", and its constituent enterprises are owned by VWR International, a multi-national conglomerate with offices in India, China, Europe, Canada, and the United States. They are no longer affiliated with Edmund Optics Inc.
Beginning in 2000, Edmund Optics offered a variety of experimental grade and stock clearance items via a print catalog and online under a separate business named Anchor Optics, but this operation ceased in 2016, and the current Anchor Optics web site now redirects to a page at Edmund Optics listing clearance items.
In 2001, the Barrington, New Jersey, store closed after Edmund Scientific was acquired by Science Kit and Boreal Laboratories.
As of 2009, online sales made up the bulk of Edmund Scientific's revenues. The company was still selling telescopes (including an updated version of their Astroscan Telescope), microscopes (mostly they have carried the Boreal brand, manufactured for their parent company Science Kit LLC), surplus optics, magnets, and Fresnel lenses. They continued to sell many of their old favorites along with new items such as the Impossiball and hand boilers as well as other science-themed toys, novelty items, gifts, and gadgets.
As of 2017, Edmund Optics continued to offer brand-new stock optics, as well as offering custom and specialized optics to corporations and higher education institutions.
In popular culture
Edmund Scientific has provided items used in television shows such as House, MythBusters, 24, Modern Marvels, and motion pictures such as Star Trek, and the 1975 version of Escape to Witch Mountain. Wah Chang, the artist who designed and built several props in the 1960s for the Star Trek television show, used moiré patterns found in the Edmund Scientific Educator's and Designer's Moiré Kit for the texture used in the Starfleet communicator props.
In the Simpsons episode "Two Bad Neighbors", Bart Simpson releases locusts from a box labeled Edmund Scientific.
See also
Astroscan, a wide-field Newtonian reflector telescope produced by the Edmund Scientific Corporation.
References
Bibliography
Preface to Edmund Scientific Catalog 751 Copyright 1974, Edmund Scientific Co.
External links
www.edmundoptics.com — Edmund Optics professional optics company
www.scientificsonline.com — Edmund Scientifics science supplies and gifts company
Companies based in Camden County, New Jersey
Mail-order retailers
Surplus stores
Telescope manufacturers
Barrington, New Jersey | Edmund Scientific Corporation | Astronomy | 1,686 |
29,486,469 | https://en.wikipedia.org/wiki/Phase%20curve%20%28astronomy%29 | In astronomy, a phase curve describes the brightness of a reflecting body as a function of its phase angle (the arc subtended by the observer and the Sun as measured at the body). The brightness usually refers to the object's absolute magnitude, which, in turn, is its apparent magnitude at a distance of one astronomical unit from the Earth and Sun.
The phase curve is useful for characterizing an object's regolith (soil) and atmosphere. It is also the basis for computing the geometrical albedo and the Bond albedo of the body. In ephemeris generation, the phase curve is used in conjunction with the distances from the object to the Sun and the Earth to calculate the apparent magnitude.
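A simplified Python sketch of this calculation is shown below; the absolute magnitude, distances and the linear phase-curve coefficient are assumed illustrative values rather than official ephemeris constants:

    import math

    def apparent_magnitude(H, r_sun_au, d_earth_au, phase_angle_deg, phase_coeff=0.03):
        # Distance term: 5*log10(r*d), with both distances in astronomical units
        distance_term = 5.0 * math.log10(r_sun_au * d_earth_au)
        # Simple linear phase-curve correction: the body dims as the phase angle grows
        phase_term = phase_coeff * phase_angle_deg
        return H + distance_term + phase_term

    # Example: a Mars-like object near opposition (phase angle ~ 5 degrees)
    print(round(apparent_magnitude(H=-1.5, r_sun_au=1.52, d_earth_au=0.52, phase_angle_deg=5.0), 2))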
Mercury
The phase curve of Mercury is very steep, which is characteristic of a body on which bare regolith (soil) is exposed to view. At phase angles exceeding 90° (crescent phase) the brightness falls off especially sharply. The shape of the phase curve indicates a mean slope on the surface of Mercury of about 16°, which is slightly smoother than that of the Moon. Approaching phase angle 0° (fully illuminated phase) the curve rises to a sharp peak. This surge in brightness is called the opposition effect because for most bodies (though not Mercury) it occurs at astronomical opposition when the body is opposite from the Sun in the sky. The width of the opposition surge for Mercury indicates that both the compaction state of the regolith and the distribution of particle sizes on the planet are similar to those on the Moon.
Early visual observations contributing to the phase curve of Mercury were obtained by G. Muller in the 1800s and by André-Louis Danjon in the mid-twentieth century. W. Irvine and colleagues used photoelectric photometry in the 1960s. Some of these early data were analyzed by G. de Vaucouleurs, summarized by D. Harris and used for predicting apparent magnitudes in the Astronomical Almanac for several decades. Highly accurate new observations covering the widest range of phase angles to date (2 to 170°) were carried out by A. Mallama, D. Wang and R. Howard using the Large Angle and Spectrometric Coronagraph (LASCO) on the Solar and Heliospheric Observatory (SOHO) satellite. They also obtained new CCD observations from the ground. These data are now the major source of the phase curve used in the Astronomical Almanac for predicting apparent magnitudes.
The apparent brightness of Mercury as seen from Earth is greatest at phase angle 0° (superior conjunction with the Sun) when it can reach magnitude −2.6. At phase angles approaching 180° (inferior conjunction) the planet fades to about magnitude +5 with the exact brightness depending on the phase angle at that particular conjunction. This difference of more than 7 magnitudes corresponds to a change of over a thousand times in apparent brightness.
Venus
The relatively flat phase curve of Venus is characteristic of a cloudy planet. In contrast to Mercury where the curve is strongly peaked approaching phase angle zero (full phase) that of Venus is rounded. The wide illumination scattering angle of clouds, as opposed to the narrower scattering of regolith, causes this flattening of the phase curve. Venus exhibits a brightness surge near phase angle 170°, when it is a thin crescent, due to forward scattering of sunlight by droplets of sulfuric acid that are above the planet's cloud tops. Even beyond 170° the brightness does not decline very steeply.
The history of observation and analysis of the phase curve of Venus is similar to that of Mercury. The best set of modern observations and interpretation was reported by A. Mallama, D. Wang and R. Howard. They used the LASCO instrument on SOHO and ground-based, CCD equipment to observe the phase curve from 2 to 179°. As with Mercury, these new data are the major source of the phase curve used in the Astronomical Almanac for predicting apparent magnitudes.
In contrast to Mercury the maximal apparent brightness of Venus as seen from Earth does not occur at phase angle zero. Since the phase curve of Venus is relatively flat while its distance from the Earth can vary greatly, maximum brightness occurs when the planet is a crescent, at phase angle 125°, at which time Venus can be as bright as magnitude −4.9. Near inferior conjunction the planet typically fades to about magnitude −3 although the exact value depends on the phase angle. The typical range in apparent brightness for Venus over the course of one apparition is less than a factor of 10 or merely 1% that of Mercury.
Earth
The phase curve of the Earth has not been determined as accurately as those for Mercury and Venus because its integrated brightness is difficult to measure from the surface. Instead of direct observation, earthshine reflected from the portion of the Moon not lit by the Sun has served as a proxy. A few direct measurements of the Earth's luminosity have been obtained with the EPOXI spacecraft. While they do not cover much of the phase curve they reveal a rotational light curve caused by the transit of dark oceans and bright land masses across the hemisphere. P. Goode and colleagues at Big Bear Solar Observatory have measured the earthshine and T. Livengood of NASA analyzed the EPOXI data.
Earth as seen from Venus near opposition from the Sun would be extremely bright at magnitude −6. To an observer on Mars, outside the Earth's orbit, our planet would appear most luminous near the time of its greatest elongation from the Sun, at about magnitude −1.5.
Mars
Only about half of the Martian phase curve can be observed from Earth because it orbits farther from the Sun than our planet. There is an opposition surge but it is less pronounced than that of Mercury. The rotation of bright and dark surface markings across its disk and variability of its atmospheric state (including its dust storms) superimpose variations on the phase curve. R. Schmude obtained many of the Mars brightness measurements used in a comprehensive phase curve analysis performed by A. Mallama.
Because the orbit of Mars is considerably eccentric, its brightness at opposition can range from magnitude −3.0 to −1.4. The minimum brightness is about magnitude +1.6 when Mars is on the opposite side of the Sun from the Earth. Rotational variations can elevate or suppress the brightness of Mars by 5% and global dust storms can increase its luminosity by 25%.
Giant planets
The outermost planets (Jupiter, Saturn, Uranus, and Neptune) are so distant that only small portions of their phase curves near 0° (full phase) can be evaluated from the Earth. That part of the curve is generally fairly flat, like that of Venus, for these cloudy planets.
The apparent magnitude of Jupiter ranges from −2.9 to −1.4, Saturn from −0.5 to +1.4, Uranus from +5.3 to +6.0, and Neptune from +7.8 to +8.0. Most of these variations are due to distance. However, the magnitude range for Saturn also depends on its ring system as explained below.
The rings of Saturn
The brightness of the Saturn system depends on the orientation of its ring system. The rings contribute more to the overall brightness of the system when they are more inclined to the direction of illumination from the Sun and to the view of the observer. Wide-open rings add about one magnitude to the brightness of the disk alone. The icy particles that compose the rings also produce a strong opposition surge. Hubble Space Telescope and Cassini spacecraft images have been analyzed in an attempt to characterize the ring particles based on their phase curves.
The Moon
The phase curve of the Moon approximately resembles that of Mercury due to the similarities of the surfaces and the lack of an atmosphere on either body. Clementine spacecraft data analyzed by J. Hillier, B. Buratti and K. Hill indicate a lunar opposition surge. The Moon's apparent magnitude at full phase is −12.7 while at quarter phase it is 21 percent as bright.
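The quarter-phase brightness fraction quoted above can be converted into an apparent magnitude with the same magnitude relation. The snippet below is an editorial back-of-the-envelope check; the resulting value of roughly −11 is derived here and is not stated in the article.

```python
import math

full_moon_mag = -12.7     # apparent magnitude at full phase (from the text)
quarter_fraction = 0.21   # quarter phase is 21 percent as bright (from the text)

# A brightness fraction f corresponds to a magnitude change of -2.5 * log10(f)
quarter_moon_mag = full_moon_mag - 2.5 * math.log10(quarter_fraction)
print(round(quarter_moon_mag, 1))  # about -11.0
```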
Planetary satellites
The phase curves of many natural satellites of other planets have been observed and interpreted. The icy moons often exhibit opposition brightness surges. This behavior has been used to model their surfaces.
Asteroids
The phase curves of many asteroids have also been observed and they too may exhibit opposition surges. Asteroids can be physically classified in this way. The effects of rotation can be very large and have to be factored in before the phase curve is computed. An example of such a study is reported by R. Baker and colleagues.
Exoplanets
Programs for characterizing planets outside of the solar system depend largely on spectroscopy to identify atmospheric constituents and states, especially those that point to the presence of life forms or which could support life. However, brightness can be measured for very distant Earth-sized objects that are too faint for spectroscopic analysis. A. Mallama has demonstrated that phase curve analysis may be a useful tool for identifying planets that are Earth-like. Additionally, J. Bailey has pointed out that phase curve anomalies such as the brightness excess of Venus could be useful indicators of atmospheric constituents such as water, which might be essential to life in the universe.
Criticisms of phase curve modelling
Inferences about regoliths from phase curves are frequently based on Hapke parameterization. However, in a blind test M. Shepard and P. Helfenstein found no strong evidence that a particular set of Hapke parameters derived from photometric data could uniquely reveal the physical state of laboratory samples. These tests included modeling the three-term Henyey-Greenstein phase functions and the coherent backscatter opposition effect. This negative finding suggests that the radiative transfer model developed by B. Hapke may be inadequate for physical modeling based on photometry.
References
Observational astronomy
Radiometry
Scattering, absorption and radiative transfer (optics) | Phase curve (astronomy) | Chemistry,Astronomy,Engineering | 2,011 |
34,590,724 | https://en.wikipedia.org/wiki/Intelligent%20design%20and%20science | The relationship between intelligent design and science has been a contentious one. Intelligent design (ID) is presented by its proponents as science and claims to offer an alternative to evolution. The Discovery Institute, a politically conservative think tank and the leading proponent of intelligent design, launched a campaign entitled "Teach the Controversy", which claims that a controversy exists within the scientific community over evolution. The scientific community rejects intelligent design as a form of creationism, and the basic facts of evolution are not a matter of controversy in science.
"Teach the Controversy"
The intelligent design movement states that there is a debate among scientists about whether life evolved. The movement stresses the importance of recognizing the existence of this supposed debate, seeking to convince the public, politicians, and cultural leaders that schools should "Teach the Controversy". In fact, there is no such controversy in the scientific community; the scientific consensus is that life evolved. Intelligent design is widely viewed as a stalking horse for its proponents' campaign against what they say is the materialist foundation of science, which they argue leaves no room for the possibility of God.
Neo-creationism
Advocates of intelligent design from a Christian standpoint seek to keep God and the Bible out of the discussion, and present intelligent design in the language of science as though it were a scientific hypothesis. However, among a significant proportion of the general public in the United States the major concern is whether conventional evolutionary biology is compatible with belief in God and in the Bible, and how this issue is taught in schools. The public controversy was given widespread media coverage in the United States, particularly during the Kitzmiller v. Dover trial in late 2005 and after President George W. Bush expressed support for the idea of teaching intelligent design alongside evolution in August 2005. In response to Bush's statement and the pending federal trial, Time magazine ran an eight-page cover story on the Evolution Wars in which they examined the issue of teaching intelligent design in the classroom. The cover of the magazine featured a parody of The Creation of Adam from the Sistine Chapel. Rather than pointing at Adam, Michelangelo's God points at the image of a chimpanzee contemplating the caption, which reads "The push to teach 'intelligent design' raises a question: Does God have a place in science class?". In the Kitzmiller v. Dover case, the court ruled that intelligent design was a religious and creationist position, finding that God and intelligent design were both distinct from the material that should be covered in a science class.
Theistic science
Empirical science uses the scientific method to create a posteriori knowledge based on observation and repeated testing of hypotheses and theories. Intelligent design proponents seek to change this fundamental basis of science by eliminating "methodological naturalism" from science and replacing it with what the leader of the intelligent design movement, Phillip E. Johnson, calls "theistic realism". Some have called this approach "methodological supernaturalism", which means belief in a transcendent, nonnatural dimension of reality inhabited by a transcendent, nonnatural deity. Intelligent design proponents argue that naturalistic explanations fail to explain certain phenomena and that supernatural explanations provide a very simple and intuitive explanation for the origins of life and the universe. Proponents say evidence exists in the forms of irreducible complexity and specified complexity that cannot be explained by natural processes. They also hold that religious neutrality requires the teaching of both evolution and intelligent design in schools, saying that teaching only evolution unfairly discriminates against those holding creationist beliefs. Teaching both, they argue, allows for the possibility of religious belief, without causing the state to actually promote such beliefs. Many intelligent design followers believe that "Scientism" is itself a religion that promotes secularism and materialism in an attempt to erase theism from public life, and they view their work in the promotion of intelligent design as a way to return religion to a central role in education and other public spheres. Some allege that this larger debate is often the subtext for arguments made over intelligent design, though others note that intelligent design serves as an effective proxy for the religious beliefs of prominent intelligent design proponents in their efforts to advance their religious point of view within society.
It has been argued that methodological naturalism is not an assumption of science, but a result of science well done: the God explanation is the least parsimonious, so according to Occam's razor, it cannot be a scientific explanation.
Intelligent design has not presented a credible scientific case, substituting public support for scientific research. If the argument to give "equal time for all theories" were actually practiced, there would be no logical limit to the number of mutually incompatible supernatural "theories" regarding the origins and diversity of life to be taught in the public school system, including intelligent design parodies such as the Flying Spaghetti Monster "theory"; intelligent design does not provide a mechanism for discriminating among them. Philosopher of biology Elliott Sober, for example, states that intelligent design is not falsifiable because "[d]efenders of ID always have a way out". Intelligent design proponent Michael Behe concedes "You can't prove intelligent design by experiment".
The inference that an intelligent designer created life on Earth, which advocate William Dembski has said could alternately be an "alien" life force, has been compared to the a priori claim that aliens helped the ancient Egyptians build the pyramids. In both cases, the effect of this outside intelligence is not repeatable, observable or falsifiable, and it violates the principle of parsimony. From a strictly empirical standpoint, one may list what is known about Egyptian construction techniques, but one must admit ignorance about exactly how the Egyptians built the pyramids.
Inter-faith outreach
Supporters of intelligent design have also reached out to other faith groups with similar accounts of creation with the hope that the broader coalition will have greater influence in supporting science education that does not contradict their religious views. Many religious bodies have responded by expressing support for evolution. The Roman Catholic Church has stated that religious faith is fully compatible with science, which is limited to dealing only with the natural world—a position described by the term theistic evolution. While some in the Roman Catholic Church reject Intelligent design for various philosophical and theological reasons, others, such as Christoph Schönborn, Archbishop of Vienna, have shown support for it. The arguments of intelligent design have been directly challenged by the over 10,000 clergy who signed the Clergy Letter Project. Prominent scientists who strongly express religious faith, such as the astronomer George Coyne and the biologist Ken Miller, have been at the forefront of opposition to intelligent design. While creationist organizations have welcomed intelligent design's support against naturalism, they have also been critical of its refusal to identify the designer, and have pointed to previous failures of the same argument.
Rabbi Natan Slifkin directly criticized the advocates of intelligent design as presenting a perspective of God that is dangerous to religion. Those who promote it as parallel to religion, he asserts, do not truly understand it. Slifkin criticizes intelligent design's advocacy of teaching their perspective in biology classes, wondering why no one claims that God's hand should be taught in other secular classes, such as history, physics or geology. Slifkin also asserts that the intelligent design movement is inordinately concerned with portraying God as "in control" when it comes to things that cannot be easily explained by science, but not in control in respect to things which can be explained by scientific theory. Kenneth Miller expressed a view similar to Slifkin's: "[T]he struggles of the Intelligent Design movement are best understood as clamorous and disappointing double failures—rejected by science because they do not fit the facts, and having failed religion because they think too little of God."
Intelligent design also has advocates from an Islamic standpoint who believe that, while life may have developed in stages over time, human beings are uniquely created by Allah and not evolved from our common ancestor with apes. It is from Adam and Hawwa (Eve) that humanity is said to have originated.
Defining science
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the world. The boundaries between what is and what is not considered science, known as the demarcation problem, continue to be debated among philosophers of science and scientists in various fields.
The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science."
The U.S. National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience.
Others in the scientific community have concurred, and some have called it junk science.
For a theory to qualify as scientific, it is expected to be:
Consistent
Parsimonious (sparing in its proposed entities or explanations, see Occam's Razor)
Useful (describes and explains observed phenomena, and can be used predictively)
Empirically testable and falsifiable (see Falsifiability)
Based on multiple observations, often in the form of controlled, repeated experiments
Correctable and dynamic (modified in the light of observations that do not support it)
Progressive (refines previous theories)
Provisional or tentative (is open to experimental checking, and does not assert certainty)
For any theory, hypothesis or conjecture to be considered scientific, it must meet most, and ideally all, of these criteria. The fewer criteria are met, the less scientific it is; and if it meets only a few or none at all, then it cannot be treated as scientific in any meaningful sense of the word. Typical objections to defining intelligent design as science are that it lacks consistency, violates the principle of parsimony, is not scientifically useful, is not falsifiable, is not empirically testable, and is not correctable, dynamic, provisional or progressive.
Critics also say that the intelligent design doctrine does not meet the Daubert Standard, the criteria for scientific evidence mandated by the US Supreme Court. The Daubert Standard governs which evidence can be considered scientific in United States federal courts and most state courts. Its four criteria are:
The theoretical underpinnings of the methods must yield testable predictions by means of which the theory could be falsified.
The methods should preferably be published in a peer-reviewed journal.
There should be a known rate of error that can be used in evaluating the results.
The methods should be generally accepted within the relevant scientific community.
In Kitzmiller v. Dover Area School District, using these criteria and others mentioned above, Judge Jones ruled that "... we have addressed the seminal question of whether ID is science. We have concluded that it is not, and moreover that ID cannot uncouple itself from its creationist, and thus religious, antecedents".
At the Kitzmiller trial, philosopher Robert T. Pennock described a common approach to distinguishing science from non-science as examining a theory's compliance with methodological naturalism, the basic method in science of seeking natural explanations without assuming the existence or nonexistence of the supernatural. Intelligent design proponents criticize this method and argue that science, if its goal is to discover truth, must be able to accept evidentially supported, supernatural explanations. Additionally, philosopher of science Larry Laudan and cosmologist Sean Carroll argue against any a priori criteria for distinguishing science from pseudoscience. Laudan, as well as philosopher Barbara Forrest, state that the content of the hypothesis must first be examined to determine its ability to solve empirical problems. Methodological naturalism is therefore an a posteriori criterion due to its ability to yield consistent results.
Peer review
The failure to follow the procedures of scientific discourse and the failure to submit work to the scientific community that withstands scrutiny have weighed against intelligent design being accepted as valid science. The intelligent design movement has not published a properly peer-reviewed article supporting ID in a scientific journal, and has failed to publish peer-reviewed research or data supporting ID.
Intelligent design, by appealing to a supernatural agent, directly conflicts with the principles of science, which limit its inquiries to empirical, observable and ultimately testable data and which require explanations to be based on empirical evidence. Dembski, Behe and other intelligent design proponents say bias by the scientific community is to blame for the failure of their research to be published. Intelligent design proponents believe that their writings are rejected for not conforming to purely naturalistic, non-supernatural mechanisms rather than because their research is not up to "journal standards", and that the merit of their articles is overlooked. Some scientists describe this claim as a conspiracy theory. Michael Shermer has rebutted the claim, noting "Anyone who thinks that scientists do not question Darwinism has never been to an evolutionary conference." He noted that scientists such as Joan Roughgarden and Lynn Margulis have challenged certain Darwinist theories and offered explanations of their own and despite this they "have not been persecuted, shunned, fired or even expelled. Why? Because they are doing science, not religion." The issue that supernatural explanations do not conform to the scientific method became a sticking point for intelligent design proponents in the 1990s, and is addressed in the wedge strategy as an aspect of science that must be challenged before intelligent design can be accepted by the broader scientific community.
Critics and advocates debate over whether intelligent design produces new research and has legitimately attempted to publish this research. For instance, the Templeton Foundation, a former funder of the Discovery Institute and a major supporter of projects seeking to reconcile science and religion, says that it asked intelligent design proponents to submit proposals for actual research, but none were ever submitted. Charles L. Harper Jr., foundation vice-president, said: "From the point of view of rigor and intellectual seriousness, the intelligent design people don't come out very well in our world of scientific review".
The only article published in a peer-reviewed scientific journal that made a case for intelligent design was quickly withdrawn by the publisher for having circumvented the journal's peer-review standards. Written by the Discovery Institute's Center for Science & Culture Director Stephen C. Meyer, it appeared in the peer-reviewed journal Proceedings of the Biological Society of Washington in August 2004. The article was a literature review, which means that it did not present any new research, but rather culled quotations and claims from other papers to argue that the Cambrian explosion could not have happened by natural processes. The choice of venue for this article was also considered problematic, because it was so outside the normal subject matter (see Sternberg peer review controversy). Dembski has written that "perhaps the best reason [to be skeptical of his ideas] is that intelligent design has yet to establish itself as a thriving scientific research program."
In a 2001 interview, Dembski said that he stopped submitting to peer-reviewed journals because of their slow time-to-print and that he makes more money from publishing books.
In the Dover trial, the judge found that intelligent design features no scientific research or testing. There, intelligent design proponents cited just one paper, on simulation modeling of evolution by Behe and David Snoke, which mentioned neither irreducible complexity nor intelligent design and which Behe admitted did not rule out known evolutionary mechanisms. Michael Lynch called the conclusions of the article "an artifact of unwarranted biological assumptions, inappropriate mathematical modeling, and faulty logic". In sworn testimony, however, Behe said: "There are no peer reviewed articles by anyone advocating for intelligent design supported by pertinent experiments or calculations which provide detailed rigorous accounts of how intelligent design of any biological system occurred" (Kitzmiller v. Dover Area School District, Behe testimony, October 19, 2005, AM session). As summarized by the judge, Behe conceded that there are no peer-reviewed articles supporting his claims of intelligent design or irreducible complexity. In his ruling, the judge wrote: "A final indicator of how ID has failed to demonstrate scientific warrant is the complete absence of peer-reviewed publications supporting the theory".
The Discovery Institute has published lists of articles and books which they say support intelligent design and have been peer-reviewed, including the two articles mentioned above. Critics, largely members of the scientific community, reject this claim, stating that no established scientific journal has yet published an intelligent design article. Rather, intelligent design proponents have set up their own journals with peer review that lacks impartiality and rigor, consisting entirely of intelligent design supporters. Critics also state that even if these papers could be accepted as cases of support for intelligent design passing peer review, the output from the ID community is still fairly minuscule, especially when compared to the number of peer reviewed articles supporting evolution. Critics state that publishing material is not enough; that scientific ideas must withstand scrutiny and be built upon and that any papers supporting ID have not led to any productive work.
Intelligence as an observable quality
The phrase intelligent design makes use of an assumption of the quality of an observable intelligence, a concept that has no scientific consensus definition. William Dembski, for example, has written that "Intelligence leaves behind a characteristic signature". The characteristics of intelligence are assumed by intelligent design proponents to be observable without specifying what the criteria for the measurement of intelligence should be. Dembski, instead, asserts that "in special sciences ranging from forensics to archaeology to SETI (the Search for Extraterrestrial Intelligence), appeal to a designing intelligence is indispensable". How this appeal is made and what this implies as to the definition of intelligence are topics left largely unaddressed. Seth Shostak, a researcher with the SETI Institute, disputed Dembski's comparison of SETI and intelligent design, saying that intelligent design advocates base their inference of design on complexity—the argument being that some biological systems are too complex to have been made by natural processes—while SETI researchers are looking primarily for artificiality.
Critics say that the design detection methods proposed by intelligent design proponents are radically different from conventional design detection, undermining the key elements that make it possible as legitimate science. Intelligent design proponents, they say, are proposing both searching for a designer without knowing anything about that designer's abilities, parameters, or intentions (which scientists do know when searching for the results of human intelligence), as well as denying the very distinction between natural/artificial design that allows scientists to compare complex designed artifacts against the background of the sorts of complexity found in nature.
As a means of criticism, certain skeptics have pointed to a challenge of intelligent design derived from the study of artificial intelligence. The criticism is a counter to intelligent design claims about what makes a design intelligent, specifically that "no preprogrammed device can be truly intelligent, that intelligence is irreducible to natural processes". This claim is similar in type to an assumption of Cartesian dualism that posits a strict separation between "mind" and the material Universe. However, in studies of artificial intelligence, while there is an implicit assumption that supposed "intelligence" or creativity of a computer program is determined by the capabilities given to it by the computer programmer, artificial intelligence need not be bound to an inflexible system of rules. Rather, if a computer program can access randomness as a function, this effectively allows for a flexible, creative, and adaptive intelligence. Evolutionary algorithms, a subfield of machine learning (itself a subfield of artificial intelligence), have been used to mathematically demonstrate that randomness and selection can be used to "evolve" complex, highly adapted structures that are not explicitly designed by a programmer. Evolutionary algorithms use the Darwinian metaphor of random mutation, selection and the survival of the fittest to solve diverse mathematical and scientific problems that are usually not solvable using conventional methods. Intelligence derived from randomness is essentially indistinguishable from the "innate" intelligence associated with biological organisms, and poses a challenge to the intelligent design conception that intelligence itself necessarily requires a designer. Cognitive science continues to investigate the nature of intelligence along these lines of inquiry. The intelligent design community, for the most part, relies on the assumption that intelligence is readily apparent as a fundamental and basic property of complex systems.
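As a concrete illustration of how random mutation and selection alone can produce adapted structures, the following toy evolutionary algorithm is offered as an editorial sketch; the bit-string representation, parameters, and fitness function are arbitrary choices and are not drawn from the article.

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=200, mutation_rate=0.05):
    """Toy evolutionary algorithm: random mutation plus selection of the fittest."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with random per-bit mutation.
        children = []
        for parent in survivors:
            child = [1 - bit if random.random() < mutation_rate else bit for bit in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Example: evolve toward an all-ones string; fitness is simply the number of ones.
best = evolve(fitness=sum)
print(best, sum(best))
```

Although no individual solution is specified in advance, the surviving bit strings converge on the optimum of whatever fitness function is supplied, which is the point the paragraph above makes about evolutionary algorithms.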
Notes
References
science
Creationist objections to evolution
Religion and science | Intelligent design and science | Engineering | 4,196 |
3,174,690 | https://en.wikipedia.org/wiki/Lawrence%20Paulson | Lawrence Charles Paulson is an American computer scientist. He is a Professor of Computational Logic at the University of Cambridge Computer Laboratory and a Fellow of Clare College, Cambridge.
Education
Paulson graduated from the California Institute of Technology in 1977, and obtained his PhD in Computer Science from Stanford University in 1981 for research on programming languages and compiler-compilers supervised by John L. Hennessy.
Research
Paulson came to the University of Cambridge in 1983 and became a Fellow of Clare College, Cambridge in 1987. He is best known for the cornerstone text on the programming language ML, ML for the Working Programmer. His research is based around the interactive theorem prover Isabelle, which he introduced in 1986. He has worked on the verification of cryptographic protocols using inductive definitions, and he has also formalised the constructible universe of Kurt Gödel. Recently he has built a new theorem prover, MetiTarski, for real-valued special functions.
Paulson teaches an undergraduate lecture course in the Computer Science Tripos, entitled Logic and Proof which covers automated theorem proving and related methods. (He used to teach Foundations of Computer Science which introduces functional programming, but this course was taken over by Alan Mycroft and Amanda Prorok in 2017, and then Anil Madhavapeddy and Amanda Prorok in 2019.)
Awards and honours
Paulson was elected a Fellow of the Royal Society (FRS) in 2017, a Fellow of the Association for Computing Machinery in 2008 and a Distinguished Affiliated Professor for Logic in Informatics at the Technical University of Munich.
Personal life
Paulson has two children by his first wife, Dr Susan Mary Paulson, who died in 2010. Since 2012, he has been married to Dr Elena Tchougounova.
References
1955 births
Living people
American computer scientists
Members of the University of Cambridge Computer Laboratory
California Institute of Technology alumni
Stanford University alumni
Fellows of Clare College, Cambridge
2008 fellows of the Association for Computing Machinery
Fellows of the Royal Society
Formal methods people | Lawrence Paulson | Technology | 398 |
24,880,495 | https://en.wikipedia.org/wiki/Interface%20segregation%20principle | In the field of software engineering, the interface segregation principle (ISP) states that no code should be forced to depend on methods it does not use. ISP splits interfaces that are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. Such shrunken interfaces are also called role interfaces. ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy. ISP is one of the five SOLID principles of object-oriented design, similar to the High Cohesion Principle of GRASP. Beyond object-oriented design, ISP is also a key principle in the design of distributed systems in general and one of the six IDEALS principles for microservice design.
Importance in object-oriented design
Within object-oriented design, interfaces provide layers of abstraction that simplify code and create a barrier preventing coupling to dependencies. A system may become so coupled at multiple levels that it is no longer possible to make a change in one place without necessitating many additional changes. Using an interface or an abstract class can prevent this side effect.
Origin
The ISP was first used and formulated by Robert C. Martin while consulting for Xerox. Xerox had created a new printer system that could perform a variety of tasks such as stapling and faxing. The software for this system was created from the ground up. As the software grew, making modifications became more and more difficult so that even the smallest change would take a redeployment cycle of an hour, which made development nearly impossible.
The design problem was that a single Job class was used by almost all of the tasks. Whenever a print job or a stapling job needed to be performed, a call was made to the Job class. This resulted in a 'fat' class with multitudes of methods specific to a variety of different clients. Because of this design, a staple job would know about all the methods of the print job, even though there was no use for them.
The solution suggested by Martin utilized what is today called the Interface Segregation Principle. Applied to the Xerox software, an interface layer between the Job class and its clients was added using the Dependency Inversion Principle. Instead of having one large Job class, a Staple Job interface and a Print Job interface were created that would be used by the Staple and Print classes, respectively, calling methods of the Job class. Therefore, one interface was created for each job type, all of which were implemented by the Job class.
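The arrangement described above can be sketched in code. The example below is a hypothetical Python rendering (the Xerox system was not written in Python, and the class and method names are illustrative): each client depends only on a small role interface, while a single concrete Job class may still implement all of them.

```python
from abc import ABC, abstractmethod

# Role interfaces: each client depends only on the methods it actually uses.
class PrintJob(ABC):
    @abstractmethod
    def print_document(self) -> None: ...

class StapleJob(ABC):
    @abstractmethod
    def staple_document(self) -> None: ...

# The single concrete Job class can still implement every role interface.
class Job(PrintJob, StapleJob):
    def print_document(self) -> None:
        print("printing")

    def staple_document(self) -> None:
        print("stapling")

# A stapling client sees only StapleJob and is unaffected by changes to printing.
def run_stapler(job: StapleJob) -> None:
    job.staple_document()

run_stapler(Job())
```

A client written against StapleJob is untouched when printing-related methods change, which is the decoupling benefit the principle aims for.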
Typical violation
A typical violation of the Interface Segregation Principle is given in Agile Software Development: Principles, Patterns, and Practices in 'ATM Transaction example' and in an article also written by Robert C. Martin specifically about the ISP. This example discusses the User Interface for an ATM, which handles all requests such as a deposit request, or a withdrawal request, and how this interface needs to be segregated into individual and more specific interfaces.
See also
SOLID – the "I" in SOLID stands for Interface segregation principle
References
External links
Principles Of OOD – Description and links to detailed articles on SOLID.
Software design
Programming principles
Object-oriented programming | Interface segregation principle | Engineering | 649 |
10,574,758 | https://en.wikipedia.org/wiki/NGC%206565 | NGC 6565 (also known as ESO 456-70) is a planetary nebula in the constellation of Sagittarius. The object formed when a star ejected its outer layers during the late stages of its evolution. The remnant core of the star, a white dwarf, is emitting vast amounts of ultraviolet radiation that ionizes, or excites, the gas surrounding it, making the nebula visible to the human eye through a telescope. Over the course of around 10,000 years the white dwarf will cool down dramatically, diminishing the light of the nebula and making it only visible in a long-exposure photograph.
The nebula of NGC 6565 is thought to be composed of three different zones. The inner zone, which appears as a bright circle in images of the nebula, is heavily ionized gas. A surrounding envelope of gas, seen as a faint red glow in such images, is ionized to a much lesser extent. Lastly, an extended halo, not visible in typical images, is nearly neutral, or almost not ionized at all. The inner zone has the shape of a triaxial ellipsoid that we see nearly pole-on. This is the part that can be seen by the human eye through a telescope. An unknown force pushed out the gas at the poles, creating two faint cups at either end of the ellipse. It is thought that this inner zone would look like NGC 6886 if viewed from a different angle. In fact, NGC 6565 is very similar to NGC 6886 in many respects, including nebular spectrum, structure, luminosity, and temperature of the central star.
NGC 6565 was entered into the New General Catalog (NGC) by John Louis Emil Dreyer in 1888. The object has a visual magnitude of about 13 and a diameter of 8 x 10 arcseconds. It is heading in the direction of the sun at 4.9 kilometers (3 miles) a second. Distances to all but a few planetary nebulae are notoriously difficult to determine; older estimates put NGC 6565 at 6500 light-years, and newer estimates place it at up to 15,000 light-years.
See also
The Ring and Helix Nebulae (which have undergone a similar process and are larger planetary nebulae of the same type)
List of NGC objects
Planetary nebulae
References
Robert Burnham, Jr, Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 3, p. 1556
External links
Planetary nebulae
6565
Sagittarius (constellation) | NGC 6565 | Astronomy | 519 |
44,591,049 | https://en.wikipedia.org/wiki/Tylopilus%20virens | Tylopilus virens is a bolete fungus in the family Boletaceae found in Asia. It was described as new to science in 1948 by Wei-Fan Chiu as a species of Boletus; Japanese mycologist Tsuguo Hongo transferred it to Tylopilus in 1964. The fruit body has a convex to flattened cap that is in diameter. The tubes on the cap underside are up to 2 cm long, while the roundish pores are about 1–2 mm wide. The mushroom is similar in appearance to Tylopilus felleus, but unlike that species, has a greenish cap when young. T. virens typically grows near the conifer species Keteleeria evelyniana. It has elliptical spores measuring 11–14 by 5.5–6 μm.
References
External links
virens
Fungi described in 1948
Fungi of Asia
Fungus species | Tylopilus virens | Biology | 184 |
477,513 | https://en.wikipedia.org/wiki/Helmholtz%27s%20theorems | In fluid mechanics, Helmholtz's theorems, named after Hermann von Helmholtz, describe the three-dimensional motion of fluid in the vicinity of vortex lines. These theorems apply to inviscid flows and flows where the influence of viscous forces is small and can be ignored.
Helmholtz's three theorems are as follows:
Helmholtz's first theorem
The strength of a vortex line is constant along its length.
Helmholtz's second theorem
A vortex line cannot end in a fluid; it must extend to the boundaries of the fluid or form a closed path.
Helmholtz's third theorem
A fluid element that is initially irrotational remains irrotational.
Helmholtz's theorems apply to inviscid flows. In observations of vortices in real fluids the strength of the vortices always decays gradually due to the dissipative effect of viscous forces.
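The "strength" referred to in the first theorem is usually made precise as the circulation around the vortex tube. The following is a supplementary statement of that definition in standard fluid-mechanics notation, not taken verbatim from this article:

```latex
\Gamma \;=\; \oint_{C} \mathbf{u}\cdot d\boldsymbol{\ell}
       \;=\; \int_{S} \boldsymbol{\omega}\cdot d\mathbf{S},
\qquad \boldsymbol{\omega} \;=\; \nabla \times \mathbf{u}
```

Here u is the velocity field, ω the vorticity, C any closed curve encircling the tube once, and S a surface spanning C. The first theorem states that Γ takes the same value for every cross-section of the tube, while Kelvin's circulation theorem states that Γ is conserved following a material circuit in an inviscid, barotropic flow with conservative body forces.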
Alternative expressions of the three theorems are as follows:
The strength of a vortex tube does not vary with time.
Fluid elements lying on a vortex line at some instant continue to lie on that vortex line. More simply, vortex lines move with the fluid. Also vortex lines and tubes must appear as a closed loop, extend to infinity or start/end at solid boundaries.
Fluid elements initially free of vorticity remain free of vorticity.
Helmholtz's theorems have application in understanding:
Generation of lift on an airfoil
Starting vortex
Horseshoe vortex
Wingtip vortices.
Helmholtz's theorems are now generally proven with reference to Kelvin's circulation theorem. However, Helmholtz's theorems were published in 1858, nine years before the 1867 publication of Kelvin's theorem.
Notes
References
M. J. Lighthill, An Informal Introduction to Theoretical Fluid Mechanics, Oxford University Press, 1986,
P. G. Saffman, Vortex Dynamics, Cambridge University Press, 1995,
G. K. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press (1967, reprinted in 2000).
Kundu, P and Cohen, I, Fluid Mechanics, 2nd edition, Academic Press 2002.
George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, 4th edition, Academic Press: San Diego (1995) pp. 92–93
A.M. Kuethe and J.D. Schetzer (1959), Foundations of Aerodynamics, 2nd edition. John Wiley & Sons, Inc. New York
Aerodynamics
Vortices
Theorems in mathematical physics
Hermann von Helmholtz | Helmholtz's theorems | Physics,Chemistry,Mathematics,Engineering | 528 |
69,046,334 | https://en.wikipedia.org/wiki/Dialane | Dialane is an unstable compound of aluminium and hydrogen with formula Al2H6. Dialane is unstable in that it reacts with itself to form a polymer, aluminium hydride. Isolated molecules can be stabilised and studied in solid hydrogen.
References
Aluminium compounds
Hydrogen compounds | Dialane | Chemistry | 56 |
802,042 | https://en.wikipedia.org/wiki/Hydroxycarbamide | Hydroxycarbamide, also known as hydroxyurea, is an antimetabolite medication used in sickle-cell disease, essential thrombocythemia, chronic myelogenous leukemia, polycythemia vera, and cervical cancer. In sickle-cell disease it increases fetal hemoglobin and decreases the number of attacks. It is taken by mouth.
Common side effects include bone marrow suppression, fevers, loss of appetite, psychiatric problems, shortness of breath, and headaches. There is also concern that it increases the risk of later cancers. Use during pregnancy is typically harmful to the fetus. Hydroxycarbamide is in the antineoplastic family of medications. It is believed to work by blocking the making of DNA.
Hydroxycarbamide was approved for medical use in the United States in 1967. It is on the World Health Organization's List of Essential Medicines. Hydroxycarbamide is available as a generic medication.
Medical uses
Hydroxycarbamide is used for the following indications:
Myeloproliferative disease (primarily essential thrombocythemia and polycythemia vera). It has been found to be superior to anagrelide for the control of ET.
Sickle-cell disease (increases production of fetal hemoglobin, which interferes with hemoglobin polymerisation, and reduces the white blood cells that contribute to the general inflammatory state in sickle cell patients)
Second line treatment for psoriasis (slows down the rapid division of skin cells)
Systemic mastocytosis with associated hematological neoplasm (SM-AHN) (the utility of hydroxycarbamide in treating SM-AHN stems from its myelosuppressive activity; it does not, however, exhibit any selective anti-mast cell activity)
Chronic myelogenous leukemia (largely replaced by imatinib, but still in use for its cost-effectiveness)
Side effects
Reported side effects are: neurological reactions (e.g., headache, dizziness, drowsiness, disorientation, hallucinations, and convulsions), nausea, vomiting, diarrhea, constipation, mucositis, anorexia, stomatitis, bone marrow toxicity (dose-limiting toxicity; may take 7–21 days to recover after the drug has been discontinued), megaloblastic anemia, thrombocytopenia, bleeding, hemorrhage, gastrointestinal ulceration and perforation, immunosuppression, leukopenia, alopecia (hair loss), skin rashes (e.g., maculopapular rash), erythema, pruritus, vesication or irritation of the skin and mucous membranes, pulmonary edema, abnormal liver enzymes, creatinine and blood urea nitrogen.
Due to its negative effect on the bone marrow, regular monitoring of the full blood count is vital, as well as early response to possible infections. In addition, renal function, uric acid and electrolytes, as well as liver enzymes, are commonly checked. Moreover, because of this, its use in people with leukopenia, thrombocytopenia or severe anemia is contraindicated.
Hydroxycarbamide has been used primarily for the treatment of myeloproliferative diseases, which has an inherent risk of transforming to acute myeloid leukemia. There has been a longstanding concern that hydroxycarbamide itself carries a leukemia risk, but large studies have shown that the risk is either absent or very small. Nevertheless, it has been a barrier for its wider use in patients with sickle-cell disease.
Mechanism of action
Hydroxycarbamide decreases the production of deoxyribonucleotides via inhibition of the enzyme ribonucleotide reductase by scavenging tyrosyl free radicals as they are involved in the reduction of nucleoside diphosphates (NDPs). Additionally, hydroxycarbamide causes production of reactive oxygen species in cells, leading to disassembly of replicative DNA polymerase enzymes and arresting DNA replication.
In the treatment of sickle-cell disease, hydroxycarbamide increases the concentration of fetal hemoglobin. The precise mechanism of action is not yet clear, but it appears that hydroxycarbamide increases nitric oxide levels, causing soluble guanylyl cyclase activation with a resultant rise in cyclic GMP, and the activation of gamma globin gene expression and subsequent gamma chain synthesis necessary for fetal hemoglobin (HbF) production (which does not polymerize and deform red blood cells like the mutated HbS, responsible for sickle cell disease). Adult red cells containing more than 1% HbF are termed F cells. These cells are progeny of a small pool of immature committed erythroid precursors (BFU-e) that retain the ability to produce HbF. Hydroxyurea also suppresses the production of granulocytes in the bone marrow which has a mild immunosuppressive effect particularly at vascular sites where sickle cells have occluded blood flow.
Natural occurrence
Hydroxyurea has been reported as endogenous in human blood plasma at concentrations of approximately 30 to 200 ng/ml.
Chemistry
Hydroxyurea has been prepared in many different ways since its initial synthesis in 1869. The original synthesis by Dresler and Stein was based around the reaction of hydroxylamine hydrochloride and potassium cyanate. Hydroxyurea lay dormant for more than fifty years until it was studied as part of an investigation into the toxicity of protein metabolites. Due to its chemical properties hydroxyurea was explored as an antisickling agent in the treatment of hematological conditions.
One common mechanism for synthesizing hydroxyurea is by the reaction of calcium cyanate with hydroxylamine nitrate in absolute ethanol and by the reaction of a cyanate salt and hydroxylamine hydrochloride in aqueous solution. Hydroxyurea has also been prepared by converting a quaternary ammonium anion exchange resin from the chloride form to the cyanate form with sodium cyanate and reacting the resin in the cyanate form with hydroxylamine hydrochloride. This method of hydroxyurea synthesis was patented by Hussain et al. (2015).
Pharmacology
Hydroxyurea is a monohydroxyl-substituted urea (hydroxycarbamate) antimetabolite. Similar to other antimetabolite anti-cancer drugs, it acts by disrupting the DNA replication process of dividing cancer cells in the body. Hydroxyurea selectively inhibits ribonucleoside diphosphate reductase, an enzyme required to convert ribonucleoside diphosphates into deoxyribonucleoside diphosphates, thereby preventing cells from leaving the G1/S phase of the cell cycle. This agent also exhibits radiosensitizing activity by maintaining cells in the radiation-sensitive G1 phase and interfering with DNA repair.
Biochemical research has explored its role as a DNA replication inhibitor which causes deoxyribonucleotide depletion and results in DNA double strand breaks near replication forks (see DNA repair). Repair of DNA damaged by chemicals or irradiation is also inhibited by hydroxyurea, offering potential synergy between hydroxyurea and radiation or alkylating agents.
Hydroxyurea has many pharmacological applications under the Medical Subject Headings classification system:
Antineoplastic agents – Substances that inhibit or prevent the proliferation of neoplasms.
Antisickling agents – Agents used to prevent or reverse the pathological events leading to sickling of erythrocytes in sickle cell conditions.
Nucleic acid synthesis inhibitors – Compounds that inhibit cell production of DNA or RNA.
Enzyme inhibitors – Compounds or agents that combine with an enzyme in such a manner as to prevent the normal substrate-enzyme combination and the catalytic reaction.
Cytochrome P-450 CYP2D6 inhibitors – Agents that inhibit one of the most important enzymes involved in the metabolism of xenobiotics in the body, CYP2D6, a member of the cytochrome P450 mixed oxidase system.
Society and culture
Brand names
Brand names include: Hydrea, Litalir, Droxia, and Siklos.
References
Antineoplastic antimetabolites
Drugs developed by Bristol Myers Squibb
IARC Group 3 carcinogens
Wikipedia medicine articles ready to translate
Sickle-cell disease
Ureas
World Health Organization essential medicines
X | Hydroxycarbamide | Chemistry | 1,828 |
34,602,583 | https://en.wikipedia.org/wiki/Pledge%20%28brand%29 | Pledge is an American cleaning product made by S. C. Johnson & Son. First sold in 1958, it is used to help dust and clean. Pledge is known as Pliz in France, and Blem in Argentina. In several countries, it is sold as Pronto.
Products
Lemon (odor only) Clean Furniture Spray
Wipes
Extra Moisturizing Furniture Spray
Dust & Allergen Furniture Spray
Specialty Surfaces Furniture Spray
Multi Surface Everyday Cleaner
Multi Surface Antibacterial Everyday Cleaner
Multi Surface Everyday Wipes
Multi Surface Everyday Cleaner 99% Natural
Revitalizing Oil
Pet Hair Fabric Sweeper
Multi Surface Duster
Dust & Allergen Dry Cloths
4-in-1 Wood Floor Cleaner
Wood Floor Concentrated Cleaner with Almond Oil
Wood Floor Finish With Future Shine
Clean & Shine Multi Surface Floor Cleaner
4-in-1 tile & Vinyl Floor Cleaner
Tile & Vinyl Floor Finish with Future Shine
SC Johnson One Step No Buff Wax
SC Johnson Paste Wax
Pronto Liquid Wax
Wipe & Shine Liquid Polish
See also
Swiffer
References
External links
Products introduced in 1958
Cleaning product brands
Cleaning product components
Cleaning products
S. C. Johnson & Son brands
Reckitt brands | Pledge (brand) | Chemistry,Technology | 229 |
8,063,375 | https://en.wikipedia.org/wiki/PDMS%20%28software%29 | PDMS (Plant Design Management System), as it is known in the 3D CAD industry, is a customizable, multi-user and multi-discipline, engineer-controlled design software package for engineering, design and construction projects, both offshore and onshore.
The Computer-Aided Design Centre (more commonly referred to as CADCentre, the name it later formally adopted) was created in Cambridge, England, in 1967 by the UK Ministry of Technology. Its mission was to develop computer-aided design techniques and promote their take-up by British industry. The centre carried out much pioneering CAD research, and many of its early staff members went on to become prominent in the worldwide CAD community, such as brothers Dick Newell and Martin Newell.
Dick Newell oversaw the creation of the Plant Design Management System (PDMS) for 3D process plant design. He later co-founded two software companies – Cambridge Interactive Systems (CIS) which was known for its Medusa 2D/3D CAD system, and Smallworld with its eponymous Smallworld GIS (Geographical Information System). Martin Newell later went to the University of Utah where he did pioneering 3D solid modelling work; he was also one of the progenitors of PostScript.
Subsequently, the UK government, via the British Technology Group (BTG) established a separate company, Compeda Ltd, to exploit software developed and owned by the government and they took over the marketing and user support of PDMS, while the software continued to be developed by the CADCentre, with funding from Compeda.
When the UK government decided to privatise (sell) anything that did not need to be government owned, Compeda Ltd was sold to Prime Computer Inc. for a net negative sum of money. Prime Computer decided that PDMS had no commercial value or future and returned the marketing rights for the product to CADCentre.
CADCentre was privatised and in 2001 changed its name to AVEVA.
The latest release, as of March 2021, is AVEVA PDMS 12.1.SP5
AVEVA has introduced a successor to PDMS, AVEVA Everything 3D (E3D). The current version of AVEVA Everything 3D is 2.1, with version 3.1 expected to launch soon.
AVEVA Everything 3D introduces a new user interface and additional functions, including quicker and easier modelling and improved user-friendliness.
User Groups
Existing
Defunct
See also
MPDS4
External links
AVEVA Group website
References
Computer-aided design software
History of computing in the United Kingdom
Science and technology in Cambridgeshire | PDMS (software) | Technology | 518 |
42,779,171 | https://en.wikipedia.org/wiki/GU%20Piscium%20b | GU Piscium b (GU Psc b) is a directly imaged planetary-mass companion orbiting the star GU Piscium, with an extremely large orbit of , and an apparent angular separation of 42 arc seconds. The planet is located at right ascension declination at a distance of .
Properties
An orbital revolution around its parent star (which is 1/3 the mass of the Sun), or "year", would take approximately 163,000 years to complete, assuming a circular orbit with a semi-major axis of 2000 AU. It is a gas giant located in the constellation of Pisces, 155 light-years from the Solar System, and estimated to have a mass nine to thirteen times that of Jupiter, and a surface temperature of 1000 K.
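The quoted period can be checked with Kepler's third law. The snippet below is an editorial back-of-the-envelope calculation using the rounded values given in the text; the small difference from the quoted 163,000 years presumably reflects the more precise stellar mass used by the discoverers.

```python
# Kepler's third law in solar-system units: P[yr]^2 = a[AU]^3 / M[Msun]
a_au = 2000          # approximate semi-major axis taken from the text
m_star = 1 / 3       # stellar mass in solar masses, as stated in the text

period_years = (a_au ** 3 / m_star) ** 0.5
print(f"{period_years:,.0f} years")   # ~155,000 yr with these rounded inputs;
                                      # the article quotes ~163,000 yr

# Cross-check of the projected separation: 42 arcsec at 155 light-years
distance_pc = 155 / 3.2616            # light-years to parsecs
print(f"{42 * distance_pc:,.0f} AU")  # ~2,000 AU, consistent with the orbit size
```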
GU Piscium belongs to a relatively young stellar population, the AB Doradus moving group of about 30 main stars created from the same molecular cloud less than 100 million years ago; GU Psc b is the only planet found among the 90 stars of the group examined.
The spectral type was initially determined to be T3.5 ±1. This team also found that it is a weak binary candidate. A later work found it more similar to known tight binary T-dwarfs and assigned a spectral type of T2+T8. This object was found to be variable. First, a study with the Canada-France-Hawaii Telescope found a rotation period of around 6 hours and an amplitude of 4 ±1% on 2014 October 11. On two other occasions this object was not variable. Later the variability was studied with the Hubble Space Telescope WFC3 at 1.1–1.67 μm. GU Psc b showed variability with an amplitude of 2.7% and a rotation period of around 8 hours. The largely gray light-curve modulation shows that this object has heterogeneous clouds.
Discovery
The discovery was made by an international team of astronomers led by Marie-Eve Naud of the Université de Montréal in Quebec, combining observations from telescopes of the Gemini Observatory, the Mont Mégantic Observatory (OMM), the Canada–France–Hawaii Telescope (CFHT) and the W. M. Keck Observatory. Its large distance away from its parent star permitted the use of combined infrared and visible light images to detect it, a technique astronomers hope to reproduce to discover much closer planets with the Gemini Planet Imager (GPI) in Chile.
Near-infrared spectroscopy of the companion was obtained with the GNIRS spectrograph on the Gemini North Telescope, which shows evidence of low surface gravity confirming the planet's youth. Weak methane absorption was detected in H and K band corresponding to a spectral type of T3.5.
See also
List of exoplanet extremes
List of directly imaged exoplanets
CFBDSIR 2149−0403 - Possible rogue planet in the AB Doradus moving group
WD 0806−661 B
Notes
References
External links
IOPscience: DISCOVERY OF A WIDE PLANETARY-MASS COMPANION TO THE YOUNG M3 STAR GU PSC
arXiv: Discovery of a wide planetary-mass companion to the young M3 star GU Psc
NASA ADS: Discovery of a Wide Planetary-mass Companion to the Young M3 Star GU Psc
Exoplanets detected by direct imaging
Exoplanets discovered in 2014
Pisces (constellation) | GU Piscium b | Astronomy | 675 |
2,027,898 | https://en.wikipedia.org/wiki/Control%20room | A control room or operations room is a central space where a large physical facility or physically dispersed service can be monitored and controlled. It is often part of a larger command center.
Overview
A control room's purpose is production control; it serves as a central space where a large physical facility or physically dispersed service can be monitored and controlled. Central control rooms came into general use in factories during the 1920s.
Control rooms for vital facilities are typically tightly secured and inaccessible to the general public. Multiple electronic displays and control panels are usually present, and there may also be a large wall-sized display area visible from all locations within the space. Some control rooms are themselves under continuous video surveillance and recording, for security and personnel accountability purposes. Many control rooms are occupied on a "24/7/365" basis, and may have multiple people on duty at all times (such as implementation of a "two-man rule"), to ensure continuous vigilance.
Other special-purpose control room spaces may be temporarily set up for special projects (such as an oceanographic exploration mission), and closed or dismantled once the project is concluded.
Examples
Control rooms are typically found in installations such as:
Nuclear power plants and other power-generating stations
Electric power distribution companies and other Utilities
Oil refineries and chemical plants
Airlines, where they are often referred to as operations control centers, and are responsible for flight operations dispatch, monitoring and support
Major transportation facilities such as bridges, tunnels, canals and rapid transit systems, where they are often staffed 24 hours a day to monitor and report on traffic congestion and to respond to emergencies
Military facilities (ranging in scale from a missile silo to NORAD), also referred to as operations rooms
NASA flight controllers work in several "flight control rooms" in mission control centers; affiliated facilities, such as the Jet Propulsion Laboratory have their own control rooms
Computerized data centers, often serving remote users in multiple time zones
Network operations centers
Large institutions such as universities, hospitals, major research facilities (such as particle accelerator laboratories), high security prisons, and theme parks
Emergency services including police, fire service and emergency medical service
Call centers, which may use them to monitor incoming and outgoing communications of customer service representatives, and to provide general oversight
Rail operations centers, such as the Union Pacific Harriman Dispatch Center, control rail operations over thousands of miles of railroad. Train dispatchers staff these facilities around the clock to manage efficient rail operations. In the UK, they are usually operated separately by each train operating company or by Network Rail, and include train crew and rolling stock resourcing.
Special hazards and mitigation
Control rooms are usually equipped with elaborate fire suppression and security systems to safeguard their contents and occupants, and to ensure continued operation in emergencies. In hazardous environments, they may also be areas of refuge for personnel trapped on-site. They are typically crowded with equipment, mounted in multi-function rack mount cabinets to allow updating. The concentration of equipment often requires special electrical uninterruptible power supply (UPS) feeds and air conditioning.
Since the control equipment is intended to control other items in the surrounding facility, these often fire-resistance rated service rooms require many penetrations for cables. Due to routine equipment updates, these penetrations are subject to frequent changes, requiring maintenance programs to include vigilant firestop management for code compliance.
Due to the sensitive equipment in control room cabinets, it is useful to ensure the use of "T-rated" firestops that are massive and thick enough to resist heat transmission to the inside of the control room. It is also common to place control rooms under positive pressure ventilation to prevent smoke or toxic gases from entering. If used, gaseous fire suppressants must occupy the space that is to be protected for a minimum period of time to be sure a fire can be completely extinguished. Openings in such spaces must therefore be kept to a minimum to prevent the escape of the suppression gas.
A mobile control room may be designated particularly in high-risk facilities, such as a nuclear power station or a petrochemical facility. It can provide guaranteed life support for the operators carrying out the anticipated safety-control functions.
Design
The design of a control room incorporates ergonomic and aesthetic features including optimum traffic flow, acoustics, illumination, and health and safety of the workers. Ergonomic considerations determine the placement of humans and equipment to ensure that operators can easily move into, out of, and around the control room, and can interact with each other without any hindrances during emergency situations; and to keep noise and other distractions to a minimum.
In popular culture
Control room scenes dealing with crisis situations appear frequently in thriller novels and action films. In addition, a few documentaries have been filmed with scenes in real-life control room settings.
Fail-Safe - a 1964 Cold war thriller film directed by Sidney Lumet, based on the 1962 novel of the same name by Eugene Burdick and Harvey Wheeler. It portrays a fictional account of a Cold War nuclear crisis.
The Prisoner - a 1967 British television series (17 episodes), which follows a British former secret agent who is abducted and held prisoner in a mysterious coastal village resort where his captors try to find out why he abruptly resigned from his job.
The Taking of Pelham One Two Three - a 1974 American thriller film directed by Joseph Sargent, produced by Edgar J. Scherick, and starring Walter Matthau, Robert Shaw, Martin Balsam and Héctor Elizondo. Peter Stone adapted the screenplay, from the 1973 novel of the same name by Morton Freedgood (under the pen name John Godey) about a group of criminals taking hostage for ransom the passengers of a busy New York City Subway car.
The China Syndrome - a 1979 American thriller film that tells the story of a television reporter and her cameraman who discover safety coverups at a nuclear power plant. It stars Jane Fonda, Jack Lemmon and Michael Douglas, with Douglas also serving as the film's producer.
GoldenEye - a 1995 spy film, the 17th in the James Bond franchise, features two control rooms used for command and control of a fictitious satellite-based weapon: the original control room belonging to the USSR and a replica built by the Janus Crime Syndicate, which has taken possession of the satellite for nefarious purposes. The latter also featured as a playable level in the video game of the same name for the Nintendo 64.
Minority Report - a 2002 American neo-noir science fiction thriller film directed by Steven Spielberg, and loosely based on the short story of the same name by Philip K. Dick. It is set primarily in Washington DC, and Northern Virginia in the year 2054, where "PreCrime", a specialized police department, apprehends criminals based on foreknowledge provided by three psychics called "precogs".
Control Room - a 2004 documentary film about Al Jazeera and its relations with the US Central Command (CENTCOM), as well as the other news organizations that covered the 2003 invasion of Iraq.
Image gallery
See also
Command center
Active fire protection
Area of refuge
Circuit integrity
Fire protection
Fireproofing
Firestop
Combat information center
Passive fire protection
Uninterruptible power supply
References
External links
US Army INSCOM Information Dominance Center
Rooms
Command and control | Control room | Engineering | 1,464 |
58,508,059 | https://en.wikipedia.org/wiki/Testosterone%20propionate/testosterone%20phenylpropionate/testosterone%20isocaproate | Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate (TP/TPP/TiC), sold under the brand name Sustanon 100 (Organon), is an injectable combination medication of three testosterone esters, all of which are androgens/anabolic steroids. They include:
20 mg testosterone propionate
40 mg testosterone phenylpropionate
40 mg testosterone isocaproate
They are provided as an oil solution and are administered by intramuscular injection. The different testosterone esters provide for different elimination half-lives in the body. Esterification of testosterone provides for a sustained but non-linear release of testosterone hormone from the injection depot into the circulation.
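As a rough illustration of how blending esters with different half-lives yields a sustained but non-linear release, one can sum simple first-order depot-release curves. In the Python sketch below the half-lives are assumed placeholder values chosen only for illustration, not pharmacokinetic data from this text, and the single-exponential model is itself a simplification.
import math

# Assumed, illustrative half-lives in days; not values given in this text.
esters = {
    "testosterone propionate": (20, 0.8),        # (dose in mg, assumed half-life in days)
    "testosterone phenylpropionate": (40, 2.5),
    "testosterone isocaproate": (40, 4.0),
}

def remaining_in_depot(days_elapsed):
    """Total milligrams still in the injection depot after the given time,
    assuming independent first-order release of each ester."""
    total = 0.0
    for dose_mg, half_life_days in esters.values():
        decay_constant = math.log(2) / half_life_days
        total += dose_mg * math.exp(-decay_constant * days_elapsed)
    return total

for day in range(0, 15, 2):
    print(f"day {day:2d}: {remaining_in_depot(day):6.1f} mg remaining in the depot")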
The medication contained a lower dose of testosterone esters than Sustanon 250 and was usually reserved for pediatric use.
The original Sustanon 100 has not been produced since 2009; a version is manufactured in India by Zydus.
See also
Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone decanoate
Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate/testosterone caproate
List of combined sex-hormonal preparations § Androgens
References
Abandoned drugs
Anabolic–androgenic steroids
Androstanes
Combined androgen formulations
Testosterone esters
Testosterone | Testosterone propionate/testosterone phenylpropionate/testosterone isocaproate | Chemistry | 281 |
14,817,355 | https://en.wikipedia.org/wiki/Fibromodulin | Fibromodulin is a protein that in humans is encoded by the FMOD gene.
Fibromodulin is a 42kDa protein of a family of small interstitial leucine-rich repeat proteoglycans (SLRPs). It can have up to four N-linked keratan sulfate chains attached to the core protein within the leucine-rich region. It shares significant sequence homology with biglycan and decorin.
Function
Fibromodulin participates in the assembly of the collagen fibers of the extracellular matrix. It binds to the same site on the collagen type I molecule as lumican. It also inhibits fibrillogenesis of collagen type I and collagen type III in vitro. It regulates TGF-beta activities by sequestering TGF-beta into the extracellular matrix.
Clinical significance
There is an age-dependent decline in the synthesis of keratan sulfate chains, so non-glycated forms of fibromodulin can accumulate in tissues such as cartilage.
Fibromodulin is found in the epidermis of human skin and is expressed by skin cells (keratinocytes) in culture. Mice with the gene for fibromodulin knocked out (Fmod-/-) have very fragile skin and abnormal tail and Achilles tendons. The collagen fiber bundles in these tendons are fewer and disorganised and there is less endotenon surrounding the tendon tissue. The levels of lumican, a SLRP with one of the same collagen binding sites as fibromodulin, is increased 4 fold in the tail tendons of Fmod-knockout mice.
References
Further reading
Extracellular matrix proteins
Glycoproteins | Fibromodulin | Chemistry | 373 |
36,895,225 | https://en.wikipedia.org/wiki/Gondola%20%28retail%29 | A gondola (usually pronounced in this context) is a freestanding fixture used by retailers to display merchandise. Gondolas typically consist of a flat base and a vertical component featuring notches, pegboards, or slatwalls. The vertical piece can be fitted with shelves, hooks, or other displays. Gondolas placed end-to-end can form rows of shelving, while stand-alone gondolas tend to be used for special themed displays. A gondola placed perpendicular to the end of a row of other gondolas can be used as an endcap. In Europe, gondola normally refers to double-sided shop shelving. In clothing stores, merchandising is carried out using specialized shelving for clothing, which makes it possible to highlight specific products and increase the average basket size at the checkout.
See also
Visual merchandising
Endcap
References
Retail display
Retail store elements | Gondola (retail) | Technology | 192 |
3,848,399 | https://en.wikipedia.org/wiki/Sensormatic | Sensormatic is a subsidiary of Tyco International (now owned by Johnson Controls) that manufactures and sells electronic article surveillance equipment. It manufactures acousto-magnetic (AM) electronic article surveillance systems.
Sensormatic Electronics Corporation was purchased by Tyco International in 2001. The acquisition was executed by a merger of Sensormatic with a subsidiary of Tyco. Sensormatic is frequently called by the name of its parent company ADT, formerly ADT/Tyco.
A product from Sensormatic is the Supertag, a hard loss-prevention tag. The Supertag replaced the Sensormatic Ultragator tag, a tan Ultra Max tag sold to retail companies, whose design was improved upon in the Supertag.
Sensormatic specializes in the area of sourcetagging. A sourcetag is a security tag or label applied during the manufacturing process. Recently Sensormatic introduced disposable hard source tags at a few retail chains.
Johnson Controls announced the SuperTag 4 as part of the Sensormatic portfolio. The SuperTag 4 is an iteration of the SuperTag product.
External links
References
Johnson Controls
Security technology
Wireless locating | Sensormatic | Technology | 235 |
20,905,227 | https://en.wikipedia.org/wiki/M47%20bomb | The M47 bomb was a chemical bomb designed during World War II for use by the U.S. Army Air Forces.
Design
The bomb was designed for aerial bombardment and maximum efficiency after being dropped. It therefore had a very thin metal sheet as its only cover, as little as . The bomb is approximately in diameter, with a hemispherical nose. The M107 bomb fuse at the nose of the bomb detonated the weapon, releasing the contents inside. The bomb was designed to carry either white phosphorus (WP) or a mustard agent (H). However, the H filler was found to leak from the bomb when loaded, and the M47 and its variant M47A1 were not allowed to be loaded with it; this was due to the thin steel walls of the weapon. In storage and handling, both corrosion and rough handling were found to cause the bomb to leak. When loaded with the chemical filler H, the bomb weighed approximately , of which are from H.
The M47 bomb could also be used as an incendiary device. A mixture of rubber and gasoline could be used in the field to produce a crude incendiary bomb, and a mixture of white phosphorus and jelled gasoline also produced a flammable filling. Other mixtures included LA-60, in which crude latex was combined with caustic soda, coconut oil, and water; crepe rubber (CR), in which crude latex was reduced to a solid by precipitation and kneading; LA-100, in which crude latex was dried until it was 100% solid; and smoked rubber sheets (SR), in which crude latex was dried over a fire until it was 100% solid.
When used with these fillers, the bomb used a black powder charge to ignite and scatter the incendiary materials. The bomb typically weighed about when the incendiary fillers are used.
Variants
The M47A1 was designed to replace the M47. It has a thicker steel cover that is about thick and an acid resistant corrosion cover inside.
The M47A2 was designed to fix the leaking problems of the M47 when the agent H was carried. On the inside it was coated with a special oil that protected against corrosion from the agent H.
See also
Air raid on Bari
References
Chemical weapon delivery systems
World War II aerial bombs of the United States
Cold War aerial bombs of the United States
Chemical weapons of the United States
Military equipment introduced in the 1940s | M47 bomb | Chemistry | 509 |
19,093,787 | https://en.wikipedia.org/wiki/Sarcopenic%20obesity | Sarcopenic obesity is a combination of two disease states, sarcopenia and obesity. Sarcopenia is the loss of muscle mass, strength and physical function associated with increased age, and obesity is defined by a weight-to-height ratio or body mass index (BMI) characterized by high body fat or excess weight.
The risk of sarcopenic obesity increases with age, and its consequences are a health concern in an ageing population. This condition accelerates muscle mass and function loss as mentioned above, and is a particular concern for the elderly due to its compounding effects on mobility and overall health.
A growing subset of adults over the age of 65 has been classified as having sarcopenic obesity. There is an association between the loss of muscle mass, strength and physical function in sarcopenia and the high body fat of obesity, as the increased inactivity (sedentary lifestyle) that can accompany loss of physical function and aging can lead to weight gain as body fat increases.
Reported prevalence of sarcopenic obesity is highest among Asian males, at 14.4%. There is therefore a critical need for a consensus definition of sarcopenic obesity and for clarification of its clinical importance. The limited additional data among different populations means that future retrospective research studies could clarify the statistics and provide more robust evidence. However, this does not preclude a relationship between the two conditions or dismiss the possibility of associated symptoms and health complications.
These two disease states are synergistic, or linked together, as progression of one disease state increases the severity of the other and vice versa. A Pearson chi-square test performed on a sample of 1637 patients from 2019 to 2021 in community/outpatient clinics at Prince of Wales Hospital determined that obesity is a risk factor for sarcopenia when obesity is defined by body fat percentage rather than BMI. This can be attributed to high amounts of lean tissue or high muscle mass producing a BMI in the obese range even when body fat is not high.
Pathogenesis
The pathogenesis of sarcopenic obesity involves multiple factors, including aging, lack of physical activity, malnutrition and vitamin imbalances, insulin resistance, and hormonal changes leading to changes in body composition. The exact pathophysiology is not well understood; however, these factors have been studied in the development of sarcopenic obesity. They increase ectopic/omental fat deposition and insulin resistance, while decreasing metabolic rate, physical activity, and anabolic hormones.
It is thought that GDF15 and FGF21 (proteins/cytokines that are biomarkers of cell injury and inflammation in response to stress) are increased in sarcopenic obesity, as is myostatin. In adipose tissue, lipotoxicity and chronic inflammation are increased, along with accumulation of immune cells. In muscle, mitochondrial dysfunction, oxidative stress (an imbalance of free radicals and antioxidants that leads to cell damage), myosteatosis (fat accumulation in skeletal muscle), and anabolic resistance (a reduced muscle protein synthesis response to a given amount of protein) can occur.
Overall, this cycle between adipose and muscle tissue leads to expansion of white adipose tissue into muscle tissue. This inhibits protein synthesis, resulting in a decline of muscle mass, and promotes other mechanisms such as insulin resistance. The release of cytokines also inhibits insulin production and drives other mechanisms that increase the risk of disease, e.g. cardiovascular problems, which increase the risk of death and decrease life span.
Symptoms
The symptoms are similar to those of sarcopenia and obesity. The individual may have a body mass index that appears appropriate and healthy for his or her age but still carry a visibly high proportion of body fat.
People who have sarcopenia are experiencing gradual loss of muscle. This condition commonly presents as reduced endurance, reduced speed while walking, imbalance with increased risk of falls, struggles with everyday activity, difficulty climbing stairs, and loss of muscle size.
Sarcopenic obesity also involves obesity. People living with obesity experience an array of symptoms, including difficulty breathing, joint and back pain, a limited ability to participate in physical tasks, snoring, frequent fatigue, and excessive perspiration. In some patients, a range of comorbidities can coincide with sarcopenic obesity, for example cardiovascular disease, dementia, fractures, diabetes, and even some cancers. In some cases, pre-existing conditions can worsen if a person develops sarcopenic obesity. The effects of obesity are not only physical; people can also experience mental effects. Some of these include low confidence, which can present as doubting one's ability, worry, uncertainty, and hesitancy while assigned or performing tasks. People with obesity also tend to have low self-esteem.
Causes
Sarcopenic obesity primarily stems from changes in body composition due to an increase in age, hormonal changes, lack of exercise and a healthy diet, and other diseases.
Aging
Aging is the main factor that leads to a change in body composition. These are mainly decreases in muscle strength, increases in total fat mass, and decreases in peripheral subcutaneous fat, all of which can also be attributed to a decline in exercise and reduced basal metabolic rate. Hormonal changes also occur as a person ages, resulting in further changes in muscle composition.
Hormonal Changes
Insulin resistance often increases as a person ages and is commonly linked with obesity. Obesity is often characterized as extreme adipose tissue growth due to a decrease in energy expenditure as well as an increase in nutrition. Obesity can also lead to inflammation, which plays an additional factor in causing insulin resistance. Insulin plays a powerful role in protein synthesis since it increases intracellular uptake of short-chain amino acids and regulates expression of albumin and myosin. Insulin's regulation of hepatic and muscle cell enzymes also helps control protein degradation. Thus, insulin resistance can lead to an increase in protein breakdown and a decrease in protein synthesis in skeletal muscle.
Obesity can also lead to lower levels of testosterone, insulin-like growth factor 1 (IGF-1), and other anabolic hormones. The high amount of circulating free fatty acids also inhibits growth hormone production. These hormonal changes are often associated with a loss in muscle strength and mass.
Inflammation
Inflammation is one of the key factors that contributes to the reduction of muscle mass and strength in sarcopenic obesity. Adipose tissue secretes hormones and proteins, such as pro-inflammatory cytokines (TNF-α, IL-6, and IL-1) and adipokines (leptin and adiponectin).
Because there is a larger amount of adipose tissue in those who are obese, the inflammatory response is up-regulated. This inflammation can induce insulin resistance, leading to a decrease in skeletal muscle strength and mass. Inflammation can also directly cause muscle atrophy by suppressing protein synthesis and inducing the breakdown of proteins. It indirectly affects muscle mass by causing metabolic disorders in the digestive system, liver, and other cells.
Exercise
One of the factors that causes sarcopenic obesity is a decline in physical activity, often as a result of aging. This decrease in exercise leads to a decrease in muscle mass and strength, which in turn lowers basal metabolic rate and allows greater accumulation of fat. As the body continues to age, the lack of physical activity, as well as other factors, further prevents a person from exercising regularly. In addition, a lack of exercise can decrease muscle protein synthesis and affect hormonal balance.
Diagnosis
Sarcopenic obesity combines high body fat with low muscle mass and function.
It can be screened for with simple measures such as body mass index and the waist-hip ratio.
Sarcopenic obesity is defined as the presence of increased levels of adipose tissue together with below-average muscle mass and function in a patient. The diagnostic procedure involves a number of body composition assessments. Sarcopenic obesity is more challenging to diagnose than many other diseases and tends to be under-diagnosed in all populations. The condition is thought to affect mainly the older population, since people tend to lose muscle mass as they age and older people are less likely to engage in physical activity, which can lead to weight gain. The intricate definition of sarcopenic obesity is thought to contribute to under-diagnosis, especially in the younger population. Some research points to anthropometric diagnosis based on South Asian cut-offs as the most efficacious way to diagnose sarcopenic obesity. Anthropometric measurement refers to measurement of the human body; diagnosing by this method involves non-invasive, easily obtained measurements of height, weight, body mass index (BMI), head circumference, skinfold thickness, and circumferences of the waist, hips and limbs. Normal values are set by the Centers for Disease Control and Prevention (CDC) or the World Health Organization (WHO) based on a nutritional status evaluation, and people with abnormal values undergo further evaluation. For obesity, abnormal values are a BMI greater than 30 kg/m^2 or high body fat levels, combined with altered body composition caused by low skeletal muscle function and mass.
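As a minimal sketch of the anthropometric screening step described above, the Python snippet below flags a BMI above the 30 kg/m^2 cut-off mentioned in the text; the waist-to-hip threshold is an assumed illustrative value, and an actual diagnosis of sarcopenic obesity additionally requires assessment of muscle mass and function.
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / (height_m ** 2)

def anthropometric_flags(weight_kg, height_m, waist_cm=None, hip_cm=None):
    """Simplified screening flags; not a diagnostic tool.
    The 0.9 waist-to-hip ratio threshold is an assumed placeholder,
    not a cut-off stated in the text."""
    flags = []
    if bmi(weight_kg, height_m) > 30.0:
        flags.append("BMI above 30 kg/m^2 (obesity cut-off)")
    if waist_cm is not None and hip_cm is not None and waist_cm / hip_cm > 0.9:
        flags.append("elevated waist-to-hip ratio (assumed 0.9 threshold)")
    return flags

print(anthropometric_flags(95.0, 1.70, waist_cm=104, hip_cm=102))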
Treatment
As of now, there are no therapies that directly cure sarcopenic obesity. However, there are a few strategies, including lifestyle modifications and pharmacological, that can manage both disease states. An appropriate weight training and weight loss program can help to improve the patient's condition.
Weight Loss + Exercise
Through caloric restriction of at least 10%, weight loss is feasible. However, weight loss through diet changes alone may also cause loss of muscle mass, which exacerbates the effects of sarcopenia. Regular exercise combined with diet changes has been shown to reduce muscle mass loss and increase muscle strength. Incorporating progressive resistance training may counteract sarcopenia by causing muscle hypertrophy and encouraging muscle protein synthesis, and elastic resistance training incorporated into an exercise routine has also been shown to reduce muscle loss during weight loss. It is therefore important for patients to combine both approaches in order to lose weight without losing muscle mass. In patients who combined effective weight loss and exercise, muscle strength increased while body mass decreased, indicating a relative increase in muscle mass. This combination is regarded as the most effective treatment for sarcopenic obesity.
Nutrition
As individuals age, their body composition, level of physical activity, and diet contribute to a decrease in muscle mass. Protein is a necessary macronutrient for building muscle, but older patients begin to lose the ability to synthesize muscle from protein and amino acid consumption; even when elderly patients increase their protein intake, studies show that muscle protein synthesis does not increase to the same degree as in young patients. Instead, elderly patients should focus on consuming high-quality protein containing leucine, an amino acid. Since sarcopenic obesity is most prevalent in elderly patients, it is important to consume an appropriate amount of protein to prevent muscle mass loss. Magnesium, selenium, and vitamin D supplementation may also help preserve muscle mass.
Myostatin Inhibitors
Myostatin is a protein found on muscle cells that inhibits muscle growth. Elderly patients are known to have higher levels of myostatin than younger patients, so this protein may contribute to the development of sarcopenia. Inhibiting it may help slow muscle breakdown. Elderly mice administered myostatin inhibitors were shown to have lower levels of fat and denser muscles than mice that did not receive them, suggesting that reducing myostatin levels in the elderly may lessen the chance of heart disease, diabetes, and sarcopenia. Although most data in animals seem promising, research in humans is limited and ongoing.
Testosterone
Testosterone levels are much lower in elderly individuals than in younger individuals, and lower-than-normal testosterone levels in males are linked to pathologies such as cardiovascular risk, obesity, and sarcopenia. One study showed that both younger and older males on testosterone therapy had improvements in muscle mass with testosterone enanthate injections, and another described decreased fat mass in males over the age of 65 using testosterone patches. This type of treatment depends on the serum testosterone levels of male patients and is not the sole treatment for sarcopenic obesity.
Complications/Conclusions
Low muscle mass or obesity are risk factors for reduced physical capacity and quality of life.
As a result of sarcopenic obesity, the risks of cardiovascular disease, cancer, type 2 diabetes, fractures, and disability increase, and quality of life is reduced. This is important because the condition is associated with all-cause mortality. With early diagnosis, preventative treatment to delay the degradation of muscle and to manage weight and fat could prove beneficial.
Preventatively, a diet high in protein combined with physical activity outdoors can reduce the risk of sarcopenic obesity. With the controllable risk factors being lack of physical activity and malnutrition/vitamin imbalances, mitigating these can improve outcomes. Physical activity and proper nutritional supplementation are important non-pharmacological options to delay and/or treat sarcopenic obesity, but they come with limitations: if the individual cannot engage in physical activity, limited walking capacity or intolerance of higher-intensity exercise can restrict muscle growth at an advanced age. Alternatively, if the individual does not have a high amount of muscle mass to begin with, building muscle at a later age can prove challenging because of sarcopenia.
See also
Normal weight obesity
Weight training
Waist-to-height ratio
Journal of Cachexia, Sarcopenia and Muscle
References
Further reading
Medical conditions related to obesity
Aging-associated diseases | Sarcopenic obesity | Biology | 2,833 |
10,604,051 | https://en.wikipedia.org/wiki/Jane%20Cain | Ethel Jane Cain (1 May 1909 – 19 September 1996) was a British telephonist and actress, and the original voice of the speaking clock in the United Kingdom.
Working at London's Victoria Exchange, she was appointed on 21 June 1935 following a competition among GPO telephonists; there were nine finalists in total and the adjudication panel included leading actress Sybil Thorndike and Poet Laureate John Masefield, who announced that "She has a golden voice. It is beautiful." Her recording was used from 1936 until 1963, when it was replaced by Pat Simmons. She also made a record for the GPO, helping other staff members improve their speaking voices, and went on to become announcer for Henry Hall during his broadcast concerts.
Having been chosen as the 'Golden Voice Girl', in July 1935 she was offered the leading role in the Columbia Pictures film Vanity. Directed by Adrian Brunel, it began shooting at Walton-on-Thames in October and was first shown in December. Using the name Jane Cain as an actress, she then made her professional stage debut at the Open Air Theatre, Regent's Park on 17 July 1936, playing Celia in As You Like It. The Post Office had started its 'speaking clock' service on the 1st of the same month, over a year after her appointment had been announced.
In addition to working with regional repertory companies, notably a lengthy association with Scotland's Perth Theatre Company in the 1950s, she also appeared in such West End shows as A Soldier for Christmas (1944), Maigret and the Lady (1965) and The Sleeping Prince (1968). She also played supporting roles in such TV series as Starr and Company (1958) and Thirty-Minute Theatre (1961).
See also
Speaking clock
Pat Simmons, second permanent voice
Brian Cobby, third permanent voice
Lenny Henry, comedian, temporary voice
Alicia Roland, 12-year-old schoolgirl, temporary voice
Sara Mendes da Costa, fourth permanent voice
References
External links
Telecommunications Heritage Group
Includes video clips of Jane Cain.
1909 births
1996 deaths
British voice actresses
Clocks
Telephone voiceover talent
20th-century British actresses | Jane Cain | Physics,Technology,Engineering | 434 |
18,407,943 | https://en.wikipedia.org/wiki/Russula%20sanguinaria | Russula sanguinaria, commonly known as the bloody brittlegill or rosey russula, is a strikingly coloured mushroom of the genus Russula, which has the common name of brittlegills. It is bright blood-red, inedible, and grows in association with coniferous trees. It was previously widely known as Russula sanguinea.
Taxonomy
The bloody brittlegill was first described as Agaricus sanguinarius by Heinrich Christian Friedrich Schumacher in 1803, and redescribed under its current binomial name by mycologist Stephan Rauschert in 1989. Agaricus sanguinea was described by Bulliard and renamed Russula sanguinea by Elias Magnus Fries.
Both the specific epithets sanguinaria and sanguinea are derived from the Latin word sanguis ('blood'), a reference to the mushroom's colour. According to David Arora in 1986, it was unclear whether this European species is the same as the American species Russula rosacea. According to a 2012 field guide, they are the same.
Description
The robust cap grows up to in diameter. At first it is convex, but later flattens, and sometimes becomes saucer-shaped when mature. It is bright blood-red, or rose coloured at first, fading slightly with age, and often having paler areas. The cap skin peels at the margin only. The stem is firmly robust, occasionally white, but more commonly flushed with the cap colour. It is streaked vertically, and tends to turn greyish pink with age. The cream to pale ochre gills are adnate to slightly decurrent, narrow and forking. The spore print is also cream to pale ochre. The flesh is white, somewhat hot and peppery, and sometimes bitter on the tongue, with a faint fruity smell.
Similar species
Russula helodes is macroscopically identical, but tends to favour sphagnum moss in coniferous forests, and is much rarer.
Russula emetica (Schaeff.) Pers. grows in the same habitat, and has a bright red cap. It almost never has a coloured stipe, and is very crumbly and fragile.
Most of the other common bright red Russula species grow with deciduous trees.
Distribution and habitat
Russula sanguinaria appears in summer and autumn. It is widespread in the northern temperate zones, and is mycorrhizal with softwood trees, often Pinus (pine) in coniferous woodland, on sandy soils.
Edibility
This mushroom is inedible; it has a 'peppery' taste, and is sometimes quite bitter. Many similar-tasting Russulas are poisonous when eaten raw. The symptoms are mainly gastrointestinal in nature: diarrhoea, vomiting and colicky abdominal cramps. The active agent has not been identified but is thought to consist of sesquiterpenes, which have been isolated from Russula sardonia, and the related genus Lactarius.
See also
List of Russula species
References
Fungi described in 1803
Fungi of Europe
sanguinaria
Inedible fungi
Fungus species | Russula sanguinaria | Biology | 642 |
18,250,901 | https://en.wikipedia.org/wiki/Mepitiostane | Mepitiostane, sold under the brand name Thioderon, is an orally active antiestrogen and anabolic–androgenic steroid (AAS) of the dihydrotestosterone (DHT) group which is marketed in Japan as an antineoplastic agent for the treatment of breast cancer. It is a prodrug of epitiostanol. The drug was patented and described in 1968.
Medical uses
Mepitiostane is used as an antiestrogen and antineoplastic agent in the treatment of breast cancer. It is also used as an AAS in the treatment of anemia of renal failure. A series of case reports have found it to be effective in the treatment of estrogen receptor (ER)-dependent meningiomas as well.
Side effects
Mepitiostane shows a high rate of virilizing side effects such as acne, hirsutism, and voice changes in women.
Pharmacology
Pharmacodynamics
Mepitiostane is described as similar to tamoxifen as an antiestrogen, and through its active form epitiostanol, binds directly to and antagonizes the ER. It is also an AAS.
Pharmacokinetics
Mepitiostane is converted into epitiostanol in the body.
Chemistry
Mepitiostane, also known as epitiostanol 17β-(1-methoxy)cyclopentyl ether, is a synthetic androstane steroid and a derivative of DHT. It is the C17β (1-methoxy)cyclopentyl ether of epitiostanol, which itself is 2α,3α-epithio-DHT or 2α,3α-epithio-5α-androstan-17β-ol. A related AAS is methylepitiostanol (17α-methylepitiostanol), which is an orally active variant of epitiostanol similarly to mepitiostane, though also has a risk of hepatotoxicity.
Society and culture
Generic names
Mepitiostane is the generic name of the drug and its and .
References
Androgen ethers
Anabolic–androgenic steroids
Androstanes
Antiestrogens
Cyclopentanes
Hormonal antineoplastic drugs
Prodrugs
Episulfides | Mepitiostane | Chemistry | 517 |
27,809,884 | https://en.wikipedia.org/wiki/Kirchhoff%E2%80%93Love%20plate%20theory | The Kirchhoff–Love theory of plates is a two-dimensional mathematical model that is used to determine the stresses and deformations in thin plates subjected to forces and moments. This theory is an extension of Euler-Bernoulli beam theory and was developed in 1888 by Love using assumptions proposed by Kirchhoff. The theory assumes that a mid-surface plane can be used to represent a three-dimensional plate in two-dimensional form.
The following kinematic assumptions are made in this theory:
straight lines normal to the mid-surface remain straight after deformation
straight lines normal to the mid-surface remain normal to the mid-surface after deformation
the thickness of the plate does not change during a deformation.
Assumed displacement field
Let the position vector of a point in the undeformed plate be . Then
The vectors form a Cartesian basis with origin on the mid-surface of the plate, and are the Cartesian coordinates on the mid-surface of the undeformed plate, and is the coordinate for the thickness direction.
Let the displacement of a point in the plate be . Then
This displacement can be decomposed into a vector sum of the mid-surface displacement and an out-of-plane displacement in the direction. We can write the in-plane displacement of the mid-surface as
Note that the index takes the values 1 and 2 but not 3.
Then the Kirchhoff hypothesis implies that
If are the angles of rotation of the normal to the mid-surface, then in the Kirchhoff-Love theory
Note that we can think of the expression for as the first order Taylor series expansion of the displacement around the mid-surface.
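In a commonly used notation (assumed here), with u^0_\alpha denoting the in-plane displacement of the mid-surface and w^0 its deflection, the Kirchhoff hypothesis can be written as
u_\alpha(x_1, x_2, x_3) = u^0_\alpha(x_1, x_2) - x_3 \frac{\partial w^0}{\partial x_\alpha}, \qquad u_3(x_1, x_2, x_3) = w^0(x_1, x_2), \qquad \alpha = 1, 2,
so that the rotation angles of the normal to the mid-surface are \varphi_\alpha = \partial w^0 / \partial x_\alpha.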
Quasistatic Kirchhoff-Love plates
The original theory developed by Love was valid for infinitesimal strains and rotations. The theory was extended by von Kármán to situations where moderate rotations could be expected.
Strain-displacement relations
For the situation where the strains in the plate are infinitesimal and the rotations of the mid-surface normals are less than 10° the strain-displacement relations are
where as .
Using the kinematic assumptions we have
Therefore, the only non-zero strains are in the in-plane directions.
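In the same assumed notation, the strain field implied by these kinematics is commonly written as
\varepsilon_{\alpha\beta} = \frac{1}{2}\left( \frac{\partial u^0_\alpha}{\partial x_\beta} + \frac{\partial u^0_\beta}{\partial x_\alpha} \right) - x_3 \frac{\partial^2 w^0}{\partial x_\alpha \partial x_\beta}, \qquad \varepsilon_{\alpha 3} = \varepsilon_{33} = 0.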
Equilibrium equations
The equilibrium equations for the plate can be derived from the principle of virtual work. For a thin plate under a quasistatic transverse load pointing towards positive direction, these equations are
where the thickness of the plate is . In index notation,
where are the stresses.
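In a commonly quoted form (notation and sign conventions assumed here), the stress resultants, moment resultants and quasistatic equilibrium equations are
N_{\alpha\beta} = \int_{-h/2}^{h/2} \sigma_{\alpha\beta} \, dx_3, \qquad M_{\alpha\beta} = \int_{-h/2}^{h/2} x_3 \, \sigma_{\alpha\beta} \, dx_3,
\frac{\partial N_{\alpha\beta}}{\partial x_\beta} = 0, \qquad \frac{\partial^2 M_{\alpha\beta}}{\partial x_\alpha \partial x_\beta} + q = 0,
where h is the total plate thickness and q the distributed transverse load.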
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for small rotations
|-
|For the situation where the strains and rotations of the plate are small the virtual internal energy is given by
where the thickness of the plate is and the stress resultants and stress moment resultants are defined as
Integration by parts leads to
The symmetry of the stress tensor implies that . Hence,
Another integration by parts gives
For the case where there are no prescribed external forces, the principle of virtual work implies that . The equilibrium equations for the plate are then given by
If the plate is loaded by an external distributed load that is normal to the mid-surface and directed in the positive direction, the external virtual work due to the load is
The principle of virtual work then leads to the equilibrium equations
Boundary conditions
The boundary conditions that are needed to solve the equilibrium equations of plate theory can be obtained from the boundary terms in the principle of virtual work. In the absence of external forces on the boundary, the boundary conditions are
Note that the quantity is an effective shear force.
Constitutive relations
The stress-strain relations for a linear elastic Kirchhoff plate are given by
Since and do not appear in the equilibrium equations it is implicitly assumed that these quantities do not have any effect on the momentum balance and are neglected. The remaining stress-strain relations, in matrix form, can be written as
Then,
and
The extensional stiffnesses are the quantities
The bending stiffnesses (also called flexural rigidity) are the quantities
The Kirchhoff-Love constitutive assumptions lead to zero shear forces. As a result, the equilibrium equations for the plate have to be used to determine the shear forces in thin Kirchhoff-Love plates. For isotropic plates, these equations lead to
Alternatively, these shear forces can be expressed as
where
Small strains and moderate rotations
If the rotations of the normals to the mid-surface are in the range of 10° to 15°, the strain-displacement relations can be approximated as
Then the kinematic assumptions of Kirchhoff-Love theory lead to the classical plate theory with von Kármán strains
This theory is nonlinear because of the quadratic terms in the strain-displacement relations.
If the strain-displacement relations take the von Karman form, the equilibrium equations can be expressed as
Isotropic quasistatic Kirchhoff-Love plates
For an isotropic and homogeneous plate, the stress-strain relations are
where is Poisson's ratio and is Young's modulus. The moments corresponding to these stresses are
In expanded form,
where for plates of thickness . Using the stress-strain relations for the plates, we can show that the stresses and moments are related by
At the top of the plate where , the stresses are
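For an isotropic, homogeneous plate the moment–curvature relation and the bending stiffness are commonly quoted as follows (E is Young's modulus, \nu Poisson's ratio, h the total thickness; the notation is assumed here, and some references use the half-thickness instead, which changes the numerical factor):
M_{\alpha\beta} = -D \left[ (1 - \nu) \frac{\partial^2 w^0}{\partial x_\alpha \partial x_\beta} + \nu \, \delta_{\alpha\beta} \, \nabla^2 w^0 \right], \qquad D = \frac{E h^3}{12 (1 - \nu^2)}.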
Pure bending
For an isotropic and homogeneous plate under pure bending, the governing equations reduce to
Here we have assumed that the in-plane displacements do not vary with and . In index notation,
and in direct notation
which is known as the biharmonic equation.
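Written out for the deflection w^0, the biharmonic equation reads
\nabla^2 \nabla^2 w^0 = \frac{\partial^4 w^0}{\partial x_1^4} + 2 \frac{\partial^4 w^0}{\partial x_1^2 \partial x_2^2} + \frac{\partial^4 w^0}{\partial x_2^4} = 0.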
The bending moments are given by
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for pure bending
|-
|For an isotropic, homogeneous plate under pure bending the governing equations are
and the stress-strain relations are
Then,
and
Differentiation gives
and
Plugging into the governing equations leads to
Since the order of differentiation is irrelevant we have , , and . Hence
In direct tensor notation, the governing equation of the plate is
where we have assumed that the displacements are constant.
Bending under transverse load
If a distributed transverse load pointing along positive direction is applied to the plate, the governing equation is . Following the procedure shown in the previous section we get
In rectangular Cartesian coordinates, the governing equation is
and in cylindrical coordinates it takes the form
Solutions of this equation for various geometries and boundary conditions can be found in the article on bending of plates.
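In the notation assumed above, the governing equation of this section is commonly written
D \, \nabla^2 \nabla^2 w = q(x_1, x_2),
and, for an axisymmetric load in cylindrical coordinates,
D \, \frac{1}{r} \frac{d}{dr} \left[ r \frac{d}{dr} \left( \frac{1}{r} \frac{d}{dr} \left( r \frac{dw}{dr} \right) \right) \right] = q(r).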
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for transverse loading
|-
|For a transversely loaded plate without axial deformations, the governing equation has the form
where is a distributed transverse load (per unit area). Substitution of the expressions for the derivatives of into the governing equation gives
Noting that the bending stiffness is the quantity
we can write the governing equation in the form
In cylindrical coordinates ,
For symmetrically loaded circular plates, , and we have
Cylindrical bending
Under certain loading conditions a flat plate can be bent into the shape of the surface of a cylinder. This type of bending is called cylindrical bending and represents the special situation where . In that case
and
and the governing equations become
Dynamics of Kirchhoff-Love plates
The dynamic theory of thin plates determines the propagation of waves in the plates, and the study of standing waves and vibration modes.
Governing equations
The governing equations for the dynamics of a Kirchhoff-Love plate are
where, for a plate with density ,
and
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equations governing the dynamics of Kirchhoff-Love plates
|-
|
The total kinetic energy (more precisely, action of kinetic energy) of the plate is given by
Therefore, the variation in kinetic energy is
We use the following notation in the rest of this section.
Then
For a Kirchhoff-Love plate
Hence,
Define, for constant through the thickness of the plate,
Then
Integrating by parts,
The variations and are zero at and .
Hence, after switching the sequence of integration, we have
Integration by parts over the mid-surface gives
Again, since the variations are zero at the beginning and the end of the time interval under consideration, we have
For the dynamic case, the variation in the internal energy is given by
Integration by parts and invoking zero variation at the boundary of the mid-surface gives
If there is an external distributed force acting normal to the surface of the plate, the virtual external work done is
From the principle of virtual work, or more precisely, Hamilton's principle for a deformable body, we have . Hence the governing balance equations for the plate are
Solutions of these equations for some special cases can be found in the article on vibrations of plates. The figures below show some vibrational modes of a circular plate.
Isotropic plates
The governing equations simplify considerably for isotropic and homogeneous plates for which the in-plane deformations can be neglected. In that case we are left with one equation of the following form (in rectangular Cartesian coordinates):
where is the bending stiffness of the plate. For a uniform plate of thickness ,
In direct notation
For free vibrations, the governing equation becomes
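With in-plane displacements and rotary inertia neglected, as stated above, the forced and free vibration equations are commonly written as (\rho is the plate density and h the total thickness, both assumed notation)
D \, \nabla^2 \nabla^2 w + \rho h \, \frac{\partial^2 w}{\partial t^2} = q(x_1, x_2, t), \qquad D \, \nabla^2 \nabla^2 w + \rho h \, \frac{\partial^2 w}{\partial t^2} = 0 \quad \text{(free vibration)},
with D = E h^3 / (12 (1 - \nu^2)) as before.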
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of dynamic governing equations for isotropic Kirchhoff-Love plates
|-
|
For an isotropic and homogeneous plate, the stress-strain relations are
where are the in-plane strains. The strain-displacement relations
for Kirchhoff-Love plates are
Therefore, the resultant moments corresponding to these stresses are
The governing equation for an isotropic and homogeneous plate of uniform thickness in the
absence of in-plane displacements is
Differentiation of the expressions for the moment resultants gives us
Plugging into the governing equations leads to
Since the order of differentiation is irrelevant we have . Hence
If the flexural stiffness of the plate is defined as
we have
For small deformations, we often neglect the spatial derivatives of the transverse acceleration of the
plate and we are left with
Then, in direct tensor notation, the governing equation of the plate is
References
See also
Bending
Bending of plates
Infinitesimal strain theory
Linear elasticity
Plate theory
Stress (mechanics)
Stress resultants
Vibration of plates
Continuum mechanics
Gustav Kirchhoff | Kirchhoff–Love plate theory | Physics | 2,152 |
6,592,171 | https://en.wikipedia.org/wiki/Bobbinet | Bobbinet tulle or genuine tulle is a specific type of tulle which has been made in the United Kingdom since the invention of the bobbinet machine. John Heathcoat coined the term "bobbin net", or bobbinet as it is spelled today, to distinguish this machine-made tulle from the handmade "pillow lace", produced using a lace pillow to create bobbin lace. Machines based on his original designs are still in operation today producing fabrics in Perry Street, Chard, Somerset, UK.
When bobbinet is woven with spots, it is called point d'esprit.
History
The forerunner of bobbinet tulle was bobbin lace. Lace has been produced for a long time, made in tedious hand labour with thin thread and needles or bobbins. Bobbin lace is made by weaving the threads by moving the bobbins over or under each other. Much bobbin lace is based on a net ground. By the end of the 18th century, people tried to produce the net ground mechanically. In 1765, they managed to create a tulle-like fabric on a so-called stocking framework. It took, however, some more years until the first real tulle could be produced mechanically.
The forerunner of the bobbinet machine was the 1589 stocking frame, a weaving frame fitted with a bar of bearded needles that passed back and forth, to and from the operator. There was no warp. The beards were simultaneously depressed by a presser bar, catching the weft and holding it back a course to make a row of loops. After Strutt had modified the machine in 1759 to do ribbing, Hammond in 1764 used a tickler stick to transfer the loops two or three gaits sideways, and mechanical lace making was born. There was no carriage and no comb, and the operations continued to be done in sequence by the operator.
Bobbinet lace machines were invented in 1808 by John Heathcoat. He studied the hand movements of a Northamptonshire manual lace maker and reproduced them in the roller-locker machine. Heathcoat's machine was patented in 1808 (patent no. 3151), and with a slight modification it was patented again in 1809 (patent no. 3216), with the 1809 version becoming known as the 'Old Loughborough'. The improved machine was wide, and designed for use with cotton.
Heathcoat continued to improve his machine over the years, but a number of machines breaching his patent also came into production. The 'Circular' was an improvement on the machine, designed in 1824 by William Morley (patent no. 4921). As it gained ascendancy, its distinctive name was dropped; it became the bobbinet machine, and Heathcoat's machine the Old Loughborough.
The 'Old Loughborough' became the standard lacemaking machine, particularly the 1824 form known as the 'Circular', producing two-twist plain net. The smooth, unpatterned tulle produced on these machines was on a par with real, handmade lace net. Heathcoat's bobbinet machine was so effective that modern bobbinet machines have altered little from his original design. During the next 30 years, inventors were patenting improvements to their machines. The ones that stand out are the Pusher machine, the Levers machine (now spelled Leavers) and the Nottingham lace curtain machine. Each of these developed into separate machines. Others were the Traverse Warp machine and the Straight Bolt machine.
Fabric structure
Bobbinet tulle is constructed by warp and weft yarns in which the weft yarn is looped diagonally around the vertical warp yarn to form a hexagonal mesh which is regular and clearly defined.
Bobbinet netting has a characteristic diagonal fabric appearance, is diagonally stable and slideproof, durable, sheer, the lightest bobbinet weighing no more than , with a high strength to weight ratio.
Uses
Bobbinet tulle fabrics have long been used for high-quality exclusive curtains, bridalwear, haute couture fashion, lingerie, embroidery, where it is used as base cloth for the actual embroidery, and as base nets for high-quality wigs.
Use has also extended into technical applications where the material's properties are more important than its appearance. These technical applications include sunblinds for cars and railway coaches, safety nets, parachute skirting, radar reflective fabrics for military decoys, flexible textile switches and sensors, as well as light-control fabrics for the film and theatre industries. Depending on the yarns bobbinet tulle is produced with, it can, for example, be made to be almost invisible against the skin, or even conductive.
References
Bibliography
Woven fabrics
Net fabrics
Stage lighting
Scenic design
Machine-made lace
Lace-making machinery | Bobbinet | Engineering | 977 |
37,177,807 | https://en.wikipedia.org/wiki/6%20Hydrae | 6 Hydrae is a single star in the equatorial constellation of Hydra, located 373 light-years away from the Sun. It has the Bayer designation a Hydrae; 6 Hydrae is the Flamsteed designation. This object is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of 4.98. It is moving closer to the Earth with a heliocentric radial velocity of −8 km/s. Eggen (1995) listed it as a proper motion candidate for membership in the IC 2391 supercluster.
This is an aging giant star with a stellar classification of K3 III, which indicates it has exhausted the hydrogen at its core and evolved away from the main sequence. As a consequence, it has expanded to 33 times the radius of the Sun. The star is radiating 267 times the luminosity of the Sun from its swollen photosphere at an effective temperature of .
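The quoted radius and luminosity are related to the effective temperature through the Stefan–Boltzmann law; for context, the standard relation (no values assumed beyond those above) is
L = 4 \pi R^2 \sigma T_{\mathrm{eff}}^4, \qquad \text{so} \qquad \frac{R}{R_\odot} = \left( \frac{L}{L_\odot} \right)^{1/2} \left( \frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}} \right)^{-2}.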
References
K-type giants
Hydra (constellation)
Hydrae, a
Durchmusterung objects
Hydrae, 06
073840
042509
3431 | 6 Hydrae | Astronomy | 225 |
2,239,896 | https://en.wikipedia.org/wiki/Renzapride | Renzapride is a prokinetic agent and antiemetic which acts as a full 5-HT4 agonist and partial 5-HT3 antagonist. It also functions as a 5-HT2B antagonist and has some affinity for the 5-HT2A and 5-HT2C receptors.
Renzapride was being developed by Alizyme plc of the United Kingdom. In May 2016, EndoLogic LLC, a US-based pharmaceutical and medical device company, acquired the US and worldwide patent rights to Renzapride.
Endologic confirmed the cardiac safety of renzapride through a “Thorough QTc” study and sold the rights to Atlantic Healthcare plc in 2019, a specialist pharmaceutical company.
Atlantic Healthcare is focusing on the development of renzapride for the management of gastrointestinal (GI) motility in a number of rare diseases, including systemic scleroderma and cystic fibrosis, both of which are associated with chronic GI motility problems and for which there are no approved therapies.
Clinical trials
In nine diabetic patients with autonomic neuropathy, renzapride reduced the mean lag phase of gastric emptying by 20–26 min at all doses (P < 0.01).
In Phase 2a studies in subjects with constipation, renzapride was shown to accelerate colonic transit (p=0.016 vs placebo, P=0.009) (Ref: ATL 1251/001/CL) as well as to increase daily stool frequency (p<0.005) (Ref: ATL 1251/025/CL).
Renzapride has been assessed in Phase II clinical trials with a total of 578 patients with constipation-predominant irritable bowel syndrome (IBS-C). As compared with placebo, the treatment groups reported better relief of their overall symptoms, namely abdominal pain and discomfort, increase in the number of pain free days, improved stool frequency, consistency and ease of passage of bowel movements. There were no significant differences in the reported Serious Adverse Events between treatment and placebo groups.
In the largest of these Phase II trials, 510 subjects with IBS-C received either 1, 2 or 4 mg QD renzapride, or placebo QD for 12 weeks. The Weekly responder rate based on subject's assessment of whether they had relief from abdominal pain and/or discomfort associated with IBS during weeks 5-12 was 56% (renzapride 4 mg) vs 49% (placebo). For females the treatment effect was larger, 61% (renzapride 4 mg) vs 49% (placebo). Statistically significant effects in favour of renzapride were observed for improvements in stool consistency and increased bowel movements.
In the Phase III clinical trial in IBS-C, 1798 female patients received either 2 or 4 mg Renzapride, or placebo once daily, for 12 weeks. The mean number of months with relief of overall symptoms was 0.6, 0.55 and 0.44 for renzapride 2 mg twice a day, renzapride 4 mg once a day and placebo, respectively, with both renzapride doses being statistically superior to placebo (p=0.004 and p=0.027, respectively). On responder analysis, the proportion of responders was 33.2%, 29.8%, and 24.3% for renzapride 2 mg twice a day, renzapride 4 mg once a day and placebo, respectively.
The 8.9% delta between renzapride 2 mg twice daily and placebo compares favourably with other FDA approved therapies (Ford ).
References
5-HT3 antagonists
Abandoned drugs
Anilines
Benzamides
Chloroarenes
Nitrogen heterocycles
Phenol ethers
Heterocyclic compounds with 2 rings | Renzapride | Chemistry | 831 |
68,437,256 | https://en.wikipedia.org/wiki/Plethystic%20exponential | In mathematics, the plethystic exponential is a certain operator defined on (formal) power series which, like the usual exponential function, translates addition into multiplication. This exponential operator appears naturally in the theory of symmetric functions, as a concise relation between the generating series for the elementary, complete homogeneous, and power sum symmetric polynomials in many variables. Its name comes from the operation called plethysm, defined in the context of so-called lambda rings.
In combinatorics, the plethystic exponential is a generating function for many well studied sequences of integers, polynomials or power series, such as the number of integer partitions. It is also an important technique in the enumerative combinatorics of unlabelled graphs, and many other combinatorial objects.
In geometry and topology, the plethystic exponential of a certain geometric/topologic invariant of a space, determines the corresponding invariant of its symmetric products.
Definition, main properties and basic examples
Let R[[t]] be a ring of formal power series in the variable t, with coefficients in a commutative ring R. Denote by
R[[t]]^+ the ideal consisting of power series without constant term. Then, given a series f in this ideal, its plethystic exponential PE[f] is defined in terms of the usual exponential function.
It is readily verified (writing PE[f] when the variable is understood) that the plethystic exponential converts addition of such power series into multiplication.
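In this notation, the defining formula and the sum-to-product property are commonly written as
PE[f](t) = \exp\left( \sum_{k=1}^{\infty} \frac{f(t^k)}{k} \right), \qquad PE[f + g] = PE[f] \, PE[g].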
Some basic examples are:
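Two commonly quoted examples, in the notation above, are
PE[t] = \exp\left( \sum_{k \ge 1} \frac{t^k}{k} \right) = \frac{1}{1 - t}, \qquad PE\left[ \frac{t}{1 - t} \right] = \prod_{n \ge 1} \frac{1}{1 - t^n} = \sum_{n \ge 0} p(n) \, t^n.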
In this last example, the coefficient of t^n is the number of partitions of n.
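A quick numerical check of this partition-counting example, using nothing beyond the definition above: expand exp(sum_{k>=1} f(t^k)/k) for f(t) = t/(1-t) as a truncated power series and compare its coefficients with the partition numbers obtained from the usual product recursion. This is an illustrative Python sketch, not code from any source.
from math import exp

N = 12  # truncation degree

# f(t) = t/(1 - t) = t + t^2 + ... as a list of coefficients f[0..N]
f = [0.0] + [1.0] * N

# g(t) = sum_{k >= 1} f(t^k)/k, truncated at degree N
g = [0.0] * (N + 1)
for k in range(1, N + 1):
    for j in range(1, N // k + 1):
        g[j * k] += f[j] / k

# exponentiate the series: if h = exp(g), then n*h[n] = sum_{j=1..n} j*g[j]*h[n-j]
h = [0.0] * (N + 1)
h[0] = exp(g[0])  # g has no constant term, so this is 1
for n in range(1, N + 1):
    h[n] = sum(j * g[j] * h[n - j] for j in range(1, n + 1)) / n

# partition numbers p(n) from the product (1-t)^-1 (1-t^2)^-1 ... (standard recursion)
p = [1] + [0] * N
for part in range(1, N + 1):
    for d in range(part, N + 1):
        p[d] += p[d - part]

print([round(c) for c in h])  # coefficients of PE[t/(1-t)]
print(p)                      # partition numbers; the two lists agree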
The plethystic exponential can be also defined for power series rings in many variables.
Product-sum formula
The plethystic exponential can be used to provide numerous product-sum identities, as a consequence of a product formula for plethystic exponentials themselves: the plethystic exponential of a formal power series with real coefficients can be expanded as an infinite product whose exponents are those coefficients (see below). The analogous product expression also holds in the many variables case. One particularly interesting case is its relation to integer partitions and to the cycle index of the symmetric group.
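One common way to state the product formula: if f(t) = \sum_{n \ge 1} a_n t^n is a formal power series with real coefficients a_n (symbols assumed here), then
PE[f](t) = \prod_{n \ge 1} \frac{1}{(1 - t^n)^{a_n}}.
Applied to f(t) = t/(1-t), whose coefficients are all 1, this recovers the partition generating function mentioned earlier.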
Relation with symmetric functions
Working with variables x₁, …, x_n, denote by h_k the complete homogeneous symmetric polynomial, that is, the sum of all monomials of degree k in the variables x_i, and by e_k the elementary symmetric polynomials. Then the h_k and the e_k are related to the power sum polynomials p_k = x₁^k + … + x_n^k by Newton's identities, which can be written succinctly using plethystic exponentials, as sketched below.
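The displayed identity did not survive extraction; the following is a reconstruction of the standard generating-function form of Newton's identities (a sketch in the notation above, not a verbatim quote of the original article):

```latex
\sum_{k \ge 0} h_k\, t^k
  = \exp\!\left(\sum_{k \ge 1} \frac{p_k}{k}\, t^k\right)
  = \operatorname{PE}\!\left[(x_1 + \cdots + x_n)\, t\right],
\qquad
\sum_{k \ge 0} e_k\, (-t)^k
  = \exp\!\left(-\sum_{k \ge 1} \frac{p_k}{k}\, t^k\right)
  = \operatorname{PE}\!\left[(x_1 + \cdots + x_n)\, t\right]^{-1}.
```

In words: the plethystic exponential of (x₁ + … + x_n)t packages all of the complete homogeneous polynomials at once, and its reciprocal packages the elementary ones with alternating signs.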
Macdonald's formula for symmetric products
Let X be a finite CW complex of dimension d, with Poincaré polynomial
P_X(t) = Σ_{k=0}^{d} b_k(X) t^k,
where b_k(X) is its kth Betti number. Then the Poincaré polynomial of the nth symmetric product of X, denoted Sym^n X, is obtained from a series expansion.
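The expansion itself did not survive extraction; the standard statement of Macdonald's formula, in the notation above, is reconstructed here (not a verbatim quote of the original article):

```latex
\sum_{n \ge 0} P_{\operatorname{Sym}^n X}(t)\, q^n
  = \frac{\prod_{k\ \mathrm{odd}} \left(1 + q\, t^{k}\right)^{b_k(X)}}
         {\prod_{k\ \mathrm{even}} \left(1 - q\, t^{k}\right)^{b_k(X)}}.
```

Thus the Betti numbers of every symmetric product of X are determined by those of X itself.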
The plethystic programme in physics
In a series of articles, a group of theoretical physicists, including Bo Feng, Amihay Hanany and Yang-Hui He, proposed a programme for systematically counting single and multi-trace gauge invariant operators of supersymmetric gauge theories. In the case of quiver gauge theories of D-branes probing Calabi–Yau singularities, this count is codified in the plethystic exponential of the Hilbert series of the singularity.
References
Symmetric functions | Plethystic exponential | Physics,Mathematics | 631 |
1,463,171 | https://en.wikipedia.org/wiki/List%20of%20indie%20game%20developers | This is a list of developers of indie games, that is, video game developers who are neither owned by, nor receive significant financial backing from, a video game publisher. Independent developers, which can be single individuals, small groups, or large organizations, retain operational control over their organizations and processes. Some self-publish their own games while others work with publishers.
List of notable developers
There are thousands of independent game development studios which either self-publish their titles, or enter into licensing or co-development agreements with publishers. This list is not intended to be exhaustive with respect to developers or their games, and includes only notable developers and their most notable game examples.
See also
Casual game
Dojin soft
Fangame
Indie game
List of video game developers
References
External links
Indie DB, a comprehensive database of indie developers and titles.
Indie
Indie | List of indie game developers | Technology | 167 |
34,161,990 | https://en.wikipedia.org/wiki/Blue%20light%20%28pyrotechnic%20signal%29 | Blue light is an archaic signal, the progenitor of modern pyrotechnic flares. Blue light consists of a loose, chemical composition burned in an open, hand-held hemispherical wooden cup, and so is more akin to the flashpan signals of the Admiral Nelson era than the modern, encased signal flares, which are often launched by mortar or rifle and suspended by parachute. Widely used during the eighteenth and nineteenth centuries for signaling by the world's military forces, and for general illumination in the civilian sector, blue light was remarkable for its use of poisonous arsenic compounds (realgar and orpiment), which contributed to its replacement by safer flares in the early twentieth century.
Confusion with blue-colored lanterns
Blue light was famously mentioned in accounts of the H.L. Hunley, the Confederate submarine which became the first to sink an enemy vessel, the USS Housatonic, on February 17, 1864, during the Civil War. Such blue light has been repeatedly misidentified by authors and researchers of the Hunley story as a blue lantern, since they failed to realize the 1864 meaning of "blue light" as it was known to eyewitnesses who testified to its use during the battle between the Hunley and Housatonic. Pyrotechnic blue light was commonly used by the vessels of the Federal South Atlantic Blockading Squadron off Charleston, South Carolina, and would have been a familiar sight to both Union and Confederate soldiers and sailors.
Recipes for blue light appear in early chemistry texts and often included antimony or copper compounds meant to add a blue color, but by the time of the American Civil War, standard military texts listed recipes for blue light which lacked any such coloring agent. While the generic moniker "blue light" was retained, the pyrotechnic signal was meant to burn with a vivid, white light. Modern authors have been confused by the generic name of blue light, and have imagined incorrectly that the signal which was seen during the Hunley - Housatonic encounter was blue. The oil lantern which archeologists at the Warren Lasch Conservation Center recovered from the Hunley submarine has a clear, not a blue, glass lens, further evidence which discounts the modern "blue lantern myth" of the Hunley. Blue light as made in 1864 has been reproduced according to the two recipes listed in period texts and has been tested with success over the same distances involved in the Hunley engagement.
Decline
Blue light has been obsolete for signaling since early in the twentieth century, but pyrotechnic lighting is still popular for celebratory fireworks displays, and its synonyms "Bengal light" and "Bengal fire" can still be found in modern pyrotechnic manuals. Such displays were also popular in nineteenth century civilian life: two hundred blue lights were used in the first illumination of Niagara Falls during the 1860 North American visit of the Prince of Wales.
As a nickname
"Blue Light" was a derisive nickname given to military officers of the 18th and 19th centuries, whose evangelical Christian zeal burned as brightly as its namesake signal, to the chagrin of those less ardent. During the American Civil War, Confederate General Stonewall Jackson carried the nickname "Old Blue Light" because his men said his eyes glowed with a blue light when battle commenced Shelby Foote, The Civil War; the nickname is referenced in the lyrics of "Stonewall Jackson's Way" (penned circa 1862).
References
Further reading
Lighting
Emergency communication
Military communications
Pyrotechnics | Blue light (pyrotechnic signal) | Engineering | 723 |
33,914,934 | https://en.wikipedia.org/wiki/Vertical%20pressure%20variation | Vertical pressure variation is the variation in pressure as a function of elevation. Depending on the fluid in question and the context being referred to, it may also vary significantly in dimensions perpendicular to elevation as well, and these variations have relevance in the context of pressure gradient force and its effects. However, the vertical variation is especially significant, as it results from the pull of gravity on the fluid; namely, for the same given fluid, a decrease in elevation within it corresponds to a taller column of fluid weighing down on that point.
Basic formula
A relatively simple version of the vertical fluid pressure variation is simply that the pressure difference between two elevations is the product of elevation change, gravity, and density. The equation is as follows:
ΔP = ρ · g · Δh
where
P is pressure,
ρ is density,
g is the acceleration of gravity, and
h is height.
The delta symbol indicates a change in a given variable. Since g is negative in this sign convention, an increase in height will correspond to a decrease in pressure, which fits with the previously mentioned reasoning about the weight of a column of fluid.
When density and gravity are approximately constant (that is, for relatively small changes in height), simply multiplying height difference, gravity, and density will yield a good approximation of pressure difference. If the pressure at one point in a liquid with uniform density ρ is known to be P₀, then the pressure at another point is P₁:
P₁ = P₀ + ρ g (h₁ − h₀)
where h₁ − h₀ is the vertical distance between the two points.
Where different fluids are layered on top of one another, the total pressure difference would be obtained by adding the two pressure differences: the first being from point 1 to the boundary, the second being from the boundary to point 2. This would just involve substituting the ρ and Δh values for each fluid and taking the sum of the results. If the density of the fluid varies with height, mathematical integration would be required.
Whether or not density and gravity can be reasonably approximated as constant depends on the level of accuracy needed, but also on the length scale of height difference, as gravity and density also decrease with higher elevation. For density in particular, the fluid in question is also relevant; seawater, for example, is considered an incompressible fluid; its density can vary with height, but much less significantly than that of air. Thus water's density can be more reasonably approximated as constant than that of air, and given the same height difference, the pressure differences in water are approximately equal at any height.
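As a quick numerical illustration of the constant-density approximation for water (a sketch; the density and gravity values are typical textbook numbers, not taken from this article):

```python
# Pressure change over a 10 m decrease in elevation in fresh water,
# using Delta_P = rho * g * Delta_h with the signed-g convention used above.
rho = 1000.0      # density of fresh water, kg/m^3 (typical value)
g = -9.81         # acceleration of gravity, m/s^2 (negative: it points downward)
delta_h = -10.0   # change in elevation, m (negative: we move 10 m down)

delta_p = rho * g * delta_h   # pressure difference in pascals
print(f"Pressure increase over a {-delta_h} m descent: {delta_p:.0f} Pa "
      f"(about {delta_p / 101325:.2f} atm)")
```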
Hydrostatic paradox
The barometric formula depends only on the height of the fluid chamber, and not on its width or length. Given a large enough height, any pressure may be attained. This feature of hydrostatics has been called the hydrostatic paradox. As expressed by W. H. Besant,
Any quantity of liquid, however small, may be made to support any weight, however large.
The Flemish scientist Simon Stevin was the first to explain the paradox mathematically. In 1916 Richard Glazebrook mentioned the hydrostatic paradox as he described an arrangement he attributed to Pascal: a heavy weight W rests on a board with area A resting on a fluid bladder connected to a vertical tube with cross-sectional area α. Pouring water of weight w down the tube will eventually raise the heavy weight. Balance of forces leads to the equation
W = (A / α) · w.
Glazebrook says, "By making the area of the board considerable and that of the tube small, a large weight can be supported by a small weight of water. This fact is sometimes described as the hydrostatic paradox."
Hydraulic machinery employs this phenomenon to multiply force or torque. Demonstrations of the hydrostatic paradox are used in teaching the phenomenon.
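A short numerical sketch of the balance relation W = (A/α)·w given above (the board area, tube area and water weight are arbitrary illustrative values, not from the article):

```python
# Hydrostatic paradox / hydraulic multiplication: a small weight of water w
# poured into a narrow tube of cross-section alpha can support a large
# weight W on a board of area A, since W = (A / alpha) * w at balance.
A = 1.0          # area of the board, m^2 (illustrative)
alpha = 0.001    # cross-sectional area of the tube, m^2 (illustrative)
w = 5.0          # weight of water poured into the tube, N (illustrative)

W = (A / alpha) * w
print(f"{w} N of water in the tube can support about {W:.0f} N on the board")
```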
In the context of Earth's atmosphere
If one is to analyze the vertical pressure variation of the atmosphere of Earth, the length scale is very significant (troposphere alone being several kilometres tall; thermosphere being several hundred kilometres) and the involved fluid (air) is compressible. Gravity can still be reasonably approximated as constant, because length scales on the order of kilometres are still small in comparison to Earth's radius, which is on average about 6371 km, and gravity is a function of distance from Earth's core.
Density, on the other hand, varies more significantly with height. It follows from the ideal gas law that
ρ = m P / (k T)
where
m is the average mass per air molecule,
P is the pressure at a given point,
k is the Boltzmann constant,
T is the temperature in kelvins.
Put more simply, air density depends on air pressure. Given that air pressure also depends on air density, it would be easy to get the impression that this was a circular definition, but it is simply an interdependency of different variables. This then yields a more accurate formula, of the form
P_h = P₀ e^(−m g h / (k T))
where
P_h is the pressure at height h,
P₀ is the pressure at reference point 0 (typically referring to sea level),
m is the mass per air molecule,
g is the acceleration due to gravity,
h is the height from reference point 0,
k is the Boltzmann constant,
T is the temperature in kelvins.
Therefore, instead of pressure being a linear function of height as one might expect from the more simple formula given in the "basic formula" section, it is more accurately represented as an exponential function of height.
Note that in this simplification, the temperature is treated as constant, even though temperature also varies with height. However, the temperature variation within the lower layers of the atmosphere (troposphere, stratosphere) is only in the dozens of degrees, as opposed to their thermodynamic temperature, which is in the hundreds, so the temperature variation is reasonably small and is thus ignored. For smaller height differences, including those from top to bottom of even the tallest buildings (like the CN Tower) or for mountains of comparable size, the temperature variation will easily be within the single digits. (See also lapse rate.)
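As a rough numerical illustration of the exponential formula (a sketch; the molecular mass, temperature and sea-level pressure below are standard reference values assumed for the example, not quoted from this article):

```python
import math

# Isothermal barometric formula: P(h) = P0 * exp(-m * g * h / (k * T))
k = 1.380649e-23      # Boltzmann constant, J/K
m = 4.81e-26          # average mass of an air molecule, kg (about 28.97 u, assumed)
g = 9.80665           # gravitational acceleration, m/s^2
T = 288.15            # assumed constant temperature, K
P0 = 101325.0         # sea-level pressure, Pa

for h in (0, 1000, 5000, 8848):   # heights in metres (8848 m is roughly Mount Everest)
    P = P0 * math.exp(-m * g * h / (k * T))
    print(f"h = {h:5d} m  ->  P ~ {P:8.0f} Pa")
```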
An alternative derivation, shown by the Portland State Aerospace Society, is used to give height as a function of pressure instead. This may seem counter-intuitive, as pressure results from height rather than vice versa, but such a formula can be useful in finding height based on pressure difference when one knows the latter and not the former. Different formulas are presented for different kinds of approximations; for comparison with the previous formula, the first referenced from the article will be the one applying the same constant-temperature approximation; in which case:
z = (R T / g) · ln(P₀ / P)
where (with values used in the article)
z is the elevation in meters,
R is the specific gas constant = 287.053 J/(kg·K),
T is the absolute temperature in kelvins = 288.15 K at sea level,
g is the acceleration due to gravity = 9.80665 m/s² at sea level,
P is the pressure at a given point at elevation z, in pascals, and
P₀ is the pressure at the reference point = 101,325 Pa at sea level.
A more general formula derived in the same article accounts for a linear change in temperature as a function of height (lapse rate), and reduces to the formula above when the temperature is constant:
z = (T₀ / L) · [ (P / P₀)^(−L R / g) − 1 ]
where
L is the atmospheric lapse rate (change in temperature divided by distance) = −0.0065 K/m, and
T₀ is the temperature at the same reference point for which P = P₀,
and the other quantities are the same as those above. This is the recommended formula to use.
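The following sketch evaluates both height-from-pressure expressions above (the constant-temperature form and the lapse-rate form), using the standard-atmosphere constants cited in this section; the 70 kPa test pressure is an arbitrary example value:

```python
import math

R = 287.053     # specific gas constant for dry air, J/(kg*K)
g = 9.80665     # gravitational acceleration, m/s^2
T0 = 288.15     # sea-level reference temperature, K
P0 = 101325.0   # sea-level reference pressure, Pa
L = -0.0065     # temperature lapse rate, K/m

def height_isothermal(P):
    """Elevation from pressure assuming a constant temperature T0."""
    return (R * T0 / g) * math.log(P0 / P)

def height_with_lapse_rate(P):
    """Elevation from pressure assuming T(z) = T0 + L*z."""
    return (T0 / L) * ((P / P0) ** (-L * R / g) - 1.0)

P = 70000.0  # example pressure reading, Pa (roughly 3 km altitude)
print(f"isothermal estimate: {height_isothermal(P):7.0f} m")
print(f"lapse-rate estimate: {height_with_lapse_rate(P):7.0f} m")
```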
See also
Barometer
Hypsometric equation
Pascal's barrel
Ruina montium
Pressure gradient
Siphon
References
External links
Atmospheric pressure
Vertical position | Vertical pressure variation | Physics,Mathematics | 1,462 |
11,381,280 | https://en.wikipedia.org/wiki/Diana%27s%20Tree | Diana's Tree (Latin: Arbor Diana or Dianae), also known as the Philosopher's Tree (Arbor Philosophorum), was considered a precursor to the philosopher's stone and resembled coral with regard to its structure. It is a dendritic amalgam of crystallized silver, obtained from mercury in a solution of silver nitrate; so-called by the alchemists, among whom "Diana" stood for silver. The arborescence of this amalgam, which even included fruit-like forms on its branches, led pre-modern chemical philosophers to theorize the existence of life in the kingdom of minerals.
As an alchemical product
Alchemy was a series of practices that combined philosophical, magical, and chemical experimentation. One goal of European alchemists was to create what was known as the Philosopher's Stone, a substance that when heated and combined with a non-precious metal like copper or iron (known as the "base") would turn into gold. Although now considered a pseudoscience, the practice of alchemy has contributed experimental techniques to the chemistry world such as the processes of distillation and sublimation. The Tree of Diana was thought to be a precursor to the Philosopher's Stone: experiments involving the Tree aimed to turn non-precious metals into precious metals like gold or silver, similar to the Stone.
In pre-modern chemistry, the various methods for procuring Diana's Tree were exceedingly time-consuming; for example, the following process, originally described by Nicolas Lemery, required forty days to see results:
Giambattista della Porta in the 16th century described it this way:
The quickest method, as described by German natural philosopher Wilhelm Homberg (1652–1715), took about a quarter of an hour, and is described as follows:
The form of this metallic tree may be varied as desired. The stronger the user makes the first described water, the thicker the tree will be with branches, and sooner formed. Homberg also described how numerous other kinds of trees may be produced by crystallization and "digestion".
"Sophic mercury", George Starkey, and Isaac Newton
17th century American alchemist George Starkey, who wrote under the pen name "Eirenaeus Philalethes", had a recipe for "sophick mercury" that produced a branch-like structure consisting of an alloy of gold and mercury. His process involved the repeated distillation of the mercury, which he then heated while adding gold to produce the structure. In one version, a small seed of gold is mixed with mercury to create the "Philosopher’s Tree"; at the time Starkey was writing, the creation of gold from gold itself was viewed as being obviously possible. In 2018, Starkey's methods, including the above "Philosopher's Tree" were reconstructed by historian of science Lawrence Principe.
In 2016, a copy of Starkey's recipe for making the Tree of Diana was discovered in a manuscript by Isaac Newton; however, there is no evidence that Newton attempted the process. Newton kept his experiments on alchemy a secret, as the practice was illegal in England at the time. In addition, much of Newton's work regarding alchemy was lost in a fire supposedly started by his dog.
Other forms of the Tree
There is also Saturn's Tree, which was a deposit of crystallized lead, massed together in the form of a "tree". It is produced by placing a shaving of zinc in a solution of lead(II) acetate. In alchemy, Saturn was associated with lead.
Modern experimentation
Experiments with the Tree of Diana have inspired modern chemists to replicate its creation, using the process to analyze reactions between metals and other substances. A 1967 experiment at the University of Seattle studied the reaction between solid copper and aqueous silver nitrate. In it, silver ions reacted with the copper metal to form a crystal structure, and this reaction continued until the concentration of silver ions was exhausted.
References
External links
Alchemical substances | Diana's Tree | Chemistry | 826 |
345,035 | https://en.wikipedia.org/wiki/Halo%20%28optical%20phenomenon%29 | A halo () is an optical phenomenon produced by light (typically from the Sun or Moon) interacting with ice crystals suspended in the atmosphere. Halos can have many forms, ranging from colored or white rings to arcs and spots in the sky. Many of these appear near the Sun or Moon, but others occur elsewhere or even in the opposite part of the sky. Among the best known halo types are the circular halo (properly called the 22° halo), light pillars, and sun dogs, but many others occur; some are fairly common while others are extremely rare.
The ice crystals responsible for halos are typically suspended in cirrus or cirrostratus clouds in the upper troposphere (), but in cold weather they can also float near the ground, in which case they are referred to as diamond dust. The particular shape and orientation of the crystals are responsible for the type of halo observed. Light is reflected and refracted by the ice crystals and may split into colors because of dispersion. The crystals behave like prisms and mirrors, refracting and reflecting light between their faces, sending shafts of light in particular directions.
Atmospheric optical phenomena like halos were part of weather lore, which was an empirical means of weather forecasting before meteorology was developed. They often do indicate that rain will fall within the next 24 hours, since the cirrostratus clouds that cause them can signify an approaching frontal system.
Other common types of optical phenomena involving water droplets rather than ice crystals include the glory and the rainbow.
History
While Aristotle had mentioned halos and parhelia in antiquity, the first European descriptions of complex displays were those of Christoph Scheiner in Rome (1630), Johannes Hevelius in Danzig (1661), and Tobias Lowitz in St Petersburg (1790).
Chinese observers had recorded these for centuries, the first reference being a section of the "Official History of the Chin Dynasty" (Chin Shu) in 637, on the "Ten Haloes", giving technical terms for 26 solar halo phenomena.
Vädersolstavlan
While mostly known and often quoted for being the oldest color depiction of the city of Stockholm, Vädersolstavlan (Swedish; "The Sundog Painting", literally "The Weather Sun Painting") is arguably also one of the oldest known depictions of a halo display, including a pair of sun dogs. For two hours in the morning of 20 April 1535, the skies over the city were filled with white circles and arcs crossing the sky, while additional suns (i.e., sun dogs) appeared around the Sun.
Light pillar
A light pillar, or sun pillar, appears as a vertical pillar or column of light rising from the Sun near sunset or sunrise, though it can appear below the Sun, particularly if the observer is at a high elevation or altitude. Hexagonal plate- and column-shaped ice crystals cause the phenomenon. Plate crystals generally cause pillars only when the Sun is within 6 degrees of the horizon; column crystals can cause a pillar when the Sun is as high as 20 degrees above the horizon. The crystals tend to orient themselves near-horizontally as they fall or float through the air, and the width and visibility of a sun pillar depend on crystal alignment.
Light pillars can also form around the Moon, and around street lights or other bright lights. Pillars forming from ground-based light sources may appear much taller than those associated with the Sun or Moon. Since the observer is closer to the light source, crystal orientation matters less in the formation of these pillars.
Circular halo
Among the best-known halos is the 22° halo, often just called "halo", which appears as a large ring around the Sun or Moon with a radius of about 22° (roughly the width of an outstretched hand at arm's length). The ice crystals that cause the 22° halo are oriented semi-randomly in the atmosphere, in contrast to the horizontal orientation required for some other halos such as sun dogs and light pillars. As a result of the optical properties of the ice crystals involved, no light is refracted toward the inside of the ring, leaving the sky inside the halo noticeably darker than the sky around it and giving the impression of a "hole in the sky". The 22° halo is not to be confused with the corona, which is a different optical phenomenon caused by water droplets rather than ice crystals, and which has the appearance of a multicolored disk rather than a ring.
Other halos can form at 46° to the Sun, or at the horizon, or around the zenith, and can appear as full halos or incomplete arcs.
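The 22° figure can be checked with the textbook minimum-deviation formula for a prism; a small Python sketch, assuming the usual 60° effective prism angle between alternating side faces of a hexagonal ice crystal and a refractive index of about 1.31 for ice (values not quoted in this article):

```python
import math

def minimum_deviation(prism_angle_deg, n):
    """Minimum deviation angle (degrees) for light refracted through a prism."""
    A = math.radians(prism_angle_deg)
    return math.degrees(2.0 * math.asin(n * math.sin(A / 2.0)) - A)

# Alternate side faces of a hexagonal ice crystal act as a 60 degree prism;
# ice has a refractive index of roughly 1.31 in visible light.
print(f"~{minimum_deviation(60.0, 1.31):.1f} degrees")   # close to 22
```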
Bottlinger's ring
A Bottlinger's ring is a rare type of halo that is elliptical instead of circular. It has a small diameter, which makes it very difficult to see in the Sun's glare and more likely to be noticed around the dimmer subsun, often seen from mountain tops or airplanes. Bottlinger's rings are not well understood yet. It is suggested that they are formed by very flat pyramidal ice crystals with faces at uncommonly low angles, suspended horizontally in the atmosphere. These precise and physically problematic requirements would explain why the halo is very rare.
Other names
In the Cornish dialect of English, a halo around the sun or the moon is called a cock's eye and is an omen of bad weather. The term is related to the Breton word kog-heol (sun cock) which has the same meaning. In Nepal, the halo round the sun is called Indrasabha with a connotation of the assembly court of Lord Indra – the Hindu god of lightning, thunder, and rain.
Artificial halos
The natural phenomena may be reproduced artificially by several means: firstly, by computer simulations, and secondly by experimental means, either by rotating a single crystal around the appropriate axis or axes, or by a chemical approach. A still further and more indirect experimental approach is to find analogous refraction geometries.
Analogous refraction approach
This approach employs the fact that in some cases the average geometry of refraction through an ice crystal may be imitated or mimicked via the refraction through another geometrical object. In this way, the circumzenithal arc, the circumhorizontal arc, and the suncave Parry arcs may be recreated by refraction through rotationally symmetric (i.e. non-prismatic) static bodies. A particularly simple table-top experiment reproduces artificially the colorful circumzenithal and circumhorizontal arcs using only a water glass. The refraction through the cylinder of water turns out to be (almost) identical to the rotationally averaged refraction through an upright hexagonal ice crystal or plate-oriented crystals, thereby creating vividly colored circumzenithal and circumhorizontal arcs. In fact, the water glass experiment is often mistaken for a demonstration of a rainbow and has been around at least since 1920.
Following Huygens' idea of the (false) mechanism of the 22° parhelia, one may also illuminate (from the side) a water-filled cylindrical glass with an inner central obstruction of half the glass's diameter to achieve upon projection on a screen an appearance which closely resembles parhelia (cf. footnote [39] in Ref.): an inner red edge transitioning into a white band at larger angles on both sides of the direct transmission direction. However, while the visual match is close, this particular experiment does not involve a fake caustic mechanism and is thus no real analogue.
Chemical approaches
The earliest chemical recipes to generate artificial halos has been put forward by Brewster and studied further by A. Cornu in 1889. The idea was to generate crystals by precipitation of a salt solution. The innumerable small crystals hereby generated will then, upon illumination with light, cause halos corresponding to the particular crystal geometry and the orientation / alignment. Several recipes exist and continue to be discovered. Rings are a common outcome of such experiments. But also Parry arcs have been artificially produced in this way.
Mechanical approaches
Single axis
The earliest experimental studies on halo phenomena have been attributed to Auguste Bravais in 1847. Bravais used an equilateral glass prism which he spun around its vertical axis. When illuminated by parallel white light, this produced an artificial parhelic circle and many of the embedded parhelia. Similarly, A. Wegener used rotating hexagonal crystals to produce artificial subparhelia. In a more recent version of this experiment, many more embedded parhelia have been found using commercially available hexagonal BK7 glass crystals. Simple experiments like these can be used for educational purposes and demonstration experiments. Unfortunately, using glass crystals one cannot reproduce the circumzenithal arc or the circumhorizontal arc, because total internal reflection prevents the required ray paths when the refractive index exceeds √2.
Even earlier than Bravais, the Italian scientist F. Venturi experimented with pointed water-filled prisms to demonstrate the circumzenithal arc. However, this explanation was replaced later by the CZA's correct explanation by Bravais.
Artificial ice crystals have been employed to create halos which are otherwise unattainable in the mechanical approach via the use of glass crystals, e.g. circumzenithal and circumhorizontal arcs. The use of ice crystals ensures that the generated halos have the same angular coordinates as the natural phenomena. Other crystals such as sodium fluoride (NaF) also have a refractive index close to ice and have been used in the past.
Two axes
In order to produce artificial halos such as the tangent arcs or the circumscribed halo one should rotate a single columnar hexagonal crystal about 2 axes. Similarly, the Lowitz arcs can be created by rotating a single plate crystal about two axes. This can be done by engineered halo machines. The first such machine was constructed in 2003; several more followed. Putting such machines inside spherical projection screens, and by the principle of the so-called sky transform, the analogy is nearly perfect. A realization using micro-versions of the aforementioned machines produces authentic distortion-free projections of such complex artificial halos. Finally, superposition of several images and projections produced by such halo machines may be combined to create a single image. The resulting superposition image is then a representation of complex natural halo displays containing many different orientation sets of ice prisms.
Three axes
The experimental reproduction of circular halos is the most difficult using a single crystal only, while it is the simplest and typically achieved one using chemical recipes. Using a single crystal, one needs to realize all possible 3D orientations of the crystal. This has recently been achieved by two approaches. The first one using pneumatics and a sophisticated rigging, and a second one using an Arduino-based random walk machine which stochastically reorients a crystal embedded in a transparent thin-walled sphere.
See also
References
External links
Halo explanations and image galleries at Atmospheric Optics
Meteoros AKM – Halo explanations and image galleries (in German)
Halo reports of interesting halo observations around the World
Southern Hemisphere Halo and other atmospheric phenomena
Halo in Chisinau Moldova (photo and video)
Walter Tape & Jarmo Moilanen, Atmospheric Halos and the Search for Angle x (free e-book)
Halo Phenomena – Hyperphysics
Atmospheric optical phenomena | Halo (optical phenomenon) | Physics | 2,354 |
60,215,761 | https://en.wikipedia.org/wiki/History%20of%20chemical%20warfare | Chemical weapons have been a part of warfare in most societies for centuries. However, their usage has been extremely controversial since the 20th century.
Ancient and medieval times
Ancient Greek myths about Heracles poisoning his arrows with the venom of the Hydra monster are the earliest references to toxic weapons in western literature. Homer's epics, the Iliad and the Odyssey, allude to poisoned arrows used by both sides in the legendary Trojan War (Bronze Age Greece).
Some of the earliest surviving references to toxic warfare appear in the Indian epics Ramayana and Mahabharata. The "Laws of Manu," a Hindu treatise on statecraft (c. 400 BC) forbids the use of poison and fire arrows, but advises poisoning food and water. Kautilya's "Arthashastra", a statecraft manual of the same era, contains hundreds of recipes for creating poison weapons, toxic smokes, and other chemical weapons. Ancient Greek historians recount that Alexander the Great encountered poison arrows and fire incendiaries in India at the Indus basin in the 4th century BC.
Arsenical smokes were known to the Chinese as far back as BC and Sun Tzu's "Art of War" (c. 200 BC) advises the use of incendiary weapons. In the second century BC, writings of the Mohist sect in China describe the use of bellows to pump smoke from burning balls of toxic plants and vegetables into tunnels being dug by a besieging army. Other Chinese writings dating around the same period contain hundreds of recipes for the production of poisonous or irritating smokes for use in war along with numerous accounts of their use. These accounts describe an arsenic-containing "soul-hunting fog", and the use of finely divided lime dispersed into the air to suppress a peasant revolt in 178 AD.
The earliest recorded use of gas warfare in the West dates back to the fifth century BC, during the Peloponnesian War between Athens and Sparta. Spartan forces besieging an Athenian city placed a lighted mixture of wood, pitch, and sulfur under the walls hoping that the noxious smoke would incapacitate the Athenians, so that they would not be able to resist the assault that followed. Sparta was not alone in its use of unconventional tactics in ancient Greece; Solon of Athens is said to have used hellebore roots to poison the water in an aqueduct leading from the River Pleistos around 590 BC during the siege of Kirrha.
The earliest archaeological evidence of gas warfare is during the Roman–Persian wars. Research carried out on the collapsed tunnels at Dura-Europos in Syria suggests that during the siege of the town in the third century AD, the Sasanians used bitumen and sulfur crystals to get a fire burning in the tunnels. When ignited, the materials gave off dense clouds of choking sulfur dioxide gases which killed 19 Roman soldiers and a single Sassanian, purported to be the fire-tender, in a matter of two minutes.
Quicklime may have been used in medieval naval warfare, including the use of "lime-mortars" to throw it at enemy ships. Scottish historian David Hume, in his work The History of England, recounted how during the reign of Henry III of England the English navy destroyed an invading French fleet by attacking it with quicklime.
In the late 15th century, Spanish conquistadors encountered a rudimentary type of chemical warfare on the island of Hispaniola. The Taíno threw gourds filled with ashes and ground hot peppers at the Spaniards to create a blinding smoke screen before launching their attack.
The natives of the Pernambuco province used pepper smoke during sieges; they would wait till the wind was blowing towards the enemy, and light a bonfire filled with peppers.
Early modern era
Leonardo da Vinci proposed the use of a powder of sulfide, arsenic and verdigris in the 15th century:
throw poison in the form of powder upon galleys. Chalk, fine sulfide of arsenic, and powdered verdegris may be thrown among enemy ships by means of small mangonels, and all those who, as they breathe, inhale the powder into their lungs will become asphyxiated.
It is unknown whether this powder was ever actually used.
In the 17th century during sieges, armies attempted to start fires by launching incendiary shells filled with sulfur, tallow, rosin, turpentine, saltpeter, and/or antimony. Even when fires were not started, the resulting smoke and fumes provided a considerable distraction. Although their primary function was never abandoned, a variety of fills for shells were developed to maximize the effects of the smoke.
In 1672, during his siege of the city of Groningen, Christoph Bernhard von Galen, the Bishop of Münster, employed several different explosive and incendiary devices, some of which had a fill that included belladonna, intended to produce toxic fumes. Just three years later, August 27, 1675, the French and the Holy Roman Empire concluded the Strasbourg Agreement, which included an article banning the use of "perfidious and odious" toxic devices.
Pirate Captain Thompson used "vast numbers of powder flasks, grenade shells, and stinkpots" to defeat two pirate-hunters sent by the Governor of Jamaica in 1721.
The Qing dynasty used stinkpots in naval operations. Those earthenware incendiary weapons were in part filled with sulphur, gunpowder, nails, and shot, while the other part was filled with noxious materials designed to emanate a highly unpleasant and suffocating smell to its enemies when ignited. During the War of 1812, the Royal Navy used stinkpots in a bombardment of Stonington, Connecticut on 9 August 1814.
Industrial era
The modern notion of chemical warfare emerged from the mid-19th century, with the development of modern chemistry and associated industries. The first recorded modern proposal for the use of chemical warfare was made by Lyon Playfair, Secretary of the Science and Art Department, in 1854 during the Crimean War. He proposed a cacodyl cyanide artillery shell for use against enemy ships as way to solve the stalemate during the siege of Sevastopol. The proposal was backed by Admiral Thomas Cochrane of the Royal Navy. It was considered by the Prime Minister, Lord Palmerston, but the British Ordnance Department rejected the proposal as "as bad a mode of warfare as poisoning the wells of the enemy." Playfair's response was used to justify chemical warfare into the next century:
There was no sense in this objection. It is considered a legitimate mode of warfare to fill shells with molten metal which scatters among the enemy, and produced the most frightful modes of death. Why a poisonous vapor which would kill men without suffering is to be considered illegitimate warfare is incomprehensible. War is destruction, and the more destructive it can be made with the least suffering the sooner will be ended that barbarous method of protecting national rights. No doubt in time chemistry will be used to lessen the suffering of combatants, and even of criminals condemned to death.
Later, during the American Civil War, New York school teacher John Doughty proposed the offensive use of chlorine gas, delivered by filling a 10-inch (254 millimeter) artillery shell with two to three quarts (1.89–2.84 liters) of liquid chlorine, which could produce many cubic feet of chlorine gas. Doughty's plan was apparently never acted on, as it was probably presented to Brigadier General James Wolfe Ripley, Chief of Ordnance.
In March 1868, during the War of the Triple Alliance, Paraguayan troops threw lit tubes full of asphyxiating mixtures in their attempt to board Brazilian ironclads with canoes. The attack failed since the tubes were easily put out by the defenders.
A general concern over the use of poison gas manifested itself in 1899 at the Hague Conference with a proposal prohibiting shells filled with asphyxiating gas. The proposal was passed, despite a single dissenting vote from the United States. The American representative, Navy Captain Alfred Thayer Mahan, justified voting against the measure on the grounds that "the inventiveness of Americans should not be restricted in the development of new weapons."
World War I
The French were the first to use chemical weapons during the First World War, using the tear gases ethyl bromoacetate and chloroacetone. They likely did not realize that effects might be more serious under wartime conditions than in riot control. It is also likely that their use of tear gas escalated to the use of poisonous gases.
The Hague Declaration of 1899 and the Hague Convention of 1907 prohibit the firing of any projectiles "the sole object of which is the diffusion of asphyxiating or deleterious gases." Germany exploited this loophole by opening canisters filled with poison gas into the wind and letting it carry it towards the enemy lines, instead of launching it in artillery rounds. One of Germany's earliest uses of chemical weapons occurred on October 27, 1914, when shells containing the irritant dianisidine chlorosulfonate were fired at British troops near Neuve-Chapelle, France. Germany used another irritant, xylyl bromide, in artillery shells that were fired in January 1915 at the Russians near Bolimów, in present-day Poland. The first full-scale deployment of deadly chemical warfare agents during World War I was at the Second Battle of Ypres, on April 22, 1915, when the Germans attacked French, Canadian and Algerian troops with chlorine gas released from canisters and carried by the wind towards the Allied trenches.
A total of 50,965 tons of pulmonary, lachrymatory, and vesicant agents were deployed by both sides of the conflict, including chlorine, phosgene, and mustard gas. Historians have reached a wide range of estimates of gas casualties, ranging from 500,000 to 1.3 million casualties directly caused by chemical warfare agents during the course of the war. A minimum of around 1,300 civilians were injured by the use of the weapons, and at least around 4,000 were injured during weapon production.
World War I-era chemical ammunition is still found, unexploded, at former battle, storage, or test sites and poses an ongoing threat to inhabitants of Belgium, France and other countries. Camp American University where American chemical weapons were developed and later buried, has undergone 20 years of remediation efforts.
After the war, the most common method of disposal of chemical weapons was to dump them into the nearest large body of water. As many as 65,000 tons of chemical warfare agents may have been dumped in the Baltic Sea alone; agents dumped in that sea included mustard gas, phosgene, lewisite (β-chlorovinyldichloroarsine), adamsite (diphenylaminechloroarsine), Clark I (diphenylchloroarsine) and Clark II (diphenylcyanoarsine). Over time the containers corrode and the chemicals leak out. On the sea floor, at low temperatures, mustard gas tends to form lumps within a "skin" of chemical byproducts. These lumps can wash onto shore, where they look like chunks of waxy yellowish clay. They are extremely toxic, but the effects may not be immediately apparent.
Interwar period
During the interwar period, chemical agents were occasionally used to subdue populations and suppress rebellion. In 1925, 16 of the world's major nations signed the Geneva Protocol, thereby pledging never to use chemical weapons in interstate warfare again. Notably, while the United States delegation signed the Protocol under presidential authority, it was not ratified until 1975. The Protocol does not ban the development or production of chemical weapons, nor does it apply to non-international armed conflicts.
Alleged British use in Mesopotamia
It has been alleged that the British used chemical weapons in Mesopotamia during the Iraqi revolt of 1920. Noam Chomsky claimed that Winston Churchill at the time was keen on chemical weapons, suggesting they be used "against recalcitrant Arabs as an experiment", and that he stated he was "strongly in favour of using poisoned gas against uncivilised tribes".
According to some historians, including Geoff Simons and Charles Townshend, the British used chemical weapons in the conflict, while according to Lawrence James and Niall Ferguson the weapons were agreed by Churchill but eventually not used; R.M. Douglas of Colgate University also observed that Churchill's statement had served to convince observers of the existence of weapons of mass destruction which were not actually there.
Soviet use in Tambov, Central Asia, and China
Lenin's Soviet government employed poison gas in 1921 during the Tambov Rebellion. An order signed by military commanders Tukhachevsky and Vladimir Antonov-Ovseyenko stipulated, "The forests where the bandits are hiding are to be cleared by the use of poison gas. This must be carefully calculated, so that the layer of gas penetrates the forests and kills everyone hiding there."
In the 1930s, the Soviet Union used mustard gas deployed from planes against Basmachi rebels in Central Asia.
During the Soviet invasion of Xinjiang in 1934, Ma Zhongying's New 36th Division put up a fierce resistance at the Battle of Dawan Cheng, but was forced to retreat after Soviets delivered mustard gas from planes.
Chinese use
A chemical arms race developed during the Warlord Era. The first chemical weapons were imported by a minor Hunan warlord, who bought two small cases of "gas-producing shells" in August 1921. Marshal Cao Kun approached a British-owned chemical company in Tianjin in 1923; he attempted to order gas bombs, but as far as is known the company turned down his proposal. In 1925, Zhang Zuolin had a chemical plant built in Mukden and hired German and Russian experts to produce chlorine, phosgene and mustard gas; in the same year Feng Yuxiang also set up a "special arsenal" to produce chemical weapons designed by Soviet and German experts. All of these efforts appear to have failed. There was one reported incident of chemical warfare, when Zhang Zuolin's aircraft dropped gas bombs on the forces of Wu Peifu, who branded the use of these bombs as inhumane.
Spanish use in Morocco
Combined Spanish and French forces dropped mustard gas bombs against Berber rebels and civilians during the Rif War in Spanish Morocco (1921–1927). These attacks marked the first widespread employment of gas warfare in the post-WWI era. The Spanish army indiscriminately used phosgene, diphosgene, chloropicrin and mustard gas against civilian populations, markets and rivers. Although Spain signed the Geneva Protocol in 1925, it only prohibited the use of chemical and biological weapons in international conflicts, instead of non-international conflicts like the Rif War.
In a telegram sent by the High Commissioner of Spanish Morocco Dámaso Berenguer on August 12, 1921, to the Spanish minister of War, Berenguer stated: "I have been obstinately resistant to the use of suffocating gases against these indigenous peoples but after what they have done, and of their treacherous and deceptive conduct, I have to use them with true joy."
According to military aviation general Hidalgo de Cisneros in his autobiographical book Cambio de rumbo, he was the first warfighter to drop a 100-kilogram mustard gas bomb from his Farman F60 Goliath aircraft in the summer of 1924. About 127 fighters and bombers flew in the campaign, dropping around 1,680 bombs each day. The mustard gas bombs were brought from the stockpiles of Germany and delivered to Melilla before being carried on Farman F60 Goliath airplanes. Historian Juan Pando has been the only Spanish historian to have confirmed the usage of mustard gas starting in 1923. Spanish newspaper La Correspondencia de España published an article called Cartas de un soldado (Letters of a soldier) on August 16, 1923, which backed the usage of mustard gas.
Some have cited the chemical weapons used in the region as the main reason for the widespread occurrence of cancer among the population. In 2007, the Catalan party of the Republican Left (Esquerra Republicana de Catalunya) passed a bill to the Spanish Congress of Deputies requesting Spain to acknowledge the "systematic" use of chemical weapons against the population of the Rif mountains; however, the bill was rejected by 33 votes from the governing Socialist Labor Party and the opposition right-wing Popular Party.
Italian use in Libya and Ethiopia
Italy used mustard gas and other "gruesome measures" against Senussi forces in Libya (see Pacification of Libya, Italian colonization of Libya). Poison gas was used against the Libyans as early as January 1928. The Italians dropped mustard gas from the air.
Beginning in October 1935 and continuing into the following months, Fascist Italy used mustard gas against the Ethiopians during the Second Italo-Abyssinian War in violation of the Geneva Protocol. Italian general Rodolfo Graziani first ordered the use of chemical weapons at Gorrahei against the forces of Ras Nasibu. Benito Mussolini personally authorized Graziani to use chemical weapons. Chemical weapons dropped by warplane "proved to be very effective" and were used "on a massive scale against civilians and troops, as well as to contaminate fields and water supplies." Some of the most intense chemical bombardments by the Italian Air Force in Ethiopia occurred in February and March 1936, although "gas warfare continued, with varying intensity, until March 1939." J. F. C. Fuller, who was present in Ethiopia during the conflict, stated that mustard gas "was the decisive tactical factor in the war." Some estimate that up to one-third of Ethiopian casualties of the war were caused by chemical weapons.
The Italians' deployment of mustard gas prompted international criticism. In April 1936, British Prime Minister Stanley Baldwin told Parliament: "If a great European nation, in spite of having given its signature to the Geneva Protocol against the use of such gases, employs them in Africa, what guarantee have we that they may not be used in Europe?" Mussolini initially denied the use of chemical weapons; later, Mussolini and Italian government sought to justify their use as lawful retaliation for Ethiopian atrocities.
After the liberation of Ethiopia in 1941, Ethiopia repeatedly but unsuccessfully sought to prosecute Italian war criminals. The Allied powers excluded Ethiopia from the United Nations War Crimes Commission (established in 1942) because the Allies feared that Ethiopia would seek to prosecute Italian commander Pietro Badoglio, who had ordered the use of chemical gas in the Second Italo-Ethiopian War but later "became a valuable ally against the Axis powers" after the fall of the Fascist regime in Italy, serving as a senior officer in the Italian Co-belligerent Army. In 1946, Ethiopian ruler Haile Selassie again sought "to prosecute senior Italian officials who had sanctioned the use of chemical weapons and had committed other war crimes such as torturing and executing Ethiopian prisoners and citizens during the Italian-Ethiopian War." These attempts failed, in large part because the Western Allies wished to avoid alienating the Italian government at a time when Italy was seen as key to containing the Soviet Union.
Following World War II, the Italian government denied that Italy had ever used chemical weapons in Africa; only in 1995 did Italy formally acknowledge that it had used chemical weapons in colonial wars.
Nerve agents
Shortly after the end of World War I, Germany's General Staff enthusiastically pursued a recapture of their preeminent position in chemical warfare. In 1923, Hans von Seeckt pointed the way, by suggesting that German poison gas research move in the direction of delivery by aircraft in support of mobile warfare. Also in 1923, at the behest of the German army, poison gas expert Dr. Hugo Stoltzenberg negotiated with the USSR to build a huge chemical weapons plant at Trotsk, on the Volga river.
Collaboration between Germany and the Soviet Union in poison gas research continued on and off through the 1920s. In 1924, German officers debated the use of poison gas versus non-lethal chemical weapons against civilians.
Chemical warfare was revolutionized by Nazi Germany's discovery of the nerve agents tabun (in 1937) and sarin (in 1939) by Gerhard Schrader, a chemist of IG Farben.
IG Farben was Germany's premier poison gas manufacturer during World War II, so the weaponization of these agents cannot be considered accidental. Both were turned over to the German Army Weapons Office prior to the outbreak of the war.
The nerve agent soman was later discovered by Nobel Prize laureate Richard Kuhn and his collaborator Konrad Henkel at the Kaiser Wilhelm Institute for Medical Research in Heidelberg in the spring of 1944. The Germans developed and manufactured large quantities of several agents, but chemical warfare was not extensively used by either side. Chemical troops were set up (in Germany since 1934) and delivery technology was actively developed.
World War II
Imperial Japanese Army
Despite the 1899 Hague Declaration IV, 2 – Declaration on the Use of Projectiles the Object of Which is the Diffusion of Asphyxiating or Deleterious Gases, Article 23 (a) of the 1907 Hague Convention IV – The Laws and Customs of War on Land, and a resolution adopted against Japan by the League of Nations on May 14, 1938, the Imperial Japanese Army frequently used chemical weapons. Because of fear of retaliation, however, those weapons were never used against Westerners, but against other Asians judged "inferior" by imperial propaganda. According to historians Yoshiaki Yoshimi and Kentaro Awaya, gas weapons, such as tear gas, were used only sporadically in 1937 but in early 1938, the Imperial Japanese Army began full-scale use of sneeze and nausea gas (red), and from mid-1939, used mustard gas (yellow) against both Kuomintang and Communist Chinese troops.
According to historians Yoshiaki Yoshimi and Seiya Matsuno, the chemical weapons were authorized by specific orders given by Emperor Hirohito himself, transmitted by the chief of staff of the army. For example, the Emperor authorized the use of toxic gas on 375 separate occasions during the Battle of Wuhan from August to October 1938. They were also profusely used during the invasion of Changde. Those orders were transmitted either by Prince Kan'in Kotohito or General Hajime Sugiyama. The Imperial Japanese Army had used mustard gas and the US-developed (CWS-1918) blister agent lewisite against Chinese troops and guerrillas. Experiments involving chemical weapons were conducted on live prisoners (Unit 731 and Unit 516).
The Japanese also carried chemical weapons as they swept through Southeast Asia towards Australia. Some of these items were captured and analyzed by the Allies. Historian Geoff Plunkett has recorded how Australia covertly imported 1,000,000 chemical weapons from the United Kingdom from 1942 onwards and stored them in many storage depots around the country, including three tunnels in the Blue Mountains to the west of Sydney. They were to be used as a retaliatory measure if the Japanese first used chemical weapons. Buried chemical weapons have been recovered at Marrangaroo and Columboola.
Nazi Germany
During the Holocaust, a genocide perpetrated by Nazi Germany, millions of Jews, Romani, Slavs, homosexuals, and other victims were gassed with carbon monoxide and hydrogen cyanide (including Zyklon B). This remains the deadliest use of poison gas in history. Nevertheless, the Nazis did not extensively use chemical weapons in combat, at least not against the Western Allies, despite maintaining an active chemical weapons program in which the Nazis used concentration camp prisoners as forced labor to secretly manufacture tabun, a nerve gas, and experimented upon concentration camp victims to test the effects of the gas. Otto Ambros of IG Farben was a chief chemical-weapons expert for the Nazis.
The Nazis' decision to avoid the use of chemical weapons on the battlefield has been variously attributed to a lack of technical ability in the German chemical weapons program and fears that the Allies would retaliate with their own chemical weapons. It also has been speculated to have arisen from Adolf Hitler's experiences as a soldier in the German army during World War I, where he was injured by a British mustard gas attack in 1918. After the Battle of Stalingrad, Joseph Goebbels, Robert Ley, and Martin Bormann urged Hitler to approve the use of tabun and other chemical weapons to slow the Soviet advance. At a May 1943 meeting in the Wolf's Lair, however, Hitler was told by Ambros that Germany had 45,000 tons of chemical gas stockpiled, but that the Allies likely had far more. Hitler responded by suddenly leaving the room and ordering production of tabun and sarin to be doubled, but "fearing some rogue officer would use them and spark Allied retaliation, he ordered that no chemical weapons be transported to the Russian front." After the Allied invasion of Italy, the Germans rapidly moved to remove or destroy both German and Italian chemical-weapon stockpiles, "for the same reason that Hitler had ordered them pulled from the Russian front—they feared that local commanders would use them and trigger Allied chemical retaliation."
Stanley P. Lovell, deputy director for Research and Development of the Office of Strategic Services, reports in his book Of Spies and Stratagems that the Allies knew the Germans had quantities of Gas Blau available for use in the defense of the Atlantic Wall. The use of nerve gas on the Normandy beachhead would have seriously impeded the Allies and possibly caused the invasion to fail altogether. He submitted the question "Why was nerve gas not used in Normandy?" to be asked of Hermann Göring during his interrogation after the war had ended. Göring answered that the reason was that the Wehrmacht was dependent upon horse-drawn transport to move supplies to their combat units, and had never been able to devise a gas mask horses could tolerate; the versions they developed would not pass enough pure air to allow the horses to pull a cart. Thus, gas was of no use to the German Army under most conditions.
The Nazis did use chemical weapons in combat on several occasions along the Black Sea, notably in Sevastopol, where they used toxic smoke to force Soviet resistance fighters out of caverns below the city, in violation of the 1925 Geneva Protocol. The Nazis also used asphyxiating gas in the catacombs of Odessa in November 1941, following their capture of the city, and in late May 1942 during the Battle of the Kerch Peninsula in eastern Crimea. Victor Israelyan, a Soviet ambassador, reported that the latter incident was perpetrated by the Wehrmacht's Chemical Forces and organized by a special detail of SS troops with the help of a field engineer battalion. Chemical Forces General Ochsner reported to German command in June 1942 that a chemical unit had taken part in the battle. After the battle in mid-May 1942, roughly 3,000 Red Army soldiers and Soviet civilians not evacuated by sea were besieged in a series of caves and tunnels in the nearby Adzhimushkay quarry. After holding out for approximately three months, "poison gas was released into the tunnels, killing all but a few score of the Soviet defenders." Thousands of those killed around Adzhimushkay were documented to have been killed by asphyxiation from gas.
In February 1943, German troops stationed in Kuban received a telegram: "Russians might have to be cleared out of the mountain range with gas." The troops also received two wagons of toxin antidotes.
Western Allies
The Western Allies did not use chemical weapons during the Second World War. The British planned to use mustard gas and phosgene to help repel a German invasion in 1940–1941, and had there been an invasion may have also deployed it against German cities. General Alan Brooke, Commander-in-Chief, Home Forces, in command of British anti-invasion preparations of the Second World War said that he "...had every intention of using sprayed mustard gas on the beaches" in an annotation in his diary. The British manufactured mustard, chlorine, lewisite, phosgene and Paris Green and stored them at airfields and depots for use on the beaches.
The mustard gas stockpile was enlarged in 1942–1943 for possible use by RAF Bomber Command against German cities, and in 1944 for possible retaliatory use if German forces used chemical weapons against the D-Day landings.
Winston Churchill, the British Prime Minister, issued a memorandum advocating a chemical strike on German cities using poison gas and possibly anthrax. Although the idea was rejected, it has provoked debate.
In July 1944, fearing that rocket attacks on London would get even worse, and saying he would only use chemical weapons if it were "life or death for us" or would "shorten the war by a year", Churchill wrote a secret memorandum asking his military chiefs to "think very seriously over this question of using poison gas." He stated "it is absurd to consider morality on this topic when everybody used it in the last war without a word of complaint..."
The Joint Planning Staff, however, advised against the use of gas because it would inevitably provoke Germany to retaliate with gas. They argued that this would be to the Allies' disadvantage in France both for military reasons and because it might "seriously impair our relations with the civilian population when it became generally known that chemical warfare was first employed by us."
In 1945, the U.S. Army's Chemical Warfare Service standardized improved chemical warfare rockets intended for the new M9 and M9A1 "Bazooka" launchers, adopting the M26 Gas Rocket, a cyanogen chloride (CK)-filled warhead for the 2.36-in rocket launcher. CK, a deadly blood agent, was capable of penetrating the protective filter barriers in some gas masks, and was seen as an effective agent against Japanese forces (particularly those hiding in caves or bunkers), whose gas masks lacked the impregnants that would provide protection against the chemical reaction of CK. While stockpiled in US inventory, the CK rocket was never deployed or issued to combat personnel.
Accidental release
On the night of December 2, 1943, German Ju 88 bombers attacked the port of Bari in Southern Italy, sinking several American ships—among them the SS John Harvey, which was carrying mustard gas intended for use in retaliation by the Allies if German forces initiated gas warfare. The presence of the gas was highly classified, and authorities ashore had no knowledge of it, which increased the number of fatalities, since physicians, who had no idea that they were dealing with the effects of mustard gas, prescribed treatment improper for those suffering from exposure and immersion.
The whole affair was kept secret at the time and for many years after the war. According to the U.S. military account, "Sixty-nine deaths were attributed in whole or in part to the mustard gas, most of them American merchant seamen" out of 628 mustard gas military casualties.
The large number of civilian casualties among the Italian population was not recorded. Part of the confusion and controversy derives from the fact that the German attack was highly destructive and lethal in itself, even apart from the accidental additional effects of the gas (the attack was nicknamed "The Little Pearl Harbor"), and attributing individual deaths to the gas rather than to other causes is far from easy.
Rick Atkinson, in his book The Day of Battle, describes the intelligence that prompted Allied leaders to deploy mustard gas to Italy. This included Italian intelligence that Adolf Hitler had threatened to use gas against Italy if the state changed sides, and prisoner of war interrogations suggesting that preparations were being made to use a "new, egregiously potent gas" if the war turned decisively against Germany. Atkinson concludes, "No commander in 1943 could be cavalier about a manifest threat by Germany to use gas."
Development during the Cold War
After World War II, the Allies recovered German artillery shells containing the three German nerve agents of the day (tabun, sarin, and soman), prompting further research into nerve agents by all of the former Allies.
Although the threat of global thermonuclear war was foremost in the minds of most during the Cold War, both the Soviet and Western governments put enormous resources into developing chemical and biological weapons.
Britain
In the late 1940s and early 1950s, British postwar chemical weapons research was based at the Porton Down facility. Research was aimed at providing Britain with the means to arm itself with a modern nerve-agent-based capability and to develop specific means of defense against these agents.
Ranajit Ghosh, a chemist at the Plant Protection Laboratories of Imperial Chemical Industries, was investigating a class of organophosphate compounds (organophosphate esters of substituted aminoethanethiols) for use as a pesticide. In 1954, ICI put one of them on the market under the trade name Amiton. It was subsequently withdrawn, as it was too toxic for safe use.
The toxicity did not go unnoticed, and samples of it were sent to the research facility at Porton Down for evaluation. After the evaluation was complete, several members of this class of compounds were developed into a new group of much more lethal nerve agents, the V agents. The best-known of these is probably VX, assigned the UK Rainbow Code Purple Possum, with the Russian V-Agent coming a close second (Amiton is largely forgotten as VG).
On the defensive side, there were years of difficult work to develop the means of prophylaxis, therapy, rapid detection and identification, decontamination and more effective protection of the body against nerve agents, which are capable of exerting their effects through the skin, the eyes and the respiratory tract.
Tests were carried out on servicemen to determine the effects of nerve agents on human subjects, with one recorded death due to a nerve gas experiment. There have been persistent allegations of unethical human experimentation at Porton Down, such as those relating to the death of Leading Aircraftman Ronald Maddison, aged 20, in 1953. Maddison was taking part in sarin nerve agent toxicity tests. Sarin was dripped onto his arm and he died shortly afterwards.
In the 1950s, the Chemical Defence Experimental Establishment became involved with the development of CS, a riot control agent, and took an increasing role in trauma and wound ballistics work. Both these facets of Porton Down's work had become more important because of the situation in Northern Ireland.
In the early 1950s, nerve agents such as sarin were produced; about 20 tons were made from 1954 until 1956. CDE Nancekuke was an important factory for stockpiling chemical weapons. Small amounts of VX were produced there, mainly for laboratory test purposes, but also to validate plant designs and optimise chemical processes for potential mass production. However, full-scale mass production of VX agent never took place, following the 1956 decision to end the UK's offensive chemical weapons programme. In the late 1950s, the chemical weapons production plant at Nancekuke was mothballed, but it was maintained through the 1960s and 1970s in a state from which production of chemical weapons could easily recommence if required.
United States
In 1952, the U.S. Army patented a process for the "Preparation of Toxic Ricin", publishing a method of producing this powerful toxin. In 1958 the British government traded their VX technology with the United States in exchange for information on thermonuclear weapons. By 1961 the U.S. was producing large amounts of VX and performing its own nerve agent research. This research produced at least three more agents; the four agents (VE, VG, VM, VX) are collectively known as the "V-Series" class of nerve agents.
Between 1951 and 1969, Dugway Proving Ground was the site of testing for various chemical and biological agents, including an open-air aerodynamic dissemination test in 1968 that accidentally killed approximately 6,400 sheep on neighboring farms with an unspecified nerve agent.
From 1962 to 1973, the Department of Defense planned 134 tests under Project 112, a chemical and biological weapons "vulnerability-testing program." In 2002, the Pentagon admitted for the first time that some of the tests used real chemical and biological weapons, not just harmless simulants.
Specifically under Project SHAD, 37 secret tests were conducted in California, Alaska, Florida, Hawaii, Maryland, and Utah. Land tests in Alaska and Hawaii used artillery shells filled with sarin and VX, while Navy trials off the coasts of Florida, California and Hawaii tested the ability of ships and crew to perform under biological and chemical warfare, without the crew's knowledge. The code name for the sea tests was Project Shipboard Hazard and Defense—"SHAD" for short.
In October 2002, the Senate Armed Services Subcommittee on Personnel held hearings as the controversial news broke that chemical agents had been tested on thousands of American military personnel. The hearings were chaired by Senator Max Cleland, former VA administrator and Vietnam War veteran.
United States chemical respiratory protection standardization
In December 2001, the United States Department of Health and Human Services, Centers for Disease Control and Prevention (CDC), National Institute for Occupational Safety and Health (NIOSH), and National Personal Protective Technology Laboratory (NPPTL), along with the U.S. Army Research, Development and Engineering Command (RDECOM), Edgewood Chemical and Biological Center (ECBC), and the U.S. Department of Commerce National Institute of Standards and Technology (NIST) published the first of six technical performance standards and test procedures designed to evaluate and certify respirators intended for use by civilian emergency responders to a chemical, biological, radiological, or nuclear weapon release, detonation, or terrorism incident.
To date NIOSH/NPPTL has published six new respirator performance standards based on a tiered approach that relies on traditional industrial respirator certification policy, next-generation emergency response respirator performance requirements, and special live chemical warfare agent testing requirements of the classes of respirators identified to offer respiratory protection against chemical, biological, radiological, and nuclear (CBRN) agent inhalation hazards. These CBRN respirators are commonly known as open-circuit self-contained breathing apparatus (CBRN SCBA), air-purifying respirator (CBRN APR), air-purifying escape respirator (CBRN APER), self-contained escape respirator (CBRN SCER) and loose- or tight-fitting powered air-purifying respirators (CBRN PAPR).
Soviet Union
Due to the secrecy of the Soviet Union's government, very little information was available about the direction and progress of the Soviet chemical weapons program until relatively recently. After the fall of the Soviet Union, Russian chemist Vil Mirzayanov published articles revealing illegal chemical weapons experimentation in Russia.
In 1993, Mirzayanov was imprisoned and fired from his job at the State Research Institute of Organic Chemistry and Technology, where he had worked for 26 years. In March 1994, after a major campaign by U.S. scientists on his behalf, Mirzayanov was released.
Among the information related by Vil Mirzayanov was the direction of Soviet research into the development of even more toxic nerve agents, which saw most of its success during the mid-1980s. Several highly toxic agents were developed during this period; the only unclassified information regarding these agents is that they are known in the open literature only as "Foliant" agents (named after the program under which they were developed) and by various code designations, such as A-230 and A-232.
According to Mirzayanov, the Soviets also developed weapons that were safer to handle, leading to the development of binary weapons, in which precursors for the nerve agents are mixed in a munition to produce the agent just prior to its use. Because the precursors are generally significantly less hazardous than the agents themselves, this technique makes handling and transporting the munitions a great deal simpler.
Additionally, precursors to the agents are usually much easier to stabilize than the agents themselves, so this technique also made it possible to greatly increase the shelf life of the agents. During the 1980s and 1990s, binary versions of several Soviet agents were developed and designated "Novichok" agents (after the Russian word for "newcomer"). Together with Lev Fedorov, Mirzayanov exposed the secret Novichok program in the newspaper The Moscow News.
Use in conflicts after World War II
North Yemen
The first gas attack of the North Yemen Civil War took place on June 8, 1963, against Kawma, a village of about 100 inhabitants in northern Yemen, killing about seven people and damaging the eyes and lungs of 25 others. This incident is considered to have been experimental, and the bombs were described as "home-made, amateurish and relatively ineffective". The Egyptian authorities suggested that the reported incidents were probably caused by napalm, not gas.
There were no reports of gas during 1964, and only a few were reported in 1965. The reports grew more frequent in late 1966. On December 11, 1966, fifteen gas bombs killed two people and injured thirty-five. On January 5, 1967, the biggest gas attack came against the village of Kitaf, causing 270 casualties, including 140 fatalities. The target may have been Prince Hassan bin Yahya, who had installed his headquarters nearby. The Egyptian government denied using poison gas, and alleged that Britain and the US were using the reports as psychological warfare against Egypt. On February 12, 1967, it said it would welcome a UN investigation. On March 1, U Thant, the then Secretary-General of the United Nations, said he was "powerless" to deal with the matter.
On May 10, 1967, the twin villages of Gahar and Gadafa in Wadi Hirran, where Prince Mohamed bin Mohsin was in command, were gas bombed, killing at least seventy-five. The Red Cross was alerted and on June 2, 1967, it issued a statement in Geneva expressing concern. The Institute of Forensic Medicine at the University of Berne made a statement, based on a Red Cross report, that the gas was likely to have been halogenous derivatives—phosgene, mustard gas, lewisite, chloride or cyanogen bromide.
Rhodesian Bush War
Evidence points to a top-secret Rhodesian program in the 1970s to use organophosphate pesticides and heavy metal rodenticides to contaminate clothing as well as food and beverages. The contaminated items were covertly introduced into insurgent supply chains. Hundreds of insurgent deaths were reported, although the actual death toll likely rose over 1,000.
Angola
During the Cuban intervention in Angola, United Nations toxicologists certified that residue from both VX and sarin nerve agents had been discovered in plants, water, and soil where Cuban units were conducting operations against National Union for the Total Independence of Angola (UNITA) insurgents. In 1985, UNITA made the first of several claims that their forces were the target of chemical weapons, specifically organophosphates. The following year guerrillas reported being bombarded with an unidentified greenish-yellow agent on three separate occasions. Depending on the length and intensity of exposure, victims suffered blindness or death. The toxin was also observed to have killed plant life. Shortly afterwards, UNITA also sighted strikes carried out with a brown agent which it claimed resembled mustard gas. As early as 1984 a research team dispatched by the University of Ghent had examined patients in UNITA field hospitals showing signs of exposure to nerve agents, although it found no evidence of mustard gas.
The UN first accused Cuba of deploying chemical weapons against Angolan civilians and partisans in 1988. Wouter Basson later disclosed that South African military intelligence had long verified the use of unidentified chemical weapons on Angolan soil; this was to provide the impetus for their own biological warfare programme, Project Coast. During the Battle of Cuito Cuanavale, South African troops then fighting in Angola were issued with gas masks and ordered to rehearse chemical weapons drills. Although the status of its own chemical weapons program remained uncertain, South Africa also deceptively bombarded Cuban and Angolan units with colored smoke in an attempt to induce hysteria or mass panic. According to Defence Minister Magnus Malan, this would force the Cubans to share the inconvenience of having to take preventative measures such as donning NBC suits, which would cut combat effectiveness in half. The tactic was effective: beginning in early 1988 Cuban units posted to Angola were issued with full protective gear in anticipation of a South African chemical strike.
On October 29, 1988, personnel attached to Angola's 59 Brigade, accompanied by six Soviet military advisors, reported being struck with chemical weapons on the banks of the Mianei River. The attack occurred shortly after one in the afternoon. Four Angolan soldiers lost consciousness while the others complained of violent headaches and nausea. That November the Angolan representative to the UN accused South Africa of employing poison gas near Cuito Cuanavale for the first time.
Falklands War
Technically, the reported employment of tear gas by Argentine forces during the 1982 invasion of the Falkland Islands constitutes chemical warfare. However, the tear gas grenades were employed as nonlethal weapons to avoid British casualties, and the barrack buildings on which they were used proved to be deserted in any case. The British claim to have used white phosphorus grenades, which are more lethal but legally justifiable because they are not considered chemical weapons under the Chemical Weapons Convention.
Afghanistan
There were reports of chemical weapons being used by Soviet forces during the Soviet–Afghan War, sometimes against civilians.
Vietnamese border raids in Thailand
There is some evidence suggesting that Vietnamese troops used phosgene gas against Cambodian resistance forces in Thailand during the 1984–1985 dry-season offensive on the Thai-Cambodian border.
Iran–Iraq War
Chemical weapons employed by Saddam Hussein killed and injured numerous Iranians and Iraqi Kurds. According to Iraqi documents, assistance in developing chemical weapons was obtained from firms in many countries, including the United States, West Germany, the Netherlands, the United Kingdom, and France.
About 100,000 Iranian soldiers were victims of Iraq's chemical attacks. Many were hit by mustard gas. The official estimate does not include the civilian population contaminated in bordering towns or the children and relatives of veterans, many of whom have developed blood, lung and skin complications, according to the Organization for Veterans. Nerve gas agents killed about 20,000 Iranian soldiers immediately, according to official reports. Of the 80,000 survivors, some 5,000 seek medical treatment regularly and about 1,000 are still hospitalized with severe, chronic conditions.
According to Foreign Policy, the "Iraqis used mustard gas and sarin prior to four major offensives in early 1988 that relied on U.S. satellite imagery, maps, and other intelligence. ... According to recently declassified CIA documents and interviews with former intelligence officials like Francona, the U.S. had firm evidence of Iraqi chemical attacks beginning in 1983."
Halabja
In March 1988, the Iraqi Kurdish town of Halabja was exposed to multiple chemical agents dropped from warplanes; these "may have included mustard gas, the nerve agents sarin, tabun and VX and possibly cyanide." Between 3,200 and 5,000 people were killed, and between 7,000 and 10,000 were injured. Some reports indicated that three-quarters of them were women and children. The preponderance of the evidence indicates that Iraq was responsible for the attack.
Persian Gulf War
The U.S. Department of Defense and Central Intelligence Agency's longstanding official position is that Iraqi forces under Saddam Hussein did not use chemical weapons during the Persian Gulf War in 1991. In a memorandum in 1994 to veterans of the war, Defense Secretary William J. Perry and General John M. Shalikashvili, the chairman of the Joint Chiefs of Staff, wrote that "There is no evidence, classified or unclassified, that indicates that chemical or biological weapons were used in the Persian Gulf."
However, chemical weapons expert Jonathan B. Tucker, writing in the Nonproliferation Review in 1997, determined that although "[t]he absence of severe chemical injuries or fatalities among Coalition forces makes it clear that no large-scale Iraqi employment of chemical weapons occurred," an array of "circumstantial evidence from a variety of sources suggests that Iraq deployed chemical weapons into the Kuwait Theater of Operations (KTO)—the area including Kuwait and Iraq south of the 31st Parallel, where the ground war was fought—and engaged in sporadic chemical warfare against Coalition forces." In addition to intercepts of Iraqi military communications and publicly available reporting:
Nerve agents (specifically, tabun, sarin, and cyclosarin) and blister agents (specifically, sulfur-mustard and lewisite) were detected at Iraqi sites.
The threat of gas warfare itself had a major effect on Israel, which was not part of the coalition forces led by the US. Israel was attacked with 39 Scud missiles, most of which were knocked down in the air above their targets by Patriot missiles, developed by Raytheon together with Israel and supplied by the US. Sirens warned of the attacks approximately 10 minutes before their expected arrival, and Israelis donned gas masks and entered sealed "safe" rooms over a period of five weeks. Babies were issued special gas-safe cribs, and religious men were issued gas masks that allowed them to preserve their beards.
In 2014, tapes from Saddam Hussein's archives revealed that Saddam had given orders to use gas against Israel as a last resort if his military communications with the army were cut off.
In 2015, The New York Times published an article about the declassified report of Operation Avarice in 2005, in which over 400 chemical weapons, including many rockets and missiles from the Iran–Iraq War period, were recovered and subsequently destroyed by the CIA. Many other stockpiles, estimated by UNSCOM at up to 600 metric tons of chemical weapons, were known to have existed and were even admitted by Saddam's regime, which claimed they had been destroyed. These have never been found but are believed to still exist.
Iraq War
During Operation Iraqi Freedom, American service members who demolished or handled older explosive ordnance may have been exposed to blister agents (mustard agent) or nerve agents (sarin). According to The New York Times, "In all, American troops secretly reported finding roughly 5,000 chemical warheads, shells or aviation bombs, according to interviews with dozens of participants, Iraqi and American officials, and heavily redacted intelligence documents obtained under the Freedom of Information Act." Among these, over 2,400 nerve-agent rockets were found in summer 2006 at Camp Taji, a former Iraqi Republican Guard compound. "These weapons were not part of an active arsenal"; "they were remnants from an Iraqi program in the 1980s during the Iran-Iraq war".
Syrian civil war
Sarin, mustard gas, and chlorine have been used during the conflict. Numerous casualties led to an international reaction, especially after the 2013 Ghouta attacks. A UN fact-finding mission was requested to investigate alleged chemical weapons attacks. In four cases the UN inspectors confirmed the use of sarin gas. In August 2016, a confidential report by the United Nations and the OPCW explicitly blamed the Syrian military of Bashar al-Assad for dropping chemical weapons (chlorine bombs) on the towns of Talmenes in April 2014 and Sarmin in March 2015, and ISIS for using sulfur mustard on the town of Marea in August 2015. In 2016, the Jaysh al-Islam rebel group used chlorine gas or other agents against Kurdish militia and civilians in the Sheikh Maqsood neighborhood of Aleppo.
Many countries, including the United States and the European Union have accused the Syrian government of conducting several chemical attacks. Following the 2013 Ghouta attacks and international pressure, Syria acceded to the Chemical Weapons Convention and the destruction of Syria's chemical weapons began. In 2015 the UN mission disclosed previously undeclared traces of sarin compounds in a "military research site". After the April 2017 Khan Shaykhun chemical attack, the United States launched its first attack against Syrian government forces. On 14 April 2018, the United States, France and the United Kingdom carried out a series of joint military strikes against multiple government sites in Syria, including the Barzah scientific research centre, after a chemical attack in Douma.
Ukrainian-Russian War
Russian forces have used tear gas against Ukrainian troops, typically by having a drone drop a grenade containing K-51 aerosolized CS gas. As of March 2024, Ukrainian forces reported an increase in Russian drones dropping "grenades with suffocating and tear gas": 371 cases of gas use were reported over the preceding month, an increase of 90 incidents compared with February. Ukrainian soldiers are receiving training to deal with such attacks but lack modern gas masks; old Soviet-issued masks are "ineffective", and soldiers have had to crowdfund newer ones. The use of tear gas and other riot-control agents as a method of warfare is banned under the Chemical Weapons Convention.
Terrorism and anti-terrorism
For many terrorist organizations, chemical weapons might be considered an ideal choice for a mode of attack, if they are available: they are cheap, relatively accessible, and easy to transport. A skilled chemist can readily synthesize most chemical agents if the precursors are available.
In July 1974, a group calling themselves the Aliens of America successfully firebombed the houses of a judge, two police commissioners, and one of the commissioner's cars, burned down two apartment buildings, and bombed the Pan Am Terminal at Los Angeles International Airport, killing three people and injuring eight. The organization, which turned out to be a single resident alien named Muharem Kurbegovic, claimed to have developed and possessed a supply of sarin, as well as four unique nerve agents named AA1, AA2, AA3, and AA4S. Although no agents were found at the time Kurbegovic was arrested in August 1974, he had reportedly acquired "all but one" of the ingredients required to produce a nerve agent. A search of his apartment turned up a variety of materials, including precursors for phosgene and a drum containing 25 pounds of sodium cyanide.
The first successful use of chemical agents by terrorists against a general civilian population was on June 27, 1994, when Aum Shinrikyo, an apocalyptic group based in Japan that believed it necessary to destroy the planet, released sarin gas in Matsumoto, Japan, killing eight and harming 200. The following year, Aum Shinrikyo released sarin into the Tokyo subway system killing 12 and injuring over 5,000.
On December 29, 1999, four days after Russian forces began an assault on Grozny, Chechen terrorists exploded two chlorine tanks in the town. Because of the wind conditions, no Russian soldiers were injured.
Following the September 11 attacks on the U.S. cities of New York City and Washington, D.C., the organization Al-Qaeda responsible for the attacks announced that they were attempting to acquire radiological, biological, and chemical weapons. This threat was lent a great deal of credibility when a large archive of videotapes was obtained by the cable television network CNN in August 2002 showing, among other things, the killing of three dogs by an apparent nerve agent.
In an anti-terrorist attack on October 26, 2002, Russian special forces used a chemical agent (presumably KOLOKOL-1, an aerosolized fentanyl derivative), as a precursor to an assault on Chechen terrorists, which ended the Moscow theater hostage crisis. All 42 of the terrorists and 120 out of 850 hostages were killed during the raid. Although the use of the chemical agent was justified as a means of selectively targeting terrorists, it killed over 100 hostages.
In early 2007, multiple terrorist bombings had been reported in Iraq using chlorine gas. These attacks wounded or sickened more than 350 people. Reportedly the bombers were affiliated with Al-Qaeda in Iraq, and they have used bombs of various sizes up to chlorine tanker trucks. United Nations Secretary-General Ban Ki-moon condemned the attacks as "clearly intended to cause panic and instability in the country."
Chemical weapons treaties
The Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or other Gases, and the Bacteriological Methods of Warfare, or the Geneva Protocol, is an international treaty which prohibits the use of chemical and biological weapons between signatory nations in international armed conflicts. Signed into international law at Geneva on June 17, 1925, and entered into force on February 8, 1928, this treaty states that chemical and biological weapons are "justly condemned by the general opinion of the civilised world."
Chemical Weapons Convention
The most recent arms control agreement in international law, the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction, or the Chemical Weapons Convention, outlaws the production, stockpiling, and use of chemical weapons. It is administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), an intergovernmental organisation based in The Hague.
References
Bibliography
CBWInfo.com (2001). A Brief History of Chemical and Biological Weapons: Ancient Times to the 19th Century. Retrieved November 24, 2004.
Chomsky, Noam (March 4, 2001). Prospects for Peace in the Middle East, page 2. Lecture.
Cordette, Jessica, MPH(c) (2003). Chemical Weapons of Mass Destruction. Retrieved November 29, 2004.
Smart, Jeffery K., M.A. (1997). History of Biological and Chemical Warfare. Retrieved November 24, 2004.
United States Senate, 103d Congress, 2d Session. (May 25, 1994). The Riegle Report. Retrieved November 6, 2004.
Gerard J Fitzgerald. American Journal of Public Health. Washington: Apr 2008. Vol. 98, Iss. 4; p. 611
Further reading
Leo P. Brophy and George J. B. Fisher; The Chemical Warfare Service: Organizing for War Office of the Chief of Military History, 1959; L. P. Brophy, W. D. Miles and C. C. Cochrane, The Chemical Warfare Service: From Laboratory to Field (1959); and B. E. Kleber and D. Birdsell, The Chemical Warfare Service in Combat (1966). official US history;
Glenn Cross, Dirty War: Rhodesia and Chemical Biological Warfare, 1975–1980, Helion & Company, 2017
Gordon M. Burck and Charles C. Flowerree; International Handbook on Chemical Weapons Proliferation 1991
L. F. Haber. The Poisonous Cloud: Chemical Warfare in the First World War Oxford University Press: 1986
James W. Hammond Jr; Poison Gas: The Myths Versus Reality Greenwood Press, 1999
Jiri Janata, Role of Analytical Chemistry in Defense Strategies Against Chemical and Biological Attack, Annual Review of Analytical Chemistry, 2009
Ishmael Jones, The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture, Encounter Books, New York 2008, revised 2010. WMD espionage.
Benoit Morel and Kyle Olson; Shadows and Substance: The Chemical Weapons Convention Westview Press, 1993
Geoff Plunkett, Chemical Warfare in Australia: Australia's Involvement In Chemical Warfare 1914 – Today, (2nd Edition), 2013. Leech Cup Books. A volume in the Army Military History Series published in association with the Army History Unit.
Jonathan B. Tucker. Chemical Warfare from World War I to Al-Qaeda (2006)
Chemical warfare | History of chemical warfare | Chemistry | 12,238 |
876,732 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20map | In mathematics, particularly in dynamical systems, a first recurrence map or Poincaré map, named after Henri Poincaré, is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map to send the first point to the second, hence the name first recurrence map. The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it.
A Poincaré map can be interpreted as a discrete dynamical system with a state space that is one dimension smaller than the original continuous dynamical system. Because it preserves many properties of periodic and quasiperiodic orbits of the original system and has a lower-dimensional state space, it is often used for analyzing the original system in a simpler way. In practice this is not always possible as there is no general method to construct a Poincaré map.
A Poincaré map differs from a recurrence plot in that space, not time, determines when to plot a point. For instance, the locus of the Moon when the Earth is at perihelion is a recurrence plot; the locus of the Moon when it passes through the plane perpendicular to the Earth's orbit and passing through the Sun and the Earth at perihelion is a Poincaré map. It was used by Michel Hénon to study the motion of stars in a galaxy, because the path of a star projected onto a plane looks like a tangled mess, while the Poincaré map shows the structure more clearly.
Definition
Let (R, M, φ) be a global dynamical system, with R the real numbers, M the phase space and φ the evolution function. Let γ be a periodic orbit through a point p and S be a local differentiable and transversal section of φ through p, called a Poincaré section through p.
Given an open and connected neighborhood U ⊂ S of p, a function
P : U → S
is called Poincaré map for the orbit γ on the Poincaré section S through the point p if
P(p) = p
P(U) is a neighborhood of p and P:U → P(U) is a diffeomorphism
for every point x in U, the positive semi-orbit of x intersects S for the first time at P(x)
Example
Consider the following system of differential equations in polar coordinates (θ, r):
dθ/dt = 1,  dr/dt = (1 − r²)r.
The flow of the system can be obtained by integrating the equations: for the θ component we simply have
θ(t) = θ₀ + t,
while for the r component we need to separate the variables and integrate:
∫ dr / ((1 − r²)r) = ∫ dt,  which gives  log( r / √(1 − r²) ) = t + c.
Inverting the last expression gives
r(t) = e^(t+c) / √(1 + e^(2(t+c))),
and since
r₀ = e^c / √(1 + e^(2c)),  that is,  e^c = r₀ / √(1 − r₀²),
we find
r(t) = r₀ e^t / √(1 + r₀²(e^(2t) − 1)).
The flow of the system is therefore
Φ_t(θ₀, r₀) = ( θ₀ + t,  r₀ e^t / √(1 + r₀²(e^(2t) − 1)) ).
The behaviour of the flow is the following:
The angle θ increases monotonically and at a constant rate.
The radius r tends to the equilibrium value r = 1 for every initial value r₀ > 0.
Therefore, the solution with initial data (θ₀, r₀), with r₀ ≠ 1, draws a spiral that tends towards the circle of radius 1.
We can take as Poincaré section for this flow the positive horizontal axis, namely
Σ = { (θ, r) : θ = 0, r > 0 };
obviously we can use r as a coordinate on the section. Every point in Σ returns to the section after a time t = 2π (this can be understood by looking at the evolution of the angle): we can take as Poincaré map the restriction of the flow Φ to the section, computed at the time 2π, that is, Φ_{2π}|_Σ.
The Poincaré map is therefore:
P(r) = r e^(2π) / √(1 + r²(e^(4π) − 1)).
The behaviour of the orbits of the discrete dynamical system is the following:
The point r = 1 is fixed: P(1) = 1, so Pⁿ(1) = 1 for every n.
Every other point r > 0 tends monotonically to the equilibrium: Pⁿ(r) → 1 as n → ∞ (a numerical illustration is sketched below).
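The following is a minimal numerical sketch, not part of the original article, illustrating this example. It assumes the reconstructed system dθ/dt = 1, dr/dt = (1 − r²)r and the map P(r) given above; the function names (P, flow_r) are illustrative only. The radial equation is integrated over one full turn of the angle (time 2π) with a classical Runge–Kutta step and compared with the analytic Poincaré map.

import math

def P(r):
    """Analytic Poincare map of the example on the section {theta = 0, r > 0}."""
    e = math.exp(2 * math.pi)
    return r * e / math.sqrt(1.0 + r * r * (e * e - 1.0))

def flow_r(r0, t, steps=20000):
    """Integrate dr/dt = (1 - r^2) r with classical RK4 for time t."""
    h = t / steps
    r = r0
    f = lambda x: (1.0 - x * x) * x
    for _ in range(steps):
        k1 = f(r)
        k2 = f(r + 0.5 * h * k1)
        k3 = f(r + 0.5 * h * k2)
        k4 = f(r + h * k3)
        r += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

if __name__ == "__main__":
    r = 0.1
    for n in range(1, 4):
        r_num = flow_r(r, 2 * math.pi)   # one return to the section
        r_ana = P(r)                     # analytic Poincare map
        print(f"n={n}: numerical {r_num:.6f}  analytic {r_ana:.6f}")
        r = r_ana

Both columns converge to r = 1 within one or two iterations, consistent with the behaviour listed above.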
Poincaré maps and stability analysis
Poincaré maps can be interpreted as a discrete dynamical system. The stability of a periodic orbit of the original system is closely related to the stability of the fixed point of the corresponding Poincaré map.
Let (R, M, φ) be a differentiable dynamical system with periodic orbit γ through p, and let
P : U → S
be the corresponding Poincaré map through p. We define
P⁰ := id_U,  Pⁿ⁺¹ := P ∘ Pⁿ
and
P⁻ⁿ⁻¹ := P⁻¹ ∘ P⁻ⁿ;
then (Z, U, P) is a discrete dynamical system with state space U and evolution function
P : Z × U → U,  (n, x) ↦ Pⁿ(x).
By definition, this system has a fixed point at p.
The periodic orbit γ of the continuous dynamical system is stable if and only if the fixed point p of the discrete dynamical system is stable.
The periodic orbit γ of the continuous dynamical system is asymptotically stable if and only if the fixed point p of the discrete dynamical system is asymptotically stable.
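As a worked check, not part of the original article and assuming the reconstructed map P(r) from the example above, the stability criterion can be applied directly to that example; in LaTeX notation:

P(r) = \frac{r\,e^{2\pi}}{\sqrt{1 + r^{2}\left(e^{4\pi} - 1\right)}}, \qquad
P'(r) = \frac{e^{2\pi}}{\left(1 + r^{2}\left(e^{4\pi} - 1\right)\right)^{3/2}}, \qquad
P'(1) = e^{2\pi} \cdot e^{-6\pi} = e^{-4\pi} \approx 3.5 \times 10^{-6}.

Since |P'(1)| < 1, the fixed point r = 1 of the discrete system is asymptotically stable, and by the equivalence stated above so is the radius-1 periodic orbit of the continuous system.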
See also
Poincaré recurrence
Hénon map
Recurrence plot
Mironenko reflecting function
Invariant measure
References
External links
Shivakumar Jolad, Poincare Map and its application to 'Spinning Magnet' problem, (2005)
Dynamical systems
Map | Poincaré map | Physics,Mathematics | 1,008 |
58,318,372 | https://en.wikipedia.org/wiki/European%20Women%20in%20Mathematics | European Women in Mathematics (EWM) is an international association of women working in the field of mathematics in Europe. The association participates in political and strategic work to promote the role of women in mathematics and offers its members direct support. Its goals include encouraging women to study mathematics and providing visibility to women mathematicians. It is the "first and best known" of several organizations devoted to women in mathematics in Europe.
Mission
European Women in Mathematics aims to encourage women to study mathematics, support women in their careers, provide a meeting place for like-minded people and highlight and make women mathematicians visible. In this way, and by promoting scientific communication and working with groups and organisations with similar goals, they spread their vision of mathematics and science.
Mentorship
EWM has a mentoring programme which can be joined at any time of the year. EWM brings together a younger and a more experienced member to share different experiences and perspectives for motivation and inspiration.
Grants
EWM awards travel grants for female mathematicians every year. The travel grants are awarded to EWM members who are at an early stage of their career or work in a developing country and who need financial resources (travel and/or accommodation, up to 400 EUR) to attend and speak at an important conference in their field of expertise.
Regular Activities
Every other year, EWM holds a general meeting and a summer school. A newsletter is published at least twice a year, and EWM maintains a website, a Facebook group and an e-mail network. EWM also coordinates a mentoring programme and awards travel grants twice a year.
General Meetings
EWM holds a General Meeting every other year in the form of a week-long conference with a scientific program of mini-courses on mathematical topics, discussions on the situation of women in the field and a General Assembly.
General meetings have been held in Paris (1986), Copenhagen (1987), Warwick (1988), Lisbon (1990), Marseilles (1991), Warsaw (1993), Madrid (1995), Trieste ICTP (1997), Hannover (1999), Malta (2001), Luminy (2003), Volgograd (2005), Cambridge (2007), Novi Sad (2009), Barcelona (2011), Bonn (2013), Cortona (2015), and Graz (2018).
Activities at international conferences
EWM holds satellite conferences to the European Congress in Mathematics and takes part in ICWM International Conference of Women in Mathematics, International Congress of Women Mathematicians and now World Meeting for Women Mathematicians.
History
Although the group that became EWM began holding informal meetings as early as 1974,
EWM was founded as an organization in 1986 by Bodil Branner, Caroline Series, Gudrun Kalmbach, Marie-Françoise Roy, and Dona Strauss, inspired by the activities of the Association for Women in Mathematics in the USA. It was established as an association under Finnish law in 1993 with its seat in Helsinki.
In fact, the basic structure defining the convenor, standing committee and coordinators was established between 1987 and 1991. An EWM email net was set up in 1994 followed by a web page in 1997.
The organization has a Scientific Committee, jointly with the European Mathematical Society and its Committee on Women in Mathematics.
Convenors and Deputies
Similar Societies
There are many societies similar to European Women in Mathematics that celebrate women in mathematics, for instance:
Women in Mathematics
International Mathematical Union (IMU) Committee for Women in Mathematics
EMS Women in Mathematics Committee
EMS/EWM Scientific Committee
Femmes et mathématiques
EWM - The Netherlands
LMS Women in Mathematics Committee
Korea Women in Mathematical Sciences
AWM, Association for Women in Mathematics
Women in Math Project
AWSE Association of Women in Science and Education in Russian
Mathematics
The European Mathematical Information Service (EMIS)
The International Mathematical Union (IMU)
Math Archives WWW server
European Union Information
EMS, European Mathematical Society
References
External links
Official website
Mathematical societies
Organizations established in 1986
Organizations for women in science and technology
Pan-European learned societies
Women in mathematics | European Women in Mathematics | Technology | 828 |
65,919 | https://en.wikipedia.org/wiki/Point-to-point%20construction | In electronics, point-to-point construction is a non-automated technique for constructing circuits which was widely used before the use of printed circuit boards (PCBs) and automated assembly gradually became widespread following their introduction in the 1950s. Circuits using thermionic valves (vacuum tubes) were relatively large, relatively simple (the number of large, hot, expensive devices which needed replacing was minimised), and used large sockets, all of which made the PCB less obviously advantageous than with later complex semiconductor circuits. Point-to-point construction is still widespread in power electronics, where components are bulky and serviceability is a consideration, and to construct prototype equipment with few or heavy electronic components. A common practice, especially in older point-to-point construction, is to use the leads of components such as resistors and capacitors to bridge as much of the distance between connections as possible, reducing the need to add additional wire between the components.
Before point-to-point connection, electrical assemblies used screws or wire nuts to hold wires to an insulating wooden or ceramic board. The resulting devices were prone to fail from corroded contacts, or mechanical loosening of the connections. Early premium marine radios, especially from Marconi, sometimes used welded copper in the bus-bar circuits, but this was expensive. The crucial invention was to apply soldering to electrical assembly. In soldering, an alloy of tin and lead (and/or other metals), known as solder, is melted and adheres to other, nonmolten metals, such as copper or tinned steel. Solder makes a strong electrical and mechanical connection.
Point-to-point wiring is not suitable for automated assembly (though see wire wrap, a similar method that is) and is carried out manually, making it both more expensive and more susceptible to wiring errors than PCBs, as connections are determined by the person doing assembly rather than by an etched circuit board. For production, rather than prototyping, errors can be minimised by carefully designed operating procedures.
An intermediate form of construction uses terminal strips (sometimes called "tag boards"), eyelet boards or turret boards. Note that if components are arranged on boards with tags, eyelets or turrets at both ends and wires going to the next components, then the construction is correctly called tag, eyelet or turret construction respectively, as the components are not going from point to point. Although cordwood construction can be wired in a similar way the density means that component placement is usually fixed by a substrate that components are inserted into.
Terminal strip construction
Terminal strip construction, which is often referred to as point-to-point construction within the tube guitar amplifier community, uses terminal strips (also called "tag boards"). A terminal strip has stamped tin-plated copper terminals, each with a hole through which wire ends could be pushed, fitted on an insulating strip, usually made of a cheap, heat-resistant material such as synthetic-resin bonded paper (FR-2), or bakelite reinforced with cotton. The insulator has an integral mounting bracket, sometimes electrically connected to one or more of the stamped loops to ground them to the chassis.
The chassis was constructed first, from sheet metal or wood. Insulated terminal strips were then riveted, nailed or screwed to the underside or interior of the chassis. Transformers, large capacitors, tube sockets and other large components were mounted to the top of the chassis. Their wires were led through holes to the underside or interior. The ends of lengths of wire or wire-ended components such as capacitors and resistors were pushed through the terminals, and usually looped and twisted. When all wires to be connected had been fitted to the terminal, they were soldered together (and to the terminal).
Professional electronics assemblers used to operate from books of photographs and follow an exact assembly sequence to ensure that they did not miss any components. This process is labor-intensive, subject to error and not suitable for automated production. Even after the introduction of printed circuit boards, it did not require laying out and manufacturing circuit boards.
Point-to-point and terminal strip construction continued to be used for some vacuum tube equipment even after the introduction of printed circuit boards. The heat of the tubes can degrade the circuit boards and cause them to become brittle and break. Circuit board degradation is often seen on inexpensive tube radios produced in the 1960s, especially around the hot output and rectifier tubes. American manufacturer Zenith continued to use point-to-point wiring in its tube-based television sets until the early 1970s.
Some audiophile equipment, such as amplifiers, continues to be point-to-point wired using terminal pins, often in very small quantities. In this application modern point-to-point wiring is often used as a marketing design feature rather than a result of the economics of very-small-scale production.
Sometimes true point-to-point wiring—without terminal strips—with very short connections, is still used at very high radio frequencies (in the gigahertz range) to minimise stray capacitance and inductance; the capacitance between a circuit-board trace and some other conductor, and the inductance of a short track, become significant or dominant at high frequencies. In some cases careful PCB layout on a substrate with good high-frequency properties (e.g., ceramic) is sufficient. An example of this design is illustrated in an application note describing an avalanche transistor-based generator of pulses with risetime of a fraction of a nanosecond; the (few) critical components are connected directly to each other and to the output connector with the shortest possible leads.
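As a rough, hedged illustration of why these parasitic effects matter, the short Python sketch below computes the reactance of a small stray inductance and capacitance at 1 GHz; the values are assumed order-of-magnitude figures chosen for illustration and are not taken from the text.

import math

f = 1e9          # 1 GHz
L = 10e-9        # roughly 10 nH, on the order of ~1 cm of thin wire (assumed)
C = 1e-12        # roughly 1 pF of stray capacitance (assumed)

X_L = 2 * math.pi * f * L        # inductive reactance of the lead
X_C = 1 / (2 * math.pi * f * C)  # capacitive reactance of the stray capacitance

print(f"X_L = {X_L:.1f} ohm, X_C = {X_C:.1f} ohm")

Both results (about 63 ohms and 159 ohms) are comparable to a typical 50-ohm system impedance, which is why keeping leads as short as possible, or using a substrate with well-controlled parasitics, matters at these frequencies.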
Particularly in complex equipment, wired circuits are often laid out as a "ladder" of side-by-side components, which need connecting to ladders or components by wire links. A good layout minimizes such links and wiring complexity, often approaching that of direct point-to-point. Amongst complex devices, the pre-PCB Tektronix vacuum-tube oscilloscopes stand out for their very well-designed point-to-point wiring.
If parasitic effects are significant, point-to-point and terminal strip wiring have variable parasitic components, while the inductance and capacitance due to a PCB are the same for all samples and can be compensated for reliably which may be essential for some RF circuits. In some heavily optimised point-to-point RF constructions the circuit can be tuned by bending wires around.
Placing the completed unit in an enclosure protects the circuit from its environment, and users from electrical hazards.
A few large brand names still use terminal strip-type point-to-point boards, but usually for special product lines. Electric guitar amplifier manufacturer Marshall have reissued some of their older models using this type of construction as a design feature, although their standard products have long used PCBs. Thermionic valve equipment usually does not have the valves mounted on the PCB in order to avoid heat damage, but instead uses PCBs for the wiring, achieving the economy of mass-produced PCBs without the heat damage.
Breadboard
Prototypes which are subject to modification are often not made on PCBs, using instead breadboard construction. Historically this could be literally a breadboard, a wooden board with components attached to it and joined up with wire. More recently the term is applied to a board of thin insulating material with holes at standard 0.1-inch pitch; components are pushed through the holes to anchor them, and point-to-point wired on the other side of the board. A type of breadboard specifically for prototyping has this layout, but with strips of metal spring contacts beneath a grid of holes into which components are pushed to make electrical connections like any removable connector. Some of the terminals in a straight line in one direction are electrically connected, commonly in groups of 5–10 with multiple groups per row; these may be interspersed with columns that span the height of the board for the more common connections (typically the power supply rails). Such breadboards, and stripboards, fall somewhere between PCBs and point-to-point; they do not require design and manufacture of a PCB, and are as easily modified as a point-to-point setup.
Stripboard
A stripboard is a board with holes in a square grid pattern, commonly with a 0.1-inch pitch; all the holes in a straight line are connected by a copper strip as on a PCB. Components are pushed through from the side without strips and soldered in place. The strips can be interrupted by scraping out a section of the copper; stripboard cutters, effectively a drill bit fitted with a handle, are available for this task and are used by rotating them in the holes of a strip.
"Dead bug" construction
Free-form construction can be used in cases where a PCB would be too big or too much work to manufacture for a small number of components. Several methods of construction are used. At one extreme a wiring pen can be used with a perforated board, producing neat and professional results. At the other extreme is "dead bug" style, with the ICs flipped upside-down with their pins sticking up into the air like a dead insect, the leads of components are usually soldered directly to other components where possible, with many small circuits having no added wires. While it is messy-looking, free-form construction can be used to make more compact circuits than other methods. This is often used in BEAM robotics and in RF circuits where component leads must be kept short. This form of construction is used by amateurs for one-off circuits, and also professionally for circuit development, particularly at high frequencies.
For high-frequency work, a grounded solderable metallic base such as the copper side of an unetched printed circuit board can be used as base and ground plane. Information on high-frequency breadboarding and illustrations of dead bug with ground plane construction are in a Linear Technologies application note.
See also
Wire wrap
References
External links
A picture of a "dead bug" style circuit patch
Progressive Wiring Techniques shows an example of point-to-point construction applied to surface-mount components.
Electronics substrates
Electronics manufacturing | Point-to-point construction | Engineering | 2,093 |
7,499,232 | https://en.wikipedia.org/wiki/European%20Federation%20of%20Pharmaceutical%20Industries%20and%20Associations | The European Federation of Pharmaceutical Industries and Associations (EFPIA) is a Brussels-based trade association and lobbying organisation, founded in 1978 and representing the pharmaceutical industry operating in Europe. Through its membership of 36 national associations and 39 leading pharmaceutical companies, the EFPIA represents 1,900 European companies.
EFPIA priorities
EFPIA priorities include speeding up regulatory approval and reimbursement processes for new medicines, creating a strong science base in Europe, joining forces with key stakeholders on political issues concerning health and addressing safety concerns. EFPIA also includes specialised groups such as Vaccines Europe, whose members produce approximately 80% of vaccines used worldwide, and European Biopharmaceutical Enterprises, whose members harness biotechnology to develop approximately one-fifth of new medicines.
Innovative Medicines Initiative
The Innovative Medicines Initiative (IMI) is a public-private partnership designed by the European Commission and EFPIA. It is a pan-European collaboration that brings together large biopharmaceutical companies, small- and medium-sized enterprises (SMEs), patient organisations, academia, hospitals and public authorities. The initiative aims to accelerate the discovery and development of better medicines by removing bottlenecks in the drug development process. It focuses on creating better methods and tools to improve and enhance the drug development process, rather than developing specific, new medicines.
Controversy
From 1991 to 1998 Emer Cooke worked for the EFPIA. She became executive director of the European Medicines Agency (EMA), an agency of the European Union (EU) in charge of the evaluation and supervision of medicinal products, in November 2020.
In a session of the Austrian Parliament on 1 April 2021, member of parliament Gerald Hauser publicly criticised a potential conflict of interest: Cooke had allowed the controversial Oxford–AstraZeneca COVID-19 vaccine to be approved while having previously worked for the same industry as a lobbyist for the EFPIA.
See also
European and Developing Countries Clinical Trials Partnership (EDCTP)
EuropaBio
European Federation of Biotechnology (EFB)
International Federation of Pharmaceutical Manufacturers Associations (IFPMA)
Pharmaceutical Research and Manufacturers of America (PhRMA)
References
External links
Official website of the European Federation of Pharmaceutical Industries and Associations (EFPIA)
Innovative Medicines Initiative (IMI)
Vaccines Europe (VE)
European Biopharmaceutical Enterprises (EBE) (Archive Link)
European medical and health organizations
Pharmaceutical industry trade groups
Pan-European trade and professional organizations
Life sciences industry
Lobbying organizations in Europe
International organisations based in Belgium
Organizations established in 1978
1978 establishments in Europe
Pharmaceutical industry | European Federation of Pharmaceutical Industries and Associations | Chemistry,Biology | 512 |
7,463,171 | https://en.wikipedia.org/wiki/PowerPC%20applications | Microprocessors belonging to the PowerPC/Power ISA architecture family have been used in numerous applications.
Personal Computers
Apple Computer was the dominant player in the market of personal computers based on PowerPC processors until 2006 when it switched to Intel-based processors. Apple used PowerPC processors in the Power Mac, iMac, eMac, PowerBook, iBook, Mac mini, and Xserve. Classic Macintosh accelerator boards using PowerPCs were made by DayStar Digital, Newer Technology, Sonnet Technologies, and TotalImpact.
There have been several attempts to create PowerPC reference platforms for computers by IBM and others. The IBM PReP (PowerPC Reference Platform) is a system standard intended to ensure compatibility among PowerPC-based systems built by different companies; IBM POP (PowerPC Open Platform) is an open and free standard and design of PowerPC motherboards; and the Pegasos Open Desktop Workstation (ODW) is an open and free design of PowerPC motherboards based on the Marvell Discovery II (MV64361) chipset. The PReP standard specifies the PCI bus, but will also support ISA, MicroChannel, and PCMCIA; PReP-compliant systems will be able to run OS/2, AIX, Solaris, Taligent, and Windows NT. The CHRP (Common Hardware Reference Platform) is an open platform agreed on by Apple, IBM, and Motorola; all CHRP systems will be able to run Mac OS, OS/2-PPC, Windows NT, AIX, Solaris, and Novell Netware. CHRP is a superset of PReP and the PowerMac platforms.
Power.org has defined the Power Architecture Platform Reference (PAPR) that provides the foundation for development of computers based on the Linux operating system.
List of computers based on PowerPC:
Amiga accelerator boards:
Phase5 Blizzard PPC.
Phase5 CyberStorm PPC.
Apple
iMac
PowerMac
Xserve
Mac mini
iBook
PowerBook
Eyetech
AmigaOne
Genesi
Pegasos Open Desktop Workstation (ODW).
EFIKA
IBM
RS/6000 AIX workstations
ACube Systems Srl
Sam440 (Samantha)
Sam460ex (Samantha)
Servers
Apple
Xserve Rack server.
Genesi
Open Server Workstation (OSW) with dual IBM PowerPC 970MP CPU.
High density blade server (rack server).
IBM
Rack server.
Supercomputers
IBM
Blue Gene/L and Blue Gene/P supercomputers, which held the top spots among supercomputers from 2004 onward, with Blue Gene/P among the first systems designed to perform faster than one petaflops.
System p with POWER5 processors are used as the base for many supercomputers as they are made to scale well and have powerful CPUs.
All supercomputers of Spanish Supercomputing Network, built using PowerPC 970 based blade servers. Magerit and MareNostrum are the most powerful supercomputers of the network.
Roadrunner is a Cell/Opteron-based supercomputer that became operational in 2008, pushing past the 1 PetaFLOPS mark.
Summit and Sierra, currently the world's first and second fastest supercomputers, respectively.
Apple
System X of Virginia Tech is a supercomputer based on 1100 Xserves (PowerPC 970) running Mac OS X. It was first built using stock Power Mac G5s, making it one of the cheapest and most powerful supercomputers of its day.
Cray
The XT3, XT4 and XT5 supercomputers have Opteron CPUs but PowerPC 440 based SeaStar communications processors connecting the CPUs to a very high bandwidth communications grid.
Sony
The PlayStation 3 is the base of Cell based supercomputer grids running Yellow Dog Linux.
Personal digital assistants (smartphones and tablets)
IBM released a Personal Digital Assistant (PDA) reference platform ("Arctic") based on the PowerPC 405LP (Low Power). The project was discontinued after IBM sold the PowerPC 4xx designs to AMCC.
Game consoles
All three major seventh-generation game consoles contain PowerPC-based processors. Sony's PlayStation 3 console, released in November 2006, contains a Cell processor, including a 3.2 GHz PowerPC control processor and eight closely threaded DSP-like accelerator processors, seven active and one spare; Microsoft's Xbox 360 console, released in 2005, includes a 3.2 GHz custom IBM PowerPC chip with three symmetrical cores, each core SMP-capable at two threads, and Nintendo's Wii console, also released in November 2006, contains an extension of the PowerPC architecture found in their previous system, the GameCube.
Several arcade system boards were also powered by PowerPC-based processors, such as Konami Viper, which was used in Police 911 and Silent Scope EX, as well as Taito Type Zero, which powered the first two games in the Battle Gear series, as well as Densha de Go! 3.
TV Set Top Boxes/Digital Recorder
IBM, Sony, and Zarlink Semiconductor have released several set-top box (STB) reference platforms based on IBM PowerPC 405 cores and IBM set-top box system-on-chip (SoC) designs.
Sony Set top box (STB).
Motorola Set top box.
Dreambox Set Top Box.
TiVo (Series1) personal TV/video digital recorder (VDR).
Printers/Graphics
Global Graphics, YARC Raster Image Processing (RIP) system for professional printers.
Hewlett-Packard, Kyocera, Konica-Minolta, Lexmark, Xerox laser and inkjet printers.
Network/USB Devices
Buffalo Technology
Kuro Box/LinkStation/TeraStation network-attached storage devices
Cisco routers
Culturecom - VoIP in China.
Realm Systems
BlackDog Plug-in USB mobile Linux Server
Automotive
Ford, Daimler Benz cars and other car manufacturers.
Medical Equipment
Horatio - a patient simulator for training doctors and nurses.
Matrox image processing subsystem for medical equipment: MRI, CAT, PET, USG
Military and Aerospace
The RAD750 (234A510, 234A511, 244A325) radiation-hardened processors, used in several spacecraft.
Maxwell radiation hardened Single-board computer (SBC) for space and military projects.
U.S. Navy submarine sonar systems.
Canadarm for International Space Station (ISS) created by MacDonald, Detwiller & Associates (MDA).
Leclerc main battle tank fire control
Point of Sales
Culturecom - Tax Point of Sales terminal in China.
Test and Measurement Equipment
LeCroy digital oscilloscopes (certain series).
References
External links
The OpenPOWER Foundation
P
PowerPC architecture | PowerPC applications | Technology | 1,389 |
3,397,660 | https://en.wikipedia.org/wiki/Telecentric%20lens | A telecentric lens is a special optical lens (often an objective lens or a camera lens) that has its entrance or exit pupil, or both, at infinity. The size of images produced by a telecentric lens is insensitive to either the distance between an object being imaged and the lens, or the distance between the image plane and the lens, or both, and such an optical property is called telecentricity. Telecentric lenses are used for precision optical two-dimensional measurements, reproduction (e.g., photolithography), and other applications that are sensitive to the image magnification or the angle of incidence of light.
The simplest way to make a lens telecentric is to put the aperture stop at one of the lens's focal points. For any object point in the field of view, only ray bundles whose chief ray (the light ray that passes through the center of the aperture stop) is approximately parallel to the optical axis on the other side of the lens can then pass through the optical system. Commercially available telecentric lenses are often compound lenses that include multiple lens elements, for improved optical performance. Telecentricity is not a property of the lenses inside the compound lens but is established by the location of the aperture stop in the lens. The aperture stop selects the rays that are passed through the lens and this specific selection is what makes a lens telecentric.
If a lens is not telecentric, it is either entocentric or hypercentric. Common lenses are usually entocentric. In particular, a single lens without a separate aperture stop is entocentric. For such a lens the chief ray originating at any point off of the optical axis is never parallel to the optical axis, neither in front of nor behind the lens. A non-telecentric lens exhibits varying magnification for objects at different distances from the lens. An entocentric lens has a smaller magnification for objects farther away; objects of the same size appear smaller the farther they are away. A hypercentric lens produces larger images the farther the object is away.
A telecentric lens can be object-space telecentric, image-space telecentric, or bi-telecentric (also double-telecentric). In an object-space telecentric lens the image size does not change with the object distance, and in an image-space telecentric lens the image size does not change with the image-side distance from the lens.
Object-space telecentric lenses
An object-space telecentric lens has the entrance pupil (the image of the lens's aperture stop, formed by optics before it) at infinity and provides an orthographic projection instead of the perspective projection in an entocentric lens. Object-space telecentric lenses have a working distance. Objects at this distance are in focus and imaged sharply onto the image sensor at flange focal distance in the camera. An object that is closer or farther is out of focus and may be blurry but will be the same size regardless of distance.
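To make the constant-size behaviour described above concrete, the following paraxial (thin-lens) sketch in Python traces chief rays for a stop placed at the rear focal plane; the focal length, object height and sensor position are illustrative values chosen for the example, not parameters taken from any particular lens.

```python
# Paraxial chief-ray trace for an object-space telecentric layout:
# thin lens at z = 0 with focal length f, aperture stop at the rear
# focal plane z = +f, sensor at z = z_sensor.  All values are examples.

f = 50.0          # focal length in mm (assumed)
y_obj = 10.0      # object height in mm (assumed)
z_sensor = 75.0   # sensor position behind the lens in mm (assumed)

def chief_ray_height_at_sensor(object_distance):
    """Sensor height of the chief ray from an object point at the given distance."""
    # The chief ray must cross the optical axis at the stop (z = f).
    # Its height there is y_lens + (u - y_lens/f) * f = u * f, so u = 0:
    # in object space the chief ray is parallel to the axis (telecentricity).
    u = 0.0
    y_lens = y_obj + u * object_distance   # height where the ray meets the lens
    u_after = u - y_lens / f               # slope after refraction by the thin lens
    return y_lens + u_after * z_sensor     # height when it reaches the sensor

for d in (100.0, 150.0, 200.0):            # in-focus and defocused object distances
    print(f"object at {d:.0f} mm -> chief ray hits sensor at "
          f"{chief_ray_height_at_sensor(d):+.1f} mm")
```

Because the chief ray leaves every object point parallel to the axis, its height at the sensor, and therefore the apparent size of a defocused object, does not change with object distance.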
Telecentric lenses tend to be larger, heavier, and more expensive than normal lenses of similar focal length and f-number. This is partly due to the extra components needed to achieve telecentricity, and partly because the first element in an object-space telecentric lens must be at least as large as the largest object to be imaged. The front element in an object-space telecentric lens is often much larger than the camera mount. In contrast to entocentric lenses where lenses are made larger to increase the aperture for increased collection of light or shallower depth of field, a larger diameter (but otherwise similar) object-space telecentric lens is not faster than a smaller lens. Because of their intended applications, telecentric lenses often have higher resolution and transmit more light than normal photographic lenses.
Commercial object-space telecentric lenses are often characterized by their magnification, working distance and maximum image circle or image sensor size. A truly telecentric lens has no focus ring to adjust the position of the focal plane. Some commercial telecentric lenses, however, do feature a focus ring. This can be used to slightly adjust the working distance and magnification while losing a little bit of telecentricity. Sometimes, manufacturers specify a sensor resolution or pixel size to describe the optical quality of the lens and the maximum optical resolution it can achieve due to the lens's aberrations.
Because their images have constant magnification and constant viewing angle across the field of view, object-space telecentric lenses are used for metrology applications, where a machine vision system must determine the precise size and shape of objects independently from their exact distance and position within the field of view.
In order to optimize the telecentric effect when objects are illuminated from behind, an additional image-space telecentric lens can be used as a telecentric (or collimated) illuminator, which produces a parallel light flow, often from LED sources.
Image-space telecentric lenses
An image-space telecentric lens has the exit pupil (the image of the aperture stop formed by optics after it) at infinity and produces images of the same size regardless of the distance between the lens and the film or image sensor. This allows the lens to focus light from an object or sample to different distances without changing the size of the image. An image-space telecentric lens is a reversed object-space telecentric lens, and vice versa.
Since the chief rays (light rays that pass through the center of the aperture stop) after an image-space telecentric lens are always parallel to the optical axis, these lenses are often used in applications that are sensitive to the angle of incidence of light. Interference-based color-selective beam splitters or filters but also Fabry–Pérot interferometers are two examples where image-space telecentricity is used. Another example is minimizing crosstalk between pixels in image sensors and maximizing the quantum efficiency of a sensor. The Four Thirds System initially required image-space telecentric lenses, but with the improvement of sensors, the angle of incidence requirement has been relaxed. Since every pixel is illuminated at the same angle by an image-space telecentric lens, they are also used for radiometric and color measurement applications, where one would need the irradiance to be the same regardless of the field position.
Bi-telecentric lenses
In a bi-telecentric (or double-telecentric) lens, both entrance and exit pupil are at infinity. The magnification is constant despite variations of both the distance of the object being observed and the image sensor from the lens, allowing more precise object size measurements than with a mono-telecentric lens (i.e., the measurements being insensitive to placement errors of the object and the image sensor). A bi-telecentric lens is afocal (a system without focus) as the image of an object at infinity formed by the first part of the lens is collimated by the second part.
Commercial bi-telecentric lenses are often optimized for very low image distortion and field curvature for accurate measurements across the entire field of view at great resolution. These lenses often comprise more than 10 elements.
Large and heavy bi-telecentric lenses with many optical elements are commonly used in optical lithography (that copies a template of an electrical circuit to print or fabricate onto semiconductor wafers for mass semiconductor device production) because small image distortion and placement errors can be critical for manufactured device functionality.
References
Microscope components
Photographic lenses
Machine vision | Telecentric lens | Engineering | 1,627 |
34,879,840 | https://en.wikipedia.org/wiki/Petunidin-3-O-glucoside | Petunidin-3-O-glucoside is anthocyanin. It is found in fruits and berries, in red Vitis vinifera grapes and red wine.
See also
Phenolic compounds in wine
References
Anthocyanins | Petunidin-3-O-glucoside | Chemistry | 55 |
58,115,927 | https://en.wikipedia.org/wiki/Jiji%20Weir | The Jiji Weir () is a weir located in Nantou County, Taiwan. The weir is located at the border of three townships in the county, which are Jiji Township, Lugu Township and Zhushan Township.
History
The construction of the weir started in July 1990 and was completed in December 2001.
Architecture
The weir features the Taiwan Water Museum () within Jiji Township border.
Transportation
The weir is accessible southwest of Jiji station of Taiwan Railways.
See also
List of dams and reservoirs in Taiwan
References
2001 establishments in Taiwan
Buildings and structures in Nantou County
Dams completed in 2001
Weirs | Jiji Weir | Environmental_science | 120 |
35,056,728 | https://en.wikipedia.org/wiki/Hund%27s%20cases | In rotational-vibrational and electronic spectroscopy of diatomic molecules, Hund's coupling cases are idealized descriptions of rotational states in which specific terms in the molecular Hamiltonian and involving couplings between angular momenta are assumed to dominate over all other terms. There are five cases, proposed by Friedrich Hund in 1926-27 and traditionally denoted by the letters (a) through (e). Most diatomic molecules are somewhere between the idealized cases (a) and (b).
Angular momenta
To describe the Hund's coupling cases, we use the following angular momenta (where boldface letters indicate vector quantities):
L, the electronic orbital angular momentum
S, the electronic spin angular momentum
Ja = L + S, the total electronic angular momentum
R, the rotational angular momentum of the nuclei
J, the total angular momentum of the system (exclusive of nuclear spin)
N = J − S, the total angular momentum exclusive of electron (and nuclear) spin
These vector quantities depend on corresponding quantum numbers whose values are shown in molecular term symbols used to identify the states. For example, the term symbol 2Π3/2 denotes a state with S = 1/2, Λ = 1 and Ω = 3/2.
Choosing the applicable Hund's case
Hund's coupling cases are idealizations. The appropriate case for a given situation can be found by comparing three strengths: the electrostatic coupling of L to the internuclear axis, the spin-orbit coupling, and the rotational coupling of L and S to the total angular momentum J.
For 1Σ states the orbital and spin angular momenta are zero and the total angular momentum is just the nuclear rotational angular momentum. For other states, Hund proposed five possible idealized modes of coupling, distinguished by the relative strengths of the three couplings: in case (a) the electrostatic coupling is strong, the spin-orbit coupling intermediate and the rotational coupling weak; in case (b) the electrostatic coupling is strong, the rotational coupling intermediate and the spin-orbit coupling weak; in case (c) the spin-orbit coupling is strong, the electrostatic coupling intermediate and the rotational coupling weak; in case (d) the rotational coupling is strong, the electrostatic coupling intermediate and the spin-orbit coupling weak; and in case (e) the electrostatic coupling is weakest, with the spin-orbit and rotational couplings in either order. These last two orderings are degenerate because they have the same good quantum numbers, and both are labelled case (e).
In practice there are also many molecular states which are intermediate between the above limiting cases.
Case (a)
The most common case is case (a), in which L is electrostatically coupled to the internuclear axis, and S is coupled to L by spin-orbit coupling. Then both L and S have well-defined axial components, Λ and Σ respectively. As they are written with the same Greek symbol, the spin component Σ should not be confused with Σ states, which are states with orbital angular component Λ equal to zero. The sum Λ + Σ = Ω defines a vector Ω of magnitude Ω pointing along the internuclear axis. Combined with the rotational angular momentum of the nuclei R, we have J = Ω + R. In this case, the precession of L and S around the nuclear axis is assumed to be much faster than the nutation of Ω and R around J.
The good quantum numbers in case (a) are Λ, S, Σ, J and Ω. However, L is not a good quantum number because the vector L is strongly coupled to the electrostatic field and therefore precesses rapidly around the internuclear axis with an undefined magnitude. We express the rotational energy operator as Hrot = B R² = B (J − L − S)², where B is a rotational constant. There are, ideally, 2S+1 fine-structure states, each with rotational levels having relative energies B J(J+1) starting with J = Ω. For example, a 2Π state has a 2Π1/2 term (or fine structure state) with rotational levels J = 1/2, 3/2, 5/2, 7/2, ... and a 2Π3/2 term with levels J = 3/2, 5/2, 7/2, 9/2.... Case (a) requires Λ > 0 and so does not apply to any Σ states, and also S > 0 so that it does not apply to any singlet states.
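As a minimal numerical illustration of this case (a) level pattern, the Python sketch below lists relative rotational energies B·J(J+1) for the 2Π1/2 and 2Π3/2 components; the rotational constant is a placeholder value and the spin-orbit splitting between the two components is omitted.

```python
# Relative rotational energies B*J*(J+1) for the two fine-structure
# components of a 2Pi state in Hund's case (a).  B is a placeholder.
from fractions import Fraction

B = 1.0  # rotational constant in arbitrary units (assumed)

def rotational_levels(omega, n_levels=4):
    """Levels J = omega, omega+1, ... with relative energies B*J*(J+1)."""
    j = Fraction(omega)
    levels = []
    for _ in range(n_levels):
        levels.append((j, B * float(j * (j + 1))))
        j += 1
    return levels

for omega in (Fraction(1, 2), Fraction(3, 2)):  # 2Pi_1/2 and 2Pi_3/2 components
    terms = ", ".join(f"J={j}: {e:.2f}" for j, e in rotational_levels(omega))
    print(f"Omega = {omega}  ->  {terms}")
```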
The selection rules for allowed spectroscopic transitions depend on which quantum numbers are good. For Hund's case (a), the allowed transitions must have ΔΛ = 0, ±1 and ΔS = 0 and ΔΣ = 0 and ΔΩ = 0, ±1 and ΔJ = 0, ±1. In addition, symmetrical diatomic molecules have even (g) or odd (u) parity and obey the Laporte rule that only transitions between states of opposite parity are allowed.
Case (b)
In case (b), the spin-orbit coupling is weak or non-existent (in the case Λ = 0). In this case, we take N = Λ + R and J = N + S and assume L precesses quickly around the internuclear axis.
The good quantum numbers in case (b) are Λ, N, S, and J. We express the rotational energy operator as Hrot = B N², where B is a rotational constant. The rotational levels therefore have relative energies B N(N+1) starting with N = Λ. For example, a 2Σ state has rotational levels N = 0, 1, 2, 3, 4, ..., and each level is divided by spin-rotation coupling into two levels J = N ± 1/2 (except for N = 0, which corresponds only to J = 1/2 because J cannot be negative).
Another example is the 3Σ ground state of dioxygen, which has two unpaired electrons with parallel spins. The coupling type is Hund's case (b), and each rotational level N is divided into three levels J = N + 1, N, N − 1.
For case (b) the selection rules for quantum numbers Λ, S, and Σ and for parity are the same as for case (a). However for the rotational levels, the case (a) rule for J does not apply and is replaced by the rule ΔN = 0, ±1.
Case (c)
In case (c), the spin-orbit coupling is stronger than the coupling to the internuclear axis, and Λ and Σ from case (a) cannot be defined. Instead L and S combine to form Ja, which has a projection along the internuclear axis of magnitude Ω. Then J = Ω + R, as in case (a).
The good quantum numbers in case (c) are Ja, J, and Ω. Since Λ is undefined for this case, the states cannot be described as Σ, Π or Δ. An example of Hund's case (c) is the lowest 3Πu state of diiodine (I2), which approximates more closely to case (c) than to case (a).
The selection rules for Ω and J and for parity are valid as for cases (a) and (b), but there are no rules for Λ and Σ since these are not good quantum numbers for case (c).
Case (d)
In case (d), the rotational coupling between L and R is much stronger than the electrostatic coupling of L to the internuclear axis. Thus we form N by coupling L and R, and then form J by coupling N and S.
The good quantum numbers in case (d) are L, R, N, S, and J. Because R is a good quantum number, the rotational energy is simply B R(R+1).
Case (e)
In case (e), we first form Ja and then form J by coupling Ja and R. This case is rare but has been observed. Rydberg states which converge to ionic states with spin–orbit coupling (such as 2Π) are best described as case (e).
The good quantum numbers in case (e) are Ja, R, and J. Because R is once again a good quantum number, the rotational energy is B R(R+1).
References
Spectroscopy | Hund's cases | Physics,Chemistry | 1,376 |
29,667,127 | https://en.wikipedia.org/wiki/Micellar%20solubilization | Micellar solubilization (solubilization) is the process of incorporating the solubilizate (the component that undergoes solubilization) into or onto micelles. Solubilization may occur in a system consisting of a solvent, an association colloid (a colloid that forms micelles), and at least one other solubilizate.
Usage of the term
Solubilization is distinct from dissolution because the resulting fluid is a colloidal dispersion involving an association colloid. This suspension is distinct from a true solution, and the amount of the solubilizate in the micellar system can be different (often higher) than the regular solubility of the solubilizate in the solvent.
In non-chemical literature and in everyday language, the term "solubilization" is sometimes used in a broader meaning as "to bring to a solution or (non-sedimenting) suspension" by any means, e.g., leaching by a reaction with an acid.
Application
Micellar solubilization is widely utilized, e.g. in laundry washing using detergents, in the pharmaceutical industry, for formulations of poorly soluble drugs in solution form, and in cleanup of oil spills using dispersants.
Mechanism
The literature distinguishes two major mechanisms for the solubilization of oil by surfactant micelles, which affect the kinetics of solubilization: a surface reaction, in which micelles transiently adsorb at the water-oil interface, and a bulk reaction, in which surfactant micelles capture dissolved oil molecules.
See also
Hydrotrope
References
External links
Solubilization of Homopolymers by Block Copolymer Micelles in Dilute Solutions, J. Phys. Chem., 1995, 99 (11), pp 3723–3731, Jose R. Quintana, Ramiro A. Salazar, Issa Katime
Colloidal chemistry
Solutions | Micellar solubilization | Chemistry | 414 |
9,660,855 | https://en.wikipedia.org/wiki/STRIDE%20%28algorithm%29 | In protein structure, STRIDE (Structural identification) is an algorithm for the assignment of protein secondary structure elements given the atomic coordinates of the protein, as defined by X-ray crystallography, protein NMR, or another protein structure determination method. In addition to the hydrogen bond criteria used by the more common DSSP algorithm, the STRIDE assignment criteria also include dihedral angle potentials. As such, its criteria for defining individual secondary structures are more complex than those of DSSP. The STRIDE energy function contains a hydrogen-bond term containing a Lennard-Jones-like 8-6 distance-dependent potential and two angular dependence factors reflecting the planarity of the optimized hydrogen bond geometry. The criteria for individual secondary structural elements, which are divided into the same groups as those reported by DSSP, also contain statistical probability factors derived from empirical examinations of solved structures with visually assigned secondary structure elements extracted from the Protein Data Bank.
Although DSSP is the older method and continues to be the most commonly used, the original STRIDE definition reported it to give a more satisfactory structural assignment in at least 70% of cases. In particular, STRIDE was observed to correct for the propensity of DSSP to assign shorter secondary structures than would be assigned by an expert crystallographer, usually due to the minor local variations in structure that are most common near the termini of secondary structure elements. Using a sliding-window method to smooth variations in assignment of single terminal residues, current implementations of STRIDE and DSSP are reported to agree in up to 95.4% of cases. Both STRIDE and DSSP, among other common secondary structure assignment methods, are believed to underpredict pi helices.
See also
DSSP
References
External links
STRIDE - includes web interface, a print of the original STRIDE paper, and software documentation
Paper on the original webserver implementation
Protein structure | STRIDE (algorithm) | Chemistry | 376 |
17,238,580 | https://en.wikipedia.org/wiki/Daliuren | Da Liu Ren is a form of Chinese calendrical astrology dating from the later Warring States period. It is also a member of the Three Styles () of divination, along with Qi Men Dun Jia () and Taiyi ().
Li Yang describes Da Liu Ren as the highest form of divination in China. This divination form is called Da Liu Ren because the heavenly stem rén (), indicating "yang water", appears six times in the Sexagenary cycle. In order, it appears in rénshēn (), rénwǔ (), rénchén (), rényín (), rénzǐ (), and rénxū ().
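The count of six can be checked directly by enumerating the sexagenary cycle, as in the short Python sketch below; the pinyin romanizations are written without tone marks, and the code is purely illustrative.

```python
# Enumerate the sexagenary cycle (10 heavenly stems x 12 earthly branches)
# and list the six combinations that begin with the stem ren.
stems = ["jia", "yi", "bing", "ding", "wu", "ji", "geng", "xin", "ren", "gui"]
branches = ["zi", "chou", "yin", "mao", "chen", "si", "wu",
            "wei", "shen", "you", "xu", "hai"]

cycle = [stems[i % 10] + branches[i % 12] for i in range(60)]
ren_terms = [(i + 1, term) for i, term in enumerate(cycle) if term.startswith("ren")]

for position, term in ren_terms:
    print(position, term)
# Positions 9, 19, 29, 39, 49, 59: renshen, renwu, renchen, renyin, renzi, renxu
```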
In the words of a contemporary Chinese master of Da Liu Ren, the six rén indicate an entire movement of the sexagenary cycle, during which a phenomenon may appear, rise to maturity and then decline and disappear. Thus the six rén indicate the life cycle of phenomena. There is a homonym in the Chinese language which carries the meaning of pregnancy, and so the six rén also carry the meaning of the birth of a phenomenon.
Instrument
The diviner's board (shi) used for the Three Styles differ markedly. The Qi Men Dun Jia divinor's board consists of a 3 × 3 magic square, while the Tai Yi board is somewhat larger, and may be drawn as either a square or circle. The Da Liu Ren cosmic board contains positions for the Earth pan and Heaven pan, which hold the twelve Earthly Branches and the twelve spirits. In addition, the Da Liu Ren cosmic board indicates the Three Transmissions () and Four Classes ().
A shi (also known as an astrolabe) from the Six Dynasties period (222–589 CE) consists of a Heaven Plate () placed over an Earth Plate (). On the Earth Plate are three groups of inscriptions:
Outer band: 36 animals (12 associated with the earthly branches, plus the 28 animals associated with the xiù or lunar mansions)
Middle band: 28 xiù lunar mansions
Inner ring: Stems and Branches (ganzhi).
The square plate is divided diagonally into four sections that allocate 9 animals, 7 xiu, and 5 ganzhi to a section. A diviner examined current sky phenomena to set the board and adjust their position in relation to the board.
A modern version of the Da Liu Ren cosmic board places the Three Transmissions at the top of the board, along with the corresponding Earth Branch and any pertinent vacancies. The Four Classes are placed below the Three Transmissions, with the Heaven Pan and Earth Pan positions clearly indicated below the corresponding spirit position. A diagram of the Heaven Pan positions of the twelve generals and their corresponding Earth Branch positions in the Heaven Pan completes the illustration. The Earth pan is not depicted. The sexagenary cycle date is given in the upper right–hand margin, with the corresponding situation () number, the location of pertinent vacancies, and an indication of whether the array belongs to daytime or evening divination. The structured situation types for each array are provided in the left-hand margin. In some versions, an annotated description of the major aspects of each situation is provided. The description is often taken from the body of classical literature about Da Liu Ren.
Technique
Divination in Da Liu Ren is determined by relationships of five elements (wu xing ) and yin and yang () between and among the Three Transmissions, Four Classes, Twelve Generals, and the Heaven and Earth Plates. Each double-hour of the day contains a cosmic board for daytime and evening divination. The Three Transmissions are derived from configurations of the Heavenly Stem and Earthly Branch of the date. The Four Classes are determined in a similar manner.
Qi Men Dun Jia was widely used in China during the Tang and Song dynasties. By the time of the Yuan dynasty, Da Liu Ren had overtaken Qi Men Dun Jia in popularity, at least according to source documents found in the caverns of Dunhuang. The overwhelming popularity of Da Liu Ren in ancient China was perhaps due to its higher degree of precision, in comparison with Qi Men Dun Jia.
As is true with Qi Men Dun Jia, Da Liu Ren was first used in China for the purposes of devising military strategy and later developed into a more popular and widespread form of divination, growing to encompass medical divination, matchmaking, childbirth, travel, criminology, weather forecasting, and other applications.
In view of its complex nature, Da Liu Ren was regarded as the highest of the Three Styles, since mastery of its complex rule structure required many years of memorization. In contemporary China, few claim mastery of Da Liu Ren, while aging masters worry that younger generations of Chinese will come to disdain Da Liu Ren leading to its practice dying out in China. Da Liu Ren is further complicated by the necessity of mastering a large body of rules and regulations which govern the relationships named above. Da Liu Ren contains perhaps four times as many rules as Qi Men Dun Jia, for example. The extant historical literature on Da Liu Ren by far surpasses that of Qi Men Dun Jia.
See also
Chinese astrology
Chinese astronomy
Chinese Classical Texts
Feng Shui
Flying Star Feng Shui
I Ching & I Ching divination
Jiaobei & Poe divination
Kau Cim
Qi Men Dun Jia
Shaobing Song
Siku Quanshu
Tai Yi Shen Shu
Tui bei tu
Tung Shing
References
Further reading
"Da Liu Ren Bi Jing" (大六壬必镜)
"Liu Ren Da Quan"(六壬大全 Encyclopedia of Liu Ren), published in the Siku Quanshu
"Lingtai jing," an astrological treatise preserved in the Daozang
Chinese books of divination
Chinese astrology
Astrological texts
Taoist divination
History of astrology | Daliuren | Astronomy | 1,194 |
35,893,338 | https://en.wikipedia.org/wiki/Taraxacum%20farinosum | Taraxacum farinosum, common name in Turkish cırtlık, is a type of perennial dandelion that grows between 800 and 1200 m on salty soils in central Turkey. It is herbaceous halophyte plant up to 5–15 cm tall. Irano-Turanian Region or Iran-Turan Plant Geography Region element.
References
farinosum
Endemic flora of Turkey
Halophytes
Taxa named by Joseph Friedrich Nicolaus Bornmüller
Taxa named by Heinrich Carl Haussknecht
Taxa named by Heinrich von Handel-Mazzetti | Taraxacum farinosum | Chemistry | 116 |
1,506,024 | https://en.wikipedia.org/wiki/Crystal%20violet | Crystal violet or gentian violet, also known as methyl violet 10B or hexamethyl pararosaniline chloride, is a triarylmethane dye used as a histological stain and in Gram's method of classifying bacteria. Crystal violet has antibacterial, antifungal, and anthelmintic (vermicide) properties and was formerly important as a topical antiseptic. The medical use of the dye has been largely superseded by more modern drugs, although it is still listed by the World Health Organization.
The name gentian violet was originally used for a mixture of methyl pararosaniline dyes (methyl violet), but is now often considered a synonym for crystal violet. The name refers to its colour, being like that of the petals of certain gentian flowers; it is not made from gentians or violets.
Production
A number of possible routes can be used to prepare crystal violet. The original procedure developed by the German chemists Kern and Caro involved the reaction of dimethylaniline with phosgene to give 4,4′-bis(dimethylamino)benzophenone (Michler's ketone) as an intermediate. This was then reacted with additional dimethylaniline in the presence of phosphorus oxychloride and hydrochloric acid.
The dye can also be prepared by the condensation of formaldehyde and dimethylaniline to give a leuco dye:
CH2O + 3 C6H5N(CH3)2 → CH(C6H4N(CH3)2)3 + H2O
Second, this colourless compound is oxidized to the coloured cationic form (hereafter with oxygen, but a typical oxidizing agent is manganese dioxide, MnO2):
CH(C6H4N(CH3)2)3 + HCl + 1/2 O2 → [C(C6H4N(CH3)2)3]Cl + H2O
Dye colour
When dissolved in water, the dye has a blue-violet colour with an absorbance maximum at 590 nm and an extinction coefficient of 87,000 M−1 cm−1. The colour of the dye depends on the acidity of the solution. At a pH of +1.0, the dye is green with absorption maxima at 420 nm and 620 nm, while in a strongly acidic solution (pH −1.0), the dye is yellow with an absorption maximum at 420 nm.
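As an illustration of how the quoted extinction coefficient translates into measured absorbance, the short Python sketch below applies the Beer–Lambert law (A = ε·c·l); the concentration and path length are arbitrary example values, not figures from the literature.

```python
# Beer-Lambert estimate for crystal violet at 590 nm (illustrative values).
EPSILON_590 = 87_000       # molar extinction coefficient, M^-1 cm^-1 (from the text)
PATH_LENGTH_CM = 1.0       # standard cuvette path length (assumed)
CONCENTRATION_M = 5e-6     # 5 micromolar solution (arbitrary example)

absorbance = EPSILON_590 * CONCENTRATION_M * PATH_LENGTH_CM   # A = epsilon * c * l
transmittance = 10 ** (-absorbance)                           # fraction of light transmitted

print(f"A(590 nm) = {absorbance:.3f}")           # about 0.435
print(f"Transmittance = {transmittance:.1%}")    # about 36.7%
```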
The different colours are a result of the different charged states of the dye molecule. In the yellow form, all three nitrogen atoms carry a positive charge, of which two are protonated, while the green colour corresponds to a form of the dye with two of the nitrogen atoms positively charged. At neutral pH, both extra protons are lost to the solution, leaving only one of the nitrogen atoms positively charged. The pKa values for the loss of the two protons are approximately 1.15 and 1.8.
In alkaline solutions, nucleophilic hydroxyl ions attack the electrophilic central carbon to produce the colourless triphenylmethanol or carbinol form of the dye. Some triphenylmethanol is also formed under very acidic conditions when the positive charges on the nitrogen atoms lead to an enhancement of the electrophilic character of the central carbon, which allows the nucleophilic attack by water molecules. This effect produces a slight fading of the yellow colour.
Applications
Industry
Crystal violet is used as a textile and paper dye, and is a component of navy blue and black inks for printing, ball-point pens, and inkjet printers. Historically, it was the most common dye used in early duplication machines, such as the mimeograph and the ditto machine. It is sometimes used to colourize diverse products such as fertilizer, antifreeze, detergent, and leather. Marking blue, used to mark out pieces in metalworking, is composed of methylated spirits, shellac, and gentian violet.
Science
When conducting DNA gel electrophoresis, crystal violet can be used as a nontoxic DNA stain as an alternative to fluorescent, intercalating dyes such as ethidium bromide. Used in this manner, it may be either incorporated into the agarose gel or applied after the electrophoresis process is finished. Used at a 10 ppm concentration and allowed to stain a gel after electrophoresis for 30 minutes, it can detect as little as 16 ng of DNA. Through use of a methyl orange counterstain and a more complex staining method, sensitivity can be improved further to 8 ng of DNA. When crystal violet is used as an alternative to fluorescent stains, it is not necessary to use ultraviolet illumination; this has made crystal violet popular as a means of avoiding UV-induced DNA destruction when performing DNA cloning in vitro.
In biomedical research, crystal violet can be used to stain the nuclei of adherent cells. In this application, crystal violet works as an intercalating dye and allows the quantification of DNA which is proportional to the number of cells.
The dye is used as a histological stain, particularly in Gram staining for classifying bacteria.
In forensics, crystal violet was used to develop fingerprints.
Crystal violet is also used as a tissue stain in the preparation of light microscopy sections. In the laboratory, solutions containing crystal violet and formalin are often used to simultaneously fix and stain cells grown in tissue culture to preserve them and make them easily visible, since most cells are colourless. It is also sometimes used as a cheap way to put identification markings on laboratory mice; since many strains of lab mice are albino, the purple colour stays on their fur for several weeks.
Crystal violet can be used as an alternative to Coomassie brilliant blue (CBB) in staining of proteins separated by SDS-PAGE, reportedly showing a 5x improved sensitivity vs CBB.
Medical
Gentian violet has antibacterial, antifungal, antihelminthic, antitrypanosomal, antiangiogenic, and antitumor properties. It is used medically for these properties, in particular for dentistry, and is also known as "pyoctanin" (or "pyoctanine"). It is commonly used for:
Marking the skin for surgery preparation and allergy testing;
Treating Candida albicans and related fungal infections, such as thrush, yeast infections, various types of tinea (ringworm, athlete's foot, jock itch);
Treating impetigo; it was used primarily before the advent of antibiotics, but still useful to persons who may be allergic to penicillin.
In resource-limited settings, gentian violet is used to manage burn wounds, inflammation of the umbilical cord stump (omphalitis) in the neonatal period, oral candidiasis in HIV-infected patients and mouth ulcers in children with measles.
In body piercing, gentian violet is commonly used to mark the location for placing piercings, including surface piercings.
Veterinary
Because of its antimicrobial activity, it is used to treat ich in fish. However, it usually is illegal to use in fish intended for human consumption.
History
Synthesis
Crystal violet is one of the components of methyl violet, a dye first synthesized by Charles Lauth in 1861. From 1866, methyl violet was manufactured by the Saint-Denis-based firm of Poirrier et Chappat and marketed under the name "Violet de Paris". It was a mixture of the tetra-, penta- and hexamethylated pararosanilines.
Crystal violet itself was first synthesized in 1883 by Alfred Kern (1850–1893), working in Basel at the firm of Bindschedler & Busch. To optimize the difficult synthesis, which used the highly toxic phosgene, Kern entered into a collaboration with the German chemist Heinrich Caro at BASF. Kern also found that by starting with diethylaniline rather than dimethylaniline, he could synthesize the closely related violet dye now known as C.I. 42600 or C.I. Basic violet 4.
Gentian violet
The name "gentian violet" (or Gentianaviolett in German) is thought to have been introduced by the German pharmacist Georg Grübler, who in 1880 started a company in Leipzig that specialized in the sale of staining reagents for histology. The gentian violet stain marketed by Grübler probably contained a mixture of methylated pararosaniline dyes. The stain proved popular and in 1884 was used by Hans Christian Gram to stain bacteria. He credited Paul Ehrlich for the aniline-gentian violet mixture. Grübler's gentian violet was probably very similar, if not identical, to Lauth's methyl violet, which had been used as a stain by Victor André Cornil in 1875.
Although the name gentian violet continued to be used for the histological stain, the name was not used in the dye and textile industries. The composition of the stain was not defined and different suppliers used different mixtures. In 1922, the Biological Stain Commission appointed a committee chaired by Harold Conn to look into the suitability of the different commercial products. In his book Biological Stains, Conn describes gentian violet as a "poorly defined mixture of violet rosanilins".
The German ophthalmologist Jakob Stilling is credited with discovering the antiseptic properties of gentian violet. He published a monograph in 1890 on the bactericidal effects of a solution that he christened "pyoctanin", which was probably a mixture of aniline dyes similar to gentian violet. He set up a collaboration with E. Merck & Co. to market "Pyoktanin caeruleum" as an antiseptic.
In 1902, Drigalski and Conradi found that although crystal violet inhibited the growth of many bacteria, it has little effect on Bacillus coli (Escherichia coli) and Bacillus typhi (Salmonella typhi), which are both gram-negative bacteria. A much more detailed study of the effects of Grübler's gentian violet on different strains of bacteria was published by John Churchman in 1912. He found that most gram-positive bacteria (which are stained in Gram's method) were sensitive to the dye, while most gram-negative bacteria (which are not stained) were not, and observed that the dye tended to act as a bacteriostatic agent rather than a bactericide.
Precautions
One study in mice demonstrated dose-related carcinogenic potential at several different organ sites. The Food and Drug Administration in the US (FDA) has determined that gentian violet has not been shown by adequate scientific data to be safe for use in animal feed. Use of gentian violet in animal feed causes the feed to be adulterated and is a violation of the Federal Food, Drug, and Cosmetic Act in the US. On June 28, 2007, the FDA issued an "import alert" on farm raised seafood from China because unapproved antimicrobials, including gentian violet, had been consistently found in the products. The FDA report states:
"Like MG (malachite green), CV (crystal violet) is readily absorbed into fish tissue from water exposure and is reduced metabolically by fish to the leuco moiety, leucocrystal violet (LCV). Several studies by the National Toxicology Program reported the carcinogenic and mutagenic effects of crystal violet in rodents. The leuco form induces renal, hepatic and lung tumor in mice."
In 2019, Health Canada found medical devices that use gentian violet to be safe for use but recommended stopping the use of all drug products that contain gentian violet, including on animals, prompting Canadian engineering schools to revisit the use of this dye during orientation.
See also
Methyl green
Methyl violet
Fluorescein
Prussian blue
Egyptian blue
Methyl blue
Methylene blue
New methylene blue
Han purple
Potassium ferrocyanide
Potassium ferricyanide
References
Further reading
External links
Triarylmethane dyes
Antifungals
Disinfectants
Staining dyes
PH indicators
Chlorides
Dimethylamino compounds | Crystal violet | Chemistry,Materials_science | 2,545 |
54,629,293 | https://en.wikipedia.org/wiki/Single%20cell%20epigenomics | Single cell epigenomics is the study of epigenomics (the complete set of epigenetic modifications on the genetic material of a cell) in individual cells by single cell sequencing. Since 2013, methods have been created including whole-genome single-cell bisulfite sequencing to measure DNA methylation, whole-genome ChIP-sequencing to measure histone modifications, whole-genome ATAC-seq to measure chromatin accessibility and chromosome conformation capture.
Single-cell DNA methylome sequencing
Single-cell DNA methylome sequencing quantifies DNA methylation. This is similar to single-cell genome sequencing, but with the addition of a bisulfite treatment before sequencing. Forms include whole-genome bisulfite sequencing and reduced representation bisulfite sequencing.
Single-cell ATAC-seq
ATAC-seq stands for Assay for Transposase-Accessible Chromatin with high throughput sequencing. It is a technique used in molecular biology to identify accessible DNA regions, equivalent to DNase I hypersensitive sites. Single cell ATAC-seq has been performed since 2015, using methods ranging from FACS sorting, microfluidic isolation of single cells, to combinatorial indexing. In initial studies, the method was able to reliably separate cells based on their cell types, uncover sources of cell-to-cell variability, and show a link between chromatin organization and cell-to-cell variation.
Single-cell ChIP-seq
ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. In epigenomics, this is often used to assess histone modifications (such as methylation). ChIP-seq is also often used to determine transcription factor binding sites.
Single-cell ChIP-seq is extremely challenging due to background noise caused by nonspecific antibody pull-down, and only one study so far has performed it successfully. This study used a droplet-based microfluidics approach, and the low coverage required thousands of cells to be sequenced in order to assess cellular heterogeneity.
Single-cell Hi-C
Chromosome conformation capture techniques (often abbreviated to 3C technologies or 3C-based methods) are a set of molecular biology methods used to analyze the spatial organization of chromatin in a cell. These methods quantify the number of interactions between genomic loci that are nearby in three dimensional space, even if the loci are separated by many kilobases in the linear genome.
Currently, 3C methods start with a similar set of steps, performed on a sample of cells. First, the cells are cross-linked, which introduces bonds between proteins, and between proteins and nucleic acids, that effectively "freeze" interactions between genomic loci. The genome is then digested into fragments through the use of restriction enzymes. Next, proximity-based ligation is performed, creating long regions of hybrid DNA. Lastly, the hybrid DNA is sequenced to determine genomic loci that are in close proximity to each other.
Single-cell Hi-C is a modification of the original Hi-C protocol, which is an adaptation of the 3C method, that makes it possible to determine the proximity of different regions of the genome in a single cell. This method was made possible by performing the digestion and ligation steps in individual nuclei, as opposed to the original Hi-C protocol, where ligation was performed after cell lysis in a pool containing crosslinked chromatin complexes. In single-cell Hi-C, after ligation, single cells are isolated and the remaining steps are performed in separate compartments, and the hybrid DNA is tagged with a compartment-specific barcode. High-throughput sequencing is then performed on the pool of the hybrid DNA from the single cells. Although the recovery rate of sequenced interactions (hybrid DNA) can be as low as 2.5% of potential interactions, it has been possible to generate three-dimensional maps of entire genomes using this method. Additionally, advances have been made in the analysis of Hi-C data, allowing for the enhancement of Hi-C datasets to generate even more accurate and detailed contact maps and 3D models.
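As a toy illustration of the final step, the Python sketch below bins a few hypothetical ligation-pair coordinates into a symmetric contact matrix of the kind used for such maps; the chromosome length, bin size and pair coordinates are invented for the example.

```python
import numpy as np

# Toy single-cell contact matrix: bin hypothetical ligation-pair coordinates.
BIN_SIZE = 1_000_000          # 1 Mb bins (assumed)
CHROM_LENGTH = 10_000_000     # 10 Mb toy chromosome (assumed)
n_bins = CHROM_LENGTH // BIN_SIZE

# Each pair gives the two genomic positions joined in one hybrid DNA fragment.
ligation_pairs = [(150_000, 3_200_000), (3_100_000, 3_900_000), (8_500_000, 1_200_000)]

contacts = np.zeros((n_bins, n_bins), dtype=int)
for p1, p2 in ligation_pairs:
    i, j = p1 // BIN_SIZE, p2 // BIN_SIZE
    contacts[i, j] += 1
    contacts[j, i] += 1       # contact matrices are symmetric

print(contacts.sum(), "contacts placed in a", contacts.shape, "matrix")
```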
See also
Single cell sequencing
Epigenomics
Chromosome conformation capture
References
Epigenetics
DNA sequencing
Genomics
Cell biology | Single cell epigenomics | Chemistry,Biology | 916 |
1,160,484 | https://en.wikipedia.org/wiki/Phrase%20structure%20grammar | The term phrase structure grammar was originally introduced by Noam Chomsky as the term for grammar studied previously by Emil Post and Axel Thue (Post canonical systems). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining character of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars.
History
In 1956, Chomsky wrote, "A phrase-structure grammar is defined by a finite vocabulary (alphabet) Vp, and a finite set Σ of initial strings in Vp, and a finite set F of rules of the form: X → Y, where X and Y are strings in Vp."
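Read literally, this definition describes a string-rewriting system. The Python sketch below applies rules of the form X → Y to a string; the toy vocabulary and rules are invented for illustration and are not drawn from any particular grammar in the literature.

```python
# Toy phrase-structure (string-rewriting) derivation: apply rules X -> Y.
rules = {
    "S": "NP VP",      # invented example rules
    "NP": "the N",
    "VP": "V NP",
    "N": "cat",
    "V": "saw",
}

def derive(string, max_steps=20):
    """Repeatedly rewrite the leftmost symbol for which a rule exists."""
    for _ in range(max_steps):
        symbols = string.split()
        for idx, sym in enumerate(symbols):
            if sym in rules:
                symbols[idx] = rules[sym]
                string = " ".join(symbols)
                break
        else:
            return string   # no rule applies: the derivation is finished
    return string

print(derive("S"))  # -> "the cat saw the cat"
```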
Constituency relation
In linguistics, phrase structure grammars are all those grammars that are based on the constituency relation, as opposed to the dependency relation associated with dependency grammars; hence, phrase structure grammars are also known as constituency grammars. Any of several related theories for the parsing of natural language qualify as constituency grammars, and most of them have been developed from Chomsky's work, including
Government and binding theory
Generalized phrase structure grammar
Head-driven phrase structure grammar
Lexical functional grammar
The minimalist program
Nanosyntax
Further grammar frameworks and formalisms also qualify as constituency-based, although they may not think of themselves as having spawned from Chomsky's work, e.g.
Arc pair grammar, and
Categorial grammar.
See also
Catena
Notes
References
Allerton, D. 1979. Essentials of grammatical theory. London: Routledge & Kegan Paul.
Borsley, R. 1991. Syntactic theory: A unified approach. London: Edward Arnold.
Chomsky, Noam 1957. Syntactic structures. The Hague/Paris: Mouton.
Matthews, P. Syntax. 1981. Cambridge, UK: Cambridge University Press, .
McCawley, T. 1988. The syntactic phenomena of English, Vol. 1. Chicago: The University of Chicago Press.
Mel'cuk, I. 1988. Dependency syntax: Theory and practice. Albany: SUNY Press.
Sag, I. and T. Wasow. 1999. Syntactic theory: A formal introduction. Stanford, CA: CSLI Publications.
Tesnière, Lucien 1959. Éleménts de syntaxe structurale. Paris: Klincksieck.
van Valin, R. 2001. An introduction to syntax. Cambridge, UK: Cambridge University Press.
Generative syntax
Syntax
Noam Chomsky
Natural language processing | Phrase structure grammar | Technology | 553 |
26,068,326 | https://en.wikipedia.org/wiki/Good%20documentation%20practice | Good documentation practice (recommended to abbreviate as GDocP to distinguish from "good distribution practice" also abbreviated GDP) is a term in the pharmaceutical and medical device industries to describe standards by which documents are created and maintained. While some GDocP standards are codified by various competent authorities, others are not but are considered cGMP (with emphasis on the "c", or "current"). Some competent authorities release or adopt guidelines, and they may include non-codified GDocP expectations. While not law, authorities will inspect against these guidelines and cGMP expectations in addition to the legal requirements and make comments or observations if departures are seen.
In the past years, the application of GDocP is also expanding to cosmetic industry, excipient and ingredient manufacturers.
GDocP standards
Documentation creation
Contemporaneous with the event they describe
Not handwritten (except for handwritten entries thereon)
When electronically produced, the documentation must be checked for accuracy
Free from errors
For some types of data, it is recommended that records are in a format that permits trend evaluation
Document approval
Approved, signed, and dated by appropriate authorized personnel
Handwritten entries
Adequate space is provided for expected handwritten entries
Handwritten entries are in indelible ink
Errors (i.e. misspelling, illegible entries, etc.) are corrected and reason is documented
Critical entries must be independently checked (SPV, or second person verified)
No spaces for handwritten entries are left blank – if unused, they are crossed out or "N/A" (or similar text) entered
Ditto marks or continuation lines are not acceptable
Correction fluid is not allowed to be used to correct errors
A stamp in lieu of a handwritten signature is not acceptable
Copies of documents
Clear, legible
Errors are not introduced
Document maintenance
Regularly reviewed and kept current
Retained and available for appropriate duration
Electronic document management systems are validated
Electronic records are backed up
Document modification
Handwritten modifications are signed and dated
Altered text is not obscured (e.g., no correction fluid)
Where appropriate, the reason for alteration must be noted
Controls exist to prevent the inadvertent use of superseded documents
Electronic versions can only be modified by authorized personnel
A history (audit trail) must be maintained of changes and deletions to electronic versions
Supporting documents can be added to the original document as an attachment for clarification or recording data. Attachments should be referenced at least once within the original document. Ideally, each page of the attachment is clearly identified (i.e. labeled as "Attachment X", "Page X of X", signed and dated by person who attached it, etc.)
GDocP interpretation
From the regulatory guidance above, additional expectations or allowances can be inferred by extension. Among these are:
Prohibition against removing pages – the removal of a page would obscure the data that were present, so this is not permissible.
Page numbering – the addition of page numbers, particularly in "Page x of y" format, allows a reviewer to ensure that there are no missing pages.
Stamped signatures in Asia – the culture of certain Asian countries, and the controls they employ, are such that their use of a stamp in lieu of handwritten signatures has been accepted.
Date and time formats – dates may be written in a variety of formats that can be confusing if read by personnel with a different cultural background. In a context where different cultures interact, a date such as "07-05-10" can have numerous different meanings and therefore, by the GDocP standards above, violates the requirement for being clear; an unambiguous alternative is sketched after this list.
Transcription – a transcription of data, where the original document is not retained, effectively obscures the original data and would be prohibited. Transcription may be helpful where the original is of poor quality writing or is physically damaged, but it should be clearly marked as a transcription and the original retained nevertheless.
Scrap paper, Post-it notes – intentionally recording raw data on non-official records is a set-up for transcription and is therefore prohibited.
Avoiding asterisks as part of the notation of a hand-change – where insufficient white space permits a fully notated hand change, a common practice is to use an asterisk (or other mark) near the correction, and elsewhere record the same mark and the notation. The risk is that additional changes are made by another person who uses the same mark, and now the notation can be interpreted to apply to all changes with the mark. Some will therefore advise against the use of the asterisk. Others will accept it, if the notation clearly includes the number of changes that it applies to, such as, "* Three entries changed above due to entry errors. KAM 13-Jan-2011". There are no known instances of an agency rejecting such a notation.
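The following short Python sketch illustrates the date-format point above by printing the same date in two unambiguous forms; the specific date is an arbitrary example and the formats shown are common conventions rather than requirements of any particular regulation.

```python
from datetime import date

# One reading of the ambiguous string "07-05-10" used above, written out
# in two unambiguous forms.  The chosen date is an arbitrary example.
d = date(2010, 5, 7)

print(d.strftime("%d-%b-%Y"))   # 07-May-2010  (day, abbreviated month, four-digit year)
print(d.isoformat())            # 2010-05-07   (ISO 8601: year-month-day)
```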
Enforcement
The competent authorities are empowered to inspect establishments to enforce the law and the interpretations of the law (e.g., the content of guidance documents and the cGMPs).
Departures from GDocP that involved the regulator have included: documentation not contemporaneous, use of ditto marks, signature stamps, obscured original data, use of pencil, inaccurate records, and not dating changes.
See also
Best practice
Good manufacturing practice
Site Master File
References
Pharmaceutical industry
Good practice | Good documentation practice | Chemistry,Biology | 1,075 |
1,571,880 | https://en.wikipedia.org/wiki/Graphic%20designer | A graphic designer is a professional who practices the discipline of graphic design, either within companies or organizations or independently. They are professionals in design and visual communication, with their primary focus on transforming linguistic messages into graphic manifestations, whether tangible or intangible. They are responsible for planning, designing, projecting, and conveying messages or ideas through visual communication. Graphic design is one of the most in-demand professions with significant job opportunities, as it allows leveraging technological advancements and working online from anywhere in the world.
Generally, a graphic designer works in areas such as branding, corporate identity, advertising, technical and artistic drawing, multimedia, etc. It is a profession that exposes individuals to various academic fields during their university career, because they need to understand human anatomy, psychology, photography, painting and printing techniques, mathematics, marketing, digital animation, 3D modeling, and some professionals even complement their skills with programming, providing a comprehensive view of a company by addressing the three essential factors evaluated: structure, team, and product.
Graphic designers can work with singular clients or multiple people including collaborations. This is where communication is crucial because misunderstandings can lead to setbacks.
Professional requirements for graphic designers vary from one place to another. Designers must undergo specialized training, including advanced education and practical experience (internship) to develop skills and expertise in the workplace, which is necessary to obtain a credential that allows them to practice the profession. Practical, technical, and academic requirements to become a graphic designer vary by country or jurisdiction, although the formal study of design in academic institutions has played a crucial role in the overall development of the profession.
Salary
According to the Bureau of Labor Statistics, the median salary for graphic designers is $58,900 as of May 2023. The bottom 10% earned less than $36,420, while the top 10% earned more than $100,450.
Qualifications
Designers should be able to solve visual communication problems or challenges. In doing so, the designer must identify the communications issue, gather and analyze information related to the issue, and generate potential approaches aimed at solving the problem. Iterative prototyping and user testing can be used to determine the success or failure of a visual solution. Approaches to a communications problem are developed in the context of an audience and a media channel. Graphic designers must understand the social and cultural norms of that audience in order to develop visual solutions that are perceived as relevant, understandable and effective. Directly speaking with individuals from set audiences can prevent any complications.
Graphic designers should also have a thorough understanding of production and rendering methods. Some of the technologies and methods of production are drawing, offset printing, photography, and time-based and interactive media (film, video, computer multimedia). Frequently, designers are also called upon to manage color in different media. For instance, graphic designers use different colors for digital and print advertisements. RGB — standing for red, green, blue — is an additive color model used for digital media designs. However, the CMYK color model is made up of subtractive colors — cyan, magenta, yellow, and black — and used in designing print media. The reason for the different models is that when designing print ads, colors look different on the screen and when printed onto paper. For example, the colors appear darker on paper than on screen.
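A common way to see the relationship between the two colour models is the naive RGB-to-CMYK conversion sketched below in Python; real print workflows rely on ICC colour profiles rather than this simple formula, so the example is illustrative only.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion, ignoring colour profiles."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1 - max(r, g, b)          # black is set by the brightest channel
    if k == 1.0:                  # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))    # pure red -> (0.0, 1.0, 1.0, 0.0)
```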
See also
Graphic arts
Graphic design occupations
List of graphic designers
Mood board
References
Computer occupations
Computational fields of study
Mass media occupations
Visual arts occupations
Office and administrative support occupations | Graphic designer | Technology | 702 |
47,956,649 | https://en.wikipedia.org/wiki/Xiaomi%20Mi%204c | The Xiaomi Mi 4c is a smartphone developed by Xiaomi Inc. It is part of Xiaomi's mid-range smartphone line and was released in September 2015.
It is only available in mainland China.
The Xiaomi Mi 4c is equipped with a Snapdragon 808 processor, a 5-inch 1080p IPS display, and a 3080 mAh battery, with 2 GB or 3 GB of memory and 16 GB or 32 GB of storage in its two versions. It originally ran Android 5.1.1 Lollipop with the MIUI 7.2.4.0 ROM and is now upgradeable to Android 7.0 Nougat with MIUI 9.5.1.0. Unofficially, the device supports Android versions up to Android 10 (via LineageOS 17.1), and as of 2020 new LineageOS versions were still being actively developed for the Mi 4c.
Its rear camera is 13 MP with a Sony IMX258 or Samsung S5K3M2 sensor, and its front camera is 5 MP, recording video at 1080×1920 resolution. The phone weighs 132 grams and measures 138.1 mm × 69.6 mm × 7.8 mm. It supports Wi-Fi 802.11 a/b/g/n/ac, Bluetooth 4.1, GPS, an infrared transmitter, and USB-C.
Reviewers described the phone as an outstanding piece of high-end technology that costs less than its rivals.
AnTuTu listed the Mi 4c as one of the ten most popular phones in China.
References
External links
Xiaomi Website
Mi 4c
Discontinued smartphones
Mobile phones introduced in 2015
Mobile phones with infrared transmitter | Xiaomi Mi 4c | Technology | 353 |
18,097,115 | https://en.wikipedia.org/wiki/University%20of%20Central%20Florida%20College%20of%20Engineering%20and%20Computer%20Science | The University of Central Florida College of Engineering and Computer Science is an academic college of the University of Central Florida located in Orlando, Florida, United States. The college offers degrees in engineering, computer science, information technology and management systems, and houses UCF's Department of Electrical Engineering and Computer Science. The dean of the college is Michael Georgiopoulos, Ph.D.
UCF is listed as a university with "very high research activity" by the Carnegie Foundation for the Advancement of Teaching. With an enrollment of over 7,500 undergraduate and graduate students as of fall 2012, the college is recognized by U.S. News & World Report as one of the nation's best engineering schools and is listed among the world's best in the ARWU rankings. The university has made noted research contributions to modeling and simulation, digital media, and engineering and computer science.
History
The College of Engineering and Computer Science was one of the four original academic colleges when UCF began classes in 1968 as Florida Technological University. The State University System of Florida's Board of Regents approved the creation of a college of engineering on September 16, 1966. The college was launched as the university's College of Engineering and Technologies on March 28, 1969.
A third engineering building, designed in 2000-2002 for the School of EECS, was completed with a $15 million allocation from the State of Florida. In 2005, Harris Corporation donated $3 million to the College of Engineering & Computer Science, and the building was named the Harris Corporation Engineering Center in recognition of the gift.
Academics
Housing some of the university's showcase majors, the College of Engineering and Computer Science is made up of the following departments:
Civil, Environmental, and Construction Engineering (CECE)
Computer Science (CS), and Information Technology (IT)
Electrical and Computer Engineering (ECE)
Industrial Engineering & Management Systems (IEMS)
Mechanical and Aerospace Engineering (MAE)
Materials Science and Engineering (MSE)
The college offers 13 undergraduate programs, 14 master's degree programs, and eight doctoral degree programs. UCF has been classified as a research university (very high research activity) by the Carnegie Foundation for the Advancement of Teaching. The graduate school of the College of Engineering & Computer Science is ranked #70 among the top 100 engineering schools by U.S. News & World Report, and the college has also featured among the top 100 Engineering/Technology and Computer Sciences schools in the world in the ARWU published by Shanghai Jiao Tong University.
The college consists of the Department of Electrical Engineering & Computer Science (EECS), the Civil, Environmental, and Construction Engineering (CECE) Department, the Industrial Engineering and Management Systems (IEMS) Department, and the Mechanical, Materials and Aerospace Engineering (MMAE) Department. The ROTC Division consists of the Aerospace Studies Department (Air Force ROTC) and the Military Science Department (Army ROTC).
Electrical Engineering and Computer Science
The School of Electrical Engineering and Computer Science was founded in 1999 under the leadership of Professor Erol Gelenbe, who had been appointed Director of the School of Computer Science in 1998, through the merger of the School of Computer Science with the two Departments of Electrical Engineering and Computer Engineering, along with the creation of the Information Technology Program. In 2005, the Computer Science and ECE curricula were merged within a unified School of EECS, and in the summer of 2010 the School of EECS was renamed the Department of EECS.
The Electrical Engineering and Computer Science departments were later separated, but they share many major accomplishments, both joint and separate, in their history. The Computer Science programming team participates in the Association for Computing Machinery's International Collegiate Programming Contest (ACM-ICPC), placing 1st in the fall 2016 and 2017 Southeast ACM Regional Programming Contests. Since 1982, the college has placed in the top 3 of the five-state region. The team finished 13th in the spring 2017 World Finals (the top U.S. team and 2nd in North America) and improved its ranking at the spring 2018 World Finals, held in Beijing, China, placing 10th overall out of 140 teams and earning a Bronze Medal and the North America Champion title. The programming team has qualified for and attended 29 Finals since 1983, placing as high as 2nd in the competition.
The Computer Science department is also home to the UCF Collegiate Cyber Defense Competition team. Although the National Collegiate Cyber Defense Competition was established in 2005, UCF did not enter a team until January 2013. In its inaugural season, the UCF CCDC team finished in 1st place at the Southeastern Collegiate Cyber Defense Competition and placed 10th at the National Collegiate Cyber Defense Competition. The team returned stronger in 2014, again winning the Southeast Collegiate Cyber Defense Competition and placing 1st at the national competition to become the reigning national champions of cyber defense. UCF maintained its winning tradition in 2015, finishing in 1st place at the Southeast Collegiate Cyber Defense Competition and claiming the national championship for the second consecutive year.
Rankings
The Electrical Engineering graduate program is ranked 57th nationally in the 2010 U.S. News & World Report America's Best Graduate Schools.
Computer Science was ranked in the top 100 departments worldwide in 2010 by the Academic Ranking of World Universities.
The CS Doctoral Program was ranked in the top 20 programs by NAGPS in 2001.
Research
Metropolitan Orlando sustains the world's largest recognized cluster of modeling, simulation, and training companies, much of it based in the Central Florida Research Park, located directly south of the main campus. Providing more than 10,000 jobs, the Research Park is the largest research park in Florida, the fourth largest in the United States by number of companies, and the seventh largest by number of employees. Collectively, UCF's research centers and the park manage over $5.5 billion in contracts annually.
The university fosters partnerships with corporations such as Lockheed Martin, Boeing, and Siemens, as well as with local community colleges. UCF also operates a satellite campus at the Kennedy Space Center in Cape Canaveral, Florida, and is a member of the Florida High Tech Corridor Council.
References
External links
UCF College of Engineering and Computer Science
UCF Department of Electrical Engineering and Computer Science
UCF Industrial Engineering and Management Systems
UCF Materials, Mechanical, and Aerospace Engineering
UCF Civil, Environmental, and Construction Engineering
University of Central Florida Official Website
Engineering And Computer Science
Educational institutions established in 1968
1968 establishments in Florida
Engineering schools and colleges in the United States
Engineering universities and colleges in Florida
Computer science departments in the United States
Electrical and computer engineering departments | University of Central Florida College of Engineering and Computer Science | Engineering | 1,371 |