id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
2,277,043 | https://en.wikipedia.org/wiki/Ursa%20Minor%20Dwarf | The Ursa Minor Dwarf is a dwarf spheroidal galaxy, discovered by A.G. Wilson of the Lowell Observatory, in the United States, during the Palomar Sky Survey in 1955. It appears in the Ursa Minor constellation, and is a satellite galaxy of the Milky Way. The galaxy consists mainly of older stars and seems to house little to no ongoing star formation. Its centre is around 225,000 light years distant from Earth.
Evolutionary history
In 1999, Kenneth Mighell and Christopher Burke used the Hubble Space Telescope to confirm that the Ursa Minor dwarf galaxy had a straightforward evolutionary history with a single burst of star formation that lasted around 2 billion years and took place around 14 billion years ago, and that the galaxy was probably as old as the Milky Way itself.
See also
Ursa Major I Dwarf
Ursa Major II Dwarf
References
External links
Dwarf galaxies
Dwarf elliptical galaxies
Local Group
Milky Way Subgroup
Ursa Minor
09749
54074
? | Ursa Minor Dwarf | [
"Astronomy"
] | 194 | [
"Ursa Minor",
"Constellations"
] |
2,277,097 | https://en.wikipedia.org/wiki/TeraGrid | TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.
The TeraGrid integrated high-performance computers, data resources and tools, and experimental facilities. Resources included more than a petaflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance computer network connections. Researchers could also access more than 100 discipline-specific databases.
TeraGrid was coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the resource provider sites in the United States.
History
The US National Science Foundation (NSF), through program director Richard L. Hilderbrandt, issued a solicitation asking for a "distributed terascale facility".
The TeraGrid project was launched in August 2001 with $53 million in funding to four sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, the University of Chicago Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology in Pasadena, California.
The design was meant to be an extensible distributed open system from the start.
In October 2002, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh joined the TeraGrid as major new partners when NSF announced $35 million in supplementary funding. The TeraGrid network was transformed through the ETF project from a 4-site mesh to a dual-hub backbone network with connection points in Los Angeles and at the Starlight facilities in Chicago.
In October 2003, NSF awarded $10 million to add four sites to TeraGrid as well as to establish a third network hub, in Atlanta. These new sites were Oak Ridge National Laboratory (ORNL), Purdue University, Indiana University, and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
TeraGrid construction was also made possible through corporate partnerships with Sun Microsystems, IBM, Intel Corporation, Qwest Communications, Juniper Networks, Myricom, Hewlett-Packard Company, and Oracle Corporation.
TeraGrid construction was completed in October 2004, at which time the TeraGrid facility began full production.
Operation
In August 2005, NSF's newly created office of cyberinfrastructure extended support for another five years with a $150 million set of awards. It included $48 million for coordination and user support to the Grid Infrastructure Group at the University of Chicago led by Charlie Catlett.
Using high-performance network connections, the TeraGrid featured high-performance computers, data resources and tools, and high-end experimental facilities around the USA. The work supported by the project is sometimes called e-Science.
In 2006, the University of Michigan's School of Information began a study of TeraGrid.
In May 2007, TeraGrid integrated resources included more than 250 teraflops of computing capability and more than 30 petabytes (quadrillions of bytes) of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers could access more than 100 discipline-specific databases. By late 2009, TeraGrid resources had grown to 2 petaflops of computing capability and more than 60 petabytes of storage. In mid-2009, NSF extended the operation of TeraGrid to 2011.
Transition to XSEDE
A follow-on project was approved in May 2011.
In July 2011, a partnership of 17 institutions announced the Extreme Science and Engineering Discovery Environment (XSEDE). NSF announced funding the XSEDE project for five years, at $121 million.
XSEDE is led by John Towns at the University of Illinois's National Center for Supercomputing Applications.
Architecture
TeraGrid resources were integrated through a service-oriented architecture, in which each resource provides a "service" defined in terms of interface and operation. Computational resources ran a set of software packages called "Coordinated TeraGrid Software and Services" (CTSS). CTSS provided a familiar user environment on all TeraGrid systems, allowing scientists to more easily port code from one system to another. CTSS also provided integrative functions such as single sign-on, remote job submission, workflow support, and data movement tools. CTSS included the Globus Toolkit, Condor, distributed accounting and account management software, verification and validation software, and a set of compilers, programming tools, and environment variables.
TeraGrid used a dedicated 10-gigabit-per-second fiber-optic backbone network, with hubs in Chicago, Denver, and Los Angeles. All resource provider sites connected to a backbone node at 10 gigabits per second. Users accessed the facility through national research networks such as the Internet2 Abilene backbone and National LambdaRail.
Usage
TeraGrid users primarily came from U.S. universities; there were roughly 4,000 users at over 200 universities. Academic researchers in the United States could obtain exploratory or development allocations (roughly, in "CPU hours") based on an abstract describing the work to be done. More extensive allocations involved a proposal that was reviewed during a quarterly peer-review process. All allocation proposals were handled through the TeraGrid website. Proposers selected a scientific discipline that most closely described their work, which enabled reporting on the allocation and use of TeraGrid by scientific discipline. As of July 2006 the scientific profile of TeraGrid allocations and usage was:
Each of these discipline categories corresponds to a specific program area of the National Science Foundation.
Starting in 2006, TeraGrid provided application-specific services to Science Gateway partners, who serve (generally via a web portal) discipline-specific scientific and education communities. Through the Science Gateways program TeraGrid aims to broaden access by at least an order of magnitude in terms of the number of scientists, students, and educators who are able to use TeraGrid.
Resource providers
Argonne National Laboratory (ANL) operated by the University of Chicago and the Department of Energy
Indiana University - Big Red - IBM BladeCenter JS21 Cluster
Louisiana Optical Network Initiative (LONI)
National Center for Atmospheric Research (NCAR)
National Center for Supercomputing Applications (NCSA)
National Institute for Computational Sciences (NICS) operated by University of Tennessee at Oak Ridge National Laboratory.
Oak Ridge National Laboratory (ORNL)
Pittsburgh Supercomputing Center (PSC) operated by University of Pittsburgh and Carnegie Mellon University.
Purdue University
San Diego Supercomputer Center (SDSC)
Texas Advanced Computing Center (TACC)
Similar projects
Distributed European Infrastructure for Supercomputing Applications (DEISA), integrating eleven European supercomputing centers
Enabling Grids for E-sciencE (EGEE)
National Research Grid Initiative (NAREGI), involving several supercomputer centers in Japan from 2003
Open Science Grid - a distributed computing infrastructure for scientific research
Extreme Science and Engineering Discovery Environment (XSEDE) - the TeraGrid successor
References
External links
TeraGrid website
Grid computing
National Science Foundation
Supercomputing | TeraGrid | [
"Technology"
] | 1,502 | [
"Supercomputing"
] |
2,277,192 | https://en.wikipedia.org/wiki/Geotextile | Geotextiles are versatile permeable fabrics that, when used in conjunction with soil, can effectively perform multiple functions, including separation, filtration, reinforcement, protection, and drainage. Typically crafted from polypropylene or polyester, geotextile fabrics are available in two primary forms: woven, which resembles traditional mail bag sacking, and nonwoven, which resembles felt.
Geotextile composites have been introduced and products such as geogrids and meshes have been developed. Geotextiles are durable and are able to soften a fall. Overall, these materials are referred to as geosynthetics and each configuration—geonets, geosynthetic clay liners, geogrids, geotextile tubes, and others—can yield benefits in geotechnical and environmental engineering design.
History
Geotextiles were originally intended to be a substitute for granular soil filters. Geotextiles can also be referred to as filter fabrics. In the 1950s, R.J. Barrett began working with geotextiles behind precast concrete seawalls, under precast concrete erosion control blocks, beneath large stone riprap, and in other erosion control situations. He used different styles of woven monofilament fabrics, all characterized by a relatively high percentage of open area (varying from 6 to 30%). He discussed the need for both adequate permeability and soil retention, along with adequate fabric strength and proper elongation, and set the tone for geotextile use in filtration situations.
Applications
Geotextiles and related products have many applications and currently support many civil engineering applications including roads, airfields, railroads, embankments, retaining structures, reservoirs, canals, dams, bank protection, coastal engineering and construction site silt fences or to form a geotextile tube. Geotextiles can also serve as components of other geosynthetics such as the reinforcing material in a bituminous geomembrane. Usually geotextiles are placed at the tension surface to strengthen the soil. Geotextiles are also used for sand dune armoring to protect upland coastal property from storm surge, wave action and flooding. A large sand-filled container (SFC) within the dune system prevents storm erosion from proceeding beyond the SFC. Using a sloped unit rather than a single tube eliminates damaging scour.
Erosion control manuals comment on the effectiveness of sloped, stepped shapes in mitigating shoreline erosion damage from storms. Geotextile sand-filled units provide a "soft" armoring solution for upland property protection. Geotextiles are used as matting to stabilize flow in stream channels and swales.
Geotextiles can improve soil strength at a lower cost than conventional soil nailing. In addition, geotextiles allow planting on steep slopes, further securing the slope.
Geotextiles have been used to protect the fossil hominid footprints of Laetoli in Tanzania from erosion, rain, and tree roots.
In building demolition, geotextile fabrics in combination with steel wire fencing can contain explosive debris.
Coir (coconut fiber) geotextiles are popular for erosion control, slope stabilization and bioengineering, due to the fabric's substantial mechanical strength. Coir geotextiles last approximately 3 to 5 years depending on the fabric weight. The product degrades into humus, enriching the soil.
Global warming
Glacial retreat
Geotextiles with reflective properties are used to protect melting glaciers. In northern Italy, geotextiles are used to cover glaciers and shield them from the sun; the reflective surface redirects sunlight away from the ice in order to slow melting. However, this approach has proven to be more expensive than effective.
Design methods
While many possible design methods or combinations of methods are available to the geotextile designer, the ultimate decision for a particular application usually takes one of three directions: design by cost and availability, design by specification, or design by function. Extensive literature on design methods for geotextiles has been published in the peer reviewed journal Geotextiles and Geomembranes.
Requirements
Geotextiles must meet specific material requirements. For example, they are typically required to consist of polymers composed of a minimum of 85% by weight of polypropylene, polyester, polyamide, polyolefin, or polyethylene.
See also
Geomembrane
Hard landscape materials
Polypropylene raffia
Sediment control
References
Further reading
John, N. W. M. (1987). Geotextiles. Glasgow: Blackie Publishing Ltd.
Koerner, R. M. (2012). Designing with Geosynthetics, 6th Edition. Xlibris Publishing Co.
Koerner, R. M., ed. (2016). Geotextiles: From Design to Applications. Amsterdam: Woodhead Publishing Co.
External links
Building materials
Geosynthetics
Landscape architecture
Plastics applications
Textiles | Geotextile | [
"Physics",
"Engineering"
] | 1,036 | [
"Building engineering",
"Landscape architecture",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
2,277,296 | https://en.wikipedia.org/wiki/Phlegmatized%20explosive | A phlegmatized explosive is an explosive that has had an agent (a phlegmatizer) added to stabilize or desensitize it. Phlegmatizing usually improves the handling properties of an explosive (e.g. when munitions are filled in factories.)
TNT explosive can itself be used to phlegmatize more sensitive explosives such as RDX (to form Cyclotol), HMX (to form Octol), or PETN (to form Pentolite). Other typical phlegmatizing agents include paraffin wax (5% used in OKFOL and Composition H6), paper, or even water (used in water gel explosives). Such agents are nearly always flammable themselves (therefore adding fuel to the blast) or will at least boil off easily. Typically, a small amount of phlegmatizing agent is used e.g. Composition B, which has 1% paraffin wax added, or the Russian RGO hand grenade which contains 90 grams of "A-IX-1" explosive, comprising 96% RDX and 4% paraffin wax by weight. Another example of use is the VS-50 antipersonnel mine, which contains an explosive filling of 43 grams of RDX, again phlegmatized by combining it with 10% paraffin wax by weight.
References
Explosives | Phlegmatized explosive | [
"Chemistry"
] | 277 | [
"Explosives",
"Explosions"
] |
2,277,564 | https://en.wikipedia.org/wiki/Resist%20%28semiconductor%20fabrication%29 | In semiconductor fabrication, a resist is a thin layer used to transfer a circuit pattern to the semiconductor substrate which it is deposited upon. A resist can be patterned via lithography to form a (sub)micrometer-scale, temporary mask that protects selected areas of the underlying substrate during subsequent processing steps. The material used to prepare said thin layer is typically a viscous solution. Resists are generally proprietary mixtures of a polymer or its precursor and other small molecules (e.g. photoacid generators) that have been specially formulated for a given lithography technology. Resists used during photolithography are called photoresists.
Background
Semiconductor devices (as of 2005) are built by depositing and patterning many thin layers. The patterning steps, or lithography, define the function of the device and the density of its components.
For example, in the interconnect layers of a modern microprocessor, a conductive material (copper or aluminum) is inlaid in an electrically insulating matrix (typically fluorinated silicon dioxide or another low-k dielectric). The metal patterns define multiple electrical circuits that are used to connect the microchip's transistors to one another and ultimately to external devices via the chip's pins.
The most common patterning method used by the semiconductor device industry is photolithography, that is, patterning using light. In this process, the substrate of interest is coated with photosensitive resist and irradiated with short-wavelength light projected through a photomask, which is a specially prepared stencil formed of opaque and transparent regions, usually a quartz substrate with a patterned chromium layer. The shadow of opaque regions in the photomask forms a submicrometer-scale pattern of dark and illuminated regions in the resist layer, known as the aerial image. Chemical and physical changes occur in the exposed areas of the resist layer. For example, chemical bonds may be formed or destroyed, inducing a change in solubility. This latent image is then developed, for example by rinsing with an appropriate solvent. Selected regions of the resist remain, which after a post-exposure bake step form a stable polymeric pattern on the substrate. This pattern can be used as a stencil in the next process step. For example, areas of the underlying substrate that are not protected by the resist pattern may be etched or doped. Material may be selectively deposited on the substrate. After processing, the remaining resist may be stripped. Sometimes (especially during microelectromechanical systems fabrication), the patterned resist layer may be incorporated in the final product. Many photolithography and processing cycles may be performed to create complex devices.
Resists may also be formulated to be sensitive to charged particles, such as the electron beams produced in scanning electron microscopes. This is the basis of electron-beam direct-write lithography.
A resist is not always necessary. Several materials may be deposited or patterned directly using techniques like soft lithography, Dip-Pen Nanolithography, evaporation through a shadow mask or stencil.
Typical process
Resist Deposition: The precursor solution is spin-coated on a clean (semiconductor) substrate, such as a silicon wafer, to form a very thin, uniform layer.
Soft Bake: The layer is baked at a low temperature to evaporate residual solvent.
Exposure: A latent image is formed in the resist e.g. (a) via exposure to ultraviolet light through a photomask with opaque and transparent regions or (b) by direct writing using a laser beam or electron beam.
Post-Exposure Bake
Development: Areas of the resist that have (or have not) been exposed are removed by rinsing with an appropriate solvent.
Processing through the resist pattern: wet or dry etching, lift-off, doping...
Resist Stripping
See also
Electron beam lithography
Nanolithography
Photolithography
External links
MicroChem
Shipley (now Rohm and Haas Electronic Materials)
Clariant
micro resist technology
Semiconductor device fabrication | Resist (semiconductor fabrication) | [
"Materials_science"
] | 841 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
2,277,570 | https://en.wikipedia.org/wiki/Pertechnetate | The pertechnetate ion is an oxyanion with the chemical formula TcO4−. It is often used as a convenient water-soluble source of isotopes of the radioactive element technetium (Tc). In particular it is used to carry the 99mTc isotope (half-life 6 hours), which is commonly used in nuclear medicine in several nuclear scanning procedures.
Pertechnetate is poorly hydrated, forming [TcO4(H2O)n]− and [TcO4(H2O)n-m]−[H3O]+m (n = 1–50, m = 1–4) clusters, as has been demonstrated by simulation with DFT. The first hydration shell of TcO4− is asymmetric and contains no more than 7 water molecules. Only three of the four oxygen atoms of TcO4− form hydrogen bonds with water molecules.
A technetate(VII) salt is a compound containing this ion. Pertechnetate compounds are salts of technetic(VII) acid. Pertechnetate is analogous to permanganate, but it has little oxidizing power. Pertechnetate has higher oxidizing power than perrhenate.
Understanding pertechnetate is important in understanding technetium contamination in the environment and in nuclear waste management.
Chemistry
TcO4− is the starting material for most of the chemistry of technetium. Pertechnetate salts are usually colorless. TcO4− is produced by oxidizing technetium with nitric acid or with hydrogen peroxide. The pertechnetate anion is similar to the permanganate anion but is a weaker oxidizing agent. It is tetrahedral and diamagnetic. The standard electrode potential for TcO4−/TcO2 is only +0.738 V in acidic solution, as compared to +1.695 V for MnO4−/MnO2. Because of its diminished oxidizing power, TcO4− is stable in alkaline solution. In this respect, TcO4− is more similar to ReO4−. Depending on the reducing agent, TcO4− can be converted to derivatives containing Tc(VI), Tc(V), and Tc(IV). In the absence of strong complexing ligands, TcO4− is reduced to a +4 oxidation state via the formation of TcO2 hydrate.
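Written as balanced half-reactions in acidic solution (standard textbook stoichiometry, using the potentials quoted above), the comparison is:

TcO4− + 4 H+ + 3 e− → TcO2 + 2 H2O   E° = +0.738 V
MnO4− + 4 H+ + 3 e− → MnO2 + 2 H2O   E° = +1.695 V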
Some metals, such as actinides, barium, scandium, yttrium or zirconium, may form complex salts with pertechnetate, thus strongly affecting its liquid-liquid extraction behavior.
Preparation of 99mTcO4−
99mTcO4− is conveniently available in high radionuclidic purity from molybdenum-99, which decays with 87% probability to 99mTc. The subsequent decay of 99mTc leads to either 99Tc or 99Ru. 99Mo can be produced in a nuclear reactor via irradiation of either molybdenum-98 or naturally occurring molybdenum with thermal neutrons, but this is not the method currently in use today. Currently, 99Mo is recovered as a product of the nuclear fission of uranium-235, separated from other fission products via a multistep process and loaded onto a column of alumina that forms the core of a 99Mo/99mTc radioisotope generator.
As the 99Mo continuously decays to 99mTc, the 99mTc can be removed periodically (usually daily) by flushing a saline solution (0.15 M NaCl in water) through the alumina column: the more highly charged molybdate (MoO42−) is retained on the column, where it continues to undergo radioactive decay, while the medically useful 99mTcO4− is eluted in the saline. The eluate from the column must be sterile and pyrogen free, so that the Tc drug can be used directly, usually within 12 hours of elution. In a few cases, sublimation or solvent extraction may be used.
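Schematically, the generator exploits the difference in half-lives along the decay chain (the roughly 66-hour half-life of 99Mo is a commonly quoted value; the 6-hour half-life of 99mTc is stated above):

99Mo → 99mTc → 99Tc   (β− decay, t½ ≈ 66 h, followed by isomeric transition, t½ ≈ 6 h)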
Examples
A complex that can penetrate the blood–brain barrier is generated by reduction of pertechnetate with tin(II) in the presence of the ligand hexamethylpropylene amine oxime (HMPAO) to form TcO-D,L-HMPAO.
A complex used for imaging the lungs, Tc-MAA, is generated by reduction of pertechnetate with tin(II) in the presence of human serum albumin.
The cation [Tc(CO)3(H2O)3]+, which is both water and air stable, is generated by reduction of pertechnetate with carbon monoxide. This compound is a precursor to complexes that can be used in cancer diagnosis and therapy involving DNA-DNA pretargeting.
Compounds
Reactions
Radiolysis of pertechnetate in nitrate solutions proceeds through reduction to Tc(VI), which induces complex disproportionation processes.
Pertechnetate can be reduced by H2S to give Tc2S7.
Pertechnetate is also reduced to Tc(IV/V) compounds in alkaline solutions in nuclear waste tanks without adding catalytic metals, reducing agents, or external radiation. Reactions of mono- and disaccharides with 99mTcO4− yield Tc(IV) compounds that are water-soluble.
Uses
Pharmaceutical use
The half-life of 99mTc is long enough that labelling synthesis of the radiopharmaceutical and scintigraphic measurements can be performed without significant loss of radioactivity. The energy emitted from 99mTc is 140 keV, which allows for the study of deep body organs. Radiopharmaceuticals have no intended pharmacologic effect and are used in very low concentrations. Radiopharmaceuticals containing 99mTc are currently being applied in determining the morphology of organs, testing of organ function, and scintigraphic and emission tomographic imaging. The gamma radiation emitted by the radionuclide allows organs to be imaged in vivo tomographically. Currently, over 80% of radiopharmaceuticals used clinically are labelled with 99mTc. A majority of radiopharmaceuticals labelled with 99mTc are synthesized by the reduction of the pertechnetate ion in the presence of ligands chosen to confer organ specificity on the drug. The resulting compound is then injected into the body and a "gamma camera" is focused on sections or planes in order to image the spatial distribution of the 99mTc.
Specific imaging applications
99mTc-pertechnetate is used primarily in the study of the thyroid gland: its morphology, vascularity, and function. Pertechnetate and iodide, due to their comparable charge/radius ratio, are similarly incorporated into the thyroid gland. The pertechnetate ion is not incorporated into the thyroglobulin. It is also used in the study of blood perfusion, regional accumulation, and cerebral lesions in the brain, as it accumulates primarily in the choroid plexus.
Pertechnetate salts, such as sodium pertechnetate, cannot pass through the blood–brain barrier. In addition to the salivary and thyroid glands, pertechnetate localizes in the stomach. It is renally eliminated for the first three days after being injected. After a scan is performed, it is recommended that a patient drink large amounts of water in order to expedite elimination of the radionuclide. Other methods of administration include intraperitoneal, intramuscular, and subcutaneous injection, as well as oral administration. The behavior of the ion is essentially the same, with small differences due to the difference in rate of absorption, regardless of the method of administration.
Synthesis of 99mTcO4− radiopharmaceuticals
Pertechnetate is advantageous for the synthesis of a variety of radiopharmaceuticals because Tc can adopt a number of oxidation states. The oxidation state and coligands dictate the specificity of the radiopharmaceutical. The starting material 99mTcO4−, made available after elution from the generator column, as mentioned above, can be reduced in the presence of complexing ligands. Many different reducing agents can be used, but transition metal reductants are avoided because they compete with technetium for ligands. Oxalates, formates, hydroxylamine, and hydrazine are also avoided because they form complexes with the technetium. Electrochemical reduction is impractical.
Ideally, the synthesis of the desired radiopharmaceutical from pertechnetate, a reducing agent, and the desired ligands should occur in one container after elution, and the reaction must be performed in a solvent that can be injected intravenously, such as a saline solution. Kits are available that contain the reducing agent, usually tin(II), and ligands. These kits are sterile, pyrogen-free, easily purchased, and can be stored for long periods of time. The reaction with 99mTcO4− takes place directly after elution from the generator column and shortly before its intended use. A high organ specificity is important because the injected activity should accumulate in the organ under investigation, as there should be a high activity ratio of the target organ to nontarget organs. If there is a high activity in organs adjacent to the one under investigation, the image of the target organ can be obscured. Also, high organ specificity allows for the reduction of the injected activity, and thus the exposure to radiation, in the patient. The radiopharmaceutical must be kinetically inert, in that it must not change chemically in vivo en route to the target organ.
As a 99mTc carrier
A technetium-99m generator provides the pertechnetate containing the short-lived isotope 99mTc for medical uses. This compound is generated directly from molybdate held on alumina within the generator (see this topic for detail).
In nuclear medicine
Pertechnetate has a wide variety of uses in diagnostic nuclear medicine. Since technetate(VII) can substitute for iodine in the Na/I symporter (NIS) channel in follicular cells of the thyroid gland, inhibiting uptake of iodine into the follicular cells, 99mTc-pertechnetate can be used as an alternative to 123I in imaging of the thyroid, although it specifically measures uptake and not organification. It has also been used historically to evaluate for testicular torsion, although ultrasound is more commonly used in current practice, as it does not deliver a radiation dose to the testes. It is also used in labeling of autologous red blood cells for MUGA scans to evaluate left ventricular cardiac function, localization of gastrointestinal bleeding prior to embolization or surgical management, and in damaged red blood cells to detect ectopic splenic tissue.
It is actively accumulated and secreted by the mucoid cells of the gastric mucosa, and therefore, technetate(VII) radiolabeled with technetium-99m is injected into the body when looking for ectopic gastric tissue as is found in a Meckel's diverticulum with Meckel's scans.
Non-radioactive uses
All technetium salts are mildly radioactive, but some applications exploit the element for its chemical properties. In these uses, its radioactivity is incidental, and generally the least radioactive (longest-lived) isotopes of Tc are used. In particular, 99Tc (half-life 211,000 years) is used in corrosion research, because it is the decay product of the easily obtained commercial 99mTc isotope. Solutions of technetate(VII) react with the surface of iron to form technetium dioxide; in this way it is able to act as an anodic corrosion inhibitor.
See also
Permanganate
Perrhenate
Sodium pertechnetate
References
Transition metal oxyanions
Radiopharmaceuticals
Medical physics
Corrosion inhibitors | Pertechnetate | [
"Physics",
"Chemistry"
] | 2,336 | [
"Applied and interdisciplinary physics",
"Medicinal radiochemistry",
"Radiopharmaceuticals",
"Medical physics",
"Corrosion inhibitors",
"Chemicals in medicine",
"Process chemicals"
] |
2,277,747 | https://en.wikipedia.org/wiki/Metal%20clay | Metal clay is a crafting medium consisting of very small particles of metal such as silver, gold, bronze, or copper mixed with an organic binder and water for use in making jewelry, beads and small sculptures. Originating in Japan in 1990, metal clay can be shaped just like any soft clay, by hand or using molds. After drying, the clay can be fired in a variety of ways such as in a kiln, with a handheld gas torch, or on a gas stove, depending on the type of clay and the metal in it. The binder burns away, leaving the pure sintered metal. Shrinkage of between 8% and 30% occurs (depending on the product used). Alloys such as bronze, sterling silver, and steel also are available.
History
Metal clay first came out in Japan in 1990 to allow craft jewelry makers to make sophisticated looking jewelry without the years of study needed to make fine jewelry.
Silver metal clay
Fine silver metal clay results in objects containing 99.9% pure silver, which is suitable for enameling. Lump metal clay is sold in sealed packets to keep it moist and workable. The silver versions are also available as a softer paste in a pre-filled syringe which can be used to produce extruded forms, in small jars of slip and as paper-like sheets, from which most of the moisture has been removed. The oldest brand of silver metal clay currently available is Art Clay Silver (ACS). The newest is Project X by ClayRevolution.com (CR).
Another available alloy, EZ960 Sterling Silver Metal Clay was invented by Bill Struve from Metal Adventures, the inventor of BRONZclay™ and COPPRclay™. Because the clay is a sterling silver alloy, one of its best attributes is its post firing strength, in comparison to fine silver. This clay is fired open shelf on a raised hard ceramic kiln shelf at for 2 hours, full ramp. No carbon required. Its shrinkage rate is smaller than other clays, at 10–11%. CoolTools.us now own the rights to both EZ960 sterling and EZ999 fine silver clays.
Precious Metal Clay (PMC)
PMC was developed in the early 1990s in Japan by metallurgist Masaki Morikawa. As a solid-phase sintered product of a precious metal powder used to form a precious metal article, the material consists of microscopic particles of pure silver or fine gold and a water-soluble, non-toxic, organic binder that burns off during firing. Success was first achieved with gold and later duplicated with silver.
The PMC brand includes the following products:
The original formula of PMC, now called "standard": fired at for 2 hours, shrinks by 30% during firing.
PMC+ & PMCflex: fired at for 10 minutes or for 30 minutes; shrinks 15%, due to a particle size reduction. PMC+ is also available in sheet form which can be worked like paper; for example, for origami.
PMC3: fired at for 45 minutes or for 10 minutes; shrinks by 10%. It can also be fired using a butane torch by heating it to orange heat for at least 2 minutes. It has a longer working life than the older formulations. It is also available in slip and paste forms which can be painted onto the surface of an object to be used as a mold.
Aura 22: a 22-carat gilding material, a gold paste intended to be painted onto the surface of silver PMC pieces, or ready-made silver objects.
PMC Pro: a harder product which is only 0.900 fineness silver, hence it cannot be hallmarked as sterling silver. It also requires kiln firing in a tub of activated carbon for 1 hour at .
PMC Sterling: is fired at and shrinks by 10–20%. Because of the copper content in this formula, firing is a two-step process; step one is an open-shelf firing and step two requires a firing pan with activated carbon media.
Mitsubishi discontinued all PMC production in 2023.
Art Clay Silver (ACS)
ACS was developed by AIDA Chemical Industries, also a Japanese company. ACS followed PMC Standard with their Art Clay Original clay which allows the user to fire with a handheld torch or on a gas hob. Owing to subtle differences in the binder and suggested firing times, this clay shrinks less than the PMC versions, approximately 8–10%.
Further developments introduced the Art Clay Slow Dry, a clay with a longer working time. Art Clay 650 and Art Clay 650 Slow Dry soon followed; both clays can be fired at , allowing the user to combine the clay with glass and sterling silver, which are affected negatively by the higher temperatures needed to fire the first generation clays. AIDA also manufacturers Oil Paste, a product used only on fired metal clay or milled fine silver, and Overlay Paste, which is designed for drawing designs on glass and porcelain.
In 2006 AIDA introduced the Art Clay Gold Paste, a more economical way to work with gold. The paste is painted onto the fired silver clay, then refired in a kiln, or with a torch or gas stove. When fired, it bonds with the silver, giving a 22-carat gold accent. The same year also saw Art Clay Slow Tarnish introduced, a clay that tarnishes less rapidly than the other metal clays.
Base metal clays
Lump metal clay in bronze was introduced in 2008 by Metal Adventures Inc. (MA) and in 2009 by Prometheus. Lump metal clays in copper were introduced in 2009 by Metal Adventures Inc. and Aida. Because of the lower cost, the bronze and copper metal clays are used by artists more often than the gold and silver metal clays in the American market place. The actual creation time of a bronze or copper piece is also far greater than that of its silver counterpart. Base metal clays, such as bronze, copper, and steel metal clays are best fired in the absence of oxygen to eliminate the oxidation of the metal by atmospheric oxygen. A means to accomplish this –- to place the pieces in activated carbon inside a container – was discovered and developed by Bill Struve. RioGrande.com owns the rights to BRONZclay (Original and FastFire), COPPRclay and any other (MA) base metal clays. Rio has discontinued production.
Powders
Metal clays are also available as dry powders, which are hydrated with water and kneaded to attain a clay consistency. One advantage of the powders is their unlimited shelf life. The first silver clay in powder form was released in 2006 as Silver Smiths' Metal Clay Powder. In the following years, Hadar Jacobson and Goldie World released several base metal clay variations containing copper, brass, and even steel.
Firing methods
Metal clay can be fired by a variety of methods. The three most common are:
Electric kiln: Kilns designed for metal clay are programmable and easy to use. All clay types can be fired by this method. This is the only way paper type and copper clays can be fired.
Stove top: Either natural or bottled gas can be used, provided it reaches the temperature necessary to sinter. Color of the piece determines the firing time.
Torch: Any type of hand-held torch will work as long as it is hot enough to sinter the metal. Color determines firing time.
See also
References
Ceramic materials
Crafts
Fashion accessories
Jewellery making
Metalworking
Natural materials
Phyllosilicates
Products introduced in 1990
Modelling clay
Clay | Metal clay | [
"Physics",
"Engineering"
] | 1,577 | [
"Natural materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
2,277,871 | https://en.wikipedia.org/wiki/Metastability%20%28electronics%29 | In electronics, metastability is the ability of a digital electronic system to persist for an unbounded time in an unstable equilibrium or metastable state.
In digital logic circuits, a digital signal is required to be within certain voltage or current limits to represent a '0' or '1' logic level for correct circuit operation; if the signal is within a forbidden intermediate range it may cause faulty behavior in logic gates the signal is applied to. In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation. As a result, the circuit can act in unpredictable ways, and may lead to a system failure, sometimes referred to as a "glitch". Metastability is an instance of the Buridan's ass paradox.
Metastable states are inherent features of asynchronous digital systems, and of systems with more than one independent clock domain. In self-timed asynchronous systems, arbiters are designed to allow the system to proceed only after the metastability has resolved, so the metastability is a normal condition, not an error condition.
In synchronous systems with asynchronous inputs, synchronizers are designed to make the probability of a synchronization failure acceptably small.
Metastable states are avoidable in fully synchronous systems when the input setup and hold time requirements on flip-flops are satisfied.
Example
A simple example of metastability can be found in an SR NOR latch, when Set and Reset inputs are true (R=1 and S=1) and then both transition to false (R=0 and S=0) at about the same time. Both outputs Q and Q̄ are initially held at 0 by the simultaneous Set and Reset inputs. After both Set and Reset inputs change to false, the flip-flop will (eventually) end up in one of two stable states, with one of Q and Q̄ true and the other false. The final state will depend on which of R or S returns to zero first, chronologically, but if both transition at about the same time, the resulting metastability, with intermediate or oscillatory output levels, can take arbitrarily long to resolve to a stable state.
Arbiters
In electronics, an arbiter is a circuit designed to determine which of several signals arrive first. Arbiters are used in asynchronous circuits to order computational activities for shared resources to prevent concurrent incorrect operations. Arbiters are used on the inputs of fully synchronous systems, and also between clock domains, as synchronizers for input signals. Although they can minimize the occurrence of metastability to very low probabilities, all arbiters nevertheless have metastable states, which are unavoidable at the boundaries of regions of the input state space resulting in different outputs.
Synchronous circuits
Synchronous circuit design techniques make digital circuits that are resistant to the failure modes that can be caused by metastability. A clock domain is defined as a group of flip-flops with a common clock. Such architectures can form a circuit guaranteed free of metastability (below a certain maximum clock frequency, above which first metastability, then outright failure occur), assuming a low-skew common clock. However, even then, if the system has a dependence on any continuous inputs then these are likely to be vulnerable to metastable states.
Synchronizer circuits are used to reduce the likelihood of metastability when receiving an asynchronous input or when transferring signals between different clock domains. Synchronizers may take the form of a cascade of D flip-flops (e.g. the shift register in Figure 3). Although each flip-flop stage adds an additional clock cycle of latency to the input data stream, each stage provides an opportunity to resolve metastability. Such synchronizers can be engineered to reduce metastability to a tolerable rate.
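The benefit of each added settling period is commonly estimated with an exponential mean-time-between-failures (MTBF) model. The following Python sketch uses that standard formula; the parameter names and example values are illustrative only, since real numbers are device-specific and come from characterization data:

import math

def synchronizer_mtbf(f_clk, f_data, t_slack, tau, t_w):
    # f_clk:   receiving clock frequency (Hz)
    # f_data:  rate of asynchronous input transitions (Hz)
    # t_slack: settling time available before the next stage samples (s)
    # tau:     metastability resolution time constant of the flip-flop (s)
    # t_w:     metastability capture "window" of the flip-flop (s)
    # MTBF = exp(t_slack / tau) / (t_w * f_clk * f_data)
    return math.exp(t_slack / tau) / (t_w * f_clk * f_data)

# Illustrative numbers: a 100 MHz clock, 1 MHz of input activity,
# and 5 ns of settling time left in the clock period.
print(synchronizer_mtbf(f_clk=100e6, f_data=1e6,
                        t_slack=5e-9, tau=50e-12, t_w=20e-12))

Adding a second flip-flop stage adds roughly a full clock period to the available settling time, which multiplies the estimated MTBF by an enormous exponential factor; this is why two- or three-stage synchronizers are the usual practice.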
Schmitt triggers can also be used to reduce the likelihood of metastability, but as the researcher Chaney demonstrated in 1979, even Schmitt triggers may become metastable. He further argued that it is not possible to entirely remove the possibility of metastability from unsynchronized inputs within finite time and that "there is a great deal of theoretical and experimental evidence that a region of anomalous behavior exists for every device that has two stable states." In the face of this inevitability, hardware can only reduce the probability of metastability, and systems can try to gracefully handle the occasional metastable event.
Failure modes
Although metastability is well understood and architectural techniques to control it are known, it persists as a failure mode in equipment.
Serious computer and digital hardware bugs caused by metastability have a fascinating social history. Many engineers have refused to believe that a bistable device can enter into a state that is neither true nor false and has a positive probability that it will remain unresolved for any given period of time, albeit with exponentially decreasing probability over time. However, metastability is an inevitable result of any attempt to map a continuous domain to a discrete one. At the boundaries in the continuous domain between regions which map to different discrete outputs, points arbitrarily close together in the continuous domain map to different outputs, making a decision as to which output to select a difficult and potentially lengthy process. If the inputs to an arbiter or flip-flop arrive almost simultaneously, the circuit most likely will traverse a point of metastability. Metastability remains poorly understood in some circles, and various engineers have proposed their own circuits said to solve or filter out the metastability; typically these circuits simply shift the occurrence of metastability from one place to another. Chips using multiple clock sources are often tested with tester clocks that have fixed phase relationships, not the independent clocks drifting past each other that will be experienced during operation. This usually prevents the metastable failure mode that will occur in the field from being seen or reported. Proper testing for metastability frequently employs clocks of slightly different frequencies and checks for correct circuit operation.
See also
Analog-to-digital converter
Buridan's ass
Asynchronous CPU
Ground bounce
Tri-state logic
References
External links
Metastability Performance of Clocked FIFOs
The 'Asynchronous' Bibliography
Asynchronous Logic
Efficient Self-Timed Interfaces for Crossing Clock Domains
Dr. Howard Johnson: Deliberately inducing the metastable state
Detailed explanations and Synchronizer designs
Metastability Bibliography
Clock Domain Crossing: Closing the Loop on Clock Domain Functional Implementation Problems, Cadence Design Systems
Stephenson, Jennifer. Understanding Metastability in FPGAs. Altera Corporation white paper. July 2009.
Bahukhandi, Ashirwad. Metastability. Lecture Notes for Advanced Logic Design and Switching Theory. January 2002.
Cummings, Clifford E. Synthesis and Scripting Techniques for Designing Multi-Asynchronous Clock Designs. SNUG 2001.
Haseloff, Eilhard. Metastable Response in 5-V Logic Circuits. Texas Instruments Report. February 1997.
Nystrom, Mika, and Alain J. Martin. Crossing the Synchronous Asynchronous Divide. WCED 2002.
Patil, Girish, IFV Division, Cadence Design Systems. Clock Synchronization Issues and Static Verification Techniques. Cadence Technical Conference 2004.
Smith, Michael John Sebastian. Application-Specific Integrated Circuits. Addison Wesley Longman, 1997, Chapter 6.4.1.
Stein, Mike. Crossing the abyss: asynchronous signals in a synchronous world EDN design feature. July 24, 2003.
Cox, Jerome R. and Engel, George L., Blendics, Inc. White Paper "Metastability and Fatal System Errors"] Nov. 2010
Adam Taylor, "Wrapping One's Brain Around Metastability", EE Times, 2013-11-20
Electrical engineering
Digital electronics | Metastability (electronics) | [
"Engineering"
] | 1,665 | [
"Electrical engineering",
"Electronic engineering",
"Digital electronics"
] |
2,278,116 | https://en.wikipedia.org/wiki/Special%20right%20triangle | A special right triangle is a right triangle with some regular feature that makes calculations on the triangle easier, or for which simple formulas exist. For example, a right triangle may have angles that form simple relationships, such as 45°–45°–90°. This is called an "angle-based" right triangle. A "side-based" right triangle is one in which the lengths of the sides form ratios of whole numbers, such as 3 : 4 : 5, or of other special numbers such as the golden ratio. Knowing the relationships of the angles or ratios of sides of these special right triangles allows one to quickly calculate various lengths in geometric problems without resorting to more advanced methods.
Angle-based
Angle-based special right triangles are specified by the relationships of the angles of which the triangle is composed. The angles of these triangles are such that the larger (right) angle, which is 90 degrees or π/2 radians, is equal to the sum of the other two angles.
The side lengths are generally deduced from the basis of the unit circle or other geometric methods. This approach may be used to rapidly reproduce the values of trigonometric functions for the angles 30°, 45°, and 60°.
Special triangles are used to aid in calculating common trigonometric functions, as below:
The 45°–45°–90° triangle, the 30°–60°–90° triangle, and the equilateral/equiangular (60°–60°–60°) triangle are the three Möbius triangles in the plane, meaning that they tessellate the plane via reflections in their sides; see Triangle group.
45°–45°–90° triangle
In plane geometry, dividing a square along its diagonal results in two isosceles right triangles, each with one right angle (90°, or π/2 radians) and two other congruent angles each measuring half of a right angle (45°, or π/4 radians). The sides in this triangle are in the ratio 1 : 1 : √2, which follows immediately from the Pythagorean theorem.
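With legs of length 1, the Pythagorean theorem gives the hypotenuse directly:

c = \sqrt{1^2 + 1^2} = \sqrt{2}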
Of all right triangles, such 45°–45°–90° triangles have the smallest ratio of the hypotenuse to the sum of the legs, namely √2/2, and the greatest ratio of the altitude from the hypotenuse to the sum of the legs, namely √2/4.
Triangles with these angles are the only possible right triangles that are also isosceles triangles in Euclidean geometry. However, in spherical geometry and hyperbolic geometry, there are infinitely many different shapes of right isosceles triangles.
30°–60°–90° triangle
This is a triangle whose three angles are in the ratio 1 : 2 : 3 and respectively measure 30° (π/6), 60° (π/3), and 90° (π/2). The sides are in the ratio 1 : √3 : 2.
The proof of this fact is clear using trigonometry. The geometric proof is:
Draw an equilateral triangle ABC with side length 2 and with point D as the midpoint of segment BC. Draw an altitude line from A to D. Then ABD is a 30°–60°–90° triangle with hypotenuse of length 2, and base BD of length 1.
The fact that the remaining leg AD has length √3 follows immediately from the Pythagorean theorem.
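In symbols, applying the Pythagorean theorem to triangle ABD:

AD = \sqrt{AB^2 - BD^2} = \sqrt{2^2 - 1^2} = \sqrt{3}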
The 30°–60°–90° triangle is the only right triangle whose angles are in an arithmetic progression. The proof of this fact is simple and follows from the fact that if α, α + δ, α + 2δ are the angles in the progression, then the sum of the angles 3α + 3δ = 180°. After dividing by 3, the angle α + δ must be 60°. The right angle is 90°, leaving the remaining angle to be 30°.
Side-based
Right triangles whose sides are of integer lengths, with the sides collectively known as Pythagorean triples, possess angles that cannot all be rational numbers of degrees. (This follows from Niven's theorem.) They are most useful in that they may be easily remembered and any multiple of the sides produces the same relationship. Using Euclid's formula for generating Pythagorean triples, the sides must be in the ratio
m² − n² : 2mn : m² + n²
where m and n are any positive integers such that m > n.
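For illustration, a minimal Python sketch that enumerates the primitive triples produced by Euclid's formula (the function name and the bound on the legs are arbitrary choices):

from math import gcd

def primitive_triples(limit):
    # Euclid's formula: a = m^2 - n^2, b = 2*m*n, c = m^2 + n^2,
    # with m > n > 0, m and n coprime and of opposite parity.
    triples = []
    m = 2
    while 2 * m < limit:              # b = 2*m*n >= 2*m, so larger m cannot fit
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if a < limit and b < limit:
                    triples.append(tuple(sorted((a, b, c))))
        m += 1
    return sorted(triples)

print(primitive_triples(41))
# [(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (12, 35, 37), (20, 21, 29)]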
Common Pythagorean triples
There are several Pythagorean triples which are well-known, including those with sides in the ratios:
3 : 4 : 5
5 : 12 : 13
8 : 15 : 17
7 : 24 : 25
9 : 40 : 41
The 3 : 4 : 5 triangles are the only right triangles with edges in arithmetic progression. Triangles based on Pythagorean triples are Heronian, meaning they have integer area as well as integer sides.
The possible use of the 3 : 4 : 5 triangle in Ancient Egypt, with the supposed use of a knotted rope to lay out such a triangle, and the question whether Pythagoras' theorem was known at that time, have been much debated. It was first conjectured by the historian Moritz Cantor in 1882. It is known that right angles were laid out accurately in Ancient Egypt; that their surveyors did use ropes for measurement; that Plutarch recorded in Isis and Osiris (around 100 AD) that the Egyptians admired the 3 : 4 : 5 triangle; and that the Berlin Papyrus 6619 from the Middle Kingdom of Egypt (before 1700 BC) stated that "the area of a square of 100 is equal to that of two smaller squares. The side of one is ½ + ¼ the side of the other." The historian of mathematics Roger L. Cooke observes that "It is hard to imagine anyone being interested in such conditions without knowing the Pythagorean theorem." Against this, Cooke notes that no Egyptian text before 300 BC actually mentions the use of the theorem to find the length of a triangle's sides, and that there are simpler ways to construct a right angle. Cooke concludes that Cantor's conjecture remains uncertain: he guesses that the Ancient Egyptians probably did know the Pythagorean theorem, but that "there is no evidence that they used it to construct right angles".
The following are all the Pythagorean triple ratios expressed in lowest form (beyond the five smallest ones in lowest form in the list above) with both non-hypotenuse sides less than 256:
11 : 60 : 61
12 : 35 : 37
13 : 84 : 85
15 : 112 : 113
16 : 63 : 65
17 : 144 : 145
19 : 180 : 181
20 : 21 : 29
20 : 99 : 101
21 : 220 : 221
Almost-isosceles Pythagorean triples
Isosceles right-angled triangles cannot have sides with integer values, because the ratio of the hypotenuse to either other side is √2, and √2 cannot be expressed as a ratio of two integers. However, infinitely many almost-isosceles right triangles do exist. These are right-angled triangles with integer sides for which the lengths of the non-hypotenuse edges differ by one. Such almost-isosceles right-angled triangles can be obtained recursively,
a_0 = 1, b_0 = 2
a_n = 2b_(n−1) + a_(n−1)
b_n = 2a_n + b_(n−1)
where a_n is the length of the hypotenuse, n = 1, 2, 3, ....
Equivalently, {x, y} are solutions to the Pell equation x² − 2y² = −1, with the hypotenuse y being the odd terms of the Pell numbers 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, .... The smallest Pythagorean triples resulting are:
3 : 4 : 5
20 : 21 : 29
119 : 120 : 169
696 : 697 : 985
Alternatively, the same triangles can be derived from the square triangular numbers.
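A minimal Python sketch of the recursion above, recovering the two legs from each hypotenuse (the function name is illustrative):

import math

def almost_isosceles(count):
    # a_n = 2*b_(n-1) + a_(n-1) is the hypotenuse; b_n = 2*a_n + b_(n-1).
    a, b = 1, 2
    triples = []
    for _ in range(count):
        a = 2 * b + a
        b = 2 * a + b
        # The legs differ by one: solve x^2 + (x + 1)^2 = a^2 for x.
        x = (math.isqrt(2 * a * a - 1) - 1) // 2
        triples.append((x, x + 1, a))
    return triples

print(almost_isosceles(4))
# [(3, 4, 5), (20, 21, 29), (119, 120, 169), (696, 697, 985)]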
Arithmetic and geometric progressions
The Kepler triangle is a right triangle whose sides are in geometric progression. If the sides are formed from the geometric progression a, ar, ar² then its common ratio r is given by r = √φ, where φ is the golden ratio. Its sides are therefore in the ratio 1 : √φ : φ. Thus, the shape of the Kepler triangle is uniquely determined (up to a scale factor) by the requirement that its sides be in geometric progression.
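Because the golden ratio satisfies φ² = φ + 1, the Pythagorean relation for the sides 1, √φ, φ is immediate:

1^2 + (\sqrt{\varphi})^2 = 1 + \varphi = \varphi^2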
The 3–4–5 triangle is the unique right triangle (up to scaling) whose sides are in arithmetic progression.
Sides of regular polygons
Let a = 1/φ be the side length of a regular decagon inscribed in the unit circle, where φ is the golden ratio. Let b be the side length of a regular hexagon in the unit circle, and let c be the side length of a regular pentagon in the unit circle. Then a² + b² = c², so these three lengths form the sides of a right triangle. The same triangle forms half of a golden rectangle. It may also be found within a regular icosahedron of side length c: the shortest line segment from any vertex to the plane of its five neighbors has length a, and the endpoints of this line segment together with any of the neighbors of the vertex form the vertices of a right triangle with sides a, b, and c.
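Using the chord length 2 sin(π/n) for the side of a regular n-gon inscribed in the unit circle, the relation can be checked numerically:

a = 2\sin 18^\circ \approx 0.6180,\qquad b = 2\sin 30^\circ = 1,\qquad c = 2\sin 36^\circ \approx 1.1756
a^2 + b^2 \approx 0.3820 + 1.0000 = 1.3820 \approx c^2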
See also
Ailles rectangle, combining several special right triangles
Integer triangle
Spiral of Theodorus
References
External links
3 : 4 : 5 triangle
30–60–90 triangle
45–45–90 triangle with interactive animations
Euclidean plane geometry
Types of triangles | Special right triangle | [
"Mathematics"
] | 2,499 | [
"Planes (geometry)",
"Euclidean plane geometry"
] |
2,278,435 | https://en.wikipedia.org/wiki/Valnoctamide | Valnoctamide (INN, USAN) has been used in France as a sedative-hypnotic since 1964. It is a structural isomer of valpromide, a valproic acid prodrug; unlike valpromide, however, valnoctamide is not transformed into its homologous acid, valnoctic acid, in vivo.
Indications
In addition to being a sedative, valnoctamide has been investigated for use in epilepsy.
It was studied for neuropathic pain in 2005 by Winkler et al., with good results: it had minimal effects on motor coordination and alertness at effective doses, and appeared to be equally effective as gabapentin.
RH Belmaker, Yuly Bersudsky and Alex Mishory started a clinical trial of valnoctamide for prophylaxis of mania in lieu of the much more teratogenic valproic acid or its salts.
Side effects
The side effects of valnoctamide are mostly minor and include somnolence and the slight motor impairments mentioned above.
Interactions
Valnoctamide is known to increase the serum levels of carbamazepine-10,11-epoxide, the active metabolite of carbamazepine, through inhibition of epoxide hydrolase, sometimes to toxic levels.
Chemistry
Valnoctamide is a racemic compound with four stereoisomers, all of which were shown to be more effective than valproic acid in animal models of epilepsy, and one of which ((2S,3S)-valnoctamide) was considered to be a good candidate by Isoherranen, et al. for an anticonvulsant in August 2003.
Butabarbital can be hydrolyzed to valnoctamide.
References
Carboxamides
Anticonvulsants
GABA analogues
Mood stabilizers
GABA transaminase inhibitors
Histone deacetylase inhibitors
Prodrugs | Valnoctamide | [
"Chemistry"
] | 404 | [
"Chemicals in medicine",
"Prodrugs"
] |
2,278,914 | https://en.wikipedia.org/wiki/Uniform%20access%20principle | The uniform access principle of computer programming was put forth by Bertrand Meyer (originally in his book Object-Oriented Software Construction). It states "All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation." This principle applies generally to the syntax of object-oriented programming languages. In simpler form, it states that there should be no syntactical difference between working with an attribute, pre-computed property, or method/query of an object.
While most examples focus on the "read" aspect of the principle (i.e., retrieving a value), Meyer shows that the "write" implications (i.e., modifying a value) of the principle are harder to deal with in his monthly column on the Eiffel programming language official website.
Explanation
The problem being addressed by Meyer involves the maintenance of large software projects or software libraries. Sometimes when developing or maintaining software it is necessary, after much code is in place, to change a class or object in a way that transforms what was simply an attribute access into a method call. Programming languages often use different syntax for attribute access and for invoking a method (e.g., object.attribute versus object.method()). The syntax change would require, in popular programming languages of the day, changing the source code in all the places where the attribute was used. This might require changing source code in many different locations throughout a very large volume of source code. Or worse, if the change is in an object library used by hundreds of customers, each of those customers would have to find and change all the places the attribute was used in their own code and recompile their programs.
Going the reverse way (from method to simple attribute) really was not a problem, as one can always just keep the function and have it simply return the attribute value.
Meyer recognized the need for software developers to write code in such a way as to minimize or eliminate cascading changes in code that result from changes which convert an object attribute to a method call or vice versa. For this he developed the Uniform Access Principle.
Many programming languages do not strictly support the UAP but do support forms of it. Properties, which are provided in a number of programming languages, address the problem Meyer was addressing with his UAP in a different way. Instead of providing a single uniform notation, properties provide a way to invoke a method of an object while using the same notation as is used for attribute access. The separate method invocation syntax is still available.
UAP example
If the language uses method invocation syntax, it may look something like this.
// Assume print displays the variable passed to it, with or without parens
// Set Foo's attribute 'bar' to value 5.
Foo.bar(5)
print Foo.bar()
When executed, print Foo.bar() should display:
5
Whether Foo.bar(5) invokes a function or simply sets an attribute is hidden from the caller.
Likewise, whether Foo.bar() simply retrieves the value of the attribute or invokes a function
to compute the value returned is an implementation detail hidden from the caller.
If the language uses attribute syntax, it may look like this.
Foo.bar = 5
print Foo.bar
Again, whether a method is invoked or the value is simply assigned to an attribute is hidden
from the calling code.
Problems
However, UAP itself can lead to problems, if used in places where the differences between access methods are not negligible, such as when the returned value is expensive to compute or will trigger cache operations.
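For instance, the Python sketch below (the names Report and summary are hypothetical, not from the text) shows how an attribute-like property can hide an expensive, cached computation from the caller — exactly the kind of hidden cost the principle can obscure:

import time
from functools import cached_property

class Report:
    def __init__(self, rows):
        self.rows = rows

    @cached_property
    def summary(self):
        # Looks like a plain attribute to callers, but the first access runs
        # an expensive computation whose result is then cached.
        time.sleep(2)  # stand-in for the expensive work
        return sum(self.rows) / len(self.rows)

r = Report([1, 2, 3, 4])
print(r.summary)  # slow: computed on first access
print(r.summary)  # fast: served from the cache, indistinguishable to the caller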
Language examples
Python
Python properties may be used to allow a method
to be invoked with the same syntax as accessing an attribute. Whereas Meyer's UAP would have
a single notation for both attribute access and method invocation (method invocation syntax),
a language with support for properties still supports separate notations for attribute
and method access. Properties allow the attribute notation to be used, but to hide the
fact that a method is being invoked instead of simply retrieving or setting a value.
As such, Python leaves the option of adherence to UAP up to the individual programmer. The built-in property function provides a simple way to expose a method through attribute access syntax, thus abstracting away the syntactical difference between method invocations and attribute accesses.
In Python, we may have code that accesses an Egg object, which could be defined such that weight and color are simple attributes, as in the following:
"""
>>> egg = Egg(4.0, "white")
>>> egg.color = "green"
>>> print(egg)
Egg(4.0, green)
"""
class Egg:
def __init__(self, weight, color) -> None:
self.weight = weight
self.color = color
def __str__(self) -> str:
return f"{__class__.__name__}({self.weight}, {self.color})"
Or the Egg object could use properties, and invoke getter and setter methods instead
# ...(snip)...
class Egg:
    def __init__(self, weight_oz: float, color_name: str) -> None:
self.weight = weight_oz
self.color = color_name
@property
def color(self) -> str:
'''Color of the Egg'''
return to_color_str(self._color_rgb)
@color.setter
def color(self, color_name: str) -> None:
self._color_rgb = to_rgb(color_name)
@property
def weight(self) -> float:
'''Weight in Ounces'''
return self._weight_gram / 29.3
@weight.setter
def weight(self, weight_oz: float) -> None:
self._weight_gram = 29.3 * weight_oz
# ...(snip)...
Regardless of which way Egg is defined, the calling code can remain the same. The implementation of Egg can switch from one form to the other without affecting code that uses the Egg class. Languages which implement the UAP have this property as well.
Ruby
Consider the following
y = Egg.new("Green")
y.color = "White"
puts y.color
Now the Egg class could be defined as follows
class Egg
attr_accessor :color
def initialize(color)
@color = color
end
end
The above initial code segment works fine with Egg defined as such. The Egg
class could also be defined as below, where color is instead backed by a pair of
methods; the calling code would still work, unchanged.
class Egg
def initialize(color)
@rgb_color = to_rgb(color)
end
def color
to_color_name(@rgb_color)
end
def color=(color)
@rgb_color = to_rgb(color)
end
private
def to_rgb(color_name)
.....
end
def to_color_name(color)
....
end
end
Note how even though color looks like an attribute in one case and a pair of methods
in the next, the interface to the class remains the same. The person maintaining the Egg class can switch from one form to the other without fear of breaking any caller's code.
Ruby follows the revised UAP: attr_accessor :color only acts as syntactic sugar for generating getter/setter methods for color. There is no way in Ruby to retrieve an instance variable from an object without calling a method on it.
Strictly speaking, Ruby does not follow Meyer's original UAP in that the syntax for accessing an attribute is different from the syntax for invoking a method. But here, the access for an attribute will always actually be through a function which is often automatically generated. So in essence, either type of access invokes a function and the language does follow Meyer's revised Uniform Access Principle.
C#
The C# language supports class properties, which provide a means to define get and set operations (getters and setters) for a member variable. The syntax to access or modify the property is the same as for accessing any other class member variable, but the actual implementation for doing so can be defined as either a simple read/write access or as functional code.
public class Foo
{
private string _name;
// Property
public int Size
{
get; // Getter
set; // Setter
}
// Property
public string Name
{
get { return _name; } // Getter
set { _name = value; } // Setter
}
}
In the example above, the class Foo contains two properties, Size and Name. The Size property is an integer that can be read (get) and written (set). Similarly, the Name property is a string that can also be read and modified, but its value is stored in a separate (private) class variable, _name.
Omitting the set operation in a property definition makes the property read-only, while omitting the get operation makes it write-only.
Use of the properties employs the UAP, as shown in the code below.
public Foo CreateFoo(int size, string name)
{
var foo = new Foo();
foo.Size = size; // Property setter
foo.Name = name; // Property setter
return foo;
}
C++
C++ has neither the UAP nor properties. When an object is changed so that an attribute (such as color) becomes a pair of accessor functions (a getter and a setter), any place in the code that uses an instance of the object and either sets or gets the attribute value must be changed to invoke one of those functions. Using templates and operator overloading, it is possible to fake properties, but this is more complex than in languages which directly support properties. This complicates maintenance of C++ programs. Distributed libraries of C++ objects must be careful about how they provide access to member data.
JavaScript
JavaScript has had support for computed properties since 2009.
References
Articles with example Python (programming language) code
Software design
Programming paradigms
Programming principles | Uniform access principle | [
"Engineering"
] | 2,136 | [
"Design",
"Software design"
] |
2,279,144 | https://en.wikipedia.org/wiki/Microdialysis | Microdialysis is a minimally-invasive sampling technique that is used for continuous measurement of free, unbound analyte concentrations in the extracellular fluid of virtually any tissue. Analytes may include endogenous molecules (e.g. neurotransmitter, hormones, glucose, etc.) to assess their biochemical functions in the body, or exogenous compounds (e.g. pharmaceuticals) to determine their distribution within the body. The microdialysis technique requires the insertion of a small microdialysis catheter (also referred to as microdialysis probe) into the tissue of interest. The microdialysis probe is designed to mimic a blood capillary and consists of a shaft with a semipermeable hollow fiber membrane at its tip, which is connected to inlet and outlet tubing. The probe is continuously perfused with an aqueous solution (perfusate) that closely resembles the (ionic) composition of the surrounding tissue fluid at a low flow rate of approximately 0.1-5μL/min. Once inserted into the tissue or (body)fluid of interest, small solutes can cross the semipermeable membrane by passive diffusion. The direction of the analyte flow is determined by the respective concentration gradient and allows the usage of microdialysis probes as sampling as well as delivery tools. The solution leaving the probe (dialysate) is collected at certain time intervals for analysis.
History
The microdialysis principle was first employed in the early 1960s, when push-pull cannulas and dialysis sacs were implanted into animal tissues, especially into rodent brains, to directly study the tissues' biochemistry. While these techniques had a number of experimental drawbacks, such as the limited number of samples per animal or little to no time resolution, the invention of continuously perfused dialytrodes in 1972 helped to overcome some of these limitations. Further improvement of the dialytrode concept resulted in the invention of the "hollow fiber", a tubular semipermeable membrane with a diameter of ~200-300μm, in 1974. Today's most prevalent shape, the needle probe, consists of a shaft with a hollow fiber at its tip and can be inserted by means of a guide cannula into the brain and other tissues. An alternative method, open flow micro-perfusion (OFM), replaces the membrane with macroscopic openings, which facilitates sampling of lipophilic and hydrophilic compounds, protein bound and unbound drugs, neurotransmitters, peptides and proteins, antibodies, nanoparticles and nanocarriers, enzymes and vesicles.
Microdialysis probes
There are a variety of probes with different membrane and shaft length combinations available. The molecular weight cutoff of commercially available microdialysis probes covers a wide range of approximately 6–100 kDa, though membranes with cutoffs as high as 1 MDa are also available. While water-soluble compounds generally diffuse freely across the microdialysis membrane, the situation is not as clear for highly lipophilic analytes, where both successful (e.g. corticosteroids) and unsuccessful microdialysis experiments (e.g. estradiol, fusidic acid) have been reported. However, the recovery of water-soluble compounds usually decreases rapidly if the molecular weight of the analyte exceeds 25% of the membrane's molecular weight cutoff.
Recovery and calibration methods
Due to the constant perfusion of the microdialysis probe with fresh perfusate, a total equilibrium cannot be established. This results in dialysate concentrations that are lower than those measured at the distant sampling site. In order to correlate concentrations measured in the dialysate with those present at the distant sampling site, a calibration factor (recovery) is needed. The recovery can be determined at steady-state using the constant rate of analyte exchange across the microdialysis membrane. The rate at which an analyte is exchanged across the semipermeable membrane is generally expressed as the analyte’s extraction efficiency. The extraction efficiency is defined as the ratio between the loss/gain of analyte during its passage through the probe (Cin−Cout) and the difference in concentration between perfusate and distant sampling site (Cin−Csample).
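As a minimal sketch (the function name and numbers are illustrative, not from the text), the extraction efficiency defined above can be computed directly; with an analyte-free perfusate (Cin = 0) it reduces to the relative recovery by gain, and with an analyte-free sampling site (Csample = 0) it reduces to the retrodialysis loss (Cin−Cout)/Cin described below:

def extraction_efficiency(c_in, c_out, c_sample):
    # (loss or gain of analyte across the probe) divided by
    # (difference in concentration between perfusate and distant sampling site)
    return (c_in - c_out) / (c_in - c_sample)

# Gain mode: blank perfusate sampling from tissue at 10 concentration units
print(extraction_efficiency(0.0, 3.0, 10.0))   # 0.3, i.e. 30% relative recovery
# Loss mode (retrodialysis): drug-containing perfusate, analyte-free sampling site
print(extraction_efficiency(10.0, 7.0, 0.0))   # 0.3 as well, at steady state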
In theory, the extraction efficiency of a microdialysis probe can be determined by: 1) changing the drug concentrations while keeping the flow rate constant or 2) changing the flow rate while keeping the respective drug concentrations constant. At steady-state, the same extraction efficiency value is obtained, no matter if the analyte is enriched or depleted in the perfusate. Microdialysis probes can consequently be calibrated by either measuring the loss of analyte using drug-containing perfusate or the gain of analyte using drug-containing sample solutions. To date, the most frequently used calibration methods are the low-flow-rate method, the no-net-flux method, the dynamic (extended) no-net-flux method, and the retrodialysis method. The proper selection of an appropriate calibration method is critically important for the success of a microdialysis experiment. Supportive in vitro experiments prior to the use in animals or humans are therefore recommended. In addition, the recovery determined in vitro may differ from the recovery in humans. Its actual value therefore needs to be determined in every in vivo experiment.
Low-flow-rate method
The low-flow-rate method is based on the fact that the extraction efficiency is dependent on the flow-rate. At high flow-rates, the amount of drug diffusing from the sampling site into the dialysate per unit time is smaller (low extraction efficiency) than at lower flow-rates (high extraction efficiency). At a flow-rate of zero, a total equilibrium between these two sites is established (Cout = Csample). This concept is applied for the (low-)flow-rate method, where the probe is perfused with blank perfusate at different flow-rates. Concentration at the sampling site can be determined by plotting the extraction ratios against the corresponding flow-rates and extrapolating to zero-flow. The low-flow-rate method is limited by the fact that calibration times may be rather long before a sufficient sample volume has been collected.
No-net-flux-method
During calibration with the no-net-flux-method, the microdialysis probe is perfused with at least four different concentrations of the analyte of interest (Cin) and steady-state concentrations of the analyte leaving the probe are measured in the dialysate (Cout). The recovery for this method can be determined by plotting Cout−Cin over Cin and computing the slope of the regression line. If analyte concentrations in the perfusate are equal to concentrations at the sampling site, no-net flux occurs. Respective concentrations at the no-net-flux point are represented by the x-intercept of the regression line. The strength of this method is that, at steady-state, no assumptions about the behaviour of the compound in the vicinity of the probe have to be made, since equilibrium exists at a specific time and place. However, under transient conditions (e.g. after drug challenge), the probe recovery may be altered resulting in biased estimates of the concentrations at the sampling site. To overcome this limitation, several approaches have been developed that are also applicable under non-steady-state conditions. One of these approaches is the dynamic no-net-flux method.
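A short numerical sketch of this regression (the data points are invented for illustration; numpy is assumed) might look as follows:

import numpy as np

# Hypothetical no-net-flux calibration data: perfusate concentrations (c_in)
# and the corresponding steady-state dialysate concentrations (c_out).
c_in = np.array([0.0, 2.0, 4.0, 8.0])
c_out = np.array([1.5, 2.5, 3.5, 5.5])

# Regress (c_out - c_in) against c_in: the magnitude of the slope is the
# recovery, and the x-intercept is the concentration at the sampling site.
slope, intercept = np.polyfit(c_in, c_out - c_in, 1)
recovery = -slope
c_sample = -intercept / slope
print(recovery, c_sample)   # 0.5 and 3.0 for this made-up data set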
Dynamic no-net-flux method
While a single subject/animal is perfused with multiple concentrations during the no-net-flux method, multiple subjects are perfused with a single concentration during the dynamic no-net-flux (DNNF) method. Data from the different subjects/animals is then combined at each time point for regression analysis allowing determination of the recovery over time. The design of the DNNF calibration method has proven very useful for studies that evaluate the response of endogenous compounds, such as neurotransmitters, to drug challenge.
Retrodialysis
During retrodialysis, the microdialysis probe is perfused with an analyte-containing solution and the disappearance of drug from the probe is monitored. The recovery for this method can be computed as the ratio of drug lost during passage (Cin−Cout) and drug entering the microdialysis probe (Cin). In principle, retrodialysis can be performed using either the analyte itself (retrodialysis by drug) or a reference compound (retrodialysis by calibrator) that closely resembles both the physiochemical and the biological properties of the analyte. Despite the fact that retrodialysis by drug cannot be used for endogenous compounds as it requires absence of analyte from the sampling site, this calibration method is most commonly used for exogenous compounds in clinical settings.
Applications
The microdialysis technique has undergone much development since its first use in 1972, when it was first employed to monitor concentrations of endogenous biomolecules in the brain. Today's area of application has expanded to monitoring free concentrations of endogenous as well as exogenous compounds in virtually any tissue. Although microdialysis is still primarily used in preclinical animal studies (e.g. laboratory rodents, dogs, sheep, pigs), it is now increasingly employed in humans to monitor free, unbound drug tissue concentrations as well as interstitial concentrations of regulatory cytokines and metabolites in response to homeostatic perturbations such as feeding and/or exercise.
When employed in brain research, microdialysis is commonly used to measure neurotransmitters (e.g. dopamine, serotonin, norepinephrine, acetylcholine, glutamate, GABA) and their metabolites, as well as small neuromodulators (e.g. cAMP, cGMP, NO), amino acids (e.g. glycine, cysteine, tyrosine), and energy substrates (e.g. glucose, lactate, pyruvate). Exogenous drugs to be analyzed by microdialysis include new antidepressants, antipsychotics, as well as antibiotics and many other drugs that have their pharmacological effect site in the brain. The first non-metabolite to be analyzed by microdialysis in vivo in the human brain was rifampicin.
Applications in other organs include the skin (assessment of bioavailability and bioequivalence of topically applied dermatological drug products), and monitoring of glucose concentrations in patients with diabetes (intravascular or subcutaneous probe placement). The latter may even be incorporated into an artificial pancreas system for automated insulin administration.
Microdialysis has also found increasing application in environmental research, sampling a diversity of compounds from waste-water and soil solution, including saccharides, metal ions, micronutrients, organic acids, and low molecular weight nitrogen. Given the destructive nature of conventional soil sampling methods, microdialysis has potential to estimate fluxes of soil ions that better reflect an undisturbed soil environment.
Critical analysis
Advantages
To date, microdialysis is the only in vivo sampling technique that can continuously monitor drug or metabolite concentrations in the extracellular fluid of virtually any tissue. Depending on the exact application, analyte concentrations can be monitored over several hours, days, or even weeks. Free, unbound extracellular tissue concentrations are in many cases of particular interest as they resemble pharmacologically active concentrations at or close to the site of action. Combination of microdialysis with modern imaging techniques, such as positron emission tomography, further allows for determination of intracellular concentrations.
Insertion of the probe in a precise location of the selected tissue further allows for evaluation of extracellular concentration gradients due to transporter activity or other factors, such as perfusion differences. It has, therefore, been suggested as the most appropriate technique to be used for tissue distribution studies.
Exchange of analyte across the semipermeable membrane and constant replacement of the sampling fluid with fresh perfusate prevents drainage of fluid from the sampling site, which allows sampling without fluid loss. Microdialysis can consequently be used without disturbing the tissue conditions by local fluid loss or pressure artifacts, which can occur when using other techniques, such as microinjection or push-pull perfusion.
The semipermeable membrane prevents cells, cellular debris, and proteins from entering into the dialysate. Due to the lack of protein in the dialysate, a sample clean-up prior to analysis is not needed and enzymatic degradation is not a concern.
Limitations
Despite scientific advances in making microdialysis probes smaller and more efficient, the invasive nature of this technique still poses some practical and ethical limitations. For example, it has been shown that implantation of a microdialysis probe can alter tissue morphology resulting in disturbed microcirculation, rate of metabolism or integrity of physiological barriers, such as the blood–brain barrier. While acute reactions to probe insertion, such as implantation traumas, require sufficient recovery time, additional factors, such as necrosis, inflammatory responses, or wound healing processes have to be taken into consideration for long-term sampling as they may influence the experimental outcome. From a practical perspective, it has been suggested to perform microdialysis experiments within an optimal time window, usually 24–48 hours after probe insertion.
Microdialysis has a relatively low temporal and spatial resolution compared to, for example, electrochemical biosensors. While the temporal resolution is determined by the length of the sampling intervals (usually a few minutes), the spatial resolution is determined by the dimensions of the probe. The probe size can vary between different areas of application and covers a range of a few millimeters (intracerebral application) up to a few centimeters (subcutaneous application) in length and a few hundred micrometers in diameter.
Application of the microdialysis technique is often limited by the determination of the probe’s recovery, especially for in vivo experiments. Determination of the recovery may be time-consuming and may require additional subjects or pilot experiments. The recovery is largely dependent on the flow rate: the lower the flow rate, the higher the recovery. However, in practice the flow rate cannot be decreased too much since either the sample volume obtained for analysis will be insufficient or the temporal resolution of the experiment will be lost. It is therefore important to optimize the relationship between flow rate and the sensitivity of the analytical assay. The situation may be more complex for lipophilic compounds as they can stick to the tubing or other probe components, resulting in a low or no analyte recovery.
References
Biochemistry methods
Cell biology
Membrane technology | Microdialysis | [
"Chemistry",
"Biology"
] | 3,075 | [
"Biochemistry methods",
"Cell biology",
"Separation processes",
"Membrane technology",
"Biochemistry"
] |
2,279,544 | https://en.wikipedia.org/wiki/Women%20in%20computing | Women in computing were among the first programmers in the early 20th century, and contributed substantially to the industry. As technology and practices altered, the role of women as programmers has changed, and the recorded history of the field has downplayed their achievements. Since the 18th century, women have developed scientific computations, including Nicole-Reine Lepaute's prediction of Halley's Comet, and Maria Mitchell's computation of the motion of Venus.
The first algorithm intended to be executed by a computer was designed by Ada Lovelace who was a pioneer in the field. Grace Hopper was the first person to design a compiler for a programming language. Throughout the 19th and early 20th century, and up to World War II, programming was predominantly done by women; significant examples include the Harvard Computers, codebreaking at Bletchley Park and engineering at NASA. After the 1960s, the computing work that had been dominated by women evolved into modern software, and the importance of women decreased.
The gender disparity and the lack of women in computing from the late 20th century onward has been examined, but no firm explanations have been established. Nevertheless, many women continued to make significant and important contributions to the IT industry, and attempts were made to readdress the gender disparity in the industry. In the 21st century, women held leadership roles in multiple tech companies, such as Meg Cushing Whitman, president and chief executive officer of Hewlett Packard Enterprise, and Marissa Mayer, president and CEO of Yahoo! and key spokesperson at Google.
History
1700s
Nicole-Reine Etable de la Brière Lepaute was one of a team of human computers who worked with Alexis-Claude Clairaut and Joseph-Jérôme Le Français de Lalande to predict the date of the return of Halley's Comet. They began work on the calculations in 1757, working throughout the day and sometimes during mealtimes. Their methods were followed by successive human computers. They divided large calculations into "independent pieces, assembled the results from each piece into a final product" and then checked for errors. Lepaute continued to work on computing for the rest of her life, working for the Connaissance des Temps and publishing predictions of solar eclipses.
1800s
One of the first computers for the American Nautical Almanac was Maria Mitchell. Her assignment was to compute the motion of the planet Venus. The Almanac never became a reality, but Mitchell became the first astronomy professor at Vassar.
Ada Lovelace was the first person to publish an algorithm intended to be executed by the first modern computer, the Analytical Engine created by Charles Babbage. As a result, she is often regarded as the first computer programmer. Lovelace was introduced to Babbage's difference engine when she was 17. In 1840, she wrote to Babbage and asked if she could become involved with his first machine. By this time, Babbage had moved on to his idea for the Analytical Engine. A paper describing the Analytical Engine, Notions sur la machine analytique, published by L.F. Menabrea, came to the attention of Lovelace, who not only translated it into English, but corrected mistakes made by Menabrea. Babbage suggested that she expand the translation of the paper with her own ideas, which, signed only with her initials, AAL, "synthesized the vast scope of Babbage's vision." Lovelace imagined the kind of impact the Analytical Engine might have on society. She drew up explanations of how the engine could handle inputs, outputs, processing and data storage. She also created several proofs to show how the engine would handle calculations of Bernoulli Numbers on its own. The proofs are considered the first examples of a computer program. Lovelace downplayed her role in her work during her life, for example, in signing her contributions with AAL so as not to be "accused of bragging."
After the Civil War in the United States, more women were hired as human computers. Many were war widows looking for ways to support themselves. Others were hired when the government opened positions to women because of a shortage of men to fill the roles.
Anna Winlock asked to become a computer for the Harvard Observatory in 1875 and was hired to work for 25 cents an hour. By 1880, Edward Charles Pickering had hired several women to work for him at Harvard because he knew that women could do the job as well as men and he could ask them to volunteer or work for less pay. The women, described as "Pickering's harem" and also as the Harvard Computers, performed clerical work that the male employees and scholars considered to be tedious at a fraction of the cost of hiring a man. The women working for Pickering cataloged around ten thousand stars, discovered the Horsehead Nebula and developed the system to describe stars. One of the "computers," Annie Jump Cannon, could classify stars at a rate of three stars per minute. The work for Pickering became so popular that women volunteered to work for free even when the computers were being paid. Even though they performed an important role, the Harvard Computers were paid less than factory workers.
By the 1890s, women computers were college graduates looking for jobs where they could use their training in a useful way. Florence Tebb Weldon was part of this group and provided computations relating to biology and evidence for evolution, working with her husband, W.F. Raphael Weldon. Florence Weldon's calculations demonstrated that statistics could be used to support Darwin's theory of evolution. Another human computer involved in biology was Alice Lee, who worked with Karl Pearson. Pearson hired two sisters to work as part-time computers at his Biometrics Lab, Beatrice and Frances Cave-Browne-Cave.
1910s
During World War I, Karl Pearson and his Biometrics Lab helped produce ballistics calculations for the British Ministry of Munitions. Beatrice Cave-Browne-Cave helped calculate trajectories for bomb shells. In 1916, Cave-Browne-Cave left Pearson's employ and started working full-time for the Ministry. In the United States, women computers were hired to calculate ballistics in 1918, working in a building on the Washington Mall. One of the women, Elizabeth Webb Wilson, worked as the chief computer. After the war, women who worked as ballistics computers for the U.S. government had trouble finding jobs in computing, and Wilson eventually taught high school math.
1920s
In the early 1920s, Iowa State College professor George Snedecor worked to improve the school's science and engineering departments, experimenting with new punch-card machines and calculators. Snedecor also worked with human calculators, most of them women, including Mary Clem. Clem coined the term "zero check" to help identify errors in calculations. The computing lab, run by Clem, became one of the most powerful computing facilities of the time.
Women computers also worked at the American Telephone and Telegraph company. These human computers worked with electrical engineers to help figure out how to boost signals with vacuum tube amplifiers. One of the computers, Clara Froelich, was eventually moved along with the other computers to their own division where they worked with a mathematician, Thornton Fry, to create new computational methods. Froelich studied IBM tabulating equipment and desk calculating machines to see if she could adapt the machine method to calculations.
Edith Clarke was the first woman to earn a degree in electrical engineering and the first woman professionally employed as an electrical engineer in the United States. She was hired by General Electric as a full engineer in 1923. Clarke also filed a patent in 1921 for a graphical calculator to be used in solving problems in power lines; it was granted in 1925.
1930s
The National Advisory Committee for Aeronautics (NACA), which later became NASA, hired a group of five women in 1935 to work as a computer pool. The women worked on the data coming from wind tunnel and flight tests.
1940s
"Tedious" computing and calculating was seen as "women's work" through the 1940s resulting in the term "kilogirl", invented by a member of the Applied Mathematics Panel in the early 1940s. A kilogirl of energy was "equivalent to roughly a thousand hours of computing labor." While women's contributions to the United States war effort during World War II was championed in the media, their roles and the work they did was minimized. This included minimizing the complexity, skill and knowledge needed to work on computers or work as human computers. During WWII, women did most of the ballistics computing, seen by male engineers as being below their level of expertise. Black women computers worked as hard (or more often, even harder) as their white counterparts, but in segregated situations. By 1943, almost all people employed as computers were women; one report said "programming requires lots of patience, persistence and a capacity for detail and those are traits that many girls have".
NACA expanded its pool of women human computers in the 1940s. NACA recognized in 1942 that "the engineers admit themselves that the girl computers do the work more rapidly and accurately than they could." In 1943 two groups, segregated by race, worked on the east and west side of Langley Air Force Base. The black women were the West Area Computers. Unlike their white counterparts, the black women were asked by NACA to re-do college courses they had already passed and many never received promotions.
Women were also working on ballistic missile calculations. In 1948, women such as Barbara Paulson were working on the WAC Corporal, determining trajectories the missiles would take after launch.
Women worked with cryptography and, after some initial resistance, many operated and worked on the Bombe machines. Joyce Aylard operated the Bombe machine testing different methods to break the Enigma code. Joan Clarke was a cryptographer who worked with her friend, Alan Turing, on the Enigma machine at Bletchley Park. When she was promoted to a higher salary grade, there were no positions in the civil service for a "senior female cryptanalyst," and she was listed as a linguist instead. While Clarke developed a method of increasing the speed of double-encrypted messages, unlike many of the men, her decryption technique was not named after her. Other cryptographers at Bletchley included Margaret Rock, Mavis Lever (later Batey), Ruth Briggs and Kerry Howard. In 1941, Batey's work enabled the Allies to break the Italians' naval code before the Battle of Cape Matapan. In the United States, several faster Bombe machines were created. Women, like Louise Pearsall, were recruited from the WAVES to work on code breaking and operate the American Bombe machines.
Hedy Lamarr and co-inventor, George Antheil, worked on a frequency hopping method to help the Navy control torpedoes remotely. The Navy passed on their idea, but Lamarr and Antheil received a patent for the work on August 11, 1942. This technique would later be used again, first in the 1950s at Sylvania Electronic Systems Division and is used in everyday technology such as Bluetooth and Wi-Fi.
The programmers of the ENIAC computer in 1944 were six female mathematicians: Marlyn Meltzer, Betty Holberton, Kathleen Antonelli, Ruth Teitelbaum, Jean Bartik, and Frances Spence, who were human computers at the Moore School's computation lab. Adele Goldstine was their teacher and trainer, and they were known as the "ENIAC girls." The women who worked on ENIAC were warned that they would not be promoted into professional ratings, which were only for men. Designing the hardware was "men's work" and programming the software was "women's work." Sometimes women were given blueprints and wiring diagrams to figure out how the machine worked and how to program it. They learned how the ENIAC worked by repairing it, sometimes crawling through the computer, and by fixing "bugs" in the machinery. Even though the programmers were supposed to be doing the "soft" work of programming, in reality, they did that and fully understood and worked with the hardware of the ENIAC. When the ENIAC was revealed in 1946, Goldstine and the other women prepared the machine and the demonstration programs it ran for the public. None of their work in preparing the demonstrations was mentioned in the official accounts of the public events. After the demonstration, the university hosted an expensive celebratory dinner to which none of the ENIAC six were invited.
In Canada, Beatrice Worsley started working at the National Research Council of Canada in 1947, where she was an aerodynamics research officer. A year later, she started working in the new Computational Centre at the University of Toronto. She built a differential analyzer in 1948 and also worked with IBM machines to do calculations for Atomic Energy of Canada Limited. She went to study the EDSAC at the University of Cambridge in 1949, and wrote the program that EDSAC ran when it performed its first calculations on May 6, 1949.
Grace Hopper was the first person to create a compiler for a programming language and one of the first programmers of the Harvard Mark I computer, an electro-mechanical computer based on the Analytical Engine. Hopper's work with computers started in 1943, when she started working at the Bureau of Ordnance's Computation Project at Harvard, where she programmed the Harvard Mark I. Hopper not only programmed the computer, but created a 500-page comprehensive manual for it. Even though Hopper created the manual, which was widely cited and published, she was not specifically credited in it. Hopper is often credited with coining the terms "bug" and "debugging" when a moth caused the Mark II to malfunction. While a moth was found and the process of removing it was called "debugging," the terms were already part of the language of programmers.
1950s
Grace Hopper continued to contribute to computer science through the 1950s. She brought the idea of using compilers from her time at Harvard to UNIVAC, which she joined in 1949. Other women who were hired to program UNIVAC included Adele Mildred Koss, Frances E. Holberton, Jean Bartik, Frances Morello and Lillian Jay. To program the UNIVAC, Hopper and her team used the FLOW-MATIC programming language, which she developed. Holberton wrote a code, C-10, that allowed for keyboard inputs into a general-purpose computer. Holberton also developed the Sort-Merge Generator in 1951, which was used on the UNIVAC I. The Sort-Merge Generator marked the first time a computer "used a program to write a program." Holberton suggested that computer housing should be beige or oatmeal in color, which became a long-lasting trend. Koss worked with Hopper on various algorithms and a program that was a precursor to a report generator.
Klara Dan von Neumann was one of the main programmers of the MANIAC, a more advanced version of ENIAC. Her work helped the field of meteorology and weather prediction.
The NACA, and subsequently NASA, recruited women computers following World War II. By the 1950s, a team was performing mathematical calculations at the Lewis Research Center in Cleveland, Ohio, including Annie Easley, Katherine Johnson and Kathryn Peddrew. At the National Bureau of Standards, Margaret R. Fox was hired to work as part of the technical staff of the Electronic Computer Laboratory in 1951. In 1956, Gladys West was hired by the U.S. Naval Weapons Laboratory as a human computer. West was involved in calculations that led to the development of GPS.
At Convair Aircraft Corporation, Joyce Currie Little was one of the original programmers for analyzing data received from the wind tunnels. She used punch cards on an IBM 650 which was located in a different building from the wind tunnel. To save time in the physical delivery of the punch cards, she and her colleague, Maggie DeCaro, put on roller skates to get to and from the building faster.
In Israel, Thelma Estrin worked on the design and development of WEIZAC, one of the world's first large-scale programmable electronic computers. In the Soviet Union, a team of women helped design and build the first digital computer in 1951. In the UK, Kathleen Booth worked with her husband, Andrew Booth, on several computers at Birkbeck College. Kathleen Booth was the programmer and Andrew built the machines; Kathleen developed an early assembly language during this time.
Mary Coombs (of England) was employed in 1952 as the first female programmer to work on the LEO computers, and as such she is recognized as the first female commercial programmer.
Ukrainian computer scientist Kateryna Yushchenko created the Address programming language for the "Kyiv" computer in 1955 and invented indirect addressing of the highest rank, called pointers.
1960s
Milly Koss, who had worked at UNIVAC with Hopper, started work at Control Data Corporation (CDC) in 1965. There she developed algorithms for graphics, including graphic storage and retrieval.
Mary K. Hawes of Burroughs Corporation set up a meeting in 1959 to discuss the creation of a computer language that would be shared between businesses. Six people, including Hopper, attended to discuss the philosophy of creating a common business language (CBL). Hopper became involved in developing COBOL (Common Business Oriented Language), where she innovated new symbolic ways to write computer code. Hopper developed a programming language that was easier to read and "self-documenting." After COBOL was submitted to the CODASYL Executive Committee, Betty Holberton did further editing on the language before it was submitted to the Government Printing Office in 1960. IBM was slow to adopt COBOL, which hindered its progress, but it was accepted as a standard in 1962, after Hopper had demonstrated the compiler working on both UNIVAC and RCA computers. The development of COBOL led to the generation of compilers and generators, most of which were created or refined by women such as Koss, Nora Moser, Deborah Davidson, Sue Knapp, Gertrude Tierney and Jean E. Sammet.
Sammet, who worked at IBM starting in 1961, was responsible for developing the programming language FORMAC. She published a book, Programming Languages: History and Fundamentals (1969), which was considered the "standard work on programming languages," according to Denise Gürer. It was "one of the most used books in the field," according to The Times in 1972.
Between 1961 and 1963, Margaret Hamilton began to study software reliability while working on the US SAGE air defense system. In 1965, she became responsible for the onboard flight software on the Apollo mission computers. After Hamilton had completed the program, the code was sent to Raytheon, where "expert seamstresses" called the "Little Old Ladies" hardwired the code by threading copper wire through magnetic rings. Each system could store more than 12,000 words that were represented by the copper wires.
In 1964, the British Prime Minister Harold Wilson announced a "White-Hot" revolution in technology that would give greater prominence to IT work. As women still held most computing and programming positions at this time, it was hoped that it would give them more positive career prospects. In 1965, Sister Mary Kenneth Keller became the first American woman to earn a doctorate in computer science. Keller helped develop BASIC while working as a graduate student at Dartmouth, where the university "broke the 'men only' rule" so she could use its computer science center.
In 1966, Frances "Fran" Elizabeth Allen who was developing programming language compilers at IBM Research, published a paper entitled "Program Optimization,". It laid the conceptual basis for systematic analysis and transformation of computer programs. This paper introduced the use of graph-theoretic structures to encode program content in order to automatically and efficiently derive relationships and identify opportunities for optimization.
Christine Darden began working for NASA's computing pool in 1967, having graduated from the Hampton Institute. Women were also involved in the development of Whirlwind, including Judy Clapp, who created the prototype for an air defense system for Whirlwind that used radar input to track planes in the air and could direct aircraft courses.
In 1969, Elizabeth "Jake" Feinler, who was working for Stanford, made the first Resource Handbook for ARPANET. This led to the creation of the ARPANET directory, which was built by Feinler with a staff of mostly women. Without the directory, "it was nearly impossible to navigate the ARPANET."
By the end of the decade, the general demographics of programmers had shifted away from being predominantly women, as they had before the 1940s. Though women accounted for around 30 to 50 percent of computer programmers during the 1960s, few were promoted to leadership roles and women were paid significantly less than their male counterparts. Cosmopolitan ran an article in the April 1967 issue about women in programming called "The Computer Girls." Even while magazines such as Cosmopolitan saw a bright future for women in computers and computer programming in the 1960s, the reality was that women were still being marginalized.
1970s
In the early 1970s, Pam Hardt-English led a group to create a computer network they named Resource One and which was part of a group called Project One. Her idea to connect Bay Area bookstores, libraries and Project One was an early prototype of the Internet. To work on the project, Hardt-English obtained an expensive SDS-940 computer as a donation from TransAmerica Leasing Corporation in April 1972. They created an electronic library and housed it in a record store called Leopold's in Berkeley. This became the Community Memory database and was maintained by hacker Jude Milhon. After 1975, the SDS-940 computer was repurposed by Sherry Reson, Mya Shone, Chris Macie and Mary Janowitz to create a social services database and a Social Services Referral Directory. Hard copies of the directory, printed out as a subscription service, were kept at city buildings and libraries. The database was maintained and in use until 2009.
In the early 1970s, Elizabeth "Jake" Feinler, who worked on the Resource Directory for ARPANET, and her team created the first WHOIS directory. Feinler set up a server at the Network Information Center (NIC) at Stanford which would work as a directory that could retrieve relevant information about a person or entity. She and her team worked on the creation of domains, with Feinler suggesting that domains be divided by categories based on where the computers were kept. For example, military computers would have the domain of .mil, computers at educational institutions would have .edu. Feinler worked for NIC until 1989.
Jean E. Sammet served as the first woman president of the Association for Computing Machinery (ACM), holding the position between 1974 and 1976.
Adele Goldberg was one of seven programmers that developed Smalltalk in the 1970s, and wrote the majority of the language's documentation. Smalltalk was one of the first object-oriented programming languages and a foundation of the modern graphical user interface, which has its roots in Douglas Engelbart's 1968 "Mother of All Demos". Smalltalk was used by Apple to launch the Apple Lisa in 1983, the first personal computer with a GUI, and a year later the Macintosh. Windows 1.0, based on the same principles, was launched a few months later in 1985.
In the late 1970s, women such as Paulson and Sue Finley wrote programs for the Voyager mission. Voyager continues to carry their code inside its memory banks as it leaves the solar system. In 1979, Ruzena Bajcsy founded the General Robotics, Automation, Sensing and Perception (GRASP) Lab at the University of Pennsylvania.
In the mid-70s, Joan Margaret Winters began working at IBM as part of a "human factors project" called SHARE. In 1978, Winters was the deputy manager of the project and went on to lead it between 1983 and 1987. The SHARE group researched how software should be designed to take human factors into account.
Erna Schneider Hoover developed a computerized switching system for telephone calls that would replace switchboards. Her software patent for the system, issued in 1971, was one of the first software patents ever issued.
1980s
Gwen Bell developed the Computer Museum in 1980. The museum, which collected computer artifacts became a non-profit organization in 1982 and in 1984, Bell moved it to downtown Boston. Adele Goldberg served as president of ACM between 1984 and 1986.
In 1981, Deborah Washington Brown became the first African American woman to earn a Ph.D. in computer science from Harvard University (at the time the degree was part of the applied mathematics program). Her thesis was titled "The solution of difference equations describing array manipulation in program loops". Shortly after, in 1982, Marsha R. Williams became the second African American woman to earn a Ph.D. in computer science.
Sometimes known as the "Betsy Ross of the personal computer," according to the New York Times, Susan Kare worked with Steve Jobs to design the original icons for the Macintosh. Kare designed the moving watch, paintbrush and trash can elements that made using a Mac user-friendly. Kare worked for Apple until the mid-1980s, going on to work on icons for Windows 3.0. Other types of computer graphics were being developed by Nadia Magnenat Thalmann in Canada. Thalmann started working on computer animation to develop "realistic virtual actors" first at the University of Montréal in 1980 and later in 1988 at the École Polytechnique Fédérale de Lausanne.
Computer and video games became popular in the 1980s, but many were primarily action-oriented and not designed from a woman's point of view. Stereotypical characters such as the damsel in distress featured prominently and consequently were not inviting to women. Dona Bailey designed Centipede, where the player shoots insects, as a reaction to such games, later saying "It didn't seem bad to shoot a bug". Carol Shaw, considered to be the first modern female games designer, released a 3D version of tic-tac-toe for the Atari 2600 in 1980. Roberta Williams and her husband Ken founded Sierra Online and pioneered the graphic adventure game format in Mystery House and the King's Quest series. The games had a friendly graphical user interface and introduced humor and puzzles. Cited as an important game designer, her influence spread from Sierra to other companies such as LucasArts and beyond. Brenda Laurel ported games from arcade versions to the Atari 8-bit computers in the late 1970s and early 1980s. She then went to work for Activision and later wrote the manual for Maniac Mansion.
1984 was the year of the Women into Science and Engineering (WISE) campaign. A 1984 report by Ebury Publishing found that in a typical family, only 5% of mothers and 19% of daughters were using a computer at home, compared to 25% of fathers and 51% of sons. To counteract this, the company launched a series of software titles designed for women and publicized in Good Housekeeping. Anita Borg, who had noticed that women were under-represented in computer science, founded an email support group, Systers, in 1987.
As Ethernet became the standard for networking computers locally, Radia Perlman, who worked at Digital Equipment Corporation (DEC), was asked to "fix" limitations that Ethernet imposed on large network traffic. In 1985, Perlman came up with a way to route information packets from one computer to another in an "infinitely scalable" way that allowed large networks like the Internet to function. Her solution took less than a few days to design and write up. The name of the algorithm she created is the Spanning Tree Protocol. In 1986, Lixia Zhang was the only woman and graduate student to participate in the early Internet Engineering Task Force (IETF) meetings. Zhang was involved in early Internet development.
In Europe, a project was developed in the mid-1980s to create an academic network using the Open Systems Interconnection (OSI) standards. Borka Jerman Blažič, a Yugoslavian computer scientist, was invited to work on the project. She was involved in establishing the Yugoslav Research and Academic Network (YUNAC) in 1989 and registered the .yu domain for the country.
In the field of human–computer interaction (HCI), French computer scientist Joëlle Coutaz developed the presentation-abstraction-control (PAC) model in 1987. She founded the User Interface group at the Laboratoire de Génie Informatique of IMAG, where they worked on different problems relating to user interfaces and other software tools.
In 1988, Stacy Horn, who had been introduced to bulletin board systems (BBS) through The WELL, decided to create her own online community in New York, which she called the East Coast Hang Out (ECHO). Horn invested her own money and pitched the idea for ECHO to others after bankers refused to hear her business plan. Horn built her BBS using UNIX, which she and her friends taught to one another. Eventually ECHO moved to an office in Tribeca in the early 1990s and started getting press attention. ECHO's users could post about topics that interested them and chat with one another, and were provided email accounts. Around half of ECHO's users were women. ECHO was still online as of 2018.
1990s
By the 1990s, computing was dominated by men. The proportion of female computer science graduates peaked in 1984 at around 37 per cent, and then steadily declined. Although the end of the 20th century saw an increase in women scientists and engineers, this did not hold true for computing, which stagnated. Despite this, women were heavily involved in hypertext and hypermedia projects in the late 1980s and early 1990s. A team of women at Brown University, including Nicole Yankelovich and Karen Catlin, developed Intermedia and invented the anchor link. Apple partially funded their project and incorporated their concepts into Apple operating systems. Sun Microsystems' Sun Link Service was developed by Amy Pearl. Janet Walker developed the first system to use bookmarks when she created the Symbolics Document Examiner. In 1989, Wendy Hall created a hypertext project called Microcosm, which was based on digitized multimedia material found in the Mountbatten archive. Cathy Marshall worked on the NoteCards system at Xerox PARC. NoteCards went on to influence Apple's HyperCard. As the World Wide Web emerged on the Internet, developers like Hall adapted their programs to include Web viewers. Her Microcosm was especially adaptable to new technologies, including animation and 3-D models. In 1994, Hall helped organize the first conference for the Web.
Sarah Allen, the co-founder of After Effects, co-founded a commercial software company called CoSA in 1990. In 1995, she started working on the Shockwave team for Macromedia, where she was the lead developer of the Shockwave Multiuser Server, the Flash Media Server and Flash video.
Following the increased popularity of the Internet in the 1990s, online spaces were set up to cater for women, including the online community Women's WIRE and the technical and support forum LinuxChix. Women's WIRE, launched by Nancy Rhine and Ellen Pack in October 1993, was the first Internet company to specifically target this demographic. A conference for women in computer-related jobs, the Grace Hopper Celebration of Women in Computing, was first launched in 1994 by Anita Borg.
Game designer Brenda Laurel started working at Interval Research in 1992, and began to think about the differences in the way girls and boys experienced playing video games. After interviewing around 1,000 children and 500 adults, she determined that games weren't designed with girls' interests in mind. The girls she spoke with wanted more games with open worlds and characters they could interact with. Her research led Interval Research to spin off Laurel's research team into its own company, Purple Moon, in 1996. Also in 1996, Mattel's game Barbie Fashion Designer became the first best-selling game for girls. Purple Moon's first two games, based on a character called Rockett, made it into the 100 best-selling games in the years they were released. In 1999, Mattel bought out Purple Moon.
Jaime Levy created one of the first e-Zines in the early 1990s, starting with CyberRag, which included articles, games and animations loaded onto diskettes that anyone with a Mac could access. Later, she renamed the zine to Electronic Hollywood. Billy Idol commissioned Levy to create a disk for his album, Cyberpunk. She was hired to be the creative director of the online magazine, Word, in 1995.
Cyberfeminists, VNS Matrix, made up of Josephine Starrs, Juliane Pierce, Francesca da Rimini and Virginia Barratt, created art in the early 1990s linking computer technology and women's bodies. In 1997, there was a gathering of cyberfeminists in Kassel, called the First Cyberfeminist International.
In China, Hu Qiheng led the team that installed the country's first TCP/IP connection, connecting to the Internet on April 20, 1994. In 1995, Rosemary Candlin went to write software for CERN in Geneva. In the early 1990s, Nancy Hafkin was an important figure in the Association for Progressive Communications (APC) work to enable email connections in 10 African countries. Starting in 1999, Anne-Marie Eklund Löwinder began working with Domain Name System Security Extensions (DNSSEC) in Sweden; she later ensured that .se became the world's first top-level domain to be signed with DNSSEC.
In the late 1990s, research by Jane Margolis led Carnegie Mellon to try to correct the male-female imbalance in computer science.
From the late 1980s until the mid-1990s, Misha Mahowald developed several key foundations of the field of Neuromorphic engineering, while working at the California Institute of Technology and later at the ETH Zurich. More than 20 years after her untimely death, the Misha Mahowald Prize was named after her to recognize excellence in the field which she helped to create.
2000s
In the 21st century, several attempts have been made to reduce the gender disparity in IT and get more women involved in computing again. A 2001 survey found that while both sexes use computers and the internet in equal measure, women were still five times less likely to choose it as a career or study the subject beyond standard secondary education. Journalist Emily Chang said a key problem has been personality tests in job interviews and the belief that good programmers are introverts, which tends to self-select the stereotype of an asocial white male nerd.
In 2004, the National Center for Women & Information Technology was established by Lucy Sanders to address the gender gap. Carnegie Mellon University has made a concerted attempt to increase gender diversity in computer science by selecting students on broad criteria, including leadership ability, a sense of "giving back to the community" and high attainment in maths and science, instead of traditional computer programming expertise. As well as increasing the intake of women at CMU, the programme produced better-quality students, because the increased diversity made for a stronger team.
2010s
Despite the pioneering work of some designers, video games are still considered biased towards men. A 2013 survey by the International Game Developers Association revealed that only 22% of game designers were women, although this is substantially higher than figures from previous decades. Working to bring inclusion to the world of open-source project development, Coraline Ada Ehmke drafted the Contributor Covenant in 2014. By 2018, over 40,000 software projects had started using the Contributor Covenant, including TensorFlow, Vue and Linux. In 2014, Danielle George, professor at the School of Electrical and Electronic Engineering, University of Manchester, spoke at the Royal Institution Christmas Lectures on the subject of "how to hack your home", describing simple experiments involving computer hardware and demonstrating a giant game of Tetris by remotely controlling the lights of an office building.
In 2017, Michelle Simmons founded the first quantum computing company in Australia. The team, which made "great strides" in 2018, plans to develop a 10-qubit prototype silicon quantum integrated circuit by 2022. In the same year, Doina Precup became the head of DeepMind Montreal, working on artificial intelligence. Xaviera Kowo, a programmer from Cameroon, won the Margaret award in 2022 for programming a robot that processes waste.
2020s
In 2023, EU-Startups, an online publication focused on European startups, published a list of the 100 most influential women in the startup and venture capital space in Europe. The list reflects the role women across Europe and beyond are playing in startups and venture capital, encouraging a new generation of women towards entrepreneurship and innovation.
Gender gap in computing
While computing began as a field heavily dominated by women, this changed in western countries shortly after World War II. In the US, recognizing that software development was a significant expense, companies wanted to hire the "ideal programmer". Psychologists William Cannon and Dallis Perry were hired to develop an aptitude test for programmers, and from an industry that was more than 50% women they selected 1,400 people, 1,200 of whom were male. Their report was highly influential and claimed to have "trained the industry" in hiring programmers, with a heavy focus on introverts and men. In Britain, following the war, women programmers were selected for redundancy and forced retirement, leading to the country losing its position as a computer science leader by 1974.
Various popular theories about the lack of women in computer science discount historical and social circumstances. In 1992, John Gray's Men Are from Mars, Women Are from Venus theorized that men and women tend to differ in ways of thinking, leading them to approach technology and computing in different ways. A significant issue is that women often find themselves working in an environment they experience as unpleasant, so they decline to continue in those careers. A further issue is that if a class of computer scientists contains few women, those few can be singled out, leading to isolation and feelings of non-belonging, which can culminate in leaving the field.
The gender disparity in IT is not global. The ratio of female to male computer scientists is significantly higher in India compared to the West, and in 2015, over half of internet entrepreneurs in China were women. In Europe, Bulgaria and Romania have the highest rates of women going into computer programming. In government universities in Saudi Arabia in 2014, Arab women made up 59% of students enrolled in computer science. It has been suggested there is a greater gap in countries where people of both sexes are treated more equally, contradicting any theories that society in general is to blame for any disparity. However, the ratio of African American female computer scientists in the US is significantly lower than the global average. In IT-based organisations, the ratio of men to women can vary between roles; for example, while most software developers at InfoWatch are male, half of usability designers and 80% of project managers are female.
In 1991, Massachusetts Institute of Technology undergraduate Ellen Spertus wrote an essay "Why Are There So Few Women in Computer Science?", examining inherent sexism in IT, which was responsible for a lack of women in computing. She subsequently taught computer science at Mills College, Oakland in order to increase interest in IT for women. A key problem is a lack of female role models in the IT industry, alongside computer programmers in fiction and the media generally being male.
The University of Southampton's Wendy Hall has said the attractiveness of computers to women decreased significantly in the 1980s when they "were sold as toys for boys", and believes the cultural stigma has remained ever since, and may even be getting worse. Kathleen Lehman, project manager of the BRAID Initiative at UCLA has said a problem is that typically women aim for perfection and feel disillusioned when code does not compile, whereas men may simply treat it as a learning experience. A report in the Daily Telegraph suggested that women generally prefer people-facing jobs, which many computing and IT positions do not have, while men prefer jobs geared towards objects and tasks. One issue is that the history of computing has focused on the hardware, which was a male dominated field, despite software being written predominantly by women in the early to mid 20th century.
In 2013, a National Public Radio report said 20% of computer programmers in the US were female. There is no general consensus on any single reason there are fewer women in computing. In 2017, an engineer was fired from Google after claiming there was a biological reason for the lack of female computer scientists.
Dame Stephanie Shirley, using the name Steve Shirley, addressed some of the problems facing women in computing in the UK by setting up the software company Freelance Programmers (later FI Group, then Xansa, now part of Sopra Steria), offering women the chance to work from home and to work part-time.
Awards
The Association for Computing Machinery Turing Award, sometimes referred to as the "Nobel Prize" of computing, was named in honor of Alan Turing. This award has been won by three women between 1966 and 2015.
2006 – Frances "Fran" Elizabeth Allen
2008 – Barbara Liskov
2012 – Shafi Goldwasser
The British Computer Society Information Retrieval Specialist Group (BCS IRSG) in conjunction with the British Computer Society created an award in 2008 to commemorate the achievements of Karen Spärck Jones, a Professor Emerita of Computers and Information at the University of Cambridge and one of the most remarkable women in computer science. The KSJ award has been won by four women between 2009 and 2017:
2009 – Mirella Lapata
2012 – Diane Kelly
2016 – Jaime Teevan
Organizations
Several important groups have been established to encourage women in the IT industry. The Association for Women in Computing was one of the first and is dedicated to promoting the advancement of women in computing professions. The CRA-W: Committee on the Status of Women in Computing Research established in 1991 focused on increasing the number of women in Computer Science and Engineering (CSE) research and education at all levels. AnitaB.org runs the Grace Hopper Celebration of Women in Computing yearly conference. The National Center for Women & Information Technology is a nonprofit that aims to increase the number of women in technology and computing. The Women in Technology International (WITI) is a global organization dedicated to the advancement of women in business and technology. The Arab Women in Computing has many chapters across the world and focuses on encouraging women to work with technology and provides networking opportunities between industry experts and academicians and university students.
Some major societies and groups have offshoots dedicated to women. The Association for Computing Machinery's Council on Women in Computing (ACM-W) has over 36,000 members. BCSWomen is a women-only specialist group of the British Computer Society, founded in 2001. In Ireland, the charity Teen Turn run after school training and work placements for girls, and Women in Technology and Science (WITS) advocate for the inclusion and promotion of women within STEM industries.
The Women's Technology Empowerment Centre (W.TEC) is a non-profit organization focused on providing technology education and mentoring to Nigerian women and girls. Black Girls Code is a non-profit focused on providing technology education to young African-American women.
Other organisations dedicated to women in IT include Girl Develop It, a nonprofit organization that provides affordable programs for adult women interested in learning web and software development in a judgment-free environment, Girl Geek Dinners, an International group for women of all ages, Girls Who Code: a national non-profit organization dedicated to closing the gender gap in technology, LinuxChix, a women-oriented community in the open source movement and Systers, a moderated listserv dedicated to mentoring women in the IT industry.
See also
List of female mathematicians
List of female scientists
List of organizations for women in science
List of women astronauts
List of prizes, medals, and awards for women in science
List of women in the video game industry
Timeline of women in computing
Women and video games
Women in computing in Canada
Women in engineering
Women in science
Women in STEM fields
Women in the workforce
Women in venture capital
References
Citations
Works cited
Further reading
Natarajan, Priyamvada, "Calculating Women" (review of Margot Lee Shetterly, Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race, William Morrow; Dava Sobel, The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars, Viking; and Nathalia Holt, Rise of the Rocket Girls: The Women Who Propelled Us, from Missiles to the Moon to Mars, Little, Brown), The New York Review of Books, vol. LXIV, no. 9 (May 25, 2017), pp. 38–39.
External links
Carnegie Mellon Project on Gender and Computer Science
National Center for Women & Information Technology US
Equate Scotland
Institute for Women in Trades, Technology and Science
MNT – Mulheres na Tecnologia Brazil
Resources related to Women in Computing US
Society for Canadian Women in Science and Technology
Women in Science, Engineering, and Technology UK
Women's Engineering Society UK
When Women Stopped Coding
Global Gender Gap Report 2021: Insight Report, March 2021
Global Annual Results Report 2022: Gender equality
History of computer science | Women in computing | [
"Technology"
] | 9,298 | [
"History of computer science",
"Computer science",
"History of computing"
] |
2,279,684 | https://en.wikipedia.org/wiki/Guanidine | Guanidine is the compound with the formula HNC(NH2)2. It is a colourless solid that dissolves in polar solvents. It is a strong base that is used in the production of plastics and explosives. It is found in urine predominantly in patients experiencing renal failure. A guanidine moiety also appears in larger organic molecules, including on the side chain of arginine.
Structure
Guanidine can be thought of as a nitrogenous analogue of carbonic acid. That is, the C=O group in carbonic acid is replaced by a C=NH group, and each OH group is replaced by an NH2 group. Isobutene can be seen as the carbon analogue in much the same way. Despite the simplicity of the molecule, a detailed crystallographic analysis of guanidine was only completed 148 years after its first synthesis: in 2013, the positions of the hydrogen atoms and their displacement parameters were accurately determined using single-crystal neutron diffraction.
Production
Guanidine can be obtained from natural sources, being first isolated in 1861 by Adolph Strecker via the oxidative degradation of an aromatic natural product, guanine, isolated from Peruvian guano.
A laboratory method of producing guanidine is gentle (180-190 °C) thermal decomposition of dry ammonium thiocyanate in anhydrous conditions:
The commercial route involves a two step process starting with the reaction of dicyandiamide with ammonium salts. Via the intermediacy of biguanidine, this ammonolysis step affords salts of the guanidinium cation (see below). In the second step, the salt is treated with base, such as sodium methoxide.
Isothiouronium salts (S-alkylated thioureas) react with amines to give guanidinium salts:
RNH2 + [CH3SC(NH2)2]+X− → [RN(H)C(NH2)2]+X− + CH3SH
The resulting guanidinium ions can often be deprotonated to give the guanidine. This approach is sometimes called the Rathke synthesis, after its discoverer, Bernhard Rathke.
Chemistry
Guanidinium cation
The conjugate acid is called the guanidinium cation, [C(NH2)3]+. This planar, symmetric ion consists of three amino groups, each bonded to the central carbon atom with a covalent bond of order 4/3. It is a highly stable +1 cation in aqueous solution owing to efficient resonance stabilization of the charge and efficient solvation by water molecules. Its pKaH is 13.6 (pKb of 0.4), meaning that guanidine is a very strong base in water; in neutral water, it exists almost exclusively as guanidinium. Because of this, most guanidine derivatives are salts containing the conjugate acid.
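As a rough illustration of how complete the protonation is in neutral water (a back-of-the-envelope estimate using the pKaH quoted above, not a measured value), the Henderson–Hasselbalch relation gives the ratio of guanidinium to free guanidine:

```latex
% Ratio of protonated (guanidinium) to neutral guanidine at pH 7,
% assuming pK_aH = 13.6 as quoted above.
\frac{[\text{guanidinium}]}{[\text{guanidine}]}
  = 10^{\,\mathrm{p}K_{a\mathrm{H}} - \mathrm{pH}}
  = 10^{\,13.6 - 7.0}
  \approx 4 \times 10^{6}
```

In other words, fewer than one molecule in a million remains unprotonated, consistent with the statement that guanidine exists almost exclusively as guanidinium in neutral water.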
Testing for guanidine
Guanidine can be selectively detected using sodium 1,2-naphthoquinone-4-sulfonic acid (Folin's reagent) and acidified urea.
Uses
Industry
The main salt of commercial interest is guanidinium nitrate, [C(NH2)3]NO3. It is used as a propellant, for example in air bags.
Medicine
Since the Middle Ages in Europe, guanidine has been used to treat diabetes as the active antihyperglycemic ingredient in French lilac. Due to its long-term hepatotoxicity, further research into its use for blood sugar control was at first suspended after the discovery of insulin. Later development of nontoxic, safe biguanides led to the long-used first-line diabetes control medicine metformin, introduced in Europe in the 1950s and in the United States in 1995, and now prescribed to over 17 million patients per year in the US.
Guanidinium chloride is a now-controversial adjuvant in treatment of botulism. Recent studies have shown some significant subsets of patients who see no improvement after the administration of this drug.
Biochemistry
Guanidine exists protonated, as guanidinium, in solution at physiological pH.
Guanidinium chloride (also known as guanidine hydrochloride) has chaotropic properties and is used to denature proteins. Guanidinium chloride is known to denature proteins with a linear relationship between concentration and free energy of unfolding. In aqueous solutions containing 6 M guanidinium chloride, almost all proteins lose their entire secondary structure and become randomly coiled peptide chains. Guanidinium thiocyanate is also used for its denaturing effect on various biological samples.
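The linear relationship mentioned above is usually written as the linear extrapolation model. In the expression below, the symbols are the conventional ones (ΔG_H2O for the unfolding free energy in the absence of denaturant, m for the slope, [D] for denaturant concentration) rather than values for any particular protein:

```latex
% Linear extrapolation model for denaturant-induced unfolding:
% the free energy of unfolding decreases linearly with the
% concentration [D] of denaturant (here guanidinium chloride).
\Delta G_{\text{unfold}}([\mathrm{D}])
  = \Delta G_{\mathrm{H_2O}} - m\,[\mathrm{D}]
```

The slope m characterizes how strongly unfolding depends on guanidinium chloride concentration, and the midpoint of the unfolding transition occurs roughly where ΔG_unfold = 0, i.e. at [D] ≈ ΔG_H2O/m.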
Recent studies suggest that guanidinium is produced by bacteria as a toxic byproduct. To alleviate the toxicity of guanidinium, bacteria have developed a class of transporters known as guanidinium exporters, or Gdx proteins, to expel excess amounts of this ion from the cell. Gdx proteins are highly selective for guanidinium and mono-substituted guanidinyl compounds and share an overlapping set of non-canonical substrates with the drug exporter EmrE.
Other
Guanidinium hydroxide is the active ingredient in some non-lye hair relaxers.
Guanidine derivatives
Guanidines are a group of organic compounds sharing a common functional group with the general structure . The central bond within this group is that of an imine, and the group is related structurally to amidines and ureas. Examples of guanidines are arginine, triazabicyclodecene, saxitoxin, and creatine.
Galegine is an isoamylene guanidine.
See also
Sakaguchi test
Y-aromaticity
Amidine
References
Guanidines
Bases (chemistry)
Organic compounds with 1 carbon atom | Guanidine | [
"Chemistry"
] | 1,222 | [
"Guanidines",
"Functional groups",
"Organic compounds",
"Bases (chemistry)",
"Organic compounds with 1 carbon atom"
] |
2,279,750 | https://en.wikipedia.org/wiki/Propidium%20iodide | Propidium iodide (or PI) is a fluorescent intercalating agent that can be used to stain cells and nucleic acids. PI binds to DNA by intercalating between the bases with little or no sequence preference. When in an aqueous solution, PI has a fluorescent excitation maximum of 493 nm (blue-green), and an emission maximum of 636 nm (red). After binding DNA, the quantum yield of PI is enhanced 20-30 fold, and the excitation/emission maximum of PI is shifted to 535 nm (green) / 617 nm (orange-red). Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualize the nucleus and other DNA-containing organelles. Propidium Iodide is not membrane-permeable, making it useful to differentiate necrotic, apoptotic and healthy cells based on membrane integrity. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining. PI is widely used in fluorescence staining and visualization of the plant cell wall.
See also
Viability assay
Vital stain
SYBR Green I
Ethidium bromide
References
Flow cytometry
DNA-binding substances
Iodides
Phenanthridine dyes
Staining dyes | Propidium iodide | [
"Chemistry",
"Biology"
] | 294 | [
"Genetics techniques",
"DNA-binding substances",
"Flow cytometry"
] |
2,279,892 | https://en.wikipedia.org/wiki/Wirth%27s%20law | Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
The adage is named after Niklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software".
History
Wirth attributed the saying to Martin Reiser, who in the preface to his book on the Oberon System wrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness." Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987.
He states two contributing factors to the acceptance of ever-growing software: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have". Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people increasingly misinterpret complexity as sophistication, and that "these details are cute but not essential, and they have a hidden cost". As a result, he calls for the creation of "leaner" software and pioneered the development of Oberon, a software system built between 1986 and 1989 starting from nothing but the hardware. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience.
Other names
The law was restated in 2009 and attributed to Google co-founder Larry Page. It has been referred to as Page's law. The first use of that name is attributed to fellow Google co-founder Sergey Brin at the 2009 Google I/O Conference.
Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away" and Andy and Bill's law: "What Andy giveth, Bill taketh away".
Gates's law ("The speed of software halves every 18 months") is an anonymously coined variant on Wirth's law, its name referencing Bill Gates, co-founder of Microsoft. It is an observation that the speed of commercial software generally slows by 50% every 18 months, thereby negating all the benefits of Moore's law. This could occur for a variety of reasons: feature creep, code cruft, developer laziness, lack of funding, forced updates, forced porting (to a newer OS or to support a new technology) or a management turnover whose design philosophy does not coincide with the previous manager.
May's law, named after David May, is a variant stating: "Software efficiency halves every 18 months, compensating Moore's law".
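Taken literally, the Gates's-law and May's-law variants exactly cancel a hardware doubling every 18 months. A minimal back-of-the-envelope version of that claim, treating both trends as smooth exponentials (a deliberate simplification of both adages), is:

```latex
% Relative hardware speed H(t) and relative software efficiency S(t)
% after t months, assuming a doubling and a halving every 18 months:
H(t) = 2^{t/18}, \qquad S(t) = 2^{-t/18}
\quad\Longrightarrow\quad
U(t) = H(t)\,S(t) = 1
```

Under these assumptions the perceived speed U(t) never improves at all, which is exactly the complaint the adage expresses.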
See also
Code bloat
Feature creep
Jevons paradox
Minimalism (computing)
No Silver Bullet
Parkinson's law
Software bloat
Waste
References
Further reading
Adages
Computer architecture statements
Computing culture
Rules of thumb
Technology strategy | Wirth's law | [
"Technology"
] | 639 | [
"Computing culture",
"Computing and society"
] |
17,610,383 | https://en.wikipedia.org/wiki/Rodney%20Fitch | Rodney Arthur Fitch, CBE (19 August 1938 – 20 October 2014) was an English designer. He founded the design company Fitch in 1972, and rejoined it as chairman and CEO in 2004. He was appointed Commander of The Most Excellent Order of the British Empire (CBE) in 1990 for his 'influence on the British Design Industry'.
Fitch died of cancer on 20 October 2014, at the age of 76.
Background
Fitch had a successful career in design which allowed him to be active in the development of design education and the arts in the United Kingdom. At the time of his death he held the title of Senior Governor of the University of the Arts, located in London. Fitch was awarded a Commander of the Order of the British Empire in 1990 for his influence on the British design industry.
Experience
Fitch had the following experience:
trustee of the Victoria & Albert Museum
Chairman of V & A Enterprises
member of the Design Council
A member of the Council of the Royal College of Art
President of the Designers and Art Directors Association
President of Chartered Society of Designers
References
1938 births
Deaths from cancer in England
Commanders of the Order of the British Empire
English interior designers
English industrial designers
Product design
2014 deaths
Place of birth missing
Academic staff of Willem de Kooning Academy
20th-century English businesspeople | Rodney Fitch | [
"Engineering"
] | 254 | [
"Product design",
"Design"
] |
17,610,829 | https://en.wikipedia.org/wiki/Phyz | Phyz (Dax Phyz) is a public domain, 2.5D physics engine with built-in editor and DirectX graphics and sound. In contrast to most other real-time physics engines, it is vertex based and stochastic. Its integrator is based on a SIMD-enabled assembly version of the Mersenne Twister random number generator, instead of traditional LCP or iterative methods, allowing simulation of large numbers of micro objects with Brownian motion and macro effects such as object resonance and deformation.
Description
Purpose
Dax Phyz is used to model and simulate physical phenomena, to animate static graphics, and to create videos, GUI front-ends and games. There is no specified correlation between Phyz and reality.
Features
Deformable and breakable objects (soft body dynamics).
N-body particle simulation.
Rod, stick, pin, slot, rocket, charge, magnet, heat, actuator and custom constraints.
Turing complete, real-time logic components (Phyz Logics).
Explosives.
Collision and break sound effects.
Message-based application programming interface.
Real-time, constraint-aware editing.
Metaballics effects.
Bitmap import.
OpenMP 2.0 support.
Platform availability
Phyz requires Windows with DirectX 9.0c or later, a display adapter with hardware support for DirectX 9, a CPU with full SSE2 support, and 1 GB of free RAM.
The metaballics effects require a GPGPU-capable display adapter.
PhyzLizp
PhyzLizp, included with Phyz, is an external application based on the Lisp programming language (Lizp 4). It can be used to measure and control events in Phyz, and to create Phyz extensions such as graphical interfaces, network gateways, non-linear constraints or games.
Screenshots
Hammer scene (upper left; deformable objects): The hammer's centre of mass is displaced from its rotational axis, creating a torque which keeps the ruler from rotating.
Wedge scene (upper right; breakable objects): How to make an impression.
Yoda scene (lower left; bitmap import, metaballics): 3,446 vertices and 13,336 rods; the vertices form metaballs with colour information from a photograph of a clay model.
Balloon scene (lower right; heat constraints): "Why am I lighter in the water?" Dax asked after a recent swimming lesson. Dax, like balloons, floats since there are more particles pushing on the bottom than on the top, as in buoyancy.
Contained Air Burst (N-body particle system, soft body dynamics): 32,068 vertices, 35,283 constraints. After a brief mushroom formation, the semi-spherical shockwaves propagate to the rectangular container walls, where they are reflected, eventually forming a wedge shape in the middle, quickly degrading to a half-sphere under the influence of gravity.
See also
Electrostatics
Game physics
Magnetism
Physics engines
Rigid body dynamics
Soft body dynamics
References
External links
Official Dax Phyz Homepage
Articles containing video clips
Computer physics engines
Physics software
Public-domain software
| Phyz | [
"Physics"
] | 695 | [
"Physics software",
"Computational physics"
] |
17,611,421 | https://en.wikipedia.org/wiki/Geologic%20Calendar | The Geologic Calendar is a scale in which the geological timespan of the Earth is mapped onto a calendrical year; that is to say, the day one of the Earth took place on a geologic January 1 at precisely midnight, and today's date and time is December 31 at midnight. On this calendar, the inferred appearance of the first living single-celled organisms, prokaryotes, occurred on a geologic February 25 around 12:30pm to 1:07pm, dinosaurs first appeared on December 13, the first flower plants on December 22 and the first primates on December 28 at about 9:43pm. The first anatomically modern humans did not arrive until around 11:48 p.m. on New Year's Eve, and all of human history since the end of the last ice-age occurred in the last 82.2 seconds before midnight of the new year.
A variation of this analogy instead compresses Earth's 4.6 billion year-old history into a single day: While the Earth still forms at midnight, and the present day is also represented by midnight, the first life on Earth would appear at 4:00am, dinosaurs would appear at 10:00pm, the first flowers 10:30pm, the first primates 11:30pm, and modern humans would not appear until the last two seconds of 11:59pm.
A third analogy, created by University of Washington paleontologist Peter Ward and astronomer Donald Brownlee (both known for their Rare Earth hypothesis) for their book The Life and Death of Planet Earth, alters the calendar so that it includes the Earth's future, leading up to the Sun's death in around 5 billion years. As a result, each month represents one billion years of the Earth's roughly 12-billion-year lifespan. According to this calendar, the first life appears in January and the first animals in May, with the present day falling on May 18. Even though the Sun will not destroy Earth until December 31, all animals will have died out by the end of May.
Use of the geologic calendar as a conceptual aid dates back at least to the mid 20th century, for example in Richard Carrington's 1956 book A Guide to Earth History and Gove Hambidge's 1941 chapter in the book Climate and Man. Some authors also used a similar imaginative device of compressing the entire history of the human species to a shorter period, whether a single year as in Ramsay Muir's 1940 book Civilization and Liberty, or fifty-year span as in James Harvey Robinson's 1921 book The Mind in the Making.
See also
Cosmic Calendar
Calendar
References
Units of time
Physical cosmology
Time in astronomy
Geology
Popular science
Scientific visualization
Analogy | Geologic Calendar | [
"Physics",
"Astronomy",
"Mathematics"
] | 560 | [
"Time in astronomy",
"Astronomical sub-disciplines",
"Units of measurement",
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Theoretical physics",
"Quantity",
"Astrophysics",
"Spacetime",
"Physical cosmology"
] |
17,613,904 | https://en.wikipedia.org/wiki/Detect%20and%20avoid | Detect and avoid (DAA) is a set of technologies designed to avoid interference between a given emitter and the wireless environment. Its need was generated by the Ultra-wideband (UWB) standard that uses a fairly large spectrum to emit its pulses.
According to the U.S. Federal Communications Commission (FCC), UWB can use from 3.1 to 10.6 GHz. That means it could interfere with WiMAX, 3G or 4G networks.
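As an illustration only, the basic idea behind DAA is energy detection per sub-band followed by suppressing or reducing UWB transmission in occupied bands. The actual FCC/ETSI rules are considerably more involved, and the threshold and band edges used below are invented for the example.

```python
def bands_to_avoid(measured_power_dbm, detection_threshold_dbm=-61.0):
    """Return the sub-bands a UWB emitter should avoid.

    measured_power_dbm -- dict mapping (low_ghz, high_ghz) sub-bands to the
                          power detected in that band during a listening period
    """
    return [band for band, power in measured_power_dbm.items()
            if power > detection_threshold_dbm]

# Example: a WiMAX-like signal detected around 3.5 GHz, so the UWB device
# would notch or reduce power in that sub-band and keep using the others.
scan = {(3.1, 3.6): -55.0, (3.6, 4.8): -80.0, (4.8, 6.0): -82.0}
print(bands_to_avoid(scan))   # -> [(3.1, 3.6)]
```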
References
External links
Detect & Avoid – Short page on DAA
Detect and Avoid Technology: For Ultra Wideband (UWB) Spectrum Usage – whitepaper presenting an experience with DAA.
Wireless networking | Detect and avoid | [
"Technology",
"Engineering"
] | 141 | [
"Wireless networking",
"Computer networks engineering"
] |
17,615,760 | https://en.wikipedia.org/wiki/Sexual%20identity%20therapy | Sexual Identity Therapy (SIT) is a framework to "aid mental health practitioners in helping people arrive at a healthy and personally acceptable resolution of sexual identity and value conflicts." It was invented by Warren Throckmorton and Mark Yarhouse, professors at small conservative evangelical colleges. It has been endorsed by former American Psychological Association president Nick Cummings, psychiatrist Robert Spitzer, and the provost of Wheaton College, Stanton Jones. Sexual identity therapy puts the emphasis on how the client wants to live, identifies the core beliefs and helps the client live according to those beliefs. The creators state that their recommendations "are not sexual reorientation therapy protocols in disguise," but that they "help clients pursue lives they value." They say clients "have high levels of satisfaction with this approach". It is presented as an alternative to both sexual orientation change efforts and gay affirmative psychotherapy.
Work for developing the framework began with the establishment of the Institute for the Study of Sexual Identity in 2004. The announcement of the framework for Sexual Identity Therapy were first released on April 16, 2007. In June 2007, the guidelines were presented at the American Psychological Association convention in San Francisco. In 2008 the authors announced they were going to review the framework because of "continual changes that are occurring in the area of therapy for individuals experiencing same-sex attractions." In 2009, the APA released a report stating that such an approach is ethical and may be beneficial for some clients.
Endorsements
Gay psychologist Lee Beckstead spoke about his "middle-ground" approach to working with those in conflict with their sexual orientation and religion. He spoke about his approach along with others who spoke about Sexual Identity Therapy at an APA conference. Robert L. Spitzer, who was instrumental in removing homosexuality from the DSM in 1973, endorsed the project, saying it was "a work that transcends polarized debates about whether gays can change their sexual orientation." Michael Bussee, an outspoken critic of the ex-gay movement endorsed the project. It has also been endorsed by several other professionals, including keynote speaker at the controversial organization National Association for Research & Therapy of Homosexuality (NARTH).
Steps
Its purpose is to help clients align their sexual identity with their beliefs and values, which in some cases means celibacy, if chosen by the client. At any point during the therapy, a previous step may be revisited for further investigation or to explore a new direction in the therapy.
Assessment
The first step is to discover the client's reasons for requesting therapy. Clinicians should determine whether the motivation is internal or external, followed by an open dialogue about motivations while respecting the client's world view. They should assist the client in clarifying their values in order to determine their preferred course of action. This must be individualized.
Advanced informed consent
Clinicians should stay up to date with the literature concerning the causes of homosexuality; if the client asks about it, they should explore how that information would affect the client's preferred course of action.
Psychotherapy
Sexual Identity Therapy gives a framework for existing techniques rather than a specific method of psychotherapy. If a therapist's value position is in conflict with the client's preferred direction, a referral to a more suitable mental health professional may be indicated. The goal of therapy is to help the person explore and eventually live more comfortably within a sexual identity that is consistent with personal values and beliefs. This may not be quick or complete, and the client should feel free to pursue other directions.
Sexual identity synthesis
As clients synthesize a new identity, the therapist should make them aware of the consequences. Therapeutic interventions can be employed to assist clients pursue valued behavior while avoiding unvalued behavior. While the decision is the clients, clients are advised against sexual behavior until they are comfortable with their new identity. Many clients find it helpful to attend support groups and avoid social situations that do not support the new identity.
See also
Conversion therapy
References
External links
Institute for the Study of Sexual Identity
Psychotherapy by type
Sexology
Sexual orientation and psychology | Sexual identity therapy | [
"Biology"
] | 808 | [
"Behavioural sciences",
"Behavior",
"Sexology"
] |
17,616,673 | https://en.wikipedia.org/wiki/Produced%20water | Produced water is a term used in the oil industry or geothermal industry to describe water that is produced as a byproduct during the extraction of oil and natural gas, or used as a medium for heat extraction. Water that is produced along with the hydrocarbons is generally brackish and saline water in nature. Oil and gas reservoirs often have water as well as hydrocarbons, sometimes in a zone that lies under the hydrocarbons, and sometimes in the same zone with the oil and gas. In geothermal plants, the produced water is usually hot. It contains steam with dissolved solutes and gases, providing important information on the geological, chemical, and hydrological characteristics of geothermal systems.
Oil wells sometimes produce large volumes of water with the oil, while gas wells tend to produce water in smaller proportions.
As an oilfield becomes old, its natural drive to produce hydrocarbons decreases leading to decline in production. To achieve maximum oil recovery, waterflooding is often implemented, in which water is injected into the reservoirs to help force the oil to the production wells. In offshore areas, sea water is used. In onshore installations, the injected water is obtained from rivers, treated produced water, or underground. Injected water is treated with many chemicals to make it suitable for injection. The injected water eventually reaches the production wells, and so in the later stages of water flooding, the produced water's proportion ("cut") of the total production increases.
Water quality
The water composition ranges widely from well to well and even over the life of the same well. Much produced water is brine, and most formations result in total dissolved solids too high for beneficial reuse. In oil fields, almost all produced water contains oil and suspended solids. Some produced water contains heavy metals and traces of naturally occurring radioactive material (NORM), which over time deposits radioactive scale in the piping at the well. Metals found in produced water include zinc, lead, manganese, iron, and barium. In geothermal fields, produced waters are classified into three chemical types: HCO3-Ca⋅Mg, HCO3-Na and SO4⋅Cl-Na. The U.S. Environmental Protection Agency (EPA) indicated in 1987 and 1999 that during drilling and operations, additives may be used to reduce solid deposition on equipment and casings. Water produced from underground formations for geothermal electric power generation often exceeds primary and secondary drinking water standards for total dissolved solids, fluoride, chloride, and sulfate.
Water management
Water is required for both traditional geothermal systems and EGS throughout the life cycle of a power plant. For traditional projects, the water available at the resource is typically used for energy generation during plant operations.
Historically, produced water was disposed of in large evaporation ponds. However, this has become an increasingly unacceptable disposal method from both environmental and social perspectives. Produced water is considered industrial waste.
The broad management options for re-use are direct injection, environmentally acceptable direct-use of untreated water, or treatment to a government-issued standard before disposal or supply to users. Treatment requirements vary throughout the world. In the United States, these standards are issued by the U.S. Environmental Protection Agency (EPA) for underground injection and discharges to surface waters. Although beneficial reuse for drinking water and agriculture have been researched, the industry has not adopted these measures due to cost, water availability, and social acceptance.
Gravity separators, hydrocyclones, plate coalescers, dissolved gas flotation, and nut shell filters are some of the technologies used in treating wastes from produced water.
Radioactivity
The use of produced water for road deicing has been criticized as unsafe.
In January 2020, Rolling Stone magazine published an extensive report about radioactivity content in produced water and its effects on workers and communities across the United States. It was reported that brine sampled from a plant in Ohio was tested in a University of Pittsburgh laboratory and registered radium levels above 3,500 pCi/L. The Nuclear Regulatory Commission requires industrial discharges to remain below 60 pCi/L for each of the most common isotopes of radium, radium-226 and radium-228.
See also
Industrial waste water treatment
Oil–water separator
References
Petroleum production
Water pollution
Natural gas | Produced water | [
"Chemistry",
"Environmental_science"
] | 882 | [
"Water pollution"
] |
17,617,197 | https://en.wikipedia.org/wiki/Linear%20diode%20array | A Linear diode array is used for digitizing x-ray images. The LDA system consists of an array of photodiode modules. The diodes are laminated with a scintillation screen to create x-ray sensitive diodes. The scintillation screen converts the photon energy emitted by the x-ray tube into visible light on the diodes. The diodes produce a voltage when the light energy is received. This voltage is amplified, multiplexed, and converted to a digital signal.
Use
One of the unique characteristics of the LDA is that it has an excellent dynamic range. This means that it is capable of generating useful data when x-raying both very thick (tread) and thin (sidewall) sections of a tire simultaneously. However, the human eye is capable of visualizing only a small fraction of the LDA's full dynamic range. To compensate for the limitations of the human eye while still taking advantage of this feature of the LDA, a variety of selectable contrast and brightness enhancing tables are available.
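A minimal sketch of what such a contrast/brightness table does is shown below (illustrative only; real systems use vendor-specific, calibrated tables): a wide-dynamic-range reading from the array is mapped through a chosen window into the narrow range a display and the human eye can handle.

```python
import numpy as np

def apply_window(raw, low, high):
    """Map wide-range LDA readings (e.g. 16-bit) to 8-bit display values.

    Values below `low` become black, values above `high` become white,
    and the window in between is stretched across 0..255 so that detail
    in either thick or thin sections can be inspected.
    """
    raw = np.asarray(raw, dtype=np.float64)
    scaled = (raw - low) / float(high - low)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# One window to inspect the heavily attenuating tread, another for the sidewall.
line = np.array([1200, 3500, 9000, 42000, 60000])
print(apply_window(line, low=500,   high=12000))  # tread-oriented window
print(apply_window(line, low=30000, high=65000))  # sidewall-oriented window
```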
References
X-ray instrumentation
Optical diodes | Linear diode array | [
"Technology",
"Engineering"
] | 222 | [
"X-ray instrumentation",
"Measuring instruments"
] |
17,617,478 | https://en.wikipedia.org/wiki/Tecticornia%20pergranulata | Tecticornia pergranulata (commonly known as the blackseed glasswort or blackseed samphire) is a succulent halophytic plant species in the family Chenopodiaceae, native to Australia. This plant is commonly tested in labs involving its C3 photosynthesis and its unique resistance to salinity and adversity.
Background
Tecticornia pergranulata is a species of small erect sub-shrubs with articulate, succulent stems that grow around 1 meter in height. They also contain swollen branches with small leaf lobes and are mostly located on the boundaries of salt lakes and salty swamps all across southern Australia. They are most well known for their ability to adapt to high salinity levels and flooding.
Adaptations
The species has an unusual way of surviving floods. Research by Sarah M. Rich, Martha Ludwig and Timothy Colmer showed that photosynthesis taking place within Tecticornia pergranulata roots helps the species survive prolonged flooding. Larger Tecticornia pergranulata plants grow an extensive system of adventitious roots from their woody basal stem regions, whereas smaller plants do not form aquatic roots but do grow adventitious roots within the soil. The aquatic roots grown by the larger plants exhibit two distinctive growth forms differing in color and length.
Types of roots
The most abundant roots that are grown are exclusively aquatic. These roots float in the water column and grow less than a millimeter in diameter. They are mostly superficially pink but can also contain a brownish green color especially found in the basal region. This type of root is also known as the aquatic root. The second type of root grown is superficially greenish brown and is thicker than aquatic roots. These roots range between 1 and 3 millimeters and grow through the water column into the surface soil. This type of root is also known as the semi-aquatic root.
Survival
The photosynthetic process that occurs in this species roots has the potential to supply oxygen to the rest of the plant while facing a flooded habitat. The cortical cells of the aquatic and semi aquatic roots contain photosynthetic chloroplasts. These chloroplasts produce specific responses to variation in carbon dioxide and light availability under water. They also contain proteins involved in the photosynthetic production of oxygen and carbon fixation. Unlike the sedimentary roots in this species, the aquatic roots get access to oxygen from the water column and can produce it internally. When both the semi aquatic and aquatic roots are submerged in water, photosynthesis occurs supplying oxygen for the plant.
Other classifications and uses
Tecticornia pergranulata is also part of a separate group called the Glassworts, the ashes of which yield soda ash, an important ingredient for glass and soap making.
Subspecies
Tecticornia subspecies include:
subspecies elongata
subspecies divaricata
subspecies pergranulata
References
pergranulata
Caryophyllales of Australia
Flora of New South Wales
Flora of the Northern Territory
Flora of Queensland
Flora of South Australia
Flora of Victoria (state)
Eudicots of Western Australia
Halophytes
Barilla plants
Taxa named by John McConnell Black | Tecticornia pergranulata | [
"Chemistry"
] | 662 | [
"Halophytes",
"Salts"
] |
17,617,862 | https://en.wikipedia.org/wiki/FraLine | fraLine is a non-profit (research) project of Research Center Frankfurt Technology Center Media - FTzM of Frankfurt University of Applied Sciences. Project and research activities focus on IT services management for schools and the use of digital media in educational settings. fraLine is also a joint project between the Frankfurt University of Applied Sciences and the city of Frankfurt am Main (represented by the "Stadtschulamt" = municipal school-maintaining body). fraLine and the city of Frankfurt cooperate in the fields of IT support and IT service management in educational contexts as well as technical implementation of digital media in class. fraLine also cooperates with the Hessian state education authority ("Staatliches Schulamt") in the field of media education and digital media training for teachers.
fraLine was launched in 2001 by Professor Dr Thomas Knaus and employs mainly technical students of the Frankfurt University of Applied Sciences and Goethe University Frankfurt, but also IT professionals, engineers and media educators.
Research and development projects
Projects
fraLine conducts research in the fields of IT service management according to ITIL recommendations as well as in the field of IT support and digital media use in educational contexts including educational software and media literacy.
Research results and concepts are provided to the city of Frankfurt ("Stadtschulamt") for the improvement of information technology in schools and the establishment of a centrally managed school IT infrastructure in the municipal area. Therefore, fraLine also offers advisory services to the city of Frankfurt, to all municipal schools as well as other educational institutions on topics such as acquisition of computers or computer parts, hardware, IT security, remote service etc.
Research in technical areas is complemented by studies in media education. Results in the field of media education and current media projects are presented on the annual expert conference "fraMediale" which is hosted and organized by Research Center Frankfurt Technology Center Media - FTzM.
The FTzM developed the "fraDesk", a multi-user capable trouble ticket system for coordinating and documenting customer requests, incident reports or general processes based on division of labor. fraLine uses this software in its own everyday routine. The "fraDesk" is free software and was presented at the CeBIT 2008 in Hanover, Germany.
Other research projects conducted by fraLine employees include the development of standard resource administration and user management for schools, improvement of school support via remote maintenance and support, but also media educational topics such as content filtering, evaluation of educational software and media communication.
Knowledge base for teachers
fraLine publishes relevant information for schools as well as research results in the form of test reports and academic papers, a glossary, and a large set of FAQs. The FAQ database is a practical knowledge base used as a point of reference for local teachers and others. It includes technical tutorials and information on the Frankfurt local school network, operating systems, software, hardware, peripheral equipment, the Internet, licenses and rights of use.
Additionally, fraLine offers tutorials and introductory courses imparting theoretical and practical knowledge to school teachers and school IT representatives in Frankfurt.
Support services for schools
The fraLine student team offers practical IT support to all 152 schools in Frankfurt through a telephone hotline, the Internet-based helpdesk "fraDesk" and on-site service. Financed by public funds through the city of Frankfurt, the service is free of charge for schools. It covers everyday troubleshooting relating to hardware and software, as well as software installation, updating and handling. In order to coordinate its support services, fraLine cooperates with so-called "IT representatives" in every school. They are appointed by their school principals and are responsible for collecting IT incidents and requests in their schools and communicating them to fraLine, which acts as the single point of contact. If problems cannot be solved by telephone, fraLine employees visit schools on-site or help them refer their request to the competent authorities (e.g. the school-maintaining body ("Stadtschulamt") or the Department of Information and Communication Technology). The use of student staff is sometimes criticized as falling short of schools' need for "professional" support; on the other hand, only senior students identified as possessing the necessary qualifications are accepted onto the project.
In addition to the technical support, fraLine also offers training courses for teachers and school administration staff in the fields of IT-infrastructure and media use in educational contexts.
As another way to support schools, fraLine launched the project "educational-technical assistance". As part of this project, fraLine employees accompany teachers during class offering them technical assistance with the digital media devices they plan to use in their lesson. The project aims at reducing technical insecurities of teachers and wants to promote a broader use of digital media in education.
Partnerships, cooperations and awards
fraLine contributes to various multi-organisational and interdisciplinary initiatives in Frankfurt am Main, in the federal state of Hesse and in Germany responding to the growing demand for a sensible use of media in education. Thus, fraLine maintains close partnerships with its sister projects in Bremen and Hamburg.
In 2007, fraLine was awarded with the "Germany – Land of Ideas" award. "Land of Ideas" is an initiative under the patronage of former German Federal President Horst Köhler. Public or private institutions that have developed innovations and ideas are nominated as so-called "landmarks in the land of ideas" for one year.
Notes and references
External link
fraLine webpage
Education in Frankfurt
Information technology education | FraLine | [
"Technology"
] | 1,138 | [
"Information technology",
"Information technology education"
] |
17,618,265 | https://en.wikipedia.org/wiki/Stanhope%20lens | A Stanhope lens is a simple, one-piece microscope invented by Charles, the third Earl of Stanhope. It is a cylinder of glass with each end curved outwards, one being more convex than the other. The focal length of the apparatus is at or within the device so that objects to be studied are placed close to or in contact with the less curved end. Because its construction is simple and economical, it was popular in the 19th century. It was useful in medical practice for examining transparent materials such as crystals and fluids.
René Dagron modified the lens by keeping one curved end to refract light while sectioning the other end flat and locating it at the focal plane of the curved side. Dagron used the modified Stanhope lens in mounting his microscopic pictures in photographic jewels known as Stanhopes.
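A paraxial sketch of Dagron's design (assuming a glass of refractive index n and a single curved end of radius of curvature R; a textbook estimate, not a statement about any particular manufactured lens): for light from a specimen at the flat end to emerge collimated towards the eye, the flat end must lie at the focal plane inside the glass, which fixes the rod length d.

```latex
% Refraction at the single curved surface (glass of index n into air),
% requiring an object at the flat rear face to image at infinity:
d \;=\; \frac{n\,R}{\,n-1\,}
\qquad\text{e.g. } n = 1.5 \;\Rightarrow\; d = 3R
```

So a Stanhope made of ordinary crown glass is roughly three times as long as the radius of curvature of its convex end, which is why these lenses look like short solid rods rather than conventional lenses.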
A rival lens is the Coddington magnifier. This was considered superior as a magnifier but was more expensive.
References
americanhistory.si.edu
Magnifiers
Microscopes | Stanhope lens | [
"Chemistry",
"Technology",
"Engineering"
] | 210 | [
"Magnifiers",
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
17,621,236 | https://en.wikipedia.org/wiki/H.%20Eugene%20Stanley | Harry Eugene Stanley (born March 28, 1941) is an American physicist and University Professor at Boston University. He has made seminal contributions to statistical physics and is one of the pioneers of interdisciplinary science. His current research focuses on understanding the anomalous behavior of liquid water, but he had made fundamental contributions to complex systems, such as quantifying correlations among the constituents of the Alzheimer brain, and quantifying fluctuations in noncoding and coding DNA sequences, interbeat intervals of the healthy and diseased heart. He is one of the founding fathers of econophysics.
Education
Stanley obtained his B.A. in physics at Wesleyan University in 1962.
He performed biological physics research with Max Delbrück in 1963 and was awarded a Ph.D. in physics from Harvard University in 1967.
Stanley was a Miller Fellow at the University of California, Berkeley, with Charles Kittel, where he wrote the Oxford monograph Introduction to Phase Transitions and Critical Phenomena, which won the Choice Award for Outstanding Academic Book of 1971.
Academic career
Stanley was appointed Assistant Professor of Physics at MIT in 1969 and was promoted to Associate Professor in 1971. He was appointed Hermann von Helmholtz Associate Professor in 1973, in recognition of his interdepartmental teaching and research with the Harvard-MIT Program in Health Sciences and Technology. In 1976, Stanley joined Boston University as Professor of Physics, and Associate Professor of Physiology (in the School of Medicine). In 1978 and 1979, he was promoted to Professor of Physiology and University Professor, respectively. Since 2007 he holds joint appointments with the Chemistry and Biomedical Engineering Departments. In 2011, he was made William F. Warren Distinguished Professor. In the spring of 2013, he held the Lorentz Professorship at the University of Leiden.
Research and achievements
Stanley has made fundamental contributions to several topics in statistical physics, such as the theory of phase transitions, percolation, disordered systems, aggregation phenomena, polymers, econophysics and biological physics. His early work introduced the n-vector model of magnetism and its exact solution in the limit n → ∞, topics that are now part of standard statistical physics textbooks.
His seminal work on liquid water started with a percolation model he developed in 1980 with José Teixeira to explain the experimentally observed anticorrelations in entropy and volume. In 1992 he developed the liquid-liquid critical point hypothesis, that offered a quantitative understanding of water’s anomalies, applying to all liquids with tetrahedral symmetry (such as silicon and silica). Direct experimental proof for his proposal was obtained by recent experiments in Tsukuba, MIT, and Stanford.
Stanley introduced the term ‘econophysics’ in 1994 to describe the interdisciplinary field merging physics principles with economic phenomena. His research group has identified empirical laws governing economic fluctuations and developed statistical mechanics models to elucidate their underlying mechanisms.
The ISI Web of Science lists 76,778 citations to Stanley's work (excluding 33 books). Using the Hirsch h-index metric for publication impact [PNAS 102, 16569 (2005)], Stanley has authored 129 papers with a citation count equal to or greater than 129, so h = 129. Google Scholar lists over 200,000 citations, with h = 201.
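For readers unfamiliar with the metric, the Hirsch h-index quoted above can be computed with a few lines of code (a generic sketch, not tied to any particular citation database):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

# A researcher with papers cited 10, 8, 5, 4, 3 and 0 times has h = 4:
print(h_index([10, 8, 5, 4, 3, 0]))   # -> 4
```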
Stanley is committed to education at all levels, from high school to graduate studies. He has served as thesis advisor to 114 Ph.D. students and has collaborated with 211 postdoctoral fellows and visiting faculty. He is also active in worldwide efforts for achieving gender balance in the physical sciences.
Honors and awards
Stanley has been elected to the U.S. National Academy of Sciences (2004) and the Brazilian Academy of Sciences. He is an Honorary Member of the Hungarian Physical Society and is currently Honorary Professor at the Institute for Advanced Studies, University of Pavia (Pavia, Italy), and at Eötvös Loránd University (Budapest, Hungary). Stanley was awarded the 2004 APS Nicholson Medal for Humanitarian Service, "For his extraordinary contributions to human rights, for his initiatives on behalf of female physicists, and for his caring and supportive relationship with those who have worked in his laboratory."
For his contributions to phase transitions Stanley received the 2004 Boltzmann Medal, awarded by the International Union of Pure and Applied Physics (IUPAP), and the American Physical Society's 2008 Julius Edgar Lilienfeld Prize. He was awarded the Teresiana Medal in Complex Systems Research, given by the University of Pavia. He also received the Distinguished Teaching Scholar Director's Award from the National Science Foundation, the Nicholson Medal for Human Outreach from the American Physical Society, a Guggenheim Fellowship (1979), the David Turnbull Prize from the Materials Research Society (1998), a BP Venture Research Award, the Floyd K. Richtmyer Memorial Lectureship Award (1997), the Memory Ride Award for Alzheimer Research, and the Massachusetts Professor of the Year award from the Council for Advancement and Support of Education.
Stanley has received nine Doctorates Honoris Causa, from Bar-Ilan University (Ramat Gan, Israel), Eötvös Loránd University (Budapest), the University of Liège (Belgium), the University of Dortmund, the University of Wroclaw, Northwestern University, the University of Messina, the University of Leicester, and the IMT Institute for Advanced Studies Lucca.
See also
List of members of the National Academy of Sciences (Applied physical sciences)
Notes
External links
1941 births
Living people
21st-century American physicists
Boston University faculty
Harvard University alumni
University of California, Berkeley alumni
Harvard University faculty
Wesleyan University alumni
American probability theorists
Members of the United States National Academy of Sciences
Fellows of the American Physical Society
American network scientists
Statistical physicists | H. Eugene Stanley | [
"Physics"
] | 1,135 | [
"Statistical physicists",
"Statistical mechanics"
] |
17,621,644 | https://en.wikipedia.org/wiki/Tacit%20Networks | Tacit Networks, Inc. was an IT company based in South Plainfield, New Jersey. It was founded in 2000.
Its product lines were:
iShared, which provided wide area file services and WAN optimization.
Mobiliti (via the acquisition of Mobiliti), which provided backup, synchronization and offline access services to mobile users.
On January 30, 2004, Tacit Networks acquired the assets of AttachStor. The AttachStor technology provided the basis for the email acceleration feature in the iShared product.
On December 30, 2005, Tacit Networks acquired the assets of Mobiliti and integrated the Mobiliti product line into its portfolio.
On May 15, 2006, Packeteer acquired Tacit Networks and integrated the iShared and Mobiliti product lines into the Packeteer portfolio.
See also
Wide area file services
WAN optimization
References
External links
iShared product page on Packeteer.com
Network performance
Defunct computer companies of the United States
Defunct computer hardware companies
Computer companies established in 2000
Computer companies disestablished in 2006
Companies based in Middlesex County, New Jersey
South Plainfield, New Jersey
2006 mergers and acquisitions | Tacit Networks | [
"Technology"
] | 234 | [
"Computing stubs",
"Computer company stubs"
] |
17,622,112 | https://en.wikipedia.org/wiki/Aquatic%20science | Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists tackle global problems, such as global oceanic change, as well as local problems, such as understanding why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers and marine biologists often utilize field observations, computer models, laboratory experiments, or field experiments for their research. In the field of oceanography, there are also chemical oceanographers and marine chemists. These scientists focus on the composition of seawater. They study the processes and cycles of seawater, as well as how seawater chemically interacts with the atmosphere and seafloor. Some examples of jobs that chemical oceanographers and marine chemists perform are analyzing seawater components, exploring the effects pollutants have on seawater, and analyzing the effects that chemical processes have on marine animals. In addition, a chemical oceanographer might use chemistry to better understand how ocean currents move seawater and how the ocean affects the climate. They might also search for ocean resources that could be beneficial, such as products that have medicinal properties.
The field of oceanography also consists of geological oceanographers and marine geologists who study the ocean floor and how its mountains, canyons, and valleys were formed. Geological oceanographers and marine geologists use sampling to examine the history of sea-floor spreading, plate tectonics, thermohaline circulation, and climates. In addition, they study undersea volcanoes as well as the Earth's mantle and hydrothermal circulation. Their research helps us to better understand the events that led to the creation of oceanic basins and how the ocean interacts with the seabed. Lastly, under the field of oceanography, there are physical oceanographers. Physical oceanographers are experts on the physical conditions and processes that occur naturally in the ocean. These include waves, currents, eddies, gyres, tides, and coastal erosion. Physical oceanographers also study topics such as the transmission of light and sound through water and the effects that the ocean has on weather and climate. All of these fields are intertwined. In order for an oceanographer to succeed in their field, they need to have an adequate understanding of other related sciences, such as biology, chemistry, and physics.
Limnology
Limnology is the study of freshwater environments, such as rivers, streams, lakes, reservoirs, groundwater, and marshlands. Limnologists work to understand the various natural and man-made factors that affect natural water bodies, such as pesticides, temperature, runoff, and aquatic life. For example, a limnologist might study the effects of pesticides on the temperature of a lake, or they might seek to understand why a certain species of fish in the Nile River is declining.
In order to increase their understanding of what they are studying, limnologists employ three main study techniques. The first study technique has to do with observations. Limnologists make descriptive observations of conditions and note how those conditions have changed over time. These observations allow limnologists to form theories and hypotheses. The second study technique that limnologists use has to do with experimentation. Limnologists conduct controlled experiments under laboratory conditions in order to further their understanding of the impact of small, individual changes in the ecosystem. Lastly, limnologists come up with predictions. After they have conducted their experiments, they can apply what they have learned to known data about the wider ecosystem and make predictions about the natural environment.
Within the field of limnology, there are more specific areas of study. One of those areas of study is ecology, particularly the ecology of water systems. The ecology of water systems focuses on the organisms that live in freshwater environments and how they are affected by changes in their habitat. For example, a limnologist specializing in ecology could study how chemical or temperature changes in a body of water inhibit or support new organic growth. Another aspect that they may examine is the effect of a nonnative species on native populations of aquatic life. Most ecological limnologists conduct their studies in laboratory settings, where their hypotheses can be tested, verified, and controlled. Another area of study under limnology is biology. Limnologists who specialize in the biology field only study the living aquatic organisms that are present in a certain freshwater environment. They aim to understand various aspects of the organisms, such as their history, their life cycles, and their populations. These scientists study living organisms in order to support the proper management of fresh bodies of water and their ecosystems.
Aquatic environments
Most aquatic environments contain both plants and animals. Aquatic plants are plants that grow in water. Examples of aquatic plants are waterlilies, floating hearts, the lattice plant, seagrass, and phytoplankton. Aquatic plants can be rooted in mud, such as the lotus flower, or they can be found floating on the surface of the water, such as the water hyacinth. Aquatic plants provide oxygen, food, and shelter for many aquatic animals. In addition, underwater vegetation provides several species of marine animals with grounds to spawn, nurse, take refuge, and forage.
Seagrass, for example, is a vital source of food for commercial and recreational fish. Seagrass stabilizes sediments, produces the organic material that small aquatic invertebrates need, and adds oxygen to the water. Phytoplankton are also an important class of aquatic plant. Phytoplankton are similar to terrestrial plants in that they require chlorophyll and sunlight to grow. Most phytoplankton are buoyant, floating in the upper part of the ocean, where sunlight penetrates the water. There are two main classes of phytoplankton: dinoflagellates and diatoms. Dinoflagellates have a whip-like tail called a flagellum, which they use to move through the water, and their bodies are covered with complex shells. Diatoms also have shells, but they are made of a different substance. Instead of relying on flagella to travel through the water, diatoms use ocean currents. Both classes of phytoplankton provide food for a variety of sea creatures, such as shrimp, snails, and jellyfish.
Both aquatic animals and plants contribute to the health of our environment and to the quality of human life. Humans depend on their ecological functions for survival and use surface waters and their inhabitants to process waste products. Aquatic plants and animals provide necessities such as medicine, food, energy, shelter, and several raw materials. Today, more than 40% of medicines are derived from aquatic plants and animals. Moreover, aquatic wildlife is an important source of food for many people. In addition, aquatic wildlife is a major source of atmospheric oxygen and plays a large role in protecting humans from new diseases, pests, predators, food shortages, and global climate change.
Aquatic animals
Aquatic animals are organisms that spend most of their life underwater. These animals consist of crustaceans, reptiles, mollusks, aquatic birds, aquatic insects, and even starfish and coral. Aquatic animals unfortunately face a lot of threats, with most of these threats resulting from human behaviors. One major threat that aquatic animals face is overfishing. Scientists have figured out a way to replenish the species of fish that humans have overhunted by creating marine protected areas or fish regeneration zones. These fish regeneration zones help protect their ecosystems and help rebuild their abundance. Another threat that aquatic animals face is pollution, particularly coastal pollution. This pollution is caused by industrial agriculture. These agricultural practices result in reactive nitrogen and phosphorus being poured into the rivers, which are then transported to the ocean. These chemicals have created what are known as "dead zones", areas where there is less oxygen in the water. Moreover, another detrimental threat that aquatic animals face is the threat of habitat destruction. This can be exemplified by the clearing of mangrove forests for shrimp production and the scraping of underwater mountain ranges through deep-sea trawling. Other threats that aquatic animals face are global warming and acidification. Global warming is responsible for killing the algae that keeps coral alive, forcing species out of their natural habitats and into new areas, and for causing sea levels to rise. Acidification, on the other hand, is decreasing the pH level of oceans. High acidity levels in the water are preventing marine-calcifying organisms, such as coral, from forming shells.
World Aquatic Animal Day
Although there are not many formal holidays celebrating aquatic science, a new one has been created called World Aquatic Animal Day. World Aquatic Animal Day was established on April 3, 2020, as a way to raise awareness for these often forgotten animals. The holiday began as a project of the Aquatic Animal Law Initiative and the Animal Law Clinic at the Lewis & Clark Law School as part of the Center for Animal Law Studies. In addition to raising awareness for these animals, this holiday aims to increase our appreciation and understanding of them. Under this holiday, the definition of aquatic animals is not limited to fish.
See also
GIS and aquatic science
Pan-American Journal of Aquatic Sciences
References
Marine biology
Limnology
Oceanography | Aquatic science | [
"Physics",
"Biology",
"Environmental_science"
] | 2,186 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Marine biology"
] |
357,796 | https://en.wikipedia.org/wiki/Constructivism%20%28philosophy%20of%20science%29 | Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to constructivists, natural science consists of mental constructs that aim to explain sensory experiences and measurements, and there is no single valid methodology in science but rather a diversity of useful methods. They also hold that the world is independent of human minds, but that knowledge of the world is always a human and social construction. Constructivism opposes the philosophy of objectivism, which embraces the belief that human beings can come to know the truth about the natural world directly, unmediated by scientific approximations with different degrees of validity and accuracy.
Constructivism and sciences
Social constructivism in sociology
One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized.
Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life.
Constructivism in philosophy of science
Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals.
The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models.
These models evolve over time as guided by scientific inspiration and experiments.
In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this that they become more open to considering other ways of interpreting any results of the research. Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof.
Constructivism and psychology
In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality."
In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding".
The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences.
Constructivism and education
Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change.
Kincheloe's characteristics of critical constructivism:
Knowledge is socially constructed: World and information co-construct one another
Consciousness is a social construction
Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness
The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability
The importance of uniting logic and emotion in the process of knowledge and producing knowledge
The inseparability of the knower and the known
The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements
The existence of multiple realities: Making sense of a world far more complex than we originally imagined
Becoming humble knowledge workers: Understanding our location in the tangled web of reality
Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge
Constructing practical knowledge for critical social action
Complexity: Overcoming reductionism
Knowledge is always entrenched in a larger process
The centrality of interpretation: Critical hermeneutics
The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information
Constructing new ways of being human: Critical ontology
Constructivist approaches
Critical constructivism
A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept.
Cultural constructivism
Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies.
Genetic epistemology
James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva.
Radical constructivism
Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but "circularly conjoined".
Radical constructivism is closely related to second-order cybernetics.
Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.
Relational constructivism
Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads. It maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world.
In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptional processes. Björn Kraus puts it in a nutshell:
Social Constructivism
Criticisms
Numerous criticisms have been levelled at constructivism. The most common one is that it either explicitly advocates or implicitly reduces to relativism.
Another criticism of constructivism is that it holds that the concepts of two different social formations are entirely different and incommensurable. This being the case, it is impossible to make comparative judgments about statements made according to each worldview, because the criteria of judgment will themselves have to be based on some worldview or other. If this is the case, then it brings into question how communication between them about the truth or falsity of any given statement could be established.
The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them.
See also
Autopoiesis
Consensus reality
Constructivism in international relations
Cultural pluralism
Epistemological pluralism
Tinkerbell effect
Map–territory relation
Meaning making
Metacognition
Ontological pluralism
Personal construct psychology
Perspectivism
Pragmatism
References
Further reading
Devitt, M. 1997. Realism and Truth, Princeton University Press.
Gillett, E. 1998. "Relativism and the Social-constructivist Paradigm", Philosophy, Psychiatry, & Psychology, Vol.5, No.1, pp. 37–48
Ernst von Glasersfeld 1987. The construction of knowledge, Contributions to conceptual semantics.
Ernst von Glasersfeld 1995. Radical constructivism: A way of knowing and learning.
Joe L. Kincheloe 2001. Getting beyond the Facts: Teaching Social Studies/Social Science in the Twenty-First Century, NY: Peter Lang.
Joe L. Kincheloe 2005. Critical Constructivism Primer, NY: Peter Lang.
Joe L. Kincheloe 2008. Knowledge and Critical Pedagogy, Dordrecht, The Netherlands: Springer.
Kitching, G. 2008. The Trouble with Theory: The Educational Costs of Postmodernism, Penn State University Press.
Björn Kraus 2014: Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal. Retrieved 3 April 2019.(http://www.socwork.net/sws/article/view/393)
Björn Kraus 2015: The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018.(http://www.socwork.net/sws/article/view/438).
Björn Kraus 2019: Relational constructivism and relational social work. In: Webb, Stephen, A. (edt.) The Routledge Handbook of Critical Social Work. Routledge international Handbooks. London and New York: Taylor & Francis Ltd.
Friedrich Kratochwil: Constructivism: what it is (not) and how it matters, in Donatella della Porta & Michael Keating (eds.) 2008, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 80–98.
Mariyani-Squire, E. 1999. "Social Constructivism: A flawed Debate over Conceptual Foundations", Capitalism, Nature, Socialism, vol.10, no.4, pp. 97–125
Matthews, M.R. (ed.) 1998. Constructivism in Science Education: A Philosophical Examination, Kluwer Academic Publishers.
Edgar Morin 1986, La Méthode, Tome 3, La Connaissance de la connaissance.
Nola, R. 1997. "Constructivism in Science and in Science Education: A Philosophical Critique", Science & Education, Vol.6, no.1-2, pp. 55–83.
Jean Piaget (ed.) 1967. Logique et connaissance scientifique, Encyclopédie de la Pléiade, vol. 22. Editions Gallimard.
Herbert A. Simon 1969. The Sciences of the Artificial (3rd Edition MIT Press 1996).
Slezak, P. 2000. "A Critique of Radical Social Constructivism", in D.C. Philips, (ed.) 2000, Constructivism in Education: Opinions and Second Opinions on Controversial Issues, The University of Chicago Press.
Suchting, W.A. 1992. "Constructivism Deconstructed", Science & Education, vol.1, no.3, pp. 223–254
Paul Watzlawick 1984. The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism), W W. Norton.
Tom Rockmore 2008. On Constructivist Epistemology.
Romm, N.R.A. 2001. Accountability in Social Research, Dordrecht, The Netherlands: Springer. https://www.springer.com/social+sciences/book/978-0-306-46564-2
External links
Journal of Constructivist Psychology
Radical Constructivism
Constructivist Foundations
Epistemological theories
Epistemology of science
Metatheory of science
Philosophical analogies
Social constructionism
Social epistemology
Systems theory
Theories of truth
Constructivism | Constructivism (philosophy of science) | [
"Technology"
] | 2,848 | [
"Social epistemology",
"Science and technology studies"
] |
357,826 | https://en.wikipedia.org/wiki/Joseph%20Needham | Noel Joseph Terence Montgomery Needham (; 9 December 1900 – 24 March 1995) was a British biochemist, historian of science and sinologist known for his scientific research and writing on the history of Chinese science and technology, initiating publication of the multivolume Science and Civilisation in China. A focus of his was what has come to be called the Needham Question of why and how China had ceded its leadership in science and technology to Western countries.
He was elected a fellow of the Royal Society in 1941 and a fellow of the British Academy in 1971. In 1992, Queen Elizabeth II conferred on him the Order of the Companions of Honour, and the Royal Society noted he was the only living person to hold these three titles.
Early life
Needham's father, Joseph, was a doctor, and his mother, Alicia Adelaïde, née Montgomery (1863–1945), was a music composer from Oldcastle, County Meath, Ireland. His father, born in East London, then a poor section of town, rose to become a Harley Street physician, but frequently battled with Needham's mother. The young Needham often mediated. In his early teens, he was taken to hear the Sunday lectures of Ernest Barnes, a professional mathematician who became Master of the Temple, a royal church in London. Barnes inspired an interest in the philosophers and medieval scholastics that Needham pursued in his father's library. Needham later attributed his strong Christian faith to Barnes' philosophical theology, which was founded on rational argument, and attributed his openness to the religions of other cultures to Barnes as well.
In 1914, with the outbreak of World War I, Needham was sent to Oundle School, founded in 1556 in Northamptonshire. He did not enjoy leaving home, but he later described the headmaster Frederick William Sanderson as a "man of genius" and said that without that influence on him at a tender age, he might not have attempted his largest work. Sanderson had been charged by the school's governors with developing a science and technology programme, which included a metal shop that gave the young Needham a grounding in engineering. Sanderson also emphasised to the boys of the school that co-operation led to higher human achievement than competition and that knowledge of history was necessary to build a better future. The Bible, in Sanderson's teaching, supplied archaeological knowledge to compare with the present. During school holidays, Needham assisted his father in the operating rooms of several wartime hospitals, an experience that convinced him that he was not interested in becoming a surgeon. The Royal Navy, however, appointed him a surgeon sub-lieutenant, a position that he held for only a few months.
Education
In 1921, Needham graduated with a Bachelor of Arts degree from Gonville and Caius College, Cambridge.
In January 1925, Needham earned an MA. In October 1925, Needham earned a PhD. He had intended to study medicine, but came under the influence of Frederick Hopkins, resulting in his switch to biochemistry.
Career
After graduation, Needham was elected to a fellowship at Gonville and Caius College and worked in Gowland Hopkins' laboratory at the University Department of Biochemistry, specialising in embryology and morphogenesis. His three-volume work Chemical Embryology, published in 1931, includes a history of embryology from Egyptian times up to the early 19th century, including quotations in most European languages. Including this history reflected Needham's fear that overspecialization would hold back scientific progress and his view that social and historical forces shaped science. At that time the Cambridge school of biochemistry was recognised for imaginative, exploratory science and included such outstanding scientists as Hopkins, Dorothy M. Needham (later his wife), Robin Hill and Barcroft, who were joined by Rudi Lemberg on a Rockefeller Foundation fellowship.
In 1936, he and several other Cambridge scientists founded the History of Science Committee. The Committee included conservatives but also Marxists like J.D. Bernal, whose views on the social and economic frameworks of science influenced Needham.
Needham's Terry Lecture of 1936 was published by Cambridge University Press in association with Yale University Press under the title of Order and Life. In 1939 he produced a massive work on morphogenesis that a Harvard reviewer claimed "will go down in the history of science as Joseph Needham's magnum opus," little knowing what would come later.
Although his career as a biochemist and an academic was well established, it developed in unanticipated directions during World War II, shaped by his evident interest in history. In 1939, Needham set out his thesis that "man's intellectual progress cannot be understood save in the light of his social progress", illustrated by the historical period that saw the rebirth of experimental science in Europe, and in England in particular, at the end of the seventeenth century. He was then writing the foreword, with "particular pleasure", to a friend's little book on the Levellers. Needham wrote: "Merton has shown how puritan were the early Fellows of the Royal Society", and went on to credit Holorenshaw with pointing out "that no less than the men of property, the Levellers realised the social importance of science, and foresaw the part it would one day play in human welfare."
Three Chinese scientists came to Cambridge for graduate study in 1937: Lu Gwei-djen, Wang Ying-lai, and Shen Shih-Chang (, the only one under Needham's tutelage). Lu, daughter of a Nanjing pharmacist, taught Needham Chinese, igniting his interest in China's ancient technological and scientific past. He then pursued, and mastered, the study of Classical Chinese privately with Gustav Haloun.
Under the Royal Society's direction, Needham was the director of the Sino-British Science Co-operation Office in Chongqing from 1942 to 1946. During this time he made several long journeys through war-torn China and many smaller ones, visiting scientific and educational establishments and obtaining for them much needed supplies. His longest trip, in late 1943, ended in the far west, in Gansu, at the caves in Dunhuang at the end of the Great Wall, where the earliest dated printed book - a copy of the Diamond Sutra - was found. The other long trip reached Fuzhou on the east coast, returning across the Xiang River just two days before the Japanese blew up the bridge at Hengyang and cut off that part of China. In 1944 he visited Yunnan in an attempt to reach the Burmese border. Everywhere he went he purchased and was given old historical and scientific books, which he shipped back to Britain through diplomatic channels. They were to form the foundation of his later research. He got to know Zhou Enlai, first Premier of the People's Republic of China, and met numerous Chinese scholars, including the painter Wu Zuoren, and the meteorologist Zhu Kezhen, who later sent crates of books to him in Cambridge, including 2,000 volumes of the Gujin Tushu Jicheng encyclopaedia, a comprehensive record of China's past.
On his return to Europe, he was asked by Julian Huxley to become the first head of the Natural Sciences Section of UNESCO in Paris, France. In fact it was Needham who insisted that science should be included in the organisation's mandate at an earlier planning meeting.
After two years in which the suspicions of the Americans over scientific co-operation with communists intensified, Needham resigned in 1948 and returned to Gonville and Caius College, where he resumed his fellowship and his rooms, which were soon filled with his books.
He devoted his energy to the history of Chinese science until his retirement in 1990, even though he continued to teach some biochemistry until 1993. Needham's reputation recovered from the Korean affair (see below) such that by 1959 he was elected as president of the fellows of Caius College and in 1965 he became Master (head) of the college, a post which he held until he was 76.
Science and Civilisation in China
In 1948, Needham proposed a project to the Cambridge University Press for a book on Science and Civilisation in China. Within weeks of being accepted, the project had grown to seven volumes, and it has expanded ever since. His initial collaborator was the historian Wang Ling, whom he had met in Lizhuang and obtained a position for at Trinity. The first years were devoted to compiling a list of every mechanical invention and abstract idea that had been made and conceived in China. These included cast iron, the ploughshare, the stirrup, gunpowder, printing, the magnetic compass and clockwork escapements, most of which were thought at the time to be western inventions. The first volume eventually appeared in 1954.
The publication received widespread acclaim, which intensified to lyricism as the further volumes appeared. He wrote fifteen volumes himself, and the regular production of further volumes continued after his death in 1995. Later, Volume III was divided, so that 27 volumes have now been published. Successive volumes are published as they are completed, which means that they do not appear in the order originally contemplated in the project's prospectus.
Needham's final organizing schema was:
Vol. I. Introductory Orientations
Vol. II. History of Scientific Thought
Vol. III. Mathematics and the Sciences of the Heavens and Earth
Vol. IV. Physics and Physical Technology
Vol. V. Chemistry and Chemical Technology
Vol. VI. Biology and Biological Technology
Vol. VII. The Social Background
See Science and Civilisation in China for a full list.
The project is still proceeding under the guidance of the Publications Board of the Needham Research Institute, directed by Professor Mei Jianjun.
UNESCO
Needham, along with colleague Julian Huxley, was one of the founders of the United Nations Educational, Scientific, and Cultural Organization (UNESCO). Developed in 1945 with the help of Allied governments, UNESCO is an international organization that aims to bring education to regions that had been affected by Nazi occupation. Needham and Huxley advocated the growth of scientific education as a means to overcome political conflict and hence founded UNESCO in an effort to expand its influence. Composed of representatives from various Allied countries, UNESCO operated on the principle that ideas and information should spread freely among nations. However, Needham disagreed with this initial mode of exchange because of its failure to include nations outside of Europe and America.
To communicate his discordance with the model, Needham wrote and distributed a formal message to others in the organization explaining its flaws. He stated that nations outside of the European-American "bright zone", or primary location of scientific advancement, needed the help of international education the most. He also argued that the lack of familiarity between other nations and those in the bright zone made ideological exchange difficult. Finally, he expressed the notion that other countries had issues disseminating knowledge because they lacked the capital necessary for distribution. Due to these constraints, Needham suggested that most of the organization's support should be given to the "periphery" nations that lie outside of the bright zone.
In addition to supporting periphery nations, Needham incorporated his desire for a non-Eurocentric record of science in UNESCO's mission. To this end, Huxley and Needham devised an ambitious scholarly project they called The History of Scientific and Cultural Development of Mankind (shortened to History of Mankind). The goal of this project was to write a non-ethnocentric account of scientific and cultural history; it aimed to synthesize the contributions, perspectives, and development of oriental nations in the East in a way that was complementary to the Western scientific tradition. This vision was partly influenced by the political climate of the time of its planning in the late 1940s - the "East" and "West" were seen as cultural and political opposites. Working from the belief that science was the universal experience that bound humanity, Huxley and Needham hoped that their project would help ease some of the animosity between the two spheres. The project involved hundreds of scholars from around the globe and took over a decade to reach fruition in 1966. The work is still continued today with new volumes published periodically.
The Needham Question
"Needham's Grand Question", also known as "The Needham Question", is this: why had China been overtaken by the West in science and technology, despite their earlier successes? In Needham's words,
"Why did modern science, the mathematization of hypotheses about Nature, with all its implications for advanced technology, take its meteoric rise only in the West at the time of Galileo?", and why it "had not developed in Chinese civilization" which, in the previous many centuries "was much more efficient than occidental in applying" natural knowledge to practical needs.
Francis Bacon considered four inventions as completely transforming the modern world, marking it off from the antiquity of the Middle Ages: paper and printing, gunpowder, and the magnetic compass. He regarded the origins of these inventions as 'obscure and inglorious', dying without ever knowing that all of them were Chinese. Part of Needham's work attempts to "put this record straight".
Needham's works attribute significant weight to the impact of Confucianism and Taoism on the pace of Chinese scientific discovery, and emphasises the "diffusionist" approach of Chinese science as opposed to a perceived independent inventiveness in the western world.
Needham thought the notion that the Chinese script had inhibited scientific thought was "grossly overrated".
His own research revealed a steady accumulation of scientific results throughout Chinese history. In the final volume he suggests "A continuing general and scientific progress manifested itself in traditional Chinese society but this was violently overtaken by the exponential growth of modern science after the Renaissance in Europe. China was homeostatic, but never stagnant."
Nathan Sivin, one of Needham's collaborators, while agreeing that Needham's achievement was monumental, suggested that the "Needham question", as a counterfactual hypothesis, was not conducive to a useful answer:
There are several hypotheses attempting to explain the Needham Question. Yingqiu Liu and Chunjiang Liu argued that the issue rested on the lack of property rights and that those rights were only obtainable through favour of the emperor. Protection was incomplete as the emperor could rescind those rights at any time. Science and technology were subjugated to the needs of the feudal royal family, and any new discoveries were sequestered by the government for its use. The government took steps to control and interfere with private enterprises by manipulating prices and engaging in bribery. Each revolution in China redistributed property rights under the same feudal system. Land and property were reallocated first and foremost to the royal family of the new dynasty up until the late Qing Dynasty (1644–1911) when fiefdom land was taken over by warlords and merchants. These limited property rights constrained potential scientific innovations.
The Chinese Empire enacted totalitarian control and was able to do so because of its great size; smaller independent states had no choice but to comply with this control, since they could not afford to isolate themselves. The well-being of the state was treated as the primary motive for economic activity, and individual initiative was shunned. There were regulations on the press, clothing, construction, music, birth rates, and trade. The state controlled most aspects of life, severely limiting incentives to innovate and to better oneself: "The ingenuity and inventiveness of the Chinese would no doubt have enriched China further and probably brought it to the threshold of modern industry, had it not been for this stifling state control. It is the State that kills technological progress in China". The absence of a free market also meant that the Chinese were restricted from trading with foreigners, even though foreign trade is a source of new knowledge and new products and promotes innovation and the expansion of a country's markets. As Landes (2006, p. 6) puts it, when the new emperor Hongwu was inaugurated in 1368, his main objective was war; revenue that could otherwise have supported innovation was lost to warfare, and heavy participation in war hindered China's capacity to focus on industrial development. Landes further explains that the Chinese were expected to stay put and never to move without permission from the state, and that "The Ming code of core laws also sought to block social mobility" (Landes, 2006, p. 7). On this account, an industrial revolution was unlikely in a country whose people were barred from social mobility and whose government showed little interest in innovation.
According to Justin Lin, China did not make the shift from an experience-based process of technological invention to an experiment-based process of innovation. The experience-based process depended on the size of the population: new technologies came about through the trials and errors of peasants and artisans. Experiment-based processes surpass experience-based processes in yielding new technology, because progress from experimentation following the logic of the scientific method can occur at a much faster rate; the inventor can perform many trials during the same production period under controlled conditions. Results from experimentation depend on the stock of scientific knowledge, while results from experience-based processes are tied directly to the size of the population; hence, experiment-based innovation processes have a higher likelihood of producing better technology as human capital grows. China had about twice the population of Europe until the 13th century and so had a higher probability of creating new technologies. After the 14th century, China's population grew exponentially, but progress in innovation saw diminishing returns. Europe had a smaller population but began to integrate science and technology arising from the scientific revolution of the 17th century, which gave Europe a comparative advantage in developing technology in modern times.
Lin blamed the institutions in China for preventing the adoption of the experiment-based methodology. China's sociopolitical institutions inhibited intellectual creativity, but more importantly, they diverted this creativity away from scientific endeavours. Totalitarian control by the state in the Chinese Empire inhibited public dispute, competition, and the growth of modern science, while the clusters of independent European nations were more favourable to competition and scientific development. In addition, the Chinese did not have the incentives to acquire the human capital necessary for modern scientific experimentation. Civil service was deemed the most rewarding and honourable work in pre-modern China, so the gifted had more incentive to pursue that route up the social ladder than to pursue scientific endeavours. This lack of innovation allowed China to be overtaken by accelerating European technological advancement. As Landes (2006) puts it, the Chinese were ruled by an emperor, the "Son of Heaven", regarded as unique and godlike, whose representatives were chosen through "competitive examinations in Confucian letters and morals"; these officials, possessing a high degree of self-esteem, were arrogant toward those below them and deferential to those above. This downward tyranny, combined with cultural triumphalism, made China a bad learner as a state (p. 11), unwilling to accept information from those it regarded as inferiors.
The high-level equilibrium trap offers a further explanation. A large population can supply cheap labour, but it can also hold back development, since the ratio of land to people falls as the population grows. China was significantly affected by this population factor from the thirteenth century onward, at the point when an industrial revolution might have ignited. As Lin (1995, p. 271) explains, Chinese culture traditionally valued male children, which encouraged early marriage, boosted fertility rates and led to rapid population growth. An increase in population without an equivalent increase in economic and technological development ultimately strains the available resources. The rising man-to-land ratio in China meant a diminishing surplus per capita, so China lacked the surplus resources that could have been tapped to ignite an industrial revolution. Europeans, by contrast, enjoyed a favourable man-to-land ratio, with little strain on land and vast unexploited technological and economic possibilities, advantages made possible in part by the feudal system they had embraced (p. 272). Lin (1995) adds that although Europe lagged behind China in economic and technological development during the pre-modern era, the time eventually came for Europe to use its accumulated knowledge: a strong need to save labour was finally felt, and the earlier agrarian revolution provided the agricultural surplus that served as the core asset financing the industrial revolution (p. 272). The relative abundance of land in Europe thus helped make an industrial revolution attainable there, whereas in China the large population strained the available resources and made such a revolution unattainable in the early fourteenth century.
Evaluations and critiques
Needham's work has been criticised by most scholars who assert that it has a strong inclination to exaggerate Chinese technological achievements and has an excessive propensity to assume a Chinese origin for the wide range of objects his work covered. Pierre-Yves Manguin writes, for instance:
J Needham's (1971) monumental work on Chinese nautics offers by far the most scholarly synthesis on the subjects of Chinese shipbuilding and navigation. His propensity to view the Chinese as the initiators of all things and his constant references to the superiority of Chinese over the rest of the world's techniques does at times detract from his argument.
In another vein of criticism, Andre Gunder Frank's Re-Orient argues that despite Needham's contributions in the field of Chinese technological history, he still struggled to break free from his preconceived notions of European exceptionalism. Re-Orient criticizes Needham for his Eurocentric assumptions borrowed from Marx and the presupposition of Needham's famous Grand Question that science was a uniquely Western phenomenon. Frank observes:
Alas, it was also originally Needham's Marxist and Weberian point of departure. As Needham found more and more evidence about science and technology in China, he struggled to liberate himself from his Eurocentric original sin, which he had inherited directly from Marx, as Cohen also observes. But Needham never quite succeeded, perhaps because his concentration on China prevented him from sufficiently revising his still ethnocentric view of Europe itself.
T. H. Barrett asserts in The Woman Who Discovered Printing that Needham was unduly critical of Buddhism, describing it as having 'tragically played a part in strangling the growth of Chinese science,' to which Needham readily conceded in a conversation a few years later. Barrett also criticizes Needham's favoritism and uncritical evaluation of Taoism in Chinese technological history:
He had a tendency — not entirely justified in the light of more recent research — to think well of Taoism, because he saw it as playing a part that could not be found elsewhere in Chinese civilization. The mainstream school of thinking of the bureaucratic Chinese elite, or 'Confucianism' (another problematic term) in his vocabulary, seemed to him to be less interested in science and technology, and to have 'turned its face away from Nature.' Ironically, the dynasty that apparently turned away from printing from 706 till its demise in 907 was as Taoist as any in Chinese history, though perhaps its 'state Taoism' would have seemed a corrupt and inauthentic business to Needham.
Daiwie Fu, in the essay "On Mengxi bitan's World of Marginalities and 'South-pointing Needles': Fragment Translation vs. Contextual Tradition", criticises Needham, among other Western scholars, for translations that select fragments deemed "scientific", usually without appreciating the unity of the text, the context of the quotation, or the taxonomy in which those fragments are embedded, and that then reorganize and reinterpret them within a new, Western taxonomy and narrative. Needham used this process of selection and re-assembly to argue for a Chinese tradition of science that did not exist as such.
Justin Lin argues against Needham's premise that China's early adoption of modern socioeconomic institutions contributed heavily to its technological advancement. Lin contends that technological advancements at this time were largely separate from economic circumstance, and that the effects of these institutions on technological advancement were indirect.
Political involvement
Needham's political views were unorthodox and his lifestyle controversial. His left-wing stance was based in a form of Christian socialism. However he was influenced by Louis Rapkine and Liliana Lubińska, both Marxists brought up with a Jewish anti-clerical outlook. He never joined any Communist Party. After 1949 his sympathy with Chinese culture was extended to the new government. During his stay in China, Needham was asked to analyse some cattle cakes that had been scattered by American aircraft in the south of China at the end of World War II, and found they were impregnated with anthrax. During the Korean War he made further accusations that the Americans had used biological warfare. Zhou Enlai coordinated an international campaign to enlist Needham for a study commission, tacitly offering access to materials and contacts in China needed for his then early research. Needham agreed to be an inspector in North Korea and his report supported the allegations (it is debated to this very day whether the evidence had been planted as a part of a complicated disinformation campaign). Needham's biographer Simon Winchester claimed that "Needham was intellectually in love with communism; and yet communist spymasters and agents, it turned out, had pitilessly duped him." Needham was blacklisted by the US government until well into the 1970s.
In 1965, with Derek Bryan, a retired diplomat whom he first met in China, Needham established the Society for Anglo-Chinese Understanding, which for some years provided the only way for British subjects to visit the People's Republic of China. On a visit to China in 1964 he was met by Zhou Enlai, and in 1965 stated that "China has a better government now than for centuries", but on a visit in 1972 he was deeply depressed by the changes under the Cultural Revolution.
Personal life
Needham married the biochemist Dorothy Moyle (1896–1987) in 1924 and they became the first husband and wife both to be elected as Fellows of the Royal Society.
Simon Winchester notes that, in his younger days, Needham was an avid gymnosophist and he was always attracted by pretty women. When he and Lu Gwei-djen met in 1937, they fell deeply in love, which Dorothy accepted. The three of them eventually lived contentedly on the same road in Cambridge for many years. In 1989, two years after Dorothy's death, Needham married Lu, who died two years later. He suffered from Parkinson's disease from 1982, and died at the age of 94 at his Cambridge home. In 2008, the Chair of Chinese in the University of Cambridge, a post never awarded to Needham, was endowed in his honour as the Joseph Needham Professorship of Chinese History, Science and Civilisation. Since 2016, an annual Needham Memorial Lecture is held at Clare College.
Needham was a high church Anglo-Catholic who worshipped regularly at Ely Cathedral and in the college chapel, but he also described himself as an "honorary Taoist".
Honours and awards
In 1961, Needham was awarded the George Sarton Medal by the History of Science Society and in 1966 he became Master of Gonville and Caius College.
In 1979, Joseph Needham received the Dexter Award for Outstanding Achievement in the History of Chemistry from the American Chemical Society.
In 1984, Needham became the fourth recipient of the J.D. Bernal Award, awarded by the Society for Social Studies of Science. In 1990, he was awarded the Fukuoka Asian Culture Prize by Fukuoka City.
The Needham Research Institute in Robinson College in Cambridge, devoted to the study of China's scientific history, was opened in 1985 by Prince Philip, Duke of Edinburgh and Chancellor of Cambridge University.
Order of the Companions of Honour, 1992.
British Academy, 1971.
Royal Society, 1941.
Works
Science, Religion and Reality (1925)
Man a Machine (1927) Kegan Paul
Chemical Embryology (1931) C.U.P.
The Great Amphibium: Four Lectures on the Position of Religion in a World Dominated by Science (1931)
A History of Embryology (1934, 1959) C.U.P.
Order and Life The Terry Lectures (1936)
Biochemistry and Morphogenesis (1942)
Time: The Refreshing River (Essays and Addresses, 1932–1942) (1943)
Chinese Science (1945) Pilot Press
History Is On Our Side (1947)
Science Outpost; Papers of the Sino-British Science Co-Operation Office (British Council Scientific Office in China) 1942–1946 (1948) Pilot Press
Science and Civilisation in China (1954–2008...) C.U.P. – 27 volumes to date
The Grand Titration: Science and Society in East and West (1969) Allen & Unwin
Within the Four Seas: The Dialogue of East and West (1969)
Clerks and Craftsmen in China and the West: Lectures and Addresses on the History of Science and Technology (1970) C.U.P.
Chinese Science: Explorations of an Ancient Tradition (1973) Ed. Shigeru Nakayama, Nathan Sivin. Cambridge : MIT Press
Moulds of Understanding: A Pattern of Natural Philosophy (1976) Allen & Unwin
The Shorter Science and Civilisation in China (5 volumes) (1980–95) – an abridgement by Colin Ronan
Science in Traditional China : A Comparative Perspective (1982)
The Genius of China (1986) A one-volume distillation by Robert Temple Simon & Schuster
Heavenly Clockwork : The Great Astronomical Clocks of Medieval China (1986) C.U.P.
The Hall of Heavenly Records : Korean Astronomical Instruments and Clocks, 1380–1780 (1986) C.U.P.
A Selection from the Writings of Joseph Needham ed Mansel Davies, The Book Guild 1990
See also
Four Great Inventions
G. E. R. Lloyd
List of sinologists
List of historians
The Rise of the West
References
Citations
Sources
Biographical
Sarah Lyall. "Joseph Needham, China Scholar from Britain, Dies at 94", The New York Times. 27 March 1995.
Robert P. Multhauf, "Joseph Needham (1900–1995)," Technology and Culture 37.4 (1996): 880–891. .
Roel Sterckx. In the Fields of Shennong: An inaugural lecture delivered before the University of Cambridge on 30 September 2008 to mark the establishment of the Joseph Needham Professorship of Chinese History, Science and Civilization. Cambridge: Needham Research Institute, 2008 ().
Published in Great Britain as Bomb, Book and Compass.
A popular biography characterized by Nathan Sivin as a "sniggering biography by a writer who specializes in rollicking tales of English eccentrics" and is "unprepared to deal with [Needham's] historic work." —
Francesca Bray, "How Blind Is Love?: Simon Winchester's The Man Who Loved China, Technology and Culture 51.3 (2010): 578–588. .
The "Needham Question"
Elvin, Mark, "Introduction (Symposium: The Work of Joseph Needham)", Past & Present no. 87 (1980): 17–20. .
Cullen, Christopher, "Joseph Needham on Chinese Astronomy," Past & Present no. 87 (1980): 39–53. .
Justin Y. Lin, "The Needham Puzzle: Why the Industrial Revolution Did Not Originate in China," Economic development and cultural change 43.2 (1995): 269–292. .
Timothy Brook, "The Sinology of Joseph Needham," Modern China 22.3 (1996): 340–348. .
Robert P. Multhauf, "Joseph Needham (1900–1995)," Technology and Culture 37.4 (1996): 880–891. .
Gregory Blue, "Joseph Needham, Heterodox Marxism and the Social Background to Chinese Science," Science & Society 62.2 (1998): 195–217. .
Robert Finlay, "China, the West, and World History in Joseph Needham's Science and Civilisation in China," Journal of World History 11 (Fall 2000): 265–303.
Further reading
Yoke, Ho Peng. Reminiscences of a Roving Scholar: Science, Humanities and Joseph Needham''. xii, 240 pp. Singapore: World Scientific Publishing, 2005.
External links
English
Interview with biographer Simon Winchester on ABC Brisbane September 2000
Needham Research Institute (NRI)
Science and Civilisation in China
Asian Philosophy and Critical Thinking Divergence or Convergence?
Guide to manuscripts by British scientists: N, O.
BBC Radio4 'In Our Time' audio stream on the Needham Question.
Question marks: Chinese invention – The Economist, 5 June 2008, review of Needham biography by Simon Winchester
Needham's wartime photos in China
The Answer to the Needham Question?
Imperial War Museum Interview
Joseph Needham Collection of digitised photographs and journals from the NRI archive in Cambridge Digital Library
Chinese
Xinhua:Today's NRI
Papers in Chinese 1991–2004 on Needham and his Grand Question
Needham and his early knowledge on Chinese culture
1900 births
1995 deaths
20th-century British historians
20th-century British scientists
Alumni of Gonville and Caius College, Cambridge
Anglican scholars
Anglo-Catholic socialists
British biochemists
British Christian socialists
British expatriates in China
British historians of science
British science writers
British sinologists
Christian communists
Deaths from Parkinson's disease in England
Fellows of Gonville and Caius College, Cambridge
Fellows of the British Academy
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Foreign members of the Chinese Academy of Sciences
Historians of astronomy
History of science and technology in China
Leonardo da Vinci Medal recipients
Masters of Gonville and Caius College, Cambridge
Members of the Order of the Companions of Honour
People educated at Oundle School
Recipients of the Fukuoka Prize
UNESCO officials | Joseph Needham | [
"Astronomy"
] | 7,338 | [
"People associated with astronomy",
"Historians of astronomy",
"History of astronomy"
] |
357,828 | https://en.wikipedia.org/wiki/Zolpidem | Zolpidem, sold under the brand name Ambien among others, is a medication primarily used for the short-term treatment of sleeping problems. Guidelines recommend that it be used only after cognitive behavioral therapy for insomnia and after behavioral changes, such as sleep hygiene, have been tried. It decreases the time to sleep onset by about fifteen minutes and at larger doses helps people stay asleep longer. It is taken by mouth and is available as conventional tablets, extended-release tablets, or sublingual tablets.
Common side effects include daytime sleepiness, headache, nausea, and diarrhea. More severe side effects include memory problems and hallucinations. While flumazenil, a GABAA–receptor antagonist, can reverse zolpidem's effects, usually supportive care is all that is recommended in overdose.
Zolpidem is a nonbenzodiazepine, or Z-drug, which acts as a sedative and hypnotic as a positive allosteric modulator at the GABAA receptor. It is an imidazopyridine and increases GABA effects in the central nervous system by binding to GABAA receptors at the same location as benzodiazepines.
In 2025, it became known that it also suppresses the norepinephrine effect and reduces glymphatic flow, i.e. it suppresses the brain's waste disposal, which explains some of its adverse effects.
It generally has a half-life of two to three hours. This, however, is increased in those with liver problems.
Zolpidem was approved for medical use in the United States in 1992. It became available as a generic medication in 2007. Zolpidem is a schedule IV controlled substance in the US under the Controlled Substances Act of 1970 (CSA). More than 10 million prescriptions are filled each year in the United States, making it one of the most commonly used treatments for sleeping problems. In 2022, it was the 66th most commonly prescribed medication in the United States, with more than 9 million prescriptions.
Medical uses
Zolpidem is labeled for short-term (usually about two to six weeks) treatment of insomnia at the lowest possible dose. It may be used both for improving sleep onset (shortening sleep onset latency) and for staying asleep.
Guidelines from NICE, the European Sleep Research Society, and the American College of Physicians recommend medication for insomnia (including possible zolpidem) only as a second-line treatment after non-pharmacological treatment options have been tried (e.g. cognitive behavioral therapy for insomnia). This is based in part on a 2012 review which found that zolpidem's effectiveness is nearly as much due to psychological effects as to the medication itself.
Contraindications
Use of zolpidem may impair driving skills with a resultant increased risk of road traffic accidents. This adverse effect is not unique to zolpidem but also occurs with other hypnotic drugs. Caution should be exercised by motor vehicle drivers. The U.S. Food and Drug Administration (FDA) recommends lower doses of zolpidem due to impaired function the day after taking it.
Zolpidem should not be prescribed to older people, who are more sensitive to the effects of hypnotics including zolpidem, and are at an increased risk of falls and adverse cognitive effects, such as delirium and neurocognitive disorder.
Animal studies have revealed evidence of incomplete ossification and increased intrauterine fetal death at doses greater than seven times the maximum recommended human dose or higher; however, teratogenicity was not observed at any dose level. There are no controlled data on human pregnancy. In one case report, zolpidem was found in cord blood at delivery. Zolpidem is recommended for use during pregnancy only when the benefits outweigh the risks.
Adverse effects
The most common adverse effects of short-term use include headache (reported by 7% of people in clinical trials), drowsiness (2%), dizziness (1%), and diarrhea (1%); the most common side effects of long-term use included drowsiness (8%), dizziness (5%), allergy (4%), sinusitis (4%), back pain (3%), diarrhea (3%), drugged feeling (3%), dry mouth (3%), lethargy (3%), sore throat (3%), abdominal pain (2%), constipation (2%), heart palpitations (2%), lightheadedness (2%), rash (2%), abnormal dreams (1%), amnesia (1%), chest pain (1%), depression (1%), flu-like symptoms (1%), and sleep disorder (1%).
Zolpidem increases the risk of depression, falls and bone fracture, poor driving, suppressed respiration and has been associated with an increased risk of death. Upper and lower respiratory infections are also common (experienced by 1–10% of people).
Residual 'hangover' effects, such as sleepiness and impaired psychomotor and cognitive function, may persist into the day following nighttime administration. Such effects may impair the ability of users to drive safely and increase risks of falls and hip fractures. Around 3% of people taking zolpidem are likely to break a bone as a result of a fall due to impaired coordination caused by the drug.
Sleepwalking and complex sleep behaviors
Zolpidem is associated with complex sleep behaviors (CSBs), defined as activities performed during sleep followed by amnesia. These activities may include walking, driving, eating, having sex, having conversations, and performing other daily activities while asleep. Research by Australia's National Prescribing Service found these activities typically occur after the first dose or within a few days of starting therapy, although they may occur at any time during treatment.
Concerns regarding zolpidem-related CSBs have prompted actions by regulatory authorities, including Australia's Therapeutic Goods Administration (TGA) and the U.S. Food and Drug Administration (FDA). In February 2008, the TGA implemented a boxed warning for the drug. In January 2013, the FDA issued a safety communication addressing next-morning cognitive impairment associated with the drug. In May 2013, the FDA recommended avoiding activities requiring alertness the day after using extended-release formulations. In April 2019, the FDA strengthened the drug's warning labeling by adding a black box warning highlighting the risk of serious injuries and fatalities related to CSBs, even at recommended doses and after single use, and added a contraindication advising against zolpidem use in patients with a history of CSBs.
Tolerance, dependence and withdrawal
As zolpidem is associated with drug tolerance and substance dependence, its prescription guidelines are only for severe insomnia and short periods of use at the lowest effective dose. Tolerance to the effects of zolpidem can develop in some people in just a few weeks. Abrupt withdrawal may cause delirium, seizures, or other adverse effects, especially if used for prolonged periods and at high doses. When drug tolerance and physical dependence to zolpidem develop, treatment usually entails a gradual dose reduction over a period of months to minimize withdrawal symptoms, which can resemble those seen during benzodiazepine withdrawal. Failing that, an alternative method may be necessary for some people, such as a switch to a benzodiazepine equivalent dose of a longer-acting benzodiazepine drug, as for diazepam or chlordiazepoxide, followed by a gradual reduction in dose of the long-acting benzodiazepine. In people who are difficult to treat, an inpatient flumazenil administration allows for rapid competitive binding of flumazenil to GABAA–receptor as an antagonist, thus stopping (and effectively detoxifying) zolpidem from being able to bind as an agonist on GABAA–receptor; slowly drug dependence or addiction to zolpidem will wane.
Alcoholics or recovering alcoholics may be at increased risk of physical dependency or abuse of zolpidem. It is not typically prescribed in people with a history of alcoholism, recreational drug use, physical dependency, or psychological dependency on sedative-hypnotic drugs. A 2014 review found evidence of drug-seeking behavior, with prescriptions for zolpidem making up 20% of falsified or forged prescriptions.
Rodent studies of the tolerance-inducing properties have shown that zolpidem has less tolerance-producing potential than benzodiazepines, but in primates, the tolerance-producing potential of zolpidem was the same as seen with benzodiazepines.
Overdose
Overdose can lead to coma or death.
Zolpidem overdose can be treated with the GABAA receptor antagonist flumazenil, which displaces zolpidem from its binding site on the GABAA receptor to rapidly reverse the effects of the zolpidem.
Detection in body fluids
Zolpidem may be quantitated in blood or plasma to confirm a diagnosis of poisoning in people who are hospitalized, to provide evidence in an impaired driving arrest, or to assist in a medicolegal death investigation. Blood or plasma zolpidem concentrations are usually in a range of 30–300 μg/L in persons receiving the drug therapeutically, 100–700 μg/L in those arrested for impaired driving, and 1000–7000 μg/L in victims of acute overdosage. Analytical techniques, in general, involve gas or liquid chromatography.
Pharmacology
Mechanism of action
Zolpidem is a ligand of high-affinity positive modulator sites of GABAA receptors, which enhances GABAergic inhibition of neurotransmission in the central nervous system. It selectively binds to α1 subunits of this pentameric ion channel. Accordingly, it has strong hypnotic properties and weak anxiolytic, myorelaxant, and anticonvulsant properties. Opposed to diazepam, zolpidem is able to bind to binary αβ GABA receptors, where it was shown to bind to the α1–α1 subunit interface. Zolpidem has about 10-fold lower affinity for the α2- and α3- subunits than for α1, and no appreciable affinity for α5 subunit-containing receptors. ω1 type GABAA receptors are the α1-containing GABAA receptors and are found primarily in the brain, the ω2 receptors are those that contain the α2-, α3-, α4-, α5-, or α6 subunits, and are found primarily in the spine. Thus, zolpidem favours binding to GABAA receptors located in the brain rather than the spine. Zolpidem has no affinity for γ1 and γ3 subunit-containing receptors and, like the vast majority of benzodiazepine-like drugs, it lacks affinity for receptors containing α4 and α6. Zolpidem modulates the receptor presumably by inducing a receptor conformation that enables an increased binding strength of the orthosteric agonist GABA towards its cognate receptor without affecting desensitization or peak currents.
In 2025, zolpidem was shown to suppress the norepinephrine-mediated pump effect in the brain and to reduce glymphatic flow, thereby suppressing waste disposal.
Like zaleplon, zolpidem may increase slow-wave sleep but cause no effect on stage 2 sleep.
A 2004 meta-analysis compared benzodiazepines against nonbenzodiazepines and showed few consistent differences between zolpidem and benzodiazepines in terms of sleep onset latency, total sleep duration, number of awakenings, quality of sleep, adverse events, tolerance, rebound insomnia, and daytime alertness.
Pharmacokinetics
Microsome studies indicate zolpidem is metabolized by CYP3A4 (61%), CYP2C9 (22%), CYP1A2 (14%), CYP2D6 (<3%), and CYP2C19 (<3%). Less than 1% is excreted in urine unchanged. It is principally metabolized into three metabolites, none of which are believed to be pharmacologically active. The absolute bioavailability of zolpidem is about 70%. The drug reaches peak concentration in about 2 hours and has a half-life in healthy adults of about 2–3 hours. Zolpidem's half-life is decreased in children and increased in the elderly and people with liver issues. While some studies show men metabolize zolpidem faster than women (possibly due to testosterone), others do not. A review found only a 33% lower clearance in women compared to men, suggesting the FDA's dosage reduction of 50% for women may have been too large.
Interactions
People should not consume alcohol while taking zolpidem, and should not be prescribed opioid drugs nor take such illicit drugs recreationally. Use of opioids with zolpidem increases the risk of respiratory depression and death. The U.S. Food and Drug Administration (FDA) is advising that the opioid addiction medications buprenorphine and methadone should not be withheld from patients taking benzodiazepines or other drugs that depress the central nervous system (CNS).
Next day sedation can be worsened if people take zolpidem while they are also taking antipsychotics, other sedatives, anxiolytics, antidepressants, anticonvulsants, and antihistamines. Some people taking antidepressants have had visual hallucinations when they also took zolpidem.
Cytochrome P450 inhibitors, particularly CYP3A4 and CYP1A2 inhibitors such as fluvoxamine, ciprofloxacin, and clarithromycin will increase the effects of a given dose of zolpidem. Cytochrome P450 activators like St. John's Wort may decrease the activity of zolpidem. One study found that caffeine increases the concentration over time curve of zolpidem by about 20% and furthermore found that caffeine cannot adequately compensate for the impaired cognition caused by zolpidem. Other studies show no effect of caffeine on zolpidem metabolism.
Chemistry
Three chemical syntheses of zolpidem are common. 4-Methylacetophenone is used as a common precursor. This is brominated and reacted with 2-amino-5-methylpyridine to give the imidazopyridine. From here the reactions use a variety of reagents to complete the synthesis, either involving thionyl chloride or sodium cyanide. These reagents are challenging to handle and require thorough safety assessments. Though such safety procedures are common in the industry, they make clandestine manufacture difficult.
Several major side-products of the sodium cyanide reaction have been characterised and include dimers and Mannich products.
Alpidem is also an imidazopyridine and is an analogue of zolpidem. Both agents are GABAA receptor positive allosteric modulators. However, whereas zolpidem is used as a hypnotic and sedative, alpidem was used as an anxiolytic.
History
Zolpidem was used in Europe starting in 1988 and was brought to market there by Synthelabo. Synthelabo and Searle collaborated to bring it to market in the US, and it was approved in the United States in 1992 under the brand name "Ambien". It became available as a generic medication in 2007.
In 2015, the American Geriatrics Society said that zolpidem, eszopiclone, and zaleplon met the Beers criteria and should be avoided in individuals 65 and over "because of their association with harms balanced with their minimal efficacy in treating insomnia." The AGS stated the strength of the recommendation that older adults avoid zolpidem is "strong" and the quality of evidence supporting it is "moderate."
Society and culture
Prescriptions in the US for all sleeping pills (including zolpidem) steadily declined from around 57 million tablets in 2013, to around 47 million in 2017, possibly due to concern about prescribing addictive drugs amid the opioid crisis.
Military use
As of 2021, the United States Air Force used zolpidem as one of the hypnotics approved as a "no-go pill" with a six-hour restriction on subsequent flight operation to help aviators and special duty personnel sleep in support of mission readiness. (The other hypnotics used are temazepam and zaleplon.) "Ground tests" are required before an authorization is issued to use the medication in an operational situation.
Recreational use
Zolpidem has potential for medical misuse when the drug is continued long term without or against medical advice, or for recreational use when the drug is taken to achieve a "high". The transition from medical use of zolpidem to high-dose addiction or drug dependence can occur with use, but some believe it may be more likely when used without a clinical recommendation to continue using it, when physiological drug tolerance leads to higher doses than the usual 5mg or 10mg, when consumed through insufflation or injection, or when taken for purposes other than as a sleep aid. Recreational use is more prevalent in those having been dependent on other drugs in the past, but tolerance and drug dependence can still sometimes occur in those without a history of drug dependence. Chronic users of high doses are more likely to develop physical dependence on the drug, which may cause severe withdrawal symptoms, including seizures if abrupt withdrawal from zolpidem occurs.
Other drugs, including benzodiazepines and zopiclone, are also found in high numbers of suspected drugged drivers. Many drivers have blood levels far exceeding the therapeutic dose range, suggesting a high degree of excessive-use potential for benzodiazepines, zolpidem, and zopiclone. U.S. Congressman Patrick J. Kennedy says that he was using zolpidem (Ambien) and promethazine (Phenergan) when he was caught driving erratically at 3 a.m. "I simply do not remember getting out of bed, being pulled over by the police, or being cited for three driving infractions," Kennedy said.
Nonmedical use of zolpidem is common among some adolescents. Some users have reported decreased anxiety, mild euphoria, perceptual changes, visual distortions, and hallucinations. Zolpidem was used by Australian Olympic swimmers at the London Olympics in 2012, leading to controversy.
Regulation
For the stated reason of its potential for recreational use and dependence, zolpidem (along with the other benzodiazepine-like Z-drugs) is a schedule IV substance under the Controlled Substances Act in the US. The United States patent for zolpidem was held by the French pharmaceutical corporation Sanofi-Aventis.
Use in crime
The Z-drugs, including zolpidem, have been used as date rape drugs. Zolpidem is available by prescription and, unlike other date rape drugs, is broadly prescribed: gamma-hydroxybutyrate (GHB), which is used to treat narcolepsy, or flunitrazepam (Rohypnol), which is only prescribed as a second-line choice for insomnia. Because of its short elimination half-life of 2.5–3 hours, zolpidem can be detected in bodily fluids for only about 36 hours, though it may be possible to detect it by hair testing much later. This use of the drug was highlighted during proceedings against Darren Sharper, who was accused of using the tablets he was prescribed to facilitate a series of rapes.
Sleepwalking and complex sleep behaviors
Zolpidem has drawn significant media attention due to reports of complex sleep behaviors (CSBs), including sleepwalking, sleep-driving, and other activities performed while not fully conscious. Notable incidents include media reports in the United States concerning events such as Congressman Patrick Kennedy's motor vehicle accident and in Australia following a fatal fall from the Sydney Harbour Bridge involving an individual reportedly under the influence of zolpidem.
In May 2018, actress Roseanne Barr attributed a controversial remark on Twitter to the effects of zolpidem. Barr's tweet compared Valerie Jarrett, a Black woman and former advisor to Barack Obama, to an ape. The comparison sparked widespread condemnation and led to the cancellation of Roseanne. The incident prompted Sanofi, the manufacturer of Ambien, to issue a public statement clarifying that "racism is not a known side effect" of the medication.
Brand names
As of September 2018, zolpidem is marketed under many brands.
Research
While cases of zolpidem improving aphasia in people with stroke have been described, use for this purpose has unclear benefits. Zolpidem has also been studied in persistent vegetative states with unclear effect. A 2017 systematic review concluded that while there is preliminary evidence of benefit for treating disorders of movement and consciousness other than insomnia (including Parkinson's disease), more research is needed.
Animal studies in FDA files for zolpidem showed a dose dependent increase in some types of tumors, although the studies were too small to reach statistical significance. Some observational epidemiological studies have found a correlation between use of benzodiazepines and certain hypnotics including zolpidem and an increased risk of getting cancer, but others have found no correlation; a 2017 meta-analysis of such studies found a correlation, stating that use of hypnotics was associated with a 29% increased risk of cancer, and that "zolpidem use showed the strongest risk of cancer" with an estimated 34% increased risk, but noted that the results were tentative because some of the studies failed to control for confounders like cigarette smoking and alcohol use, and some of the studies analyzed were case–controls, which are more prone to some forms of bias. Similarly, a meta-analysis of benzodiazepine drugs also shows their use is associated with increased risk of cancer.
References
External links
Acetamides
Dimethylamino compounds
Drugs developed by Pfizer
GABAA receptor positive allosteric modulators
Hypnotics
Imidazopyridines
Medical controversies
Nonbenzodiazepines
Sanofi
Wikipedia medicine articles ready to translate | Zolpidem | [
"Biology"
] | 4,773 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
357,881 | https://en.wikipedia.org/wiki/Test-driven%20development | Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
Alternative approaches to writing automated tests are to write all of the production code before starting on the test code, or to write all of the test code before starting on the production code. With TDD, the two are written together, which shortens debugging time.
TDD is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
History
Software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Coding cycle
The TDD steps vary somewhat by author in count and description, but are generally as follows. These are based on the book Test-Driven Development by Example, and Kent Beck's Canon TDD article.
1. List scenarios for the new feature
List the expected variants in the new behavior. “There’s the basic case & then what-if this service times out & what-if the key isn’t in the database yet &…” The developer can discover these specifications by asking about use cases and user stories. A key benefit of TDD is that it makes the developer focus on requirements before writing code. This is in contrast with the usual practice, where unit tests are only written after code.
2. Write a test for an item on the list
Write an automated test that would pass if the variant in the new behavior is met.
3. Run all tests. The new test should fail for expected reasons
This shows that new code is actually needed for the desired feature. It validates that the test harness is working correctly. It rules out the possibility that the new test is flawed and will always pass.
4. Write the simplest code that passes the new test
Inelegant code and hard coding are acceptable. The code will be honed in Step 6. No code should be added beyond the tested functionality.
5. All tests should now pass
If any fail, fix failing tests with minimal changes until all pass.
6. Refactor as needed while ensuring all tests continue to pass
Code is refactored for readability and maintainability. In particular, hard-coded test data should be removed from the production code. Running the test suite after each refactor ensures that no existing functionality is broken. Examples of refactoring:
moving code to where it most logically belongs
removing duplicate code
making names self-documenting
splitting methods into smaller pieces
re-arranging inheritance hierarchies
Repeat
Repeat the process, starting at step 2, with each test on the list until all tests are implemented and passing.
Each test should be small and commits made often. If new code fails some tests, the programmer can undo or revert rather than debug excessively.
When using external libraries, it is important not to write tests that are so small as to effectively test merely the library itself, unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
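As a concrete illustration of one pass through this cycle, the following minimal sketch uses Python's unittest module; the Stack class, its methods, and the test names are hypothetical examples invented here, not prescribed by TDD itself. During the "red" step the Stack class did not yet exist, so the tests failed with a name error; the class shown is just enough code to make them pass, ready to be refactored.

```python
import unittest


# Step 4: the simplest code that passes the new tests.
# (During the "red" step this class did not exist, so the tests failed.)
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


# Step 2: automated tests written before the production code above.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)


if __name__ == "__main__":
    unittest.main()
```

The next scenario on the list would then be turned into another failing test and the cycle repeated.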
Test-driven work
TDD has been adopted outside of software development, in both product and service teams, as test-driven work. For testing to be successful, it needs to be practiced at the micro and macro levels. Every method in a class, every input data value, log message, and error code, amongst other data points, need to be tested. Similar to TDD, non-software teams develop quality control (QC) checks (usually manual tests rather than automated tests) for each aspect of the work prior to commencing. These QC checks are then used to inform the design and validate the associated outcomes. The six steps of the TDD sequence are applied with minor semantic changes:
"Add a check" replaces "Add a test"
"Run all checks" replaces "Run all tests"
"Do the work" replaces "Write some code"
"Run all checks" replaces "Run tests"
"Clean up the work" replaces "Refactor code"
"Repeat"
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You aren't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can often be cleaner and clearer than is achieved by other methods. In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept such as a design pattern, tests are written that generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.
Writing the tests first: The tests should be written before the functionality that is to be tested. This has been claimed to have many benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than adding it later. It also ensures that tests for every feature get written. Additionally, writing the tests first leads to a deeper and earlier understanding of the product requirements, ensures the effectiveness of the test code, and maintains a continual focus on software quality. When writing feature-first code, there is a tendency by developers and organizations to push the developer on to the next feature, even neglecting testing entirely. The first TDD test might not even compile at first, because the classes and methods it requires may not yet exist. Nevertheless, that first test functions as the beginning of an executable specification.
Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the developer's mental model of the code, boosts confidence and increases productivity.
Code visibility
Test code needs access to the code it is testing, but testing should not compromise normal design goals such as information hiding, encapsulation and the separation of concerns. Therefore, unit test code is usually located in the same project or module as the code being tested.
In object oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access private fields and methods. Alternatively, an inner class can be used to hold the unit tests so they have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
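Python, by comparison, enforces no true private access, but the same idea can be illustrated with name-mangled attributes; the Account class below is a hypothetical example, and reaching its mangled field from a test is loosely analogous to using reflection in Java.

```python
import unittest


class Account:
    def __init__(self, balance):
        # The double underscore triggers Python's name mangling, the closest
        # analogue in this language to a private field.
        self.__balance = balance

    def deposit(self, amount):
        self.__balance += amount


class TestAccountInternals(unittest.TestCase):
    def test_deposit_updates_internal_balance(self):
        account = Account(100)
        account.deposit(25)
        # The mangled name gives the test direct access to the "private" field.
        self.assertEqual(account._Account__balance, 125)


if __name__ == "__main__":
    unittest.main()
```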
It is important that such testing hacks do not remain in the production code. In C and other languages, compiler directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other test-related code to prevent them being compiled into the released code. This means the released code is not exactly the same as what was unit tested. The regular running of fewer but more comprehensive, end-to-end, integration tests on the final release build can ensure (among other things) that no production code exists that subtly relies on aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking large numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface. Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers the advantage of smaller and more direct unit tests.
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The unit tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code. Two steps are necessary:
Whenever external access is needed in the final design, an interface should be defined that describes the access available. See the dependency inversion principle for a discussion of the benefits of doing this regardless of TDD.
The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock. Fake objects need do little more than add a message such as "Person object saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example, if the person's name and other data are not as expected.
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.
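A hedged Python sketch of the two steps described above is shown below: a hand-written fake that appends a "Person object saved" message to a trace log, and a mock built with the standard unittest.mock module that itself verifies how it was called. The PersonStore and Registration names are invented for illustration.

```python
import unittest
from unittest import mock


# Step 1: an interface describing the external access that is needed.
class PersonStore:
    def save(self, person):
        raise NotImplementedError


# Production code depends only on the interface, so a fake or a mock
# can be injected during unit tests (dependency injection).
class Registration:
    def __init__(self, store):
        self._store = store

    def register(self, person):
        self._store.save(person)
        return "registered"


# Step 2, variant A — a fake: records a trace message for the test to check.
class FakePersonStore(PersonStore):
    def __init__(self):
        self.log = []

    def save(self, person):
        self.log.append(f"Person object saved: {person}")


class TestRegistration(unittest.TestCase):
    def test_register_saves_person_using_fake(self):
        fake = FakePersonStore()
        Registration(fake).register("Ada")
        self.assertIn("Person object saved: Ada", fake.log)

    def test_register_saves_person_using_mock(self):
        # Step 2, variant B — a mock: carries its own assertions about usage.
        store = mock.Mock(spec=PersonStore)
        Registration(store).register("Ada")
        store.save.assert_called_once_with("Ada")


if __name__ == "__main__":
    unittest.main()
```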
A test double is a test-specific capability that substitutes for a system capability, typically a class or function, that the UUT depends on. There are two times at which test doubles can be introduced into a system: link and execution. Link time substitution is when the test double is compiled into the load module, which is executed to validate testing. This approach is typically used when running in an environment other than the target environment that requires doubles for the hardware level code for compilation. The alternative to linker substitution is run-time substitution in which the real functionality is replaced during the execution of a test case. This substitution is typically done through the reassignment of known function pointers or object replacement.
Test doubles are of a number of different types and varying complexities:
Dummy – A dummy is the simplest form of a test double. It facilitates linker time substitution by providing a default return value where required.
Stub – A stub adds simplistic logic to a dummy, providing different outputs.
Spy – A spy captures and makes available parameter and state information, publishing accessors to test code for private information allowing for more advanced state validation.
Mock – A mock is specified by an individual test case to validate test-specific behavior, checking parameter values and call sequencing.
Simulator – A simulator is a comprehensive component providing a higher-fidelity approximation of the target capability (the thing being doubled). A simulator typically requires significant additional development effort.
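Of the types listed above, a stub and a spy can each be sketched in a few lines of Python; the clock and logger below are hypothetical stand-ins rather than part of any particular framework.

```python
# A stub: supplies canned output so the unit under test runs deterministically.
class StubClock:
    def now(self):
        return 1_700_000_000  # fixed timestamp chosen by the test


# A spy: captures how it was used so the test can inspect the calls afterwards.
class SpyLogger:
    def __init__(self):
        self.messages = []

    def warn(self, message):
        self.messages.append(message)
```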
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques:
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures where available.
Database transactions where a transaction atomically includes perhaps a write, a read and a matching delete operation.
Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt or a continuous integration system such as CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before detailed diagnosis can be performed.
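As a minimal sketch of the last of these techniques, the test below initialises an in-memory SQLite database to a clean state in setUp before each test and closes it in tearDown; the table and the test itself are invented for illustration, and a real integration test would target the production database technology.

```python
import sqlite3
import unittest


class TestPersonTable(unittest.TestCase):
    def setUp(self):
        # Initialise the database to a known clean state before each test,
        # rather than relying on cleanup performed by a previous test.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE person (name TEXT)")

    def tearDown(self):
        # The TearDown method restores the pre-test state.
        self.db.close()

    def test_insert_person(self):
        self.db.execute("INSERT INTO person (name) VALUES (?)", ("Ada",))
        count = self.db.execute("SELECT COUNT(*) FROM person").fetchone()[0]
        self.assertEqual(count, 1)


if __name__ == "__main__":
    unittest.main()
```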
Keep the unit small
For TDD, a unit is most commonly defined as a class, or a group of related functions often called a module. Keeping units relatively small is claimed to provide critical benefits, including:
Reduced debugging effort – When test failures are detected, having smaller units aids in tracking down errors.
Self-documenting tests – Small test cases are easier to read and to understand.
Advanced practices of test-driven development can lead to acceptance test–driven development (ATDD) and specification by example where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process. This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target to satisfy – the acceptance tests – which keeps them continuously focused on what the customer really wants from each user story.
Best practices
Test structure
Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup.
Setup: Put the Unit Under Test (UUT) or the overall test system in the state needed to run the test.
Execution: Trigger/drive the UUT to perform the target behavior and capture all output, such as return values and output parameters. This step is usually very simple.
Validation: Ensure the results of the test are correct. These results may include explicit outputs captured during execution or state changes in the UUT.
Cleanup: Restore the UUT or the overall test system to the pre-test state. This restoration permits another test to execute immediately after this one. In some cases, in order to preserve the information for possible test failure analysis, cleanup is better performed at the start of the next test, just before that test's setup runs.
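A minimal Python sketch of this four-phase layout follows; the configuration-file scenario is invented purely to make each phase visible.

```python
import os
import tempfile
import unittest


class TestConfigFile(unittest.TestCase):
    def test_reading_a_value_back_from_a_config_file(self):
        # 1. Setup: put the system in the state needed to run the test.
        handle, path = tempfile.mkstemp()
        os.write(handle, b"retries=3")
        os.close(handle)

        # 2. Execution: trigger the target behaviour and capture the output.
        with open(path) as config:
            contents = config.read()

        # 3. Validation: ensure the results of the test are correct.
        self.assertEqual(contents, "retries=3")

        # 4. Cleanup: restore the pre-test state for the next test.
        os.remove(path)


if __name__ == "__main__":
    unittest.main()
```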
Individual best practices
Some best practices that an individual could follow would be to separate common set-up and tear-down logic into test support services utilized by the appropriate test cases, to keep each test oracle focused on only the results necessary to validate its test, and to design time-related tests to allow tolerance for execution in non-real time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution. It is also suggested to treat test code with the same respect as production code. Test code must work correctly for both positive and negative cases, last a long time, and be readable and maintainable. Teams can get together and review tests and test practices to share effective techniques and catch bad habits.
Practices to avoid, or "anti-patterns"
Having test cases depend on system state manipulated from previously executed test cases (i.e., you should always start a unit test from a known and pre-configured state).
Dependencies between test cases. A test suite where test cases are dependent upon each other is brittle and complex. Execution order should not be presumed. Basic refactoring of the initial test cases or structure of the UUT causes a spiral of increasingly pervasive impacts in associated tests.
Interdependent tests. Interdependent tests can cause cascading false negatives. A failure in an early test case breaks a later test case even if no actual fault exists in the UUT, increasing defect analysis and debug efforts.
Testing precise execution, behavior, timing or performance.
Building "all-knowing oracles". An oracle that inspects more than necessary is more expensive and brittle over time. This very common error is dangerous because it causes a subtle but pervasive time sink across the complex project.
Testing implementation details.
Slow running tests.
Comparison and demarcation
TDD and ATDD
Test-driven development is related to, but different from acceptance test–driven development (ATDD). TDD is primarily a developer's tool to help create a well-written unit of code (function, class, or module) that correctly performs a set of operations. ATDD is a communication tool between the customer, developer, and tester to ensure that the requirements are well-defined. TDD requires test automation. ATDD does not, although automation helps with regression testing. Tests used in TDD can often be derived from ATDD tests, since the code units implement some portion of a requirement. ATDD tests should be readable by the customer. TDD tests do not need to be.
TDD and BDD
BDD (behavior-driven development) combines practices from TDD and from ATDD.
It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. Tools such as JBehave, Cucumber, Mspec and Specflow provide syntaxes which allow product owners, developers and test engineers to define together the behaviors which can then be translated into automated tests.
Software for TDD
There are many testing frameworks and tools that are useful in TDD.
xUnit frameworks
Developers may use computer-assisted testing frameworks, commonly collectively named xUnit (which are derived from SUnit, created in 1998), to create and automatically run the test cases. xUnit frameworks provide assertion-style test validation capabilities and result reporting. These capabilities are critical for automation as they move the burden of execution validation from an independent post-processing activity to one that is included in the test execution. The execution framework provided by these test frameworks allows for the automatic execution of all system test cases or various subsets along with other features.
TAP results
Testing frameworks may accept unit test output in the language-agnostic Test Anything Protocol created in 1987.
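For illustration, a TAP stream for a run of three tests might look like the following; the test descriptions are invented.

```
1..3
ok 1 - parses an empty input
ok 2 - parses a single record
not ok 3 - rejects malformed input
```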
TDD for complex systems
Exercising TDD on large, challenging systems requires a modular architecture, well-defined components with published interfaces, and disciplined system layering with maximization of platform independence. These proven practices yield increased testability and facilitate the application of build and test automation.
Designing for testability
Complex systems require an architecture that meets a range of requirements. A key subset of these requirements includes support for the complete and effective testing of the system. Effective modular design yields components that share traits essential for effective TDD.
High Cohesion ensures each unit provides a set of related capabilities and makes the tests of those capabilities easier to maintain.
Low Coupling allows each unit to be effectively tested in isolation.
Published Interfaces restrict Component access and serve as contact points for tests, facilitating test creation and ensuring the highest fidelity between test and production unit configuration.
A key technique for building effective modular architecture is Scenario Modeling where a set of sequence charts is constructed, each one focusing on a single system-level execution scenario. The Scenario Model provides an excellent vehicle for creating the strategy of interactions between components in response to a specific stimulus. Each of these Scenario Models serves as a rich set of requirements for the services or functions that a component must provide, and it also dictates the order in which these components and services interact together. Scenario modeling can greatly facilitate the construction of TDD tests for a complex system.
Managing tests for large teams
In a larger system, the impact of poor component quality is magnified by the complexity of interactions. This magnification makes the benefits of TDD accrue even faster in the context of larger projects. However, the complexity of the total population of tests can become a problem in itself, eroding potential gains. It sounds simple, but a key initial step is to recognize that test code is also important software and should be produced and maintained with the same rigor as the production code.
Creating and managing the architecture of test software within a complex system is just as important as the core product architecture. Test drivers interact with the UUT, test doubles and the unit test framework.
Advantages and disadvantages of test-driven development
Advantages
Test Driven Development (TDD) is a software development approach where tests are written before the actual code. It offers several advantages:
Comprehensive Test Coverage: TDD ensures that all new code is covered by at least one test, leading to more robust software.
Enhanced Confidence in Code: Developers gain greater confidence in the code's reliability and functionality.
Enhanced Confidence in Tests: Because the tests are known to fail without the proper implementation, we know that the tests actually test the implementation correctly.
Well-Documented Code: The process naturally results in well-documented code, as each test clarifies the purpose of the code it tests.
Requirement Clarity: TDD encourages a clear understanding of requirements before coding begins.
Facilitates Continuous Integration: It integrates well with continuous integration processes, allowing for frequent code updates and testing.
Boosts Productivity: Many developers find that TDD increases their productivity.
Reinforces Code Mental Model: TDD helps in building a strong mental model of the code's structure and behavior.
Emphasis on Design and Functionality: It encourages a focus on the design, interface, and overall functionality of the program.
Reduces Need for Debugging: By catching issues early in the development process, TDD reduces the need for extensive debugging later.
System Stability: Applications developed with TDD tend to be more stable and less prone to bugs.
Disadvantages
However, TDD is not without its drawbacks:
Increased Code Volume: Implementing TDD can result in a larger codebase as tests add to the total amount of code written.
False Security from Tests: A large number of passing tests can sometimes give a misleading sense of security regarding the code's robustness.
Maintenance Overheads: Maintaining a large suite of tests can add overhead to the development process.
Time-Consuming Test Processes: Writing and maintaining tests can be time-consuming.
Testing Environment Set-Up: TDD requires setting up and maintaining a suitable testing environment.
Learning Curve: It takes time and effort to become proficient in TDD practices.
Overcomplication: An overemphasis on TDD can lead to code that is more complex than necessary.
Neglect of Overall Design: Focusing too narrowly on passing tests can sometimes lead to neglect of the bigger picture in software design.
Increased Costs: The additional time and resources required for TDD can result in higher development costs.
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.
Programmers using pure TDD on new ("greenfield") projects reported they only rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.
Test-driven development offers more than just simple validation of correctness, but can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality is used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures in this way that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time could be shorter based on a model by Müller and Padberg. Large numbers of tests help to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.
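As a rough sketch of the mock-object point above (the CheckoutService and gateway names are hypothetical, not drawn from any real payment library), the test below swaps the "real" dependency for a mock supplied by Python's standard unittest.mock module, which is only possible because the dependency is injected rather than hard-wired:

import unittest
from unittest import mock

class CheckoutService:
    def __init__(self, gateway):
        # The gateway is injected, so tests can pass in a mock
        # while deployment passes in the real implementation.
        self.gateway = gateway

    def pay(self, amount):
        return self.gateway.charge(amount)

class CheckoutServiceTest(unittest.TestCase):
    def test_pay_delegates_to_gateway(self):
        gateway = mock.Mock()
        gateway.charge.return_value = "ok"
        service = CheckoutService(gateway)
        self.assertEqual(service.pay(10), "ok")
        gateway.charge.assert_called_once_with(10)

if __name__ == "__main__":
    unittest.main()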
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, for a TDD developer to add an else branch to an existing if statement, the developer would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they detect any unexpected changes in the code's behaviour. This detects problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
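A minimal hypothetical illustration of that point (the classify function is invented for this sketch): the else branch below exists only because a failing test demanded it, so both paths of the if statement end up exercised by the suite:

import unittest

def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        # Added only after test_negative was written and seen to fail.
        return "negative"

class ClassifyTest(unittest.TestCase):
    def test_non_negative(self):
        self.assertEqual(classify(3), "non-negative")

    def test_negative(self):
        # The failing version of this test motivated the else branch.
        self.assertEqual(classify(-1), "negative")

if __name__ == "__main__":
    unittest.main()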
Madeyski provided empirical evidence (via a series of laboratory experiments with over 200 developers) regarding the superiority of the TDD practice over the traditional Test-Last approach or testing for correctness approach, with respect to the lower coupling between objects (CBO). The mean effect size represents a medium (but close to large) effect on the basis of meta-analysis of the performed experiments which is a substantial finding. It suggests a better modularization (i.e., a more modular design), easier reuse and testing of the developed software products due to the TDD programming practice. Madeyski also measured the effect of the TDD practice on unit tests using branch coverage (BC) and mutation score indicator (MSI), which are indicators of the thoroughness and the fault detection effectiveness of unit tests, respectively. The effect size of TDD on branch coverage was medium in size and therefore is considered substantive effect. These findings have been subsequently confirmed by further, smaller experimental evaluations of TDD.
Psychological benefits to programmer
Increased Confidence: TDD allows programmers to make changes or add new features with confidence. Knowing that the code is constantly tested reduces the fear of breaking existing functionality. This safety net can encourage more innovative and creative approaches to problem-solving.
Reduced Fear of Change, Reduced Stress: In traditional development, changing existing code can be daunting due to the risk of introducing bugs. TDD, with its comprehensive test suite, reduces this fear, as tests will immediately reveal any problems caused by changes. Knowing that the codebase has a safety net of tests can reduce stress and anxiety associated with programming. Developers might feel more relaxed and open to experimenting and refactoring.
Improved Focus: Writing tests first helps programmers concentrate on requirements and design before writing the code. This focus can lead to clearer, more purposeful coding, as the developer is always aware of the goal they are trying to achieve.
Sense of Achievement and Job Satisfaction: Passing tests can provide a quick, regular sense of accomplishment, boosting morale. This can be particularly motivating in long-term projects where the end goal might seem distant. The combination of all these factors can lead to increased job satisfaction. When developers feel confident, focused, and part of a collaborative team, their overall job satisfaction can significantly improve.
Limitations
Test-driven development does not perform sufficient testing in situations where full functional tests are required to determine success or failure, due to extensive use of unit tests. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.
Unit tests created in a test-driven development environment are typically created by the developer who is writing the code being tested. Therefore, the tests may share blind spots with the code: if, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify those parameters. Another example: if the developer misinterprets the requirements for the module they are developing, the code and the unit tests they write will both be wrong in the same way. Therefore, the tests will pass, giving a false sense of correctness.
A high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.
Tests become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings, are themselves prone to failure, and they are expensive to maintain. This is especially the case with fragile tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs, it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
Writing and maintaining an excessive number of tests costs time. Also, more-flexible modules (with limited tests) might accept new requirements without the need for changing the tests. For those reasons, testing for only extreme conditions, or a small sample of data, can be easier to adjust than a set of highly detailed tests.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore, these original, or early, tests become increasingly precious as time goes by. The tactic is to fix it early. Also, if a poor architecture, a poor design, or a poor testing strategy leads to a late change that makes dozens of existing tests fail, then it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.
Conference
The first TDD Conference was held in July 2021. Conferences were recorded on YouTube.
See also
References
External links
TestDrivenDevelopment on WikiWikiWeb
Microsoft Visual Studio Team Test from a TDD approach
Write Maintainable Unit Tests That Will Save You Time And Tears
Improving Application Quality Using Test-Driven Development (TDD)
Test Driven Development Conference
Extreme programming
Software development philosophies
Software development process
Software testing | Test-driven development | [
"Engineering"
] | 6,642 | [
"Software engineering",
"Software testing"
] |
357,909 | https://en.wikipedia.org/wiki/Mulberry%20harbours | The Mulberry harbours were two temporary portable harbours developed by the British Admiralty and War Office during the Second World War to facilitate the rapid offloading of cargo onto beaches during the Allied invasion of Normandy in June 1944. They were designed in 1942 then built in under a year in great secrecy; within hours of the Allies creating beachheads after D-Day, sections of the two prefabricated harbours were towed across the English Channel from southern England and placed in position off Omaha Beach (Mulberry "A") and Gold Beach (Mulberry "B"), along with old ships to be sunk as breakwaters.
The Mulberry harbours solved the problem of needing deepwater jetties and a harbour to provide the invasion force with the necessary reinforcements and supplies, and were to be used until major French ports could be captured and brought back into use after repair of the inevitable sabotage by German defenders. Comprising floating but sinkable breakwaters, floating pontoons, piers and floating roadways, this innovative and technically difficult system was being used for the first time.
The Mulberry B harbour at Gold Beach was used for ten months after D-Day, while over two million men, four million tons of supplies and half a million vehicles were landed before it was fully decommissioned. The partially completed Mulberry A harbour at Omaha Beach was damaged on 19 June by a violent storm that arrived from the northeast before the pontoons were securely anchored. After three days the storm finally abated and damage was found to be so severe that the harbour was abandoned and the Americans resorted to landing men and material over the open beaches.
Background
The Dieppe Raid of 1942 had shown that the Allies could not rely on being able to penetrate the Atlantic Wall to capture a port on the north French coast. The problem was that large ocean-going ships of the type needed to transport heavy and bulky cargoes and stores needed sufficient depth of water under their keels, together with dockside cranes, to offload their cargo. These were only available at the already heavily defended French harbours. Thus, the Mulberries were created to provide the port facilities necessary to offload the thousands of men and vehicles and millions of tons of supplies necessary to sustain Operation Overlord. The harbours were made up of all the elements one would expect of any harbour: breakwater, piers and roadways.
Preparation
With the planning of Operation Overlord at an advanced stage by the summer of 1943, it was accepted that the proposed artificial harbours would need to be prefabricated in Britain and then towed across the English Channel.
The need for two separate artificial harbours – one American and one British/Canadian – was agreed at the Quebec Conference in August 1943. An Artificial Harbours Sub-Committee was set up under the Chairmanship of the civil engineer Colin R. White, brother of Sir Bruce White, to advise on the location of the harbours and the form of the breakwater; the Sub-Committee's first meeting was held at the Institution of Civil Engineers (ICE) on 4 August 1943. The minutes of the Sub-Committee's meetings show that initially it was envisaged that bubble breakwaters would be used, then blockships were proposed, and finally, because not enough block ships were available, a mix of blockships and purpose-made concrete caisson units were used.
On 2 September 1943 the Combined Chiefs of Staff estimated that the artificial ports (Mulberries) would need to handle 12,000 tons per day, exclusive of motor transport, and in all weathers. On 4 September the go-ahead was given to start work immediately on the harbours. Infighting between the War Office and the Admiralty over responsibility was only resolved on 15 December 1943 by the intervention of the Vice-Chiefs of Staff. The decision was that the Admiralty managed the blockships, bombardons and assembly of all constituent parts on the south coast of England. It would also undertake all necessary work to survey, site, tow and mark navigation. The War Office was given the task of constructing the concrete caissons (phoenixes), the roadways (whales) and protection via anti-aircraft installations. Once at the site, the army was responsible for sinking the caissons and assembling all the various other units of the harbours. For the Mulberry A at Omaha Beach, the US Navy Civil Engineer Corps (CEC) would construct the harbour from prefabricated parts.
The proposed harbours called for many huge caissons of various sorts to build breakwaters and piers and connecting structures to provide the roadways. The caissons were built at a number of locations, mainly existing ship building facilities or large beaches, like Conwy Morfa, around the British coast. The works were let out to commercial construction firms, including Wates Construction, Balfour Beatty, Henry Boot, Bovis & Co, Cochrane & Sons, Costain, Cubitts, French, Holloway Brothers, John Laing & Son, Peter Lind & Company, Sir Robert McAlpine, Melville Dundas & Whitson, Mowlem, Nuttall, Parkinson, Halcrow Group, Pauling & Co. and Taylor Woodrow. On completion they were towed across the English Channel by tugboats to the Normandy coast at only and assembled, operated and maintained by the Corps of Royal Engineers, under the guidance of Reginald D. Gwyther, who was appointed CBE for his efforts. Various elements of the whale piers were designed and constructed by a group of companies led by Braithwaite & Co, West Bromwich and Newport.
Beach surveys
Both locations for the temporary harbours required detailed information concerning geology, hydrography and sea conditions. To collect this data a special team of hydrographers was created in October 1943. The 712th Survey Flotilla, operating from naval base HMS Tormentor in Hamble, were detailed to collect soundings off the enemy coast. Between November 1943 and January 1944 this team used a number of specially adapted Landing Craft Personnel (Large), or LCP(L), to survey the Normandy coast.
The LCP(L)s were manned by a Royal Navy crew and a small group of hydrographers. The first sortie, Operation KJF, occurred on the night of 26/27 November 1943 when three LCP(L)s took measurements off the port of Arromanches, the location for Mulberry B. A follow-up mission, Operation KJG, to the proposed location for Mulberry A happened over 1 and 2 December but a navigation failure meant the team sounded an area 2,250 yards west of the correct area.
Two attempts to take soundings were made off Pointe de Ver. The first sortie, Operation Bellpush Able, on 25/26 December had problems with their equipment. They returned on 28/29 December, in Operation Bellpush Baker, to complete the task.
(On New Year's Eve 1943, the 712th Survey Flotilla carried a Combined Operations Pilotage Party (COPP) to the Gold Beach area just west of Ver-sur-Mer. Two soldiers – Major Logan Scott-Bowden, of the Royal Engineers, and commando Sergeant Bruce Ogden Smith, of the East Surrey Regiment – landed on the beach at night in Operation KJH and took samples of the sand. This operation was to check the load-bearing capabilities of sand and help determine whether armoured vehicles would be able to cross the beach or become bogged down, rather than being in connection with the Mulberry harbours.)
The final Mulberry harbour survey, Operation Bellpush Charlie, occurred on the night of 30–31 January but limited information was gathered due to fog and because German lookouts heard the craft. Further sorties were abandoned.
Design and development
An early idea for temporary harbours was sketched by Winston Churchill in a 1915 memo to Lloyd George. This memo was for artificial harbours to be created off the German islands of Borkum and Sylt. No further investigation was made and the memo was filed away.
In 1940 the civil engineer Guy Maunsell wrote to the War Office with a proposal for an artificial harbour, but the idea was not at first adopted.
Churchill issued his memo "Piers for use on beaches" on 30 May 1942, apparently in some frustration at the lack of progress being made on finding a solution to the temporary harbour problem. Between 17 June and 6 August 1942, Hugh Iorys Hughes submitted a design concept for artificial harbours to the War Office.
At a meeting following the Dieppe Raid of 19 August 1942, Vice-Admiral John Hughes-Hallett (the naval commander for the Dieppe Raid) declared that if a port could not be captured, then one should be taken across the Channel. Hughes-Hallett had the support of Churchill. The concept of Mulberry harbours began to take shape when Hughes-Hallett moved to be Naval Chief of Staff to the Overlord planners.
In the autumn of 1942, the Chief of Combined Operations Vice-Admiral Lord Louis Mountbatten, outlined the requirement for piers at least long at which a continuous stream of supplies could be handled, including a pier head capable of handling 2,000-ton ships.
In July 1943 a committee of eminent civil engineers consisting of Colin R White (chairman), J D C Couper, J A Cochrane, R D Gwyther and Lt. Col. Ivor Bell was established to advise on how a number of selected sites on the French coastline could be converted into sheltered harbours. The committee initially investigated the use of compressed air breakwaters before eventually deciding on blockships and caissons.
Trials
In August and September 1943 a trial of three competing designs for the cargo-handling jetties was set up together with a test of a compressed air breakwater. The pier designs were by:
Hugh Iorys Hughes (a civil engineer) who developed his "Hippo" piers and "Crocodile" bridge spans;
Ronald Hamilton (working at the Department of Miscellaneous Weapons Development) who devised the "Swiss roll" which consisted of a floating roadway made of waterproofed canvas stiffened with slats and tensioned by cables;
Lieutenant Colonel William T Everall and Major Allan Beckett (of the War Office's 'Transportation 5 Department' (Tn5)) who designed a floating bridge linked to a pier head (the latter had integral 'spud' legs that were raised and lowered with the tide).
The western side of Wigtown Bay, in the Solway Firth, was selected for the trials as the tides were similar to those on the expected invasion beaches in Normandy, a harbour was available at Garlieston, and the area's remoteness would simplify security matters. A headquarters camp was erected at Cairn Head, about south of Garlieston. Prototypes of each of the designs were built and transported to the area for testing by Royal Engineers, based at Cairn Head and in Garlieston. The tests revealed various problems (the "Swiss roll" would only take up to a seven-ton truck in the Atlantic swell). The final choice of design was determined by a storm during which the "Hippos" were undermined causing the "Crocodile" bridge spans to fail and the Swiss roll was washed away. Tn5's design proved the most successful and Beckett's floating roadway (subsequently codenamed whale) survived undamaged; the design was adopted and of whale roadway were manufactured under the management of J. D. Bernal and Brigadier Bruce White, the Director of Ports and Inland Water Transport at the War Office.
Elements
Mulberry was the codename for all the various structures that created the artificial harbours. The sheltered anchorages these structures created were called gooseberries, two of which metamorphosed into fully fledged harbours. Mulberry "A" and "B" each consisted of a floating outer breakwater made of units called bombardons, a static breakwater consisting of "corncobs" and reinforced concrete caissons called phoenix breakwaters, floating piers or roadways codenamed whales and beetles, and pier heads codenamed spuds. These harbours, when built, were each of a similar size to Dover harbour. In the planning of Operation Neptune the term Mulberry "B" was defined as "an artificial harbour to be built in England and towed to the British beaches at Arromanches".
The Mulberry harbour assembled on Omaha Beach at Saint-Laurent-sur-Mer was for use by the American invasion forces. Mulberry "A" (American) was not as securely anchored to the sea bed as Mulberry "B" had been by the British, resulting in such severe damage during the Channel storm of 19 June 1944 that it was considered irreparable and its further assembly ceased. It was commanded by Augustus Dayton Clark.
Mulberry "B" (British) was the harbour assembled on Gold Beach at Arromanches for use by the British and Canadian invasion forces. The harbour was decommissioned six months after D-Day, when Allied forces could use the recently captured port of Antwerp to offload troops and supplies. Mulberry "B" was operated by 20 Port Group, Royal Engineers, under the command of Lieutenant Colonel G.C.B Shaddick.
Breakwaters
Corncobs and gooseberries
Corncobs were 61 ships that crossed the English Channel (either under their own steam or towed) and were then scuttled to act as breakwaters and create sheltered water at the five landing beaches. Once in position the corncobs created the sheltered waters known as gooseberries.
The ships used for each beach were:
Utah Beach (Gooseberry 1, 10 ships): , David O. Saylor, George S. Wasson, Matt W. Ransom, , , , Willis A. Slater, Victory Sword and Vitruvius.
Omaha Beach (Gooseberry 2, 15 ships): Artemas Ward, , Baialoide, , Courageous, Flight-Command, Galveston, George W. Childs, James W. Marshall, James Iredell Illinoian, Olambala, Potter, and Wilscox.
Gold Beach (Gooseberry 3, 16 ships): Alynbank, Alghios Spyridon, Elswick Park, Flowergate, Giorgios P., Ingman, Innerton, Lynghaug, Modlin, Njegos, Parkhaven, Parklaan, Saltersgate, Sirehei, Vinlake and Winha.
Juno Beach (Gooseberry 4, 11 ships): Belgique, Bendoran, , Empire Flamingo, Empire Moorhen, Empire Waterhen, Formigny, Manchester Spinner, Mariposa, Panos and Vera Radcliffe.
Sword Beach (Gooseberry 5, 9 ships ): Becheville, , , , , Empire Tamar, Empire Tana, Forbin and HNLMS Sumatra.
Phoenix caissons
Phoenixes were reinforced concrete caissons constructed by civil engineering contractors around the coast of Britain, collected and sunk at Dungeness in Kent and Pagham Harbour in West Sussex prior to D-Day. There were six different sizes of caisson (with displacements of approximately 2,000 tons to 6,000 tons each) and each unit was towed to Normandy by two tugs at around three knots. The caissons were initially planned to be moored along the coast, but due to a lack of mooring capacity they were sunk awaiting D-Day, and then refloated ("resurrected", hence the name).
The Royal Engineers were responsible for the task, and questions had arisen about whether their plans were adequate. US Navy Captain (later Rear Admiral) Edward Ellsberg, a known expert in marine salvage, was brought in to review the plans and determined that they were not. The supplied pumps were designed for moving large volumes of sewage horizontally, and were incapable of providing the necessary lift to pump the water up and out of the caissons.
Ellsberg's report resulted in Churchill's intervention, taking the task away from the Royal Engineers and giving it to the Royal Navy. Newly-appointed commodore Sinclair McKenzie was put in charge and quickly assembled every salvage barge in the British Isles. The phoenixes, once refloated, were towed across the channel to form the "Mulberry" harbour breakwaters together with the gooseberry block ships. Ellsberg rode one of the concrete caissons to Normandy; once there he helped unsnarl wrecked landing craft and vehicles on the beach.
Bombardons
The bombardons were large, plus-shaped floating breakwaters fabricated in steel and rubberized canvas that were anchored outside the main breakwaters, which consisted of gooseberries (scuttled ships) and phoenixes (concrete caissons). Twenty-four bombardon units, attached to one another with hemp ropes, created breakwaters. During the storms at the end of June 1944, some broke up and sank while others parted their anchors and drifted down onto the harbours, possibly causing more damage than the storm itself. Their design was the responsibility of the Royal Navy; the Royal Engineers designed the rest of the Mulberry harbour equipment.
Roadways
Whales
The dock piers were codenamed whales. They were the floating roadways that connected the "spud" pier heads to the land. Designed by Allan Beckett, the roadways were made from innovative torsionally flexible bridging units that had a span of , mounted on pontoon units of either steel or concrete called "beetles". After the war many of the "Whale" bridge spans from Arromanches were used to repair bombed bridges in France, Belgium and the Netherlands. Such units are still visible as a bridge over the Noireau river in Normandy, Meuse River in Vacherauville (Meuse), as a bridge over the Moselle River on road D56 between Cattenom and Kœnigsmacker (Moselle) and in Vierville-sur-Mer (Calvados) along road D517. In 1954, some whales were also used to build two bridges (still visible) in Cameroon along the Edea to Kribi road. In the 1960s, three whale spans from Arromanches were used at Ford Dagenham for cars to drive from the assembly line directly onto ships. A span from Mulberry B reused after the war at Pont-Farcy was saved from destruction in 2008 by Les Amis du Pont Bailey, a group of English and French volunteers. Seeking a permanent home for it, they gifted it to the Imperial War Museum and it was returned to England in July 2015. After conservation work it is now part of the Land Warfare exhibition at Imperial War Museum Duxford.
Beetles
Beetles were pontoons that supported the Whale piers. War work by the Butterley Company included the production of steel "pontoons used to support the floating bridge between the offshore Mulberry Harbour caissons and the shore on Gold and Omaha beaches after D-Day 1944". Roy Christian wrote: "The workers who made mysterious floats had no idea of their ultimate purpose until one morning in June 1944 they realised that their products were helping to support the Mulberry Harbour off the low coastline of Normandy, and by that time they were busy building pontoon units and Bailey bridge panels ready for the breakthrough into Germany. But if they were often in the dark about the purpose and destination of the products over which they toiled for days in workshop, forge and foundry, they understood their importance. No time was lost through the war years on strikes or disputes, and absenteeism was low.
Some of those workers were women, for the first time in its history female labour was being employed at the Butterley works." 420 concrete pontoons were made by Wates Ltd. at their Barrow-in-Furness, West India Docks, Marchwood and Beaulieu sites. A further 40 concrete beetles were made by John Laing (for Wates) at their Southsea factory and 20 were made by R. Costain at Erith. Twelve were made by John Mowlem at Russia Dock, as were eight by Melville Dundas and Whitson. They were moored in position using wires attached to "Kite" anchors, which were also designed by Allan Beckett. These anchors had such high holding power that few could be recovered at the end of the war. The Navy was dismissive of Beckett's claims for his anchor's holding ability, so Kite anchors were not used for mooring the bombardons. An original Kite anchor is displayed in a private museum at Vierville-sur-Mer, while a full-size replica forms part of a memorial to Beckett in Arromanches. In October 2018 five Kite anchors were recovered from the bed of the Solent off Woodside Beach, which had been an assembly area for Whale tows prior to D-Day. The anchors were taken to Mary Rose Archaeological Services in Portsmouth for conservation treatment.
Spuds
The pier heads or landing wharves at which ships were unloaded were codenamed spuds. Each consisted of a pontoon with four legs that rested on the sea bed to anchor it while it could float up and down freely with the tide.
Deployment
Components for the Mulberry harbours were constructed at many different locations in Britain, before being transferred to assembly points off the south coast. Then on the afternoon of 6 June 1944 (D-Day) over 400 towed component parts (weighing approximately 1.5 million tons) set sail to create the two Mulberry harbours. It included all the blockships (codenamed Corncobs) to create the outer breakwater (gooseberries) and 146 concrete caissons (phoenixes).
Arromanches
At Arromanches, the first phoenix was sunk at dawn on 8 June 1944. By 15 June a further 115 had been sunk to create a five-mile-long arc between Tracy-sur-Mer in the west to Asnelles in the east. To protect the new anchorage, the superstructures of the blockships (which remained above sea-level) and the concrete caissons were festooned with anti-aircraft guns and barrage balloons manned by the men of the 397th and 481st Anti-Aircraft Artillery (Automatic Weapons) Battalions, attached to the First US Army.
Omaha
Arriving first on D-Day were the bombardons, followed a day later by the first blockship. The first phoenix was sunk on 9 June and the gooseberry was finished by 11 June. By 18 June two piers and four pier heads were working. Though this harbour was abandoned in late June (see below), the beach continued to be used for landing vehicles and stores using Landing Ship Tanks (LSTs). Using this method, the Americans were able to unload a higher tonnage of supplies than at Arromanches. Salvageable parts of the artificial port were sent to Arromanches to repair the Mulberry there.
Storm
Both harbours were almost fully functional when on 19 June a nor'easter of force 6 to 8 blew into Normandy and devastated the Mulberry harbour at Omaha Beach. The harbours had been designed with summer weather conditions in mind, but this was the worst storm to hit the Normandy coast in 40 years.
The entire harbour at Omaha was deemed irreparable, 21 of the 28 phoenix caissons were completely destroyed, the bombardons were cast adrift and the roadways and piers lay smashed.
The Mulberry harbour at Arromanches was more protected, and although damaged by the storm, it remained usable. It came to be known as Port Winston. While the harbour at Omaha was destroyed sooner than expected, Port Winston saw heavy use for eight months, despite being designed to last only three months. In the ten months after D-Day, it was used to land almost three million men, four million tons of supplies and half a million vehicles to reinforce France. In response to this longer-than-planned use, the phoenix breakwater was reinforced with the addition of specially strengthened caissons. The Royal Engineers had built a complete Mulberry Harbour out of 600,000 tons of concrete between 33 jetties, and had of floating roadways to land men and vehicles on the beach. Port Winston is commonly upheld as one of the best examples of military engineering. Its remains are still visible today from the beaches at Arromanches.
Post-war analysis
Although it was a success, the vast resources used on the Mulberry may have been wasted, as the American forces were supplied mostly over the beaches without the use of a Mulberry right through to September 1944. By the end of 6 June, 20,000 troops and 1,700 vehicles had landed on Utah beach (the shortest beach). At Omaha and Utah, 6,614 tons of cargo was discharged in the first three days. A month after D-Day, Omaha and Utah were handling 9,200 tons, and after a further month, they were landing 16,000 tons per day. This increased until 56,200 tons of supplies, 20,000 vehicles, and 180,000 troops were discharged each day at those beaches. The Mulberry harbours provided less than half the total (on good weather days) to begin with. The Normandy beaches supplied the following average daily tonnage of supplies:
By the end of June, over 289,827 tons of supplies had been offloaded onto the Normandy beaches. Up to September, U.S. forces were supported largely across the beaches, primarily without the use of the Mulberry. "However, in the critical early stage of the operation, had the Allied assault ships been caught in the open without the benefit of any protection, the damage in the American sector especially could have been catastrophic to the lines of supply and communication."
Mulberry B was substantially reinforced with units salvaged from the American harbour, and the Phoenixes were pumped full of sand to give them greater stability, measures that undoubtedly explain the extended service which the British port was able to render. Furthermore, the planners obviously underrated the capacities of open beaches. The tremendous tonnage capacities subsequently developed at both Utah and Omaha were without doubt one of the most significant and gratifying features of the entire Overlord operation.
Surviving remnants in the UK
Sections of Phoenix caissons are located at:
Thorpe Bay, Southend-on-Sea – while being towed from Immingham to Southsea, the caisson began to leak and was intentionally beached on a sandbank in the Thames Estuary. It was designated as a scheduled monument in 2004. It is accessible at low tide.
Pagham Harbour, West Sussex – south-east of Pagham a Phoenix Caisson, known as the 'Near Mulberry', that sank and could not be re-floated is still visible at low tide. Further off the coast in of water, is a second Phoenix Caisson, known as the 'Far Mulberry', that broke its back and sank in the storm the night before D-Day. Both sections were scheduled in 2019.
Littlestone-on-Sea, Kent – caisson could not be refloated. The site was scheduled in 2013.
Langstone Harbour, Hayling Island – faulty caisson left in-situ at place of construction.
Littlehampton – caissons lie about five metres underwater and are dived by novice divers.
Portland Harbour, Portland, Dorset – two are located at the beach at Castletown. They were designated as a Grade II listed building in 1993.
Beetles are located at:
Bognor Regis, on the shoreline west of Marine Drive, Aldwick, where it washed up a few days after D-Day. Easily accessible at low tide.
Garlieston, Wigtownshire - concrete beetle remains are accessible on foot on the north side of Garlieston Bay (Eggerness) and at Cairn Head on the south side of Portyerrock Bay on the road to Isle of Whithorn.
Old House Point, Cairnryan, Dumfries and Galloway, three beetles at the shore line.
Other artefacts around Garlieston include:
a conspicuous stone wall at the back of Rigg Bay beach: this was the landward terminal for a "Crocodile" link to a "Hippo"
the remains of a collapsed "Hippo" visible at low tide in Rigg Bay
a number of abandoned brick buildings, once the camp at Cairn Head
some lengths of concrete roadway on the beach at Cairn Head, intended for use with "Swiss roll".
At Southampton, Town Quay, a short section of whale roadway and a buffer pontoon, now derelict, used after the war for Isle of Wight ferries, survive between the Royal Pier and the Town Quay car ferry terminal.
German equivalent of Mulberry
In the period between postponement and cancellation of Operation Sea Lion, the invasion of the United Kingdom, Germany developed some prototype prefabricated jetties with a similar purpose in mind. These could be seen in Alderney, until they were demolished in 1978.
Daily Telegraph crosswords
"Mulberry" and the names of all the beaches were words appearing in the Daily Telegraph crossword puzzle in the month prior to the invasion. The crossword compilers, Melville Jones and Leonard Dawe, were questioned by MI5, which determined the appearance of the words was innocent. Over 60 years later, a former student reported that Dawe frequently requested words from his students, many of whom were children in the same area as US military personnel.
See also
Operation Pluto
Lily: a floating airstrip using components developed for the Mulberry harbour.
Notes
References
Further reading
External links
Beckett Rankine Mulberry Harbour Archive
Garlieston's Secret War, Mulberry Harbour trials around Garlieston
Google Maps satellite view
A wartime aerial view of part of the Mulberry Harbour at Arromanches
"Seabees in Normandy" video (U.S. National Archives)
Operation Overlord
Military logistics of World War II
Coastal construction
Operation Neptune
British inventions
Military installations closed in 1945
Closed installations of the United States Navy
Allied logistics in the Western European Campaign (1944–1945) | Mulberry harbours | [
"Engineering"
] | 6,187 | [
"Construction",
"Coastal construction"
] |
357,971 | https://en.wikipedia.org/wiki/Eumetazoa | Eumetazoa (), also known as diploblasts, Epitheliozoa or Histozoa, are a proposed basal animal clade as a sister group of Porifera (sponges). The basal eumetazoan clades are the Ctenophora and the ParaHoxozoa. Placozoa is now also seen as a eumetazoan in the ParaHoxozoa. The competing hypothesis is the Myriazoa clade.
Several other extinct or obscure life forms, such as Iotuba and Thectardis, appear to have emerged in the group. Characteristics of eumetazoans include true tissues organized into germ layers, the presence of neurons and muscles, and an embryo that goes through a gastrula stage.
Some phylogenists once speculated the sponges and eumetazoans evolved separately from different single-celled organisms, which would have meant that the animal kingdom does not form a clade (a complete grouping of all organisms descended from a common ancestor). However, genetic studies and some morphological characteristics, like the common presence of choanocytes, now unanimously support a common origin.
Traditionally, eumetazoans are a major group of animals in the Five Kingdoms classification of Lynn Margulis and K. V. Schwartz, comprising the Radiata and Bilateria – all animals except the sponges. When treated as a formal taxon Eumetazoa is typically ranked as a subkingdom. The name Metazoa has also been used to refer to this group, but more often refers to the Animalia as a whole. Many classification schemes do not include a subkingdom Eumetazoa.
Taxonomy
A widely accepted hypothesis, based on molecular data (mostly 18S rRNA sequences), divides Bilateria into four superphyla: Deuterostomia, Ecdysozoa, Lophotrochozoa, and Platyzoa (sometimes included in Lophotrochozoa). The last three groups are also collectively known as Protostomia.
However, some skeptics emphasize inconsistencies in the new data. The zoologist Claus Nielsen argues in his 2001 book Animal Evolution: Interrelationships of the Living Phyla for the traditional divisions of Protostomia and Deuterostomia.
Evolutionary origins
It has been suggested that one type of molecular clock and one approach to interpretation of the fossil record both place the evolutionary origins of eumetazoa in the Ediacaran. However, the earliest eumetazoans may not have left a clear impact on the fossil record and other interpretations of molecular clocks suggest the possibility of an earlier origin. The discoverers of Vernanimalcula describe it as the fossil of a bilateral triploblastic animal that appeared at the end of the Marinoan glaciation prior to the Ediacaran period, implying an even earlier origin for eumetazoans.
References
External links
Bilateria. Tree of Life web project, US National Science Foundation. 2002. 6 January 2006.
Invertebrates and the Origin of Animal Diversity
Evers, Christine A., Lisa Starr. Biology:Concepts and Applications. 6th ed. United States:Thomson, 2006. .
TRICHOPLAX ADHAERENS (PLACOZOA TYPE) St. Petersburg. 2005
Metazoa: the Animals
Nielsen, C. 2001. Animal Evolution: Interrelationships of the Living Phyla, 2nd edition, 563 pp. Oxford Univ. Press, Oxford.
Animal taxa
Subkingdoms
Ediacaran first appearances | Eumetazoa | [
"Biology"
] | 740 | [
"Animal taxa",
"Animals"
] |
357,980 | https://en.wikipedia.org/wiki/Street%20fighting | Street fighting is hand-to-hand combat in public places between individuals or groups of people. The venue is usually a public place (e.g., a street), and the fight sometimes results in serious injury or even death.<ref>White, Rob. et al (2007). 'Youth Gangs, Violence and Anti-Social Behaviour. Australian Research and Alliance Club. pp. 18, 29.</ref> Some street fights can be gang related.
A typical situation involves two men arguing in a bar, during which dispute one suggests stepping outside, where the fight commences. It is often possible to avoid the fight by withdrawing from the situation; whereas in self-defense, a person is actively trying to escape the confrontation, using force if necessary to ensure his own safety.
In some martial arts communities, street fighting and self-defense are often considered synonymous.
History
Evidence for human fighting goes back 430,000 years in Spain, where a fossil skull was found with two fractures apparently caused by the same object, implying an intentional lethal attack. Another record of early human fighting is one that happened 9,500 to 10,500 years ago in Nataruk, Kenya. This hunter-gatherer fight was a group fight involving both males and females, including children, armed with bladelets and arrow projectiles. The fight was fought to protect valuables such as land, food and water resources, to defend tribes or families, or as a lethal response to the threat posed by an encounter between two groups of people.
Characteristic
Street fights can be planned ahead or occur suddenly, regardless of location and time. The frequency of physical assaults varies with crime rates, levels of poverty and access to weapons. In street fights, anyone can be an opponent, including friends, relatives or even strangers. Street fights usually start with an outburst of emotion such as anger, fear or indignation. Street fights do not last long, usually running for minutes or even seconds. The outcome of a fight is unpredictable because participants are unlikely to know the others' abilities, strengths or weaknesses.
The scene can go beyond expectation with the introduction of weapons or the participation of someone from the crowd, whether intentional or unintentional. In the past, only when an opponent died could the other participant be considered the winner. Similarly, at present, the match is only over when one surrenders, or both are unable to continue, when someone from the crowd or the police or a security guard stops the fight or "steps in", or when one of the combatants dies. Despite the brutal and life-threatening consequences, people's willingness to commit violence has increased over time, escalating the danger of street fights.
Causes
The causes of street fighting are varied. Originally, street fighting was a way of defending oneself. In the stone age, fights were mostly aimed for survival purposes – protecting territory, securing resources and defending families. According to Mike Martin, a London lecturer in war studies, "Humans fight to achieve status and belonging. They do so because, in evolutionary terms, these are the surest routes to survival and increased reproduction".
As humans evolved, new conflicts arose to satisfy more sophisticated wants, and the purpose of street fighting shifted to settling interpersonal conflicts arising from social stratification, misunderstanding, hate speech or retaliation. For instance, in areas that are criminally dominated and not under police surveillance, violence is believed to substantiate superior reputation and pride; in other words, people take part in street fights to obtain dominance because of the social status it confers. In another pattern, men assert their worth by insulting, humiliating and vilifying opponents, threatening the opponents' self-esteem, and violence becomes the go-to response. Additionally, some fights are driven by alcohol. Alcohol itself does not directly lead to violence, but it acts as a catalyst, allowing cheers from the crowd or provocation from opponents to ignite a fight. Since alcohol consumption impairs brain function, drunk people fail to assess the situation, which often results in overreaction and unpredictable fights.
Effects
Biological
It is theorized that certain biological features of the Homo lineage have evolved over time as a means to mitigate injury from hand-to-hand combat. Facial robusticity, which includes traits such as jaw adductor muscle strength and brow ridge size, may offer a protective effect against combat. The jaw adductors (the masseter and the temporalis) stretch as a means to absorb energy from the punch in order to reduce the likelihood of jaw dislocation and prevent fracture. The postcanine teeth may have evolved to be larger and thicker so as to allow the energy from the punch to be transferred from the jaw to the skull. Additionally, the proportion of the human hands have evolved in a way that allows for the formation of a fist, something that was not possible in pre-Homo species.
Physical and mental health
The consequences of street fighting are undeniably dangerous and often critical, and street fighters are exposed to both short-term and long-term physical health problems. These include temporary and permanent disabilities, fractures, loss of body parts, severe injuries, or death. The face, other parts of the head and neck, and the thorax are the most frequently targeted parts of the body, accounting for 83%, 4% and 2% of fractures respectively among all injuries. In addition to damaging physical health, street fighting can also result in mental illness, such as post-traumatic stress disorder, substance abuse and depression. Extreme feelings of guilt experienced by some perpetrators in the aftermath of a violent event may lead to suicide.
Involvement in street fights affects not only the participants but also, collaterally, the participants' family members and friends, especially small children. Traumatic exposure of small children to such negative experiences often leads to post-traumatic stress reactions, such as fear, sadness, numbness, timidity, moodiness, eating disorders, difficulty sleeping, or nightmares. Adults, too, are likely to have to cope with trauma even when they do not sustain direct injuries, leading to increases in preterm births, mortality rates, and communal trauma.
Legal
Street fighting is usually illegal due to its disruption of public order. Depending on each localities' laws and the gravity of the situation, participants may be liable to either a fine or imprisonment. In South Australia, for example, the maximum penalty for the offence of fighting in public is a $1,250 fine or three months imprisonment. In New South Wales, Australia, persons involved in a fight that could intimidate the public can be charged by the police for the offence of affray with a maximum punishment of ten years imprisonment. If any injuries are caused during the fight, the severity of the injury will impact the penalty of the participants. Intentional injuries, especially, will result in more severe penalties. One may still be liable for the injuries of the victim even if the injuries were not directly caused by that person but by another participating in the fight. If someone dies, all members in the group that are involved in the assault may be accused of murder, no matter who inflicted the fatal blow. Self-defence is generally too narrow to provide protection.
Economic
In terms of economics, street fights result in damage to social infrastructure. In 2000, a fund worth approximately 9 million euros was spent to repair damage done by street fighters over the previous three years. In 1995, in a Basque city, the destruction of public transport resulting from street fights cost 2.5 million euros.
Underground street fight clubs
Street fights used to happen in the dark, out of communal sight. With exposure on social media, however, street fights have become more visible. Organisers that help set up professional street fights are known as "clubs", which are run on a money-oriented basis. These clubs can host either amateur underground fights or professional ones. In New York, professional fighters are those who contend for a prize (money or a gift) with a monetary value exceeding $75. In contrast, amateur fights, also known as 'smokers', are unsanctioned fights where no safeguards or regulations are required.
Although some fight clubs still run illegally, some street fight clubs even obtain official approval, meaning they run under the supervision of a certified regulator. Some further requirements for professional fights enacted by the New York State Athletic Commission (NYSAC) include:
Medical check-ups for participants before and after the fight
A minimum attendance of one commission-designated doctor and an ambulance with medical personnel equipped with appropriate resuscitation kits to be on scene
Medical insurance must be provided to participants
The venue must meet safety requirements
Pre-fight medical check-ups are required to ensure that the participants are not using drugs or carrying infectious diseases such as HIV, hepatitis or other illnesses. Any fights that do not comply with the authoriser's rules and regulations are considered illegal, and the participants face legal penalties. The venue of the fight is changed every time to protect confidentiality and is only announced on the day of the fight. The promoters are in charge of finding different locations to host these fights, where indoor boxing rings, gyms, or gym mats with the crowd forming a barricade are used as a disguise so as not to attract public attention. Among the incentives that draw people into underground street fights, money and attention-seeking are the two most fundamental. In order to qualify for a fight, attendees have to go through a registration process. The fight is either between two randomly matched applicants whose identities are kept secret until the matching day, or between two attendees with unresolved conflicts. Sometimes it can be between two fighters eager to start their MMA careers who are matched right on the registration spot. Attendees are required to comply with the rules set by the club. The prize is usually given only to the winner, but sometimes both fighters are paid. The club is funded by entrance tickets sold to the audience at undisclosed prices. The audience may have to go through a security check, as weapons are not allowed inside the venue. On several occasions the audience gamble on the result of the fight; in particular, they place their bet on the attendee they expect to win in the hope of a worthy return. The fight lasts for three rounds; sometimes an additional round is conducted because the crowd's provocation fuels the combativeness of the attendees.
Notable street fighters
Bruce Lee: Lee was known for engaging in street fights before he started training in martial arts, and continued to street fight while training. He challenged students from rival schools, cross-trained in several disciplines, and eventually developed the hybrid martial art of Jeet Kune Do.
Chuck Wepner: A retired professional boxer. He was once a street fighter and took part in multiple street fights from a young age.
Haku: Professional wrestler with a fearsome reputation for street fighting and resisting arrest.
Ken Shamrock: He engaged in paid street fights while a pro wrestler prior to his mixed martial arts (MMA) career.
Tank Abbott: He engaged in many street fights before beginning his professional career with UFC.
Josh Barnett: Former UFC Heavyweight Champion Barnett engaged in street fights that he organized online prior to his professional MMA career.
Kimbo Slice: He started his career participating in street fights and gained public recognition after footage of him defeating his opponents went viral on the internet. In his first taped fight against a man named Big D, Ferguson left a large cut on his opponent's right eye which led internet fans to call him "Slice", becoming the last name to his already popular childhood nickname, Kimbo.
Jorge Masvidal: He was a known street fighter prior to his professional MMA career, including fighting and beating Kimbo Slice's protégé "Ray."
Eddie Alvarez: Former UFC Lightweight Champion Alvarez engaged in street fights due to a lack of opportunity before his professional career.
Nate Diaz: He has engaged in several street fights during his professional fighting career.
Lenny McLean: An English unlicensed boxer, bouncer, bodyguard, businessman and actor. He was known as "The Guv'nor", "the King of the Cobbles" and "the hardest man in Britain".
Bar fights
A bar fight, sometimes known as a pub brawl, is a type of street fight that happens in bars, pubs, and taverns. It is commonly depicted in fiction, most notably in Hollywood films and crime video games.
See also
Gouging (fighting style)
Jailhouse rock (fighting style)
Mutual combat
Slapboxing
Streetbeefs
Tawuran, mass street fighting between gangs of students in Indonesia
Trial by combat
References
Combat
Martial arts terminology
Riots
Fighting
Violence | Street fighting | [
"Biology"
] | 2,595 | [
"Behavior",
"Aggression",
"Human behavior",
"Violence"
] |
358,057 | https://en.wikipedia.org/wiki/When%20HARLIE%20Was%20One | When HARLIE Was One is a 1972 science fiction novel by American writer David Gerrold. It was nominated for the Nebula Award for Best Novel in 1972 and the Hugo Award for Best Novel in 1973. The novel, a "fix-up" of previously published short stories, was published as an original paperback by Ballantine Books in 1972, with an accompanying Science Fiction Book Club release. A revised version, subtitled "Release 2.0", was published in 1988 by Bantam Books.
Plot introduction
Central to the story is an artificial intelligence named H.A.R.L.I.E., also referred to by the proper name "HARLIE"—an acronym for Human Analog Replication, Lethetic Intelligence Engine (originally Human Analog Robot Life Input Equivalents).
HARLIE's story revolves around his relationship with David Auberson, the psychologist who is responsible for guiding HARLIE from childhood into adulthood. It is also the story of HARLIE's fight against being turned off, and the philosophical question of whether or not HARLIE is human; for that matter, what it means to be human.
When HARLIE Was One contains one of the first fictional representations of a computer virus (preceded by Gregory Benford in 1970), and one of the first uses of the term "virus" to describe a program that infects another computer.
Reception
Theodore Sturgeon reported that the novel "carries a good freight of social and psychological insight".
In other works
The HARLIE intelligence engine appears in a number of Gerrold's other works:
In the Star Wolf series, HARLIE is routinely installed as the administrating AI of Terran warships.
The Dingilliad series, Jumping Off the Planet, Bouncing Off the Moon, and Leaping to the Stars.
A Nest for Nightmares, the fifth book of The War Against the Chtorr.
HARLIE is a major character in Hella.
References
External links
1972 science fiction novels
1972 American novels
1988 American novels
American science fiction novels
Doubleday (publisher) books
Fictional computers
Novels by David Gerrold
Novels about artificial intelligence | When HARLIE Was One | [
"Technology"
] | 437 | [
"Fictional computers",
"Computers"
] |
358,069 | https://en.wikipedia.org/wiki/Proof%20by%20infinite%20descent | In mathematics, a proof by infinite descent, also known as Fermat's method of descent, is a particular kind of proof by contradiction used to show that a statement cannot possibly hold for any number, by showing that if the statement were to hold for a number, then the same would be true for a smaller number, leading to an infinite descent and ultimately a contradiction. It is a method which relies on the well-ordering principle, and is often used to show that a given equation, such as a Diophantine equation, has no solutions.
Typically, one shows that if a solution to a problem existed, which in some sense was related to one or more natural numbers, it would necessarily imply that a second solution existed, which was related to one or more 'smaller' natural numbers. This in turn would imply a third solution related to smaller natural numbers, implying a fourth solution, therefore a fifth solution, and so on. However, there cannot be an infinity of ever-smaller natural numbers, and therefore by mathematical induction, the original premise—that any solution exists—is incorrect: its correctness produces a contradiction.
An alternative way to express this is to assume one or more solutions or examples exists, from which a smallest solution or example—a minimal counterexample—can then be inferred. Once there, one would try to prove that if a smallest solution exists, then it must imply the existence of a smaller solution (in some sense), which again proves that the existence of any solution would lead to a contradiction.
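Stated symbolically (a standard formulation given here for illustration, with P an arbitrary property of natural numbers), the descent argument amounts to the following consequence of the well-ordering principle:

\[
\Bigl(\forall n \in \mathbb{N} :\; P(n) \Rightarrow \exists m \in \mathbb{N},\ m < n,\ P(m)\Bigr)
\;\Longrightarrow\;
\neg\exists n \in \mathbb{N} :\; P(n).
\]

If some n with P(n) existed, the set of such n would have a least element by well-ordering, yet the hypothesis produces a strictly smaller one, a contradiction.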
The earliest uses of the method of infinite descent appear in Euclid's Elements. A typical example is Proposition 31 of Book 7, in which Euclid proves that every composite integer is divided (in Euclid's terminology "measured") by some prime number.
The method was much later developed by Fermat, who coined the term and often used it for Diophantine equations. Two typical examples are showing the non-solvability of the Diophantine equation r2 + s4 = t4 and proving Fermat's theorem on sums of two squares, which states that an odd prime p can be expressed as a sum of two squares when p ≡ 1 (mod 4) (see Modular arithmetic and proof by infinite descent). In this way Fermat was able to show the non-existence of solutions in many cases of Diophantine equations of classical interest (for example, the problem of four perfect squares in arithmetic progression).
In some cases, to the modern eye, his "method of infinite descent" is an exploitation of the inversion of the doubling function for rational points on an elliptic curve E. The context is of a hypothetical non-trivial rational point on E. Doubling a point on E roughly doubles the length of the numbers required to write it (as number of digits), so that "halving" a point gives a rational with smaller terms. Since the terms are positive, they cannot decrease forever.
Number theory
In the number theory of the twentieth century, the infinite descent method was taken up again, and pushed to a point where it connected with the main thrust of algebraic number theory and the study of L-functions. The structural result of Mordell, that the rational points on an elliptic curve E form a finitely-generated abelian group, used an infinite descent argument based on E/2E in Fermat's style.
To extend this to the case of an abelian variety A, André Weil had to make more explicit the way of quantifying the size of a solution, by means of a height function – a concept that became foundational. To show that A(Q)/2A(Q) is finite, which is certainly a necessary condition for the finite generation of the group A(Q) of rational points of A, one must do calculations in what later was recognised as Galois cohomology. In this way, abstractly-defined cohomology groups in the theory become identified with descents in the tradition of Fermat. The Mordell–Weil theorem was at the start of what later became a very extensive theory.
Application examples
Irrationality of √2
The proof that the square root of 2 (√2) is irrational (i.e. cannot be expressed as a fraction of two whole numbers) was discovered by the ancient Greeks, and is perhaps the earliest known example of a proof by infinite descent. Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. The square root of two is occasionally called "Pythagoras' number" or "Pythagoras' Constant".
The ancient Greeks, not having algebra, worked out a geometric proof by infinite descent (John Horton Conway presented another geometric proof by infinite descent that may be more accessible). The following is an algebraic proof along similar lines:
Suppose that √2 were rational. Then it could be written as
√2 = p/q
for two natural numbers, p and q. Then squaring would give
2 = p2/q2, that is, p2 = 2q2,
so 2 must divide p2. Because 2 is a prime number, it must also divide p, by Euclid's lemma. So p = 2r, for some integer r.
But then,
2q2 = p2 = (2r)2 = 4r2, that is, q2 = 2r2,
which shows that 2 must divide q as well. So q = 2s for some integer s.
This gives
√2 = p/q = 2r/2s = r/s, with r < p and s < q.
Therefore, if √2 could be written as a rational number, then it could always be written as a rational number with smaller parts, which itself could be written with yet-smaller parts, ad infinitum. But this is impossible in the set of natural numbers. Since √2 is a real number, which can be either rational or irrational, the only option left is for √2 to be irrational.
(Alternatively, this proves that if √2 were rational, no "smallest" representation as a fraction could exist, as any attempt to find a "smallest" representation p/q would imply that a smaller one existed, which is a similar contradiction.)
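A minimal computational illustration of the descent step (not part of the classical argument; the helper name descend and the use of SymPy are assumptions made for this sketch): if positive integers with p2 = 2q2 existed, the map below would produce a strictly smaller pair satisfying the same equation, so no smallest solution could exist.

```python
from sympy import symbols, expand

p, q = symbols("p q", positive=True)

def descend(p, q):
    """Map a hypothetical solution of p**2 == 2*q**2 to a smaller one."""
    return 2 * q - p, p - q

p_new, q_new = descend(p, q)

# The defect p'**2 - 2*q'**2 equals -(p**2 - 2*q**2), so it vanishes exactly
# when the original defect vanishes.
assert expand((p_new**2 - 2 * q_new**2) + (p**2 - 2 * q**2)) == 0

# If p/q were sqrt(2) then q < p < 2q, so both new entries would be positive
# and strictly smaller, giving the impossible infinite descent.
```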
Irrationality of √k if it is not an integer
For positive integer k, suppose that √k is not an integer, but is rational and can be expressed as m/n for natural numbers m and n, and let q be the largest integer less than √k (that is, q is the floor of √k). Then
√k = m/n = m(√k − q)/(n(√k − q)) = (nk − mq)/(m − nq)
The numerator and denominator were each multiplied by the expression (√k − q)—which is positive but less than 1—and then simplified independently. So, the resulting products, say m′ and n′, are themselves integers, and are less than m and n respectively. Therefore, no matter what natural numbers m and n are used to express √k, there exist smaller natural numbers m′ < m and n′ < n that have the same ratio. But infinite descent on the natural numbers is impossible, so this disproves the original assumption that √k could be expressed as a ratio of natural numbers.
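The key algebraic step of this argument can be checked symbolically. The following sketch (an illustration only; variable names and the use of SymPy are assumptions) verifies that, under the hypothesis √k = m/n, multiplying numerator and denominator by (√k − q) yields the fraction (nk − mq)/(m − nq) with the same value.

```python
from sympy import symbols, sqrt, simplify

n, q, k = symbols("n q k", positive=True)
m = n * sqrt(k)                    # hypothetical exact representation sqrt(k) = m/n

new_num = m * (sqrt(k) - q)        # numerator multiplied by (sqrt(k) - q)
new_den = n * (sqrt(k) - q)        # denominator multiplied by (sqrt(k) - q)

assert simplify(new_num - (n * k - m * q)) == 0    # numerator simplifies to n*k - m*q
assert simplify(new_den - (m - n * q)) == 0        # denominator simplifies to m - n*q
assert simplify(new_num / new_den - sqrt(k)) == 0  # the ratio is still sqrt(k)
```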
Non-solvability of r2 + s4 = t4 and its permutations
The non-solvability of r2 + s4 = t4 in integers is sufficient to show the non-solvability of q4 + s4 = t4 in integers, which is a special case of Fermat's Last Theorem, and the historical proofs of the latter proceeded by more broadly proving the former using infinite descent. The following more recent proof demonstrates both of these impossibilities by proving still more broadly that a Pythagorean triangle cannot have any two of its sides each either a square or twice a square, since there is no smallest such triangle:
Suppose there exists such a Pythagorean triangle. Then it can be scaled down to give a primitive (i.e., with no common factors other than 1) Pythagorean triangle with the same property. Primitive Pythagorean triangles' sides can be written as x = 2ab, y = a2 − b2, z = a2 + b2, with a and b relatively prime and with a+b odd and hence y and z both odd. The property that y and z are each odd means that neither y nor z can be twice a square. Furthermore, if x is a square or twice a square, then each of a and b is a square or twice a square. There are three cases, depending on which two sides are postulated to each be a square or twice a square:
y and z: In this case, y and z are both squares. But then the right triangle with legs √(yz) and b2 and hypotenuse a2 also would have integer sides including a square leg (b2) and a square hypotenuse (a2), and would have a smaller hypotenuse (a2 compared to z = a2 + b2).
z and x: z is a square. The integer right triangle with legs a and b and hypotenuse √z also would have two sides (a and b) each of which is a square or twice a square, and a smaller hypotenuse (√z compared to z).
y and x: y is a square. The integer right triangle with legs √y and b and hypotenuse a would have two sides (b and a) each of which is a square or twice a square, with a smaller hypotenuse than the original triangle (a compared to z = a2 + b2).
In any of these cases, one Pythagorean triangle with two sides each of which is a square or twice a square has led to a smaller one, which in turn would lead to a smaller one, etc.; since such a sequence cannot go on infinitely, the original premise that such a triangle exists must be wrong.
This implies that the equations
r2 = s4 + t4
and
r2 = s4 − t4
cannot have non-trivial solutions, since non-trivial solutions would give Pythagorean triangles with two sides being squares.
For other similar proofs by infinite descent for the n = 4 case of Fermat's Theorem, see the articles by Grant and Perella and Barbara.
See also
Vieta jumping
References
Further reading
Diophantine equations
Mathematical proofs
Mathematical terminology | Proof by infinite descent | [
"Mathematics"
] | 2,079 | [
"Mathematical objects",
"Equations",
"Diophantine equations",
"nan",
"Number theory"
] |
358,086 | https://en.wikipedia.org/wiki/28978%20Ixion | 28978 Ixion (, provisional designation ) is a large trans-Neptunian object and a possible dwarf planet. It is located in the Kuiper belt, a region of icy objects orbiting beyond Neptune in the outer Solar System. Ixion is classified as a plutino, a dynamical class of objects in a 2:3 orbital resonance with Neptune. It was discovered in May 2001 by astronomers of the Deep Ecliptic Survey at the Cerro Tololo Inter-American Observatory, and was announced in July 2001. The object is named after the Greek mythological figure Ixion, who was a king of the Lapiths.
In visible light, Ixion appears dark and moderately red in color due to organic compounds covering its surface. Water ice has been suspected to be present on Ixion's surface, but may exist in trace amounts hidden underneath a thick layer of organic compounds. Ixion has a measured diameter of , making it the fourth-largest known plutino. Several astronomers have considered Ixion to be a possible dwarf planet, whereas others consider it a transitional object between irregularly-shaped small Solar System bodies and spherical dwarf planets. Ixion is currently not known to have a natural satellite, so its mass and density remain unknown.
History
Discovery
Ixion was discovered on 22 May 2001 by a team of American astronomers at the Cerro Tololo Inter-American Observatory in Chile. The discovery formed part of the Deep Ecliptic Survey, a survey conducted by American astronomer Robert Millis to search for Kuiper belt objects located near the ecliptic plane using telescopes at the facilities of the National Optical Astronomy Observatory. On the night of 22 May 2001, American astronomers James Elliot and Lawrence Wasserman identified Ixion in digital images of the southern sky taken with the 4-meter Víctor M. Blanco Telescope at Cerro Tololo. Ixion was first noted by Elliot while compiling two images taken approximately two hours apart, which revealed Ixion's slow motion relative to the background stars. At the time of discovery, Ixion was located in the constellation of Scorpius.
The discoverers of Ixion noted that it appeared relatively bright for a distant object, implying that it might be rather large for a TNO. The discovery supported suggestions that there were undiscovered large trans-Neptunian objects comparable in size to Pluto. Since Ixion's discovery, numerous large trans-Neptunian objects, notably the dwarf planets Haumea, Eris, and Makemake, have been discovered; in particular, Eris is almost the same size as Pluto.
The discovery of Ixion was formally announced by the Minor Planet Center in a Minor Planet Electronic Circular on 1 July 2001. It was given the provisional designation , indicating that it was discovered in the second half of May 2001. Ixion was the 1,923rd object discovered in the latter half of May, as indicated by the last letter and numbers in its provisional designation.
At the time of discovery, Ixion was thought to be among the largest trans-Neptunian objects in the Solar System, as implied by its high intrinsic brightness. These characteristics of Ixion prompted follow-up observations in order to ascertain its orbit, which would in turn improve the certainty of later size estimates of Ixion. In August 2001, a team of astronomers used the European Southern Observatory's Astrovirtel virtual observatory to automatically scan through archival precovery photographs obtained from various observatories. The team obtained nine precovery images of Ixion, with the earliest taken by the Siding Spring Observatory on 17 July 1982. These precovery images along with subsequent follow-up observations with the La Silla Observatory's 2.2-meter MPG/ESO telescope in 2001 extended Ixion's observation arc by over 18 years, sufficient for its orbit to be accurately determined and eligible for numbering by the Minor Planet Center. Ixion was given the permanent minor planet number 28978 on 2 September 2001.
Name
This minor planet is named after the Greek mythological figure Ixion, in accordance with the International Astronomical Union's (IAU's) naming convention which requires plutinos (objects in a 3:2 orbital resonance with Neptune) to be named after mythological figures associated with the underworld. In Greek mythology, Ixion was the king of the legendary Lapiths of Thessaly and had married Dia, a daughter of Deioneus (or Eioneus), whom Ixion promised to give valuable bridal gifts. Ixion invited Deioneus to a banquet but instead pushed him into a pitfall of burning coals and wood, killing Deioneus. Although the lesser gods despised his actions, Zeus pitied Ixion and invited him to a banquet with other gods. Rather than being grateful, Ixion became lustful towards Zeus's wife, Hera. Zeus found out about his intentions and created the cloud Nephele in the shape of Hera, and tricked Ixion into coupling with it, fathering the race of Centaurs. For his crimes, Ixion was expelled from Olympus, blasted with a thunderbolt, and bound to a burning solar wheel in the underworld for all eternity.
The name for Ixion was suggested by E. K. Elliot, who was also involved in the naming of Kuiper belt object 38083 Rhadamanthus. The naming citation was published by the Minor Planet Center on 28 March 2002.
The usage of planetary symbols is discouraged in astronomy, so Ixion never received a symbol in the astronomical literature. There is no standard symbol for Ixion used by astrologers either. Sandy Turnbull proposed a symbol for Ixion (), which includes the initials I and X as well as depicts the solar wheel that Ixion was bound to in Tartarus. Denis Moskowitz, a software engineer in Massachusetts who designed the symbols for most of the dwarf planets, substitutes the Greek letter iota (Ι) and xi (Ξ) for I and X, creating a variant (). These symbols are occasionally mentioned on astrological websites, but are not used broadly.
Orbit and rotation
Ixion is classified as a plutino, a large population of resonant trans-Neptunian objects in a 2:3 mean-motion orbital resonance with Neptune. Thus, Ixion completes two orbits around the Sun for every three orbits that Neptune takes. At the time of Ixion's discovery, it was initially thought to be in a 3:4 orbital resonance with Neptune, which would have made Ixion closer to the Sun. Ixion orbits the Sun at an average distance of , taking 251 years to complete a full orbit. This is characteristic of all plutinos, which have orbital periods around 250 years and semi-major axes around 39 AU.
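As a rough consistency check (not taken from the source; it assumes Neptune's orbital period of about 164.8 years and a Sun-dominated two-body approximation), Kepler's third law applied to a 3:2 period ratio with Neptune reproduces the figures quoted above.

```python
# Back-of-the-envelope check of the 2:3 resonance figures for Ixion.
T_NEPTUNE = 164.8                     # years (assumed value, not from the article)
t_ixion = 1.5 * T_NEPTUNE             # 3 Neptune orbits take the time of 2 Ixion orbits
a_ixion = t_ixion ** (2.0 / 3.0)      # Kepler's third law: a [AU] = T [yr] ** (2/3)

print(f"period ~ {t_ixion:.0f} yr, semi-major axis ~ {a_ixion:.1f} AU")
# Prints roughly 247 yr and 39.4 AU, consistent with the ~251 yr and ~39 AU above.
```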
Like Pluto, Ixion's orbit is elongated and inclined to the ecliptic. Ixion has an orbital eccentricity of 0.24 and an orbital inclination of 19.6 degrees, slightly greater than Pluto's inclination of 17 degrees. Over the course of its orbit, Ixion's distance from the Sun varies from 30 AU at perihelion (closest distance) to 49.6 AU at aphelion (farthest distance). Although Ixion's orbit is similar to that of Pluto, their orbits are oriented differently: Ixion's perihelion is below the ecliptic whereas Pluto's is above it (see right image). Ixion is approximately 39 AU from the Sun and is currently moving closer, approaching perihelion by 2070. Simulations by the Deep Ecliptic Survey show that Ixion can acquire a perihelion distance (qmin) as small as 27.5 AU over the next 10 million years.
The rotation period of Ixion is uncertain; various photometric measurements suggest that it displays very little variation in brightness, with a small light curve amplitude of less than 0.15 magnitudes. Initial attempts to determine Ixion's rotation period were conducted by astronomer Ortiz and colleagues in 2001 but yielded inconclusive results. Although their short-term photometric data was insufficient for Ixion's rotation period to be determined based on its brightness variations, they were able to constrain Ixion's light curve amplitude below 0.15 magnitudes. Astronomers Sheppard and Jewitt obtained similarly inconclusive results in 2003 and provided an amplitude constraint less than 0.05 magnitudes, considerably less than Ortiz's amplitude constraint. In 2010, astronomers Rousselot and Petit observed Ixion with the European Southern Observatory's New Technology Telescope and determined Ixion's rotation period to be hours, with a light curve amplitude around 0.06 magnitudes. Galiazzo and colleagues obtained a shorter rotation period of hours in 2016, though they calculated that there is a 1.2% probability that their result may be erroneous.
Physical characteristics
Size and brightness
Ixion has a measured diameter of , with an optical absolute magnitude of 3.77 and a geometric albedo (reflectivity) of 0.11. Compared to Pluto and its moon Charon, Ixion is less than one-third the diameter of Pluto and three-fifths the diameter of Charon. Ixion is the fourth-largest known plutino that has a well-constrained diameter, preceding , , and Pluto. It was the intrinsically brightest object discovered by the Deep Ecliptic Survey and is among the twenty brightest trans-Neptunian objects known according to astronomer Michael Brown and the Minor Planet Center.
Ixion was the largest and brightest Kuiper belt object found when it was discovered. Under the assumption of a low albedo, it was presumed to have a diameter around , which would have made it larger than the dwarf planet and comparable in size to Charon. Subsequent observations of Ixion with the La Silla Observatory's MPG/ESO telescope along with the European Southern Observatory's Astrovirtel in August 2001 concluded a similar size around , though under the former assumption of a low albedo.
In 2002, astronomers of the Max Planck Institute for Radio Astronomy measured Ixion's thermal emission at millimeter wavelengths with the IRAM 30m telescope and obtained an albedo of 0.09, corresponding to a diameter of , consistent with previous assumptions of Ixion's size and albedo. They later reevaluated their results in 2003 and realized that their detection of Ixion's thermal emission was spurious; follow-up observations with the IRAM telescope did not detect any thermal emission within the millimeter range at frequencies of 250 GHz, implying a high albedo and consequently a smaller size for Ixion. The lower limit for Ixion's albedo was constrained at 0.15, suggesting that Ixion's diameter did not exceed .
With space-based telescopes such as the Spitzer Space Telescope, astronomers were able to more accurately measure Ixion's thermal emissions, allowing for more accurate estimates of its albedo and size. Preliminary thermal measurements with Spitzer in 2005 yielded a much higher albedo constraint of 0.25–0.50, corresponding to a diameter range of . Further Spitzer thermal measurements at multiple wavelength ranges (bands) in 2007 yielded mean diameter estimates around and for a single-band and two-band solution for the data, respectively. From these results, the adopted mean diameter was (), just beyond Spitzer's 2005 diameter constraint albeit having a large margin of error. Ixion's diameter was later revised to , based on multi-band thermal observations by the Herschel Space Observatory along with Spitzer in 2013.
On 13 October 2020, Ixion occulted a 10th magnitude red giant star (star Gaia DR2 4056440205544338944), blocking out its light for a duration of approximately 45 seconds. The stellar occultation was observed by astronomers from seven different sites across the western United States. Of the ten participating observers, eight of them reported positive detections of the occultation. Observers from the Lowell Observatory provided highly precise measurements of the occultation chord timing, allowing for tight constraints to Ixion's diameter and possible atmosphere. An elliptical fit for Ixion's occultation profile gives projected dimensions of approximately , corresponding to a projected spherical diameter of . The precise Lowell Observatory chords place an upper limit surface pressure of <2 microbars for any possible atmosphere of Ixion.
Possible dwarf planet
Astronomer Gonzalo Tancredi considers Ixion as a likely candidate as it has a diameter greater than , the estimated minimum size for an object to achieve hydrostatic equilibrium, under the assumption of a predominantly icy composition. Ixion also displays a light curve amplitude less than 0.15 magnitudes, indicative of a likely spheroidal shape, hence why Tancredi considered Ixion as a likely dwarf planet. American astronomer Michael Brown considers Ixion to highly likely be a dwarf planet, placing it at the lower end of the "highly likely" range. However, in 2019, astronomer William Grundy and colleagues proposed that trans-Neptunian objects similar in size to Ixion, around in diameter, have not collapsed into solid bodies and are thus transitional between smaller, porous (and thus low-density) bodies and larger, denser, brighter and geologically differentiated planetary bodies such as dwarf planets. Ixion is situated within this size range, suggesting that it is at most only partially differentiated, with a porous internal structure. While Ixion's interior may have collapsed gravitationally, its surface remained uncompressed, implying that Ixion might not be in hydrostatic equilibrium and thus not a dwarf planet. However, this notion for Ixion cannot currently be tested: the object is not currently known to have any natural satellites, and thus Ixion's mass and density cannot currently be measured. Only two attempts with the Hubble Space Telescope have been made to find a satellite within an angular distance of 0.5 arcseconds from Ixion, and it has been suggested that there is a chance as high as 0.5% that a satellite may have been missed in these searches.
Spectra and surface
The surface of Ixion is very dark and unevolved, resembling those of smaller, primitive Kuiper belt objects such as Arrokoth. In the visible spectrum, Ixion appears moderately red in color, similar to the large Kuiper belt object . Ixion's reflectance spectrum displays a red spectral slope that extends from wavelengths of 0.4 to 0.95 μm, in which it reflects more light at these wavelengths. Longward of 0.85 μm, Ixion's spectrum becomes flat and featureless, especially at near-infrared wavelengths. In the near-infrared, Ixion's reflectance spectrum appears neutral in color and lacks apparent absorption signatures of water ice at wavelengths of 1.5 and 2 μm. Although water ice appears to be absent in Ixion's near-infrared spectrum, Barkume and colleagues have reported a detection of weak absorption signatures of water ice in Ixion's near-infrared spectrum in 2007. Ixion's featureless near-infrared spectrum indicates that its surface is covered with a thick layer of dark organic compounds irradiated by solar radiation and cosmic rays.
The red color of Ixion's surface originates from the irradiation of water- and organic-containing clathrates by solar radiation and cosmic rays, which produces dark, reddish heteropolymers called tholins that cover its surface. The production of tholins on Ixion's surface is responsible for Ixion's red, featureless spectrum as well as its low surface albedo. Ixion's neutral near-infrared color and apparent lack of water ice indicates that it has a thick layer of tholins covering its surface, suggesting that Ixion has undergone long-term irradiation and has not experienced resurfacing by impact events that may otherwise expose water ice underneath. While Ixion is generally known to have a red color, visible and near-infrared observations by the Very Large Telescope (VLT) in 2006 and 2007 paradoxically found a bluer color. This discrepancy was concluded to be an indication of heterogeneities across its surface, which may also explain the conflicting detections of water ice in various studies.
In 2003, VLT observations tentatively resolved a weak absorption feature at 0.8 μm in Ixion's spectrum, which could possibly be attributed to surface materials aqueously altered by water. However, it was not confirmed in a follow-up study by Boehnhardt and colleagues in 2004, concluding that the discrepancy between the 2003 and 2004 spectroscopic results may be the result of Ixion's heterogenous surface. In that same study, their results from photometric and polarimetric observations suggest that Ixion's surface consists of a mixture of mostly dark material and a smaller proportion of brighter, icy material. Boehnhardt and colleagues suggested a mixing ratio of 6:1 for dark and bright material as a best-fit model for a geometric albedo of 0.08. Based on combined visible and infrared spectroscopic results, they suggested that Ixion's surface consists of a mixture largely of amorphous carbon and tholins, with the following best-fit model of Ixion's surface composition: 65% amorphous carbon, 20% cometary ice tholins (ice tholin II), 13% nitrogen and methane-rich Titan tholins, and 2% water ice.
In 2005, astronomers Lorin and Rousselot observed Ixion with the VLT in attempt to search for evidence of cometary activity. They did not detect a coma around Ixion, placing an upper limit of for Ixion's dust production rate.
Exploration
The New Horizons spacecraft, which successfully flew by Pluto in 2015, observed Ixion from afar using its long range imager on 13 and 14 July 2016. The spacecraft detected Ixion at magnitude 20.2 from a range of , and was able to observe it from a high phase angle of 64 degrees, enabling the determination of the light scattering properties and photometric phase curve behavior of its surface.
In a study published by Ashley Gleaves and colleagues in 2012, Ixion was considered as a potential target for an orbiter mission concept, which would be launched on an Atlas V 551 or Delta IV HLV rocket. For an orbiter mission to Ixion, the spacecraft would have a launch date in November 2039 and use a gravity assist from Jupiter, taking 20 to 25 years to arrive. Gleaves concluded that Ixion and were the most feasible targets for the orbiter, as the trajectories required the fewest maneuvers for orbital insertion around either. For a flyby mission to Ixion, planetary scientist Amanda Zangari calculated that a spacecraft could take just over 10 years to arrive at Ixion using a Jupiter gravity assist, based on a launch date of 2027 or 2032. Ixion would be approximately 31 to 35 AU from the Sun when the spacecraft arrives. Alternatively, a flyby mission with a later launch date of 2040 would also take just over 10 years, using a Jupiter gravity assist. By the time the spacecraft arrives in 2050, Ixion would be approximately 31 to 32 AU from the Sun. Other trajectories using gravity assists from Jupiter or Saturn have also been considered. A trajectory using gravity assists from Jupiter and Saturn could take under 22 years, based on a launch date of 2035 or 2040, whereas a trajectory using one gravity assist from Saturn could take at least 19 years, based on a launch date of 2038 or 2040. Using these alternative trajectories for the spacecraft, Ixion would be approximately 30 AU from the Sun when the spacecraft arrives.
Notes
References
External links
Astronomy Picture of the Day–30 August 2001
Beyond Jupiter: The World of Distant Minor Planets – (28978) Ixion
Plutinos
Discoveries by the Deep Ecliptic Survey
Ixion
Possible dwarf planets
Objects observed by stellar occultation
20010522 | 28978 Ixion | [
"Physics",
"Astronomy"
] | 4,113 | [
"Concepts in astronomy",
"Unsolved problems in astronomy",
"Possible dwarf planets"
] |
358,196 | https://en.wikipedia.org/wiki/Optimal%20solutions%20for%20the%20Rubik%27s%20Cube | Optimal solutions for the Rubik's Cube are solutions that are the shortest in some sense. There are two common ways to measure the length of a solution. The first is to count the number of quarter turns. The second is to count the number of outer-layer twists, called "face turns". A move to turn an outer layer two quarter (90°) turns in the same direction would be counted as two moves in the quarter turn metric (QTM), but as one turn in the face metric (FTM, or HTM "Half Turn Metric", or OBTM "Outer Block Turn Metric").
The maximal number of face turns needed to solve any instance of the Rubik's Cube is 20, and the maximal number of quarter turns is 26. These numbers are also the diameters of the corresponding Cayley graphs of the Rubik's Cube group. In STM (slice turn metric), the minimal number of turns is unknown.
There are many algorithms to solve scrambled Rubik's Cubes. An algorithm that solves a cube in the minimum number of moves is known as God's algorithm.
Move notation
To denote a sequence of moves on the 3×3×3 Rubik's Cube, this article uses "Singmaster notation", which was developed by David Singmaster.
The following are standard moves, which do not move centre cubies of any face to another location:
The letters L, R, F, B, U, and D indicate a clockwise quarter turn of the left, right, front, back, up, and down face respectively. A half turn (i.e. 2 quarter turns in the same direction) are indicated by appending a 2. A counterclockwise turn is indicated by appending a prime symbol ( ′ ).
However, because this notation is human-oriented, clockwise is taken as the positive direction, unlike the usual mathematical convention in which counterclockwise is positive.
The following are non-standard moves
Non-standard moves are usually represented with lowercase letters in contrast to the standard moves above.
Moving centre cubies of faces to other locations:
The letters M, S and E are used to denote the turning of a middle layer. M (short for "Middle" layer) represents turning the layer between the R and L faces 1 quarter turn clockwise (top to front), as seen facing the (invisible) L face. S (short for "Standing" layer) represents turning the layer between the F and B faces 1 quarter turn clockwise (top to bottom), as seen facing the (visible) F face. E (short for "Equator" layer) represents turning the layer between the U and D faces 1 quarter turn clockwise (left to right), as seen facing the (invisible) D face. As with regular turns, a 2 signifies a half turn and a prime (') indicates a turn counterclockwise.
The letters H, S and V are used to denote the turning of a middle layer. H (short for "Horizontal" layer) represents turning the layer between the U and D faces 1 quarter turn clockwise, as seen facing the (visible) U face. S (short for "Side" layer) represents turning the layer between the F and B faces 1 quarter turn clockwise, as seen facing the (visible) F face. V (short for "Vertical" layer) represents turning the layer between the R and L faces 1 quarter turn clockwise, as seen facing the (visible) R face. As with regular turns, a prime (') indicates a turn counterclockwise and a 2 signifies a half turn.
Instead, lowercase letters r, f and u are also used to denote turning layers next to R, F and U respectively in the same direction as R, F and U. This is more consistent with 4-layered cubes.
In multiple-layered cubes, numbers may precede face names to indicate rotation of the nth layer from the named face. 2R, 2F and 2U are then used to denote turning layers next to R, F and U respectively in the same direction as R, F and U. Using this notation for a three-layered cube is more consistent with multiple-layered cubes.
Rotating the whole cube:
The letters x, y and z are used to signify cube rotations. x signifies rotating the cube in the R direction. y signifies the rotation of the cube in the U direction. z signifies the rotation of the cube in the F direction. These cube rotations are often used in algorithms to make them smoother and faster. As with regular turns, a 2 signifies a half turn and a prime (') indicates a turn counterclockwise. Note that these spatial rotations are usually represented with lowercase letters.
Lower bounds
It can be proven by counting arguments that there exist positions needing at least 18 moves to solve. To show this, first count the number of cube positions that exist in total, then count the number of positions achievable using at most 17 moves starting from a solved cube. It turns out that the latter number is smaller.
This argument was not improved upon for many years. Also, it is not a constructive proof: it does not exhibit a concrete position that needs this many moves. It was conjectured that the so-called superflip would be a position that is very difficult. A Rubik's Cube is in the superflip pattern when each corner piece is in the correct position, but each edge piece is incorrectly oriented. In 1992, a solution for the superflip with 20 face turns was found by Dik T. Winter, of which the minimality was shown in 1995 by Michael Reid, providing a new lower bound for the diameter of the cube group. Also in 1995, a solution for superflip in 24 quarter turns was found by Michael Reid, with its minimality proven by Jerry Bryan. In 1998, a new position requiring more than 24 quarter turns to solve was found. The position, which was called a 'superflip composed with four spot' needs 26 quarter turns.
Upper bounds
The first upper bounds were based on the 'human' algorithms. By combining the worst-case scenarios for each part of these algorithms, the typical upper bound was found to be around 100.
Perhaps the first concrete value for an upper bound was the 277 moves mentioned by David Singmaster in early 1979. He simply counted the maximum number of moves required by his cube-solving algorithm. Later, Singmaster reported that Elwyn Berlekamp, John Conway, and Richard K. Guy had come up with a different algorithm that took at most 160 moves. Soon after, Conway's Cambridge Cubists reported that the cube could be restored in at most 94 moves.
Thistlethwaite's algorithm
The breakthrough, known as "descent through nested sub-groups" was found by Morwen Thistlethwaite; details of Thistlethwaite's algorithm were published in Scientific American in 1981 by Douglas Hofstadter. The approaches to the cube that led to algorithms with very few moves are based on group theory and on extensive computer searches. Thistlethwaite's idea was to divide the problem into subproblems. Where algorithms up to that point divided the problem by looking at the parts of the cube that should remain fixed, he divided it by restricting the type of moves that could be executed. In particular he divided the cube group into the following chain of subgroups:
Next he prepared tables for each of the right coset spaces . For each element he found a sequence of moves that took it to the next smaller group. After these preparations he worked as follows. A random cube is in the general cube group . Next he found this element in the right coset space . He applied the corresponding process to the cube. This took it to a cube in . Next he looked up a process that takes the cube to , next to and finally to .
Although the whole cube group is very large (~4.3×1019), the right coset spaces and are much smaller.
The coset space is the largest and contains only 1082565 elements. The number of moves required by this algorithm is the sum of the largest process in each step.
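The descent through nested subgroups can be sketched abstractly. The sketch below is not Thistlethwaite's code; coset_key, tables and apply_moves are hypothetical placeholders standing in for his precomputed coset tables and a cube model.

```python
def solve_by_nested_subgroups(cube, tables, coset_key, apply_moves):
    """Reduce a cube through a chain of subgroups G0 > G1 > ... > {identity}.

    tables[i] maps the key of a right coset of G(i+1) in G(i) to a move
    sequence taking any cube in that coset into G(i+1).
    """
    solution = []
    for i, table in enumerate(tables):
        moves = table[coset_key(i, cube)]    # look up this stage's coset
        cube = apply_moves(cube, moves)      # cube now lies in the next subgroup
        solution.extend(moves)
    return solution                          # concatenation of all stage solutions
```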
Initially, Thistlethwaite showed that any configuration could be solved in at most 85 moves. In January 1980 he improved his strategy to yield a maximum of 80 moves. Later that same year, he reduced the number to 63, and then again to 52. By exhaustively searching the coset spaces it was later found that the worst possible number of moves for each stage was 7, 10, 13, and 15, giving a total of 45 moves at most. There have been implementations of Thistlethwaite's algorithm in various computer languages.
Kociemba's algorithm
Thistlethwaite's algorithm was improved by Herbert Kociemba in 1992. He reduced the number of intermediate groups to only two:
As with Thistlethwaite's algorithm, he would search through the right coset space to take the cube to group . Next he searched the optimal solution for group . The searches in and were both done with a method equivalent to iterative deepening A* (IDA*). The search in needs at most 12 moves and the search in at most 18 moves, as Michael Reid showed in 1995. By also generating suboptimal solutions that take the cube to group and looking for short solutions in , much shorter overall solutions are usually obtained. Using this algorithm solutions are typically found of fewer than 21 moves, though there is no proof that it will always do so.
In 1995 Michael Reid proved that using these two groups every position can be solved in at most 29 face turns, or in 42 quarter turns. This result was improved by Silviu Radu in 2005 to 40.
At first glance, this algorithm appears to be practically inefficient: if contains 18 possible moves (each move, its prime, and its 180-degree rotation), that leaves (over 1 quadrillion) cube states to be searched. Even with a heuristic-based computer algorithm like IDA*, which may narrow it down considerably, searching through that many states is likely not practical. To solve this problem, Kociemba devised a lookup table that provides an exact heuristic for . When the exact number of moves needed to reach is available, the search becomes virtually instantaneous: one need only generate 18 cube states for each of the 12 moves and choose the one with the lowest heuristic each time. This allows the second heuristic, that for , to be less precise and still allow for a solution to be computed in reasonable time on a modern computer.
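The effect of an exact-distance lookup table can be sketched generically: once a table stores the true phase-1 distance of every relevant state, extracting a phase-1 solution needs no search at all, only a greedy walk that decreases the tabulated distance by one at each step. This sketch is illustrative rather than Kociemba's implementation; dist, apply_move and moves are assumed to be supplied by the caller.

```python
def walk_with_exact_heuristic(state, dist, apply_move, moves):
    """Greedy phase-1 extraction, assuming dist[s] is the exact distance of s
    to the target subgroup; some neighbour then always has distance one less."""
    solution = []
    while dist[state] > 0:
        for m in moves:
            nxt = apply_move(state, m)
            if dist[nxt] == dist[state] - 1:   # guaranteed to exist for an exact table
                solution.append(m)
                state = nxt
                break
    return solution
```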
Korf's algorithm
Using these group solutions combined with computer searches will generally quickly give very short solutions. But these solutions do not always come with a guarantee of their minimality. To search specifically for minimal solutions a new approach was needed.
In 1997 Richard Korf announced an algorithm with which he had optimally solved random instances of the cube. Of the ten random cubes he did, none required more than 18 face turns. The method he used is called IDA* and is described in his paper "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases". Korf describes this method as follows
IDA* is a depth-first search that looks for increasingly longer solutions in a series of iterations, using a lower-bound heuristic to prune branches once a lower bound on their length exceeds the current iteration's bound.
It works roughly as follows. First he identified a number of subproblems that are small enough to be solved optimally. He used:
The cube restricted to only the corners, not looking at the edges
The cube restricted to only 6 edges, not looking at the corners nor at the other edges.
The cube restricted to the other 6 edges.
Clearly the number of moves required to solve any of these subproblems is a lower bound for the number of moves needed to solve the entire cube.
Given a random cube C, it is solved as iterative deepening. First all cubes are generated that are the result of applying 1 move to it. That is C * F, C * U, ... Next, from this list, all cubes are generated that are the result of applying two moves. Then three moves and so on. If at any point a cube is found that needs too many moves based on the lower bounds to still be optimal, it can be eliminated from the list.
Although this algorithm will always find optimal solutions, there is no worst-case analysis. It is not known in general how many iterations this algorithm will need to reach an optimal solution. An implementation of this algorithm can be found here.
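A compressed sketch of the IDA* loop described above, written generically (this is not Korf's code; heuristic, apply_move and moves stand in for the pattern databases and the cube model):

```python
def ida_star(start, is_solved, heuristic, apply_move, moves):
    """Iterative deepening A*: repeated depth-first searches with an increasing
    bound, pruning branches whose lower bound exceeds the current bound."""
    def search(state, g, bound, path):
        f = g + heuristic(state)              # admissible lower bound on total length
        if f > bound:
            return f                          # smallest f-value that exceeded the bound
        if is_solved(state):
            return path
        minimum = float("inf")
        for m in moves:
            result = search(apply_move(state, m), g + 1, bound, path + [m])
            if isinstance(result, list):
                return result                 # a solution was found below this node
            minimum = min(minimum, result)
        return minimum

    bound = heuristic(start)
    while True:
        result = search(start, 0, bound, [])
        if isinstance(result, list):
            return result                     # first solution found is optimal
        bound = result                        # deepen to the next promising bound
```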
Further improvements, and finding God's Number
In 2006, Silviu Radu further improved his methods to prove that every position can be solved in at most 27 face turns or 35 quarter turns. Daniel Kunkle and Gene Cooperman in 2007 used a supercomputer to show that all unsolved cubes can be solved in no more than 26 moves (in face-turn metric). Instead of attempting to solve each of the billions of variations explicitly, the computer was programmed to bring the cube to one of 15,752 states, each of which could be solved within a few extra moves. All were proved solvable in 29 moves, with most solvable in 26. Those that could not initially be solved in 26 moves were then solved explicitly, and shown that they too could be solved in 26 moves.
Tomas Rokicki reported in a 2008 computational proof that all unsolved cubes could be solved in 25 moves or fewer. This was later reduced to 23 moves. In August 2008, Rokicki announced that he had a proof for 22 moves.
Finally, in 2010, Tomas Rokicki, Herbert Kociemba, Morley Davidson, and John Dethridge gave the final computer-assisted proof that all cube positions could be solved with a maximum of 20 face turns.
In 2009, Tomas Rokicki proved that 29 moves in the quarter-turn metric is enough to solve any scrambled cube. And in 2014, Tomas Rokicki and Morley Davidson proved that the maximum number of quarter-turns needed to solve the cube is 26.
The face-turn and quarter-turn metrics differ in the nature of their antipodes.
An antipode is a scrambled cube that is maximally far from solved, one that requires the maximum number of moves to solve. In the half-turn metric with a maximum number of 20, there are hundreds of millions of such positions. In the quarter-turn metric, only a single position (and its two rotations) is known that requires the maximum of 26 moves. Despite significant effort, no additional quarter-turn distance-26 positions have been found. Even at distance 25, only two positions (and their rotations) are known to exist. At distance 24, perhaps 150,000 positions exist.
Feather's algorithm
In 2015, Michael Feather introduced a unique solving algorithm on his website. Like Kociemba's older algorithm, Feather's algorithm is a 2-phase algorithm, able to generate both suboptimal and optimal solutions in reasonable time on a modern device. Unlike Thistlethwaite-like algorithms, Feather's algorithm is not heavily based on the mathematical field of group theory.
Intermediate state for phase 1 consists of a solved 3-color Rubik's cube (on a 6-color cube, it means opposite colors being on opposite faces). Phase 2 consists of solving a 6-color cube.
At first sight it may seem that phase 1 of Feather's algorithm is basically the same as first 3 phases of Thistlethwaite's algorithm. However, there is a substantial difference between a 3-color reduction which is having a total of 3,981,312 configurations, and a half-turn reduction which is having a total of 663,552 configurations. Feather's algorithm goes as follows: any 3-color solutions that arise from the nodes being generated are then looked up in the array containing distances from intermediate 3-color solutions to the final 6-color solution (3,981,312 configurations), and if it is 8 moves or less (of which there are 117,265 configurations) then a solution is generated.
References
Further reading
External links
How to solve the Rubik's Cube, a Wikibooks article that gives an overview over several algorithms that are simple enough to be memorizable by humans. However, such algorithms will usually not give an optimal solution which only uses the minimum possible number of moves.
Rubik's Cube
Computer-assisted proofs | Optimal solutions for the Rubik's Cube | [
"Mathematics"
] | 3,422 | [
"Computer-assisted proofs"
] |
358,237 | https://en.wikipedia.org/wiki/Roof%20garden | A roof garden is a garden on the roof of a building. Besides the decorative benefit, roof plantings may provide food, temperature control, hydrological benefits, architectural enhancement, habitats or corridors for wildlife, recreational opportunities, and in large scale it may even have ecological benefits. The practice of cultivating food on the rooftop of buildings is sometimes referred to as rooftop farming. Rooftop farming is usually done using green roof, hydroponics, aeroponics or air-dynaponics systems or container gardens.
History
Humans have grown plants atop structures since the ziggurats of ancient Mesopotamia (4th millennium BC–600 BC) had plantings of trees and shrubs on aboveground terraces. An example in Roman times was the Villa of the Mysteries in Pompeii, which had an elevated terrace where plants were grown. A roof garden has also been discovered around an audience hall in Roman-Byzantine Caesarea. The medieval Egyptian city of Fustat had a number of high-rise buildings that Nasir Khusraw in the early 11th century described as rising up to 14 stories, with roof gardens on the top story complete with ox-drawn water wheels for irrigating them.
Among the Seven Wonders of the Ancient World, The Hanging Gardens of Babylon are often depicted as tall structures holding vegetation; even immense trees.
In New York City between 1880 and Prohibition large rooftop gardens built included the Hotel Astor (New York City), the American Theater on Eighth Avenue, the garden atop Stanford White's 1890 Madison Square Garden, and the Paradise Roof Garden opened by Oscar Hammerstein I in 1900.
Commercial greenhouses on rooftops have existed at least since 1969, when Terrestris rooftop nursery opened on 60th st. in New York City.
In the 2010s, large commercial hydroponic rooftop farms were started by Gotham Greens, Lufa Farms, and others.
Environmental impact
Roof gardens are most often found in urban environments. Plants have the ability to reduce the overall heat absorption of the building which then reduces energy consumption for cooling. "The primary cause of heat build-up in cities is insolation, the absorption of solar radiation by roads and buildings in the city and the storage of this heat in the building material and its subsequent re-radiation. Plant surfaces however, as a result of transpiration, do not rise more than above the ambient and are sometimes cooler." This then translates into a cooling of the environment between , depending on the area on earth (in hotter areas, the environmental temperature will cool more). The study was performed by the University of Cardiff.
A study at the National Research Council of Canada showed the differences between roofs with gardens and roofs without gardens against temperature. The study shows temperature effects on different layers of each roof at different times of the day. Roof gardens are obviously very beneficial in reducing the effects of temperature against roofs without gardens. “If widely adopted, rooftop gardens could reduce the urban heat island, which would decrease smog episodes, problems associated with heat stress and further lower energy consumption.”
Aside from rooftop gardens providing resistance to thermal radiation, rooftop gardens are also beneficial in reducing rain run off. A roof garden can delay run off; reduce the rate and volume of run off. “As cities grow, permeable substrates are replaced by impervious structures such as buildings and paved roads. Storm water run-off and combined sewage overflow events are now major problems for many cities in North America. A key solution is to reduce peak flow by delaying (e.g., control flow drain on roofs) or retaining run-off (e.g., rain detention basins). Rooftop gardens can delay peak flow and retain the run-off for later use by the plants.”
Urban agriculture
“In an accessible rooftop garden, space becomes available for localized small-scale urban agriculture, a source of local food production. An urban garden can supplement the diets of the community it feeds with fresh produce and provide a tangible tie to food production.” At Trent University, there is currently a working rooftop garden which provides food to the student café and local citizens.
Available gardening areas in cities are often seriously lacking, which is likely the key impetus for many roof gardens. The garden may be on the roof of an autonomous building which takes care of its own water and waste. Hydroponics and other alternative methods can expand the possibilities of roof top gardening by reducing, for example, the need for soil or its tremendous weight. Plantings in containers are used extensively in roof top gardens. Planting in containers prevents added stress to the roof's waterproofing. One high-profile example of a building with a roof garden is Chicago City Hall.
For those who live in small apartments with little space, square foot gardening, or (when even less space is available) green walls (vertical gardening) can be a solution. These use much less space than traditional gardening. These also encourage environmentally responsible practices, eliminating tilling, reducing or eliminating pesticides, and weeding, and encouraging the recycling of wastes through composting.
Importance to urban planning
Becoming green is a high priority for urban planners. The environmental and aesthetic benefits to cities are the prime motivation. It was calculated that the temperature in Tokyo could be lowered by if 50% of all available rooftop space were planted with greenery. This would lead to savings of approximately 100 million yen.
Singapore is active in green urban development. "Roof gardens present possibilities for carrying the notions of nature and open space further in tall building development." When surveyed, 80% of Singapore residents voted for more roof gardens to be implemented in the city's plans. Recreational reasons, such as leisure and relaxation, beautifying the environment, and greenery and nature, received the most votes. Planting roof gardens on the tops of buildings is a way to make cities more efficient.
A roof garden can be distinguished from a green roof, although the two terms are often used interchangeably. The term roof garden is well suited to roof spaces that incorporate recreation, and entertaining and provide additional outdoor living space for the building's residents. It may include planters, plants, dining and lounging furniture, outdoor structures such as pergolas and sheds, and automated irrigation and lighting systems.
Although they may provide aesthetic and recreational benefits a green roof is not necessarily designed for this purpose. A green roof may not provide any recreational space and be constructed with an emphasis on improving the insulation or improving the overall energy efficiency and reducing the cooling and heating costs within a building.
Green roofs may be extensive or intensive. The terms are used to describe the type of planting required. The panels that comprise a green roof are generally no more than a few centimeters up to 30 cm (a few inches up to a foot) in depth, since weight is an important factor when covering an entire roof surface. The plants that go into a green roof are usually sedum or other shallow-rooted plants that will tolerate the hot, dry, windy conditions that prevail in most rooftop gardens. With a green roof, "the plants' layer can shield off as much as 87% of solar radiation while a bare roof receives 100% direct exposure".
The planters on a roof garden may be designed for a variety of functions and vary greatly in depth to satisfy aesthetic and recreational purposes. These planters can hold a range of ornamental plants: anything from trees, shrubs, vines, or an assortment of flowers. As aesthetics and recreation are the priority they may not provide the environmental and energy benefits of a green roof.
In popular culture
American jazz singer Al Jarreau composed a song named "Roof Garden", released on his 1981 album.
Apu Nahasapeemapetilon of the TV show The Simpsons has a rooftop garden visited by Paul McCartney and his wife.
In BBC's 1990 television miniseries House of Cards, the main character, Member of Parliament (MP) Francis Urquhart, murders journalist Mattie Storin by throwing her off the Palace of Westminster's rooftop garden.
Gallery
See also
Agrivoltaic
Building-integrated agriculture
Cool roof
Green building
Green infrastructure
Greening
Kensington Roof Gardens
List of garden types
Low-flow irrigation systems
Metropolitan Museum of Art Roof Garden
Ralph Hancock, designer of The Rockefeller Center Roof Gardens
Roof deck
Terrace garden
Urban green space
Urban park
Wildlife corridor
References
External links
The New York Times article about rooftop garden in Manhattan
Types of garden
Architectural elements
Urban agriculture
Roofs
Sustainable building | Roof garden | [
"Technology",
"Engineering"
] | 1,699 | [
"Structural engineering",
"Sustainable building",
"Building engineering",
"Structural system",
"Construction",
"Architectural elements",
"Roofs",
"Components",
"Architecture"
] |
358,277 | https://en.wikipedia.org/wiki/Cayley%20graph | In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley), and uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs makes them particularly good candidates for constructing expander graphs.
Definition
Let be a group and be a generating set of . The Cayley graph is an edge-colored directed graph constructed as follows:
Each element of is assigned a vertex: the vertex set of is identified with
Each element of is assigned a color .
For every and , there is a directed edge of color from the vertex corresponding to to the one corresponding to .
Not every convention requires that generate the group. If is not a generating set for , then is disconnected and each connected component represents a coset of the subgroup generated by .
If an element of is its own inverse, then it is typically represented by an undirected edge.
The set is often assumed to be finite, especially in geometric group theory, which corresponds to being locally finite and being finitely generated.
The set is sometimes assumed to be symmetric () and not containing the group identity element. In this case, the uncolored Cayley graph can be represented as a simple undirected graph.
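As a concrete finite illustration (not from the source; the function name and representation are chosen for this sketch), the edge-colored directed Cayley graph of the cyclic group Z/nZ can be built directly from the definition.

```python
def cayley_graph_zn(n, generators):
    """Directed, colored Cayley graph of Z/nZ: vertices are 0..n-1 and, for each
    vertex g and generator s, there is an edge g -> (g + s) mod n colored by s."""
    edges = []
    for g in range(n):                         # one vertex per group element
        for s in generators:
            edges.append((g, (g + s) % n, s))  # (tail, head, color)
    return edges

# The cycle graph C_6: Cayley graph of Z/6Z with the symmetric generating set {1, 5}.
for tail, head, color in cayley_graph_zn(6, [1, 5]):
    print(f"{tail} --{color}--> {head}")
```

With the symmetric set {1, 5} each edge is paired with its reverse, so the uncolored graph is the undirected cycle on six vertices, matching the cyclic-group example below.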
Examples
Suppose that is the infinite cyclic group and the set consists of the standard generator 1 and its inverse (−1 in the additive notation); then the Cayley graph is an infinite path.
Similarly, if is the finite cyclic group of order and the set consists of two elements, the standard generator of and its inverse, then the Cayley graph is the cycle . More generally, the Cayley graphs of finite cyclic groups are exactly the circulant graphs.
The Cayley graph of the direct product of groups (with the cartesian product of generating sets as a generating set) is the cartesian product of the corresponding Cayley graphs. Thus the Cayley graph of the abelian group with the set of generators consisting of four elements is the infinite grid on the plane , while for the direct product with similar generators the Cayley graph is the finite grid on a torus.
A Cayley graph of the dihedral group on two generators and is depicted to the left. Red arrows represent composition with . Since is self-inverse, the blue lines, which represent composition with , are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group can be derived from the group presentation A different Cayley graph of is shown on the right. is still the horizontal reflection and is represented by blue lines, and is a diagonal reflection and is represented by pink lines. As both reflections are self-inverse the Cayley graph on the right is completely undirected. This graph corresponds to the presentation
The Cayley graph of the free group on two generators and corresponding to the set is depicted at the top of the article, with being the identity. Travelling along an edge to the right represents right multiplication by while travelling along an edge upward corresponds to the multiplication by Since the free group has no relations, the Cayley graph has no cycles: it is the 4-regular infinite tree. It is a key ingredient in the proof of the Banach–Tarski paradox.
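A small enumeration sketch (illustrative only; names are arbitrary) makes the tree structure visible: breadth-first generation of reduced words over the two generators and their inverses gives spheres of sizes 1, 4, 12, 36, ..., exactly as in a 4-regular tree where every non-identity vertex has three children.

```python
GENS = ["a", "A", "b", "B"]                  # a, a^-1, b, b^-1
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_word_spheres(max_length):
    """Reduced words of the free group F_2, grouped by word length."""
    layers = [[""]]                          # layer 0: the identity (empty word)
    for _ in range(max_length):
        layer = []
        for w in layers[-1]:
            for g in GENS:
                if not w or INVERSE[w[-1]] != g:   # never cancel the last letter
                    layer.append(w + g)
        layers.append(layer)
    return layers

for depth, layer in enumerate(reduced_word_spheres(3)):
    print(depth, len(layer))                 # 1, 4, 12, 36
```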
More generally, the Bethe lattice or Cayley tree is the Cayley graph of the free group on generators. A presentation of a group by generators corresponds to a surjective homomorphism from the free group on generators to the group defining a map from the Cayley tree to the Cayley graph of . Interpreting graphs topologically as one-dimensional simplicial complexes, the simply connected infinite tree is the universal cover of the Cayley graph; and the kernel of the mapping is the fundamental group of the Cayley graph.
A Cayley graph of the discrete Heisenberg group is depicted to the right. The generators used in the picture are the three matrices given by the three permutations of 1, 0, 0 for the entries . They satisfy the relations , which can also be understood from the picture. This is a non-commutative infinite group, and despite being embedded in a three-dimensional space, the Cayley graph has four-dimensional volume growth.
Characterization
The group G acts on itself by left multiplication (see Cayley's theorem). This may be viewed as the action of G on its Cayley graph. Explicitly, an element g ∈ G maps a vertex h ∈ V(Γ) to the vertex gh ∈ V(Γ). The set of edges of the Cayley graph and their color is preserved by this action: the edge (h, hs) is mapped to the edge (gh, ghs), both having color s. In fact, all automorphisms of the colored directed graph Γ are of this form, so that G is isomorphic to the symmetry group of Γ.
The left multiplication action of a group on itself is simply transitive; in particular, Cayley graphs are vertex-transitive. The following is a kind of converse to this (Sabidussi's theorem): an unlabeled and uncolored directed graph Γ is a Cayley graph of a group G if and only if it admits a simply transitive action of G by graph automorphisms (that is, automorphisms preserving the set of directed edges).
To recover the group G and the generating set S from the unlabeled directed graph Γ, select a vertex v_1 ∈ V(Γ) and label it by the identity element of the group. Then label each vertex v of Γ by the unique element of G that maps v_1 to v. The set S of generators of G that yields Γ as the Cayley graph Γ(G, S) is the set of labels of the out-neighbors of v_1. Since Γ is uncolored, it might have more directed graph automorphisms than the left multiplication maps, for example group automorphisms of G which permute S.
Elementary properties
The Cayley graph Γ(G, S) depends in an essential way on the choice of the set S of generators. For example, if the generating set S has k elements then each vertex of the Cayley graph has k incoming and k outgoing directed edges. In the case of a symmetric generating set S with r elements, the Cayley graph is a regular directed graph of degree r.
Cycles (or closed walks) in the Cayley graph indicate relations among the elements of S. In the more elaborate construction of the Cayley complex of a group, closed paths corresponding to relations are "filled in" by polygons. This means that the problem of constructing the Cayley graph of a given presentation is equivalent to solving the word problem for G.
If f : G′ → G is a surjective group homomorphism and the images of the elements of the generating set S′ for G′ are distinct, then it induces a covering of graphs Γ(G′, S′) → Γ(G, S), where S = f(S′). In particular, if a group G has k generators, all of order different from 2, and the set S consists of these generators together with their inverses, then the Cayley graph Γ(G, S) is covered by the infinite regular tree of degree 2k corresponding to the free group on the same set of generators.
For any finite Cayley graph, considered as undirected, the vertex connectivity is at least equal to 2/3 of the degree of the graph. If the generating set is minimal (removal of any element and, if present, its inverse from the generating set leaves a set which is not generating), the vertex connectivity is equal to the degree. The edge connectivity is in all cases equal to the degree.
If ρ_reg denotes the left-regular representation of G, with |G| × |G| matrix form [ρ_reg(g)], then the adjacency matrix of Γ(G, S) is A = Σ_{s ∈ S} [ρ_reg(s)].
Every group character χ of the group G induces an eigenvector of the adjacency matrix of Γ(G, S). The associated eigenvalue is λ_χ = Σ_{s ∈ S} χ(s), which, when G is Abelian, takes the form Σ_{s ∈ S} e^{2πijs/|G|} for integers j = 0, 1, …, |G| − 1. In particular, the associated eigenvalue of the trivial character (the one sending every element to 1) is the degree of Γ(G, S), that is, the order of S. If G is an Abelian group, there are exactly |G| characters, determining all eigenvalues. The corresponding orthonormal basis of eigenvectors is given by v_j = (1/√|G|)(1, ω^j, ω^{2j}, …, ω^{(|G|−1)j}), where ω = e^{2πi/|G|}. Notably, this eigenbasis is independent of the generating set S. More generally, for symmetric generating sets, take ρ_1, …, ρ_k to be a complete set of irreducible representations of G, and let ρ_i(S) = Σ_{s ∈ S} ρ_i(s) with eigenvalue set Λ_i(S). Then the set of eigenvalues of Γ(G, S) is exactly ∪_i Λ_i(S), where an eigenvalue λ appears with multiplicity dim(ρ_i) for each occurrence of λ as an eigenvalue of ρ_i(S).
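For a small abelian example, the character formula can be checked directly against the spectrum of the adjacency matrix. The sketch below (ours, not from the article; it assumes NumPy is available, and the group Z_12 with connection set {1, 11, 3, 9} is an arbitrary choice) compares the two computations:

```python
import numpy as np

n, S = 12, [1, 11, 3, 9]               # Z_12 with a symmetric connection set S = -S
A = np.zeros((n, n))
for g in range(n):
    for s in S:
        A[g, (g + s) % n] = 1          # adjacency matrix of the Cayley graph

# Character eigenvalues: lambda_k = sum over s in S of exp(2*pi*i*k*s/n), k = 0..n-1.
char_eigs = sorted(sum(np.exp(2j * np.pi * k * s / n) for s in S).real for k in range(n))
adj_eigs = sorted(np.linalg.eigvalsh(A))

assert np.allclose(char_eigs, adj_eigs, atol=1e-9)
print("top eigenvalue equals |S| =", len(S))
```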
Schreier coset graph
If one instead takes the vertices to be right cosets of a fixed subgroup H ≤ G, one obtains a related construction, the Schreier coset graph, which is at the basis of coset enumeration or the Todd–Coxeter process.
Connection to group theory
Knowledge about the structure of the group G can be obtained by studying the adjacency matrix of the graph and in particular applying the theorems of spectral graph theory. Conversely, for symmetric generating sets, the spectral and representation theory of Γ(G, S) are directly tied together: take ρ_1, …, ρ_k to be a complete set of irreducible representations of G, and let ρ_i(S) = Σ_{s ∈ S} ρ_i(s) with eigenvalues Λ_i(S). Then the set of eigenvalues of Γ(G, S) is exactly ∪_i Λ_i(S), where an eigenvalue λ appears with multiplicity dim(ρ_i) for each occurrence of λ as an eigenvalue of ρ_i(S).
The genus of a group is the minimum genus for any Cayley graph of that group.
Geometric group theory
For infinite groups, the coarse geometry of the Cayley graph is fundamental to geometric group theory. For a finitely generated group, this is independent of choice of finite set of generators, hence an intrinsic property of the group. This is only interesting for infinite groups: every finite group is coarsely equivalent to a point (or the trivial group), since one can choose as finite set of generators the entire group.
Formally, for a given choice of generators, one has the word metric (the natural distance on the Cayley graph), which determines a metric space. The coarse equivalence class of this space is an invariant of the group.
Expansion properties
When the generating set S is symmetric, the Cayley graph Γ(G, S) is |S|-regular, so spectral techniques may be used to analyze the expansion properties of the graph. In particular for abelian groups, the eigenvalues of the Cayley graph are more easily computable and given by λ_χ = Σ_{s ∈ S} χ(s), with top eigenvalue equal to |S|, so we may use Cheeger's inequality to bound the edge expansion ratio using the spectral gap.
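As a rough illustration of how the spectral gap controls expansion, the following sketch (ours; the circulant examples are arbitrary, and NumPy is assumed) computes the Cheeger lower bound h(G) ≥ (d − λ_2)/2 for Cayley graphs of Z_n using the character eigenvalues:

```python
import numpy as np

def cheeger_lower_bound(n, S):
    """Lower bound on the edge expansion of the Cayley graph of Z_n with symmetric set S."""
    d = len(S)
    eigs = sorted(
        (sum(np.exp(2j * np.pi * k * s / n) for s in S).real for k in range(n)),
        reverse=True,
    )
    lambda2 = eigs[1]            # second-largest adjacency eigenvalue
    return (d - lambda2) / 2     # Cheeger's inequality: h(G) >= (d - lambda2) / 2

# The cycle C_n expands poorly: its gap d - lambda2 = 2 - 2*cos(2*pi/n) tends to 0.
print(cheeger_lower_bound(100, [1, 99]))
# A richer symmetric generating set gives a larger gap and hence a better expansion bound.
print(cheeger_lower_bound(100, [1, 99, 3, 97, 9, 91]))
```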
Representation theory can be used to construct such expanding Cayley graphs, in the form of Kazhdan property (T). The following statement holds: if a discrete group G has Kazhdan's property (T), and S is a finite, symmetric generating set of G, then there exists a constant c > 0 depending only on G and S such that, for any finite quotient Q of G, the Cayley graph of Q with respect to the image of S is a c-expander.
For example the group G = SL_3(Z) has property (T) and is generated by elementary matrices and this gives relatively explicit examples of expander graphs.
Integral classification
An integral graph is one whose eigenvalues are all integers. While the complete classification of integral graphs remains an open problem, the Cayley graphs of certain groups are always integral.
Using previous characterizations of the spectrum of Cayley graphs, note that Γ(G, S) is integral iff the eigenvalues of ρ(S) = Σ_{s ∈ S} ρ(s) are integral for every representation ρ of G.
Cayley integral simple group
A group G is Cayley integral simple (CIS) if the connected Cayley graph Γ(G, S) is integral exactly when the symmetric generating set S is the complement of a subgroup of G. A result of Ahmady, Bell, and Mohar shows that all CIS groups are isomorphic to Z/pZ, Z/p^2Z, or Z/pZ × Z/pZ for primes p. It is important that S actually generates the entire group G in order for the Cayley graph to be connected. (If S does not generate G, the Cayley graph may still be integral, but the complement of S is not necessarily a subgroup.)
In the example of G = Z/5Z, the symmetric generating sets (up to graph isomorphism) are
S = {1, 4}: Γ(G, S) is a 5-cycle with eigenvalues 2, 2cos(2π/5), 2cos(2π/5), 2cos(4π/5), 2cos(4π/5), not all of which are integers
S = {1, 2, 3, 4}: Γ(G, S) is the complete graph K_5 with eigenvalues 4, −1, −1, −1, −1
The only subgroups of Z/5Z are the whole group and the trivial group, and the only symmetric generating set S that produces an integral graph is the complement of the trivial group. Therefore Z/5Z must be a CIS group.
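Assuming the group in this example is Z/5Z, as reconstructed above, the two spectra can be verified numerically with a short sketch (ours, not from the article; NumPy assumed):

```python
import numpy as np

def cayley_spectrum(n, S):
    """Eigenvalues of the Cayley graph of Z_n with symmetric connection set S."""
    A = np.zeros((n, n))
    for g in range(n):
        for s in S:
            A[g, (g + s) % n] = 1
    return np.round(np.linalg.eigvalsh(A), 6)

print(cayley_spectrum(5, [1, 4]))        # 5-cycle: 2, 0.618..., -1.618... -- not all integers
print(cayley_spectrum(5, [1, 2, 3, 4]))  # complete graph K_5: 4 and -1 (four times) -- integral
```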
The proof of the complete CIS classification uses the fact that every subgroup and homomorphic image of a CIS group is also a CIS group.
Cayley integral group
A slightly different notion is that of a Cayley integral group G, in which every symmetric subset S produces an integral graph Γ(G, S). Note that S no longer has to generate the entire group.
The complete list of Cayley integral groups is given by Z_2^n × Z_3^m, Z_2^n × Z_4^m, Q_8 × Z_2^n, S_3, and the dicyclic group of order 12, where m, n ≥ 0 and Q_8 is the quaternion group. The proof relies on two important properties of Cayley integral groups:
Subgroups and homomorphic images of Cayley integral groups are also Cayley integral groups.
A group is Cayley integral iff every connected Cayley graph of the group is also integral.
Normal and Eulerian generating sets
Given a general group G, a subset S ⊆ G is normal if it is closed under conjugation by elements of G (generalizing the notion of a normal subgroup), and S is Eulerian if for every s ∈ S, the set of elements generating the cyclic group ⟨s⟩ is also contained in S.
A 2019 result by Guo, Lytkina, Mazurov, and Revin proves that the Cayley graph Γ(G, S) is integral for any Eulerian normal subset S ⊆ G, using purely representation theoretic techniques.
The proof of this result is relatively short: given an Eulerian normal subset S, select pairwise nonconjugate elements x_1, …, x_t so that S is the union of the conjugacy classes Cl(x_i). Then using the characterization of the spectrum of a Cayley graph, one can show the eigenvalues of Γ(G, S) are given by λ_χ = (1/χ(1)) Σ_i |Cl(x_i)| χ(x_i), taken over irreducible characters χ of G. Each eigenvalue in this set must be an element of Z[ζ] for ζ a primitive m-th root of unity (where m must be divisible by the orders of each x_i). Because the eigenvalues are algebraic integers, to show they are integral it suffices to show that they are rational, and it suffices to show λ_χ is fixed under any automorphism σ of Q(ζ)/Q. There must be some k relatively prime to m such that σ(χ(x_i)) = χ(x_i^k) for all i, and because S is both Eulerian and normal, x_i^k = g x_j g^{-1} for some j and some g ∈ G. Sending x ↦ x^k bijects conjugacy classes, so Cl(x_i) and Cl(x_j) have the same size and σ merely permutes terms in the sum for λ_χ. Therefore λ_χ is fixed under all automorphisms of Q(ζ)/Q, so λ_χ is rational and thus integral.
Consequently, if G = A_n is the alternating group and S is a suitable set of permutations forming an Eulerian normal subset, then the Cayley graph Γ(G, S) is integral. (This solved a previously open problem from the Kourovka Notebook.) In addition, when G = S_n is the symmetric group and S is either the set of all transpositions or the set of transpositions involving a particular element, the Cayley graph Γ(G, S) is also integral.
History
Cayley graphs were first considered for finite groups by Arthur Cayley in 1878. Max Dehn in his unpublished lectures on group theory from 1909–10 reintroduced Cayley graphs under the name Gruppenbild (group diagram), which led to the geometric group theory of today. His most important application was the solution of the word problem for the fundamental group of surfaces with genus ≥ 2, which is equivalent to the topological problem of deciding which closed curves on the surface contract to a point.
See also
Vertex-transitive graph
Generating set of a group
Lovász conjecture
Cube-connected cycles
Algebraic graph theory
Cycle graph (algebra)
Notes
External links
Cayley diagrams
Group theory
Permutation groups
Graph families
Application-specific graphs
Geometric group theory | Cayley graph | [
"Physics",
"Mathematics"
] | 3,116 | [
"Geometric group theory",
"Group actions",
"Group theory",
"Fields of abstract algebra",
"Symmetry"
] |
358,330 | https://en.wikipedia.org/wiki/Peano%20curve | In geometry, the Peano curve is the first example of a space-filling curve to be discovered, by Giuseppe Peano in 1890. Peano's curve is a surjective, continuous function from the unit interval onto the unit square, however it is not injective. Peano was motivated by an earlier result of Georg Cantor that these two sets have the same cardinality. Because of this example, some authors use the phrase "Peano curve" to refer more generally to any space-filling curve.
Construction
Peano's curve may be constructed by a sequence of steps, where the i-th step constructs a set S_i of squares, and a sequence P_i of the centers of the squares, from the set S_{i−1} and sequence P_{i−1} constructed in the previous step. As a base case, S_0 consists of the single unit square, and P_0 is the one-element sequence consisting of its center point.
In step i, each square s of S_{i−1} is partitioned into nine smaller equal squares, and its center point is replaced by a contiguous subsequence of the centers of these nine smaller squares.
This subsequence is formed by grouping the nine smaller squares into three columns, ordering the centers contiguously within each column, and then ordering the columns from one side of the square to the other, in such a way that the distance between each consecutive pair of points in the subsequence equals the side length of the small squares. There are four such orderings possible:
Left three centers bottom to top, middle three centers top to bottom, and right three centers bottom to top
Right three centers bottom to top, middle three centers top to bottom, and left three centers bottom to top
Left three centers top to bottom, middle three centers bottom to top, and right three centers top to bottom
Right three centers top to bottom, middle three centers bottom to top, and left three centers top to bottom
Among these four orderings, the one for s is chosen in such a way that the distance between the first point of the ordering and its predecessor in P_i also equals the side length of the small squares. If the center of s was the first point in its ordering, then the first of these four orderings is chosen for the nine centers that replace it.
The Peano curve itself is the limit of the curves through the sequences of square centers, as i goes to infinity.
L-system construction
The Peano curve shown in the introduction can be constructed using a Lindenmayer system. This L-system can be described as follows:
where "" means "draw forward", "+" means "turn clockwise 90°", and "−" means "turn anticlockwise 90°". The image in the introduction shows the images of the first three iterations of the rules.
The curve shown in the 'construction' section can be constructed as follows:
where "F" means "draw forward", "+" means "turn clockwise 90°", and "−" means "turn anticlockwise 90°". The image above shows the first two iterations of the rule.
Variants
In the definition of the Peano curve, it is possible to perform some or all of the steps by making the centers of each row of three squares be contiguous, rather than the centers of each column of squares. These choices lead to many different variants of the Peano curve.
A "multiple radix" variant of this curve with different numbers of subdivisions in different directions can be used to fill rectangles of arbitrary shapes.
The Hilbert curve is a simpler variant of the same idea, based on subdividing squares into four equal smaller squares instead of into nine equal smaller squares.
References
Theory of continuous functions
Fractal curves | Peano curve | [
"Mathematics"
] | 741 | [
"Theory of continuous functions",
"Topology"
] |
358,364 | https://en.wikipedia.org/wiki/Dungeon | A dungeon is a room or cell in which prisoners are held, especially underground. Dungeons are generally associated with medieval castles, though their association with torture probably derives more from the Renaissance period. An oubliette (from the French , meaning 'to forget') or bottle dungeon is a basement room which is accessible only from a hatch or hole (an angstloch) in a high ceiling.
Etymology
The word dungeon comes from French donjon (also spelled dongeon), which means "keep", the main tower of a castle. The first recorded instance of the word in English was near the beginning of the 14th century when it held the same meaning as donjon. The earlier meaning of "keep" is still in use for academics, although in popular culture, it has come to mean a cell or "oubliette". Though it is uncertain, both dungeon and donjon are thought to derive from the Middle Latin word dominus, meaning "lord" or "master".
In French, the term donjon still refers to a "keep", and the English term "dungeon" refers mostly to oubliette in French. Donjon is therefore a false friend to dungeon (although the game Dungeons & Dragons is titled Donjons et Dragons in its French editions).
An oubliette (same origin as the French oublier, meaning "to forget") is a basement room which is accessible only from a hatch or hole (an angstloch) in a high ceiling.
The use of "donjons" evolved over time, sometimes to include prison cells, which could explain why the meaning of "dungeon" in English evolved over time from being a prison within the tallest, most secure tower of the castle into meaning a cell, and by extension, in popular use, an oubliette or even a torture chamber.
The earliest use of oubliette in French dates back to 1374, but its earliest adoption in English is Walter Scott's Ivanhoe in 1819: "The place was utterly dark—the oubliette, as I suppose, of their accursed convent."
History
Few Norman keeps in English castles originally contained prisons, which were more common in Scotland. Imprisonment was not a usual punishment in the Middle Ages, with most prisoners awaiting an imminent trial, sentence or a political solution. Noble prisoners were not generally held in dungeons, but lived in some comfort in castle apartments. The Tower of London is famous for housing political prisoners, and Pontefract Castle at various times held Thomas of Lancaster (1322), Richard II (1400), Earl Rivers (1483), Richard Scrope, Archbishop of York (1405), James I of Scotland (1405–1424) and Charles, Duke of Orléans (1417–1430). Purpose-built prison chambers in castles became more common after the 12th century, when they were built into gatehouses or mural towers. Some castles had larger provision for prisoners, such as the prison tower at Caernarfon Castle.
Features
Although many real dungeons are simply a single plain room with a heavy door or with access only from a hatchway or trapdoor in the floor of the room above, the use of dungeons for torture, along with their association to common human fears of being trapped underground, have made dungeons a powerful metaphor in a variety of contexts. Dungeons, as a whole, have become associated with underground complexes of cells and torture chambers. As a result, the number of true dungeons in castles is often exaggerated to interest tourists. Many chambers described as dungeons or oubliettes were in fact water-cisterns or even latrines.
An example of what might be popularly termed an "oubliette" is the particularly claustrophobic cell in the dungeon of Warwick Castle's Caesar's Tower, in central England. The access hatch consists of an iron grille. Even turning around (or moving at all) would be nearly impossible in this tiny chamber.
However, the tiny chamber that is described as the oubliette, is in reality a short shaft which opens up into a larger chamber with a latrine shaft entering it from above. This suggests that the chamber is in fact a partially back-filled drain. The positioning of the supposed oubliette within the larger dungeon, situated in a small alcove, is typical of garderobe arrangement within medieval buildings. These factors perhaps point to this feature being the remnants of a latrine rather than a cell for holding prisoners. Footage of the inside of this chamber can be seen in episode 3 of the first series of Secrets of Great British Castles.
A "bottle dungeon" is sometimes simply another term for an oubliette. It has a narrow entrance at the top and sometimes the room below is even so narrow that it would be impossible to lie down but in other designs the actual cell is larger.
The identification of dungeons and rooms used to hold prisoners is not always a straightforward task. Alnwick Castle and Cockermouth Castle, both near England's border with Scotland, had chambers in their gatehouses which have often been interpreted as oubliettes. However, this has been challenged. These underground rooms (accessed by a door in the ceiling) were built without latrines, and since the gatehouses at Alnwick and Cockermouth provided accommodation it is unlikely that the rooms would have been used to hold prisoners. An alternative explanation was proposed, suggesting that these were strong-rooms where valuables were stored. Folklore often has it that one mode of use for oubliettes in the Borders, which would obviate latrines anyway, was to throw attackers into the oubliette, close the latch, and leave them to die. It seems likely that this gruesome act was threatened more often than it was carried out in practice, the real aim being to deter potential attackers through the notoriety of the rumor that such a fate could await anyone who dared to attack.
In fiction
Oubliettes and dungeons were a favorite topic of nineteenth century gothic novels or historical novels, where they appeared as symbols of hidden cruelty and tyrannical power. Usually found under medieval castles or abbeys, they were used by villainous characters to persecute blameless characters. In Alexandre Dumas's La Reine Margot, Catherine de Medici is portrayed gloating over a victim in the oubliettes of the Louvre.
Dungeons are common elements in modern fantasy literature, related tabletop, and video games. The most famous examples are the various Dungeons & Dragons media. In this context, the word "dungeon" has come to be used broadly to describe any labyrinthine complex (castle, cave system, etc) rather than a prison cell or torture chamber specifically. A role-playing game involving dungeon exploration is called a dungeon crawl.
Near the beginning of Jack Vance's high-fantasy Lyonesse Trilogy (1983–1989), King Casmir of Lyonesse commits Prince Aillas of Troicinet, who he believes to be a vagabond, to an oubliette for the crime of having seduced his daughter. After some months, the resourceful prince fashions a ladder from the bones of earlier prisoners and the rope by which he had been lowered, and escapes.
In the musical fantasy film Labyrinth, director Jim Henson includes a scene in which the heroine Sarah is freed from an oubliette by the dwarf Hoggle, who defines it for her as "a place you put people... to forget about 'em!"
In the Thomas Harris novel The Silence of the Lambs, Clarice makes a descent into Gumb's basement dungeon labyrinth in the narrative's climactic scene, where the killer is described as having an oubliette.
In the Robert A. Heinlein novel Stranger in a Strange Land, the term "oubliette" is used to refer to a trash disposal much like the "memory holes" in Nineteen Eighty-Four.
See also
Immurement
Keep
References
Further reading
Castle architecture
Rooms
Imprisonment and detention | Dungeon | [
"Engineering"
] | 1,675 | [
"Rooms",
"Architecture"
] |
358,477 | https://en.wikipedia.org/wiki/Random%20graph | In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Its practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph.
Models
A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise. Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert but often called the Erdős–Rényi model, denoted G(n,p). In it, every possible edge occurs independently with probability 0 < p < 1. The probability of obtaining any one particular random graph with m edges is p^m (1 − p)^{N − m}, with the notation N = n(n − 1)/2.
A closely related model, also called the Erdős–Rényi model and denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, G(n,M) has C(N, M) elements (where C(N, M) denotes the binomial coefficient) and every element occurs with probability 1/C(N, M). The G(n,M) model can be viewed as a snapshot at a particular time (M) of the random graph process, a stochastic process that starts with n vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges.
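A minimal sketch of the two models (ours, standard library only; the function names are not from the literature) makes the difference concrete: G(n,p) includes each possible edge independently, while G(n,M) draws a uniform sample of exactly M edges.

```python
import itertools, random

def gnp(n, p):
    """G(n,p): each of the N = n(n-1)/2 possible edges is present independently with probability p."""
    return [e for e in itertools.combinations(range(n), 2) if random.random() < p]

def gnm(n, M):
    """G(n,M): a graph drawn uniformly at random among all graphs with exactly M edges."""
    return random.sample(list(itertools.combinations(range(n), 2)), M)

edges_p = gnp(100, 0.05)
edges_m = gnm(100, len(edges_p))    # a comparable G(n,M) sample with the same number of edges
print(len(edges_p), "edges in G(n,p);", len(edges_m), "edges in G(n,M)")
```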
If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 < p < 1, then we get an object G called an infinite random graph. Except in the trivial cases when p is 0 or 1, such a G almost surely has the following property:
Given any n + m elements a_1, …, a_n, b_1, …, b_m ∈ V, there is a vertex c in V that is adjacent to each of a_1, …, a_n and is not adjacent to any of b_1, …, b_m.
It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs, of which there are many (nonisomorphic) graphs satisfying the above property.
Another model, which generalizes Gilbert's random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge uv between any vertices u and v is some function of the dot product u • v of their respective vectors.
The network probability matrix models random graphs through edge probabilities, which represent the probability that a given edge exists for a specified time period. This model is extensible to directed and undirected; weighted and unweighted; and static or dynamic graphs structure.
For M ≃ pN, where N is the maximal number of edges possible, the two most widely used models, G(n,M) and G(n,p), are almost interchangeable.
Random regular graphs form a special case, with properties that may differ from random graphs in general.
Once we have a model of random graphs, every function on graphs becomes a random variable. The study of this model aims to determine whether, or at least to estimate the probability that, a property occurs.
Terminology
The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that the error probabilities tend to zero.
Properties
The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value of n and p what the probability is that G(n, p) is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs—the values that various probabilities converge to as n grows very large. Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones.
Percolation is related to the robustness of the graph (also called a network). Given a random graph of n nodes and an average degree ⟨k⟩, suppose we remove a randomly chosen fraction 1 − p of the nodes and leave only a fraction p. There exists a critical percolation threshold p_c = 1/⟨k⟩ below which the network becomes fragmented, while above p_c a giant connected component exists.
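A small simulation (ours; the parameters are arbitrary and the union–find helper is ad hoc) illustrates the threshold: when the retained fraction is well below p_c ≈ 1/⟨k⟩ the largest surviving component is tiny, while well above it a giant component emerges.

```python
import itertools, random

def largest_component(nodes, edges):
    """Size of the largest connected component, via a simple union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in nodes:
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) if sizes else 0

n, p = 2000, 4 / 2000                 # average degree <k> ~ 4, so p_c ~ 0.25
edges = [e for e in itertools.combinations(range(n), 2) if random.random() < p]
for keep in (0.1, 0.25, 0.5, 0.9):    # fraction of nodes retained
    kept = {v for v in range(n) if random.random() < keep}
    kept_edges = [(u, v) for u, v in edges if u in kept and v in kept]
    print(keep, largest_component(kept, kept_edges) / n)
```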
Localized percolation refers to removing a node, its neighbors, next nearest neighbors, etc., until a fraction 1 − p of the nodes in the network is removed. It was shown that for random graphs with a Poisson distribution of degrees p_c = 1/⟨k⟩, exactly as for random removal.
Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs.
In random regular graphs, G_{n,r} is the set of r-regular graphs with r = r(n) such that n and m are natural numbers, 3 ≤ r < n, and rn = 2m is even.
The degree sequence of a graph in depends only on the number of edges in the sets
If the number of edges M in a random graph G_M is large enough to ensure that almost every G_M has minimum degree at least 1, then almost every G_M is connected and, if n is even, almost every G_M has a perfect matching. In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected.
Almost every graph process on an even number of vertices with the edge raising the minimum degree to 1 or a random graph with slightly more than edges and with probability close to 1 ensures that the graph has a complete matching, with exception of at most one vertex.
For some constant c, almost every labeled graph with n vertices and at least cn log(n) edges is Hamiltonian. With the probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian.
Properties of random graph may change or remain invariant under graph transformations. Mashaghi A. et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient.
Colouring
Given a random graph G of order n with vertex set V(G) = {1, ..., n}, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, etc.).
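A direct transcription of this greedy procedure (ours; standard library only, parameters arbitrary) is:

```python
import itertools, random

def greedy_colouring(n, edges):
    """Colour vertices 1..n in order, giving each the smallest colour unused by earlier neighbours."""
    neighbours = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        neighbours[u].add(v)
        neighbours[v].add(u)
    colour = {}
    for v in range(1, n + 1):
        used = {colour[u] for u in neighbours[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour

n, p = 50, 0.5
edges = [e for e in itertools.combinations(range(1, n + 1), 2) if random.random() < p]
print("colours used:", max(greedy_colouring(n, edges).values()))
```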
The number of proper colorings of random graphs given a number of q colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters n and the number of edges m or the connection probability p has been studied empirically using an algorithm based on symbolic pattern matching.
Random trees
A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order n and size M(n) the distribution of the number of tree components of order k is asymptotically Poisson. Types of random trees include uniform spanning tree, random minimum spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest.
Conditional random graphs
Consider a given random graph model defined on the probability space (Ω, F, Pr) and let P(G) : Ω → R^m be a real-valued function which assigns to each graph in Ω a vector of m properties.
For a fixed vector p ∈ R^m, conditional random graphs are models in which the probability measure assigns zero probability to all graphs such that P(G) ≠ p.
Special cases are conditionally uniform random graphs, where the measure assigns equal probability to all the graphs having the specified properties. They can be seen as a generalization of the Erdős–Rényi model G(n,M), when the conditioning information is not necessarily the number of edges M, but whatever other arbitrary graph property P(G). In this case very few analytical results are available and simulation is required to obtain empirical distributions of average properties.
History
The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was considered while comparing the fraction of reciprocated links in their network data with what the random model would predict. Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices.
The Erdős–Rényi model of random graphs was first defined by Paul Erdős and Alfréd Rényi in their 1959 paper "On Random Graphs" and independently by Gilbert in his paper "Random graphs".
See also
Bose–Einstein condensation: a network theory approach
Cavity method
Complex networks
Dual-phase evolution
Erdős–Rényi model
Exponential random graph model
Graph theory
Interdependent networks
Network science
Percolation
Percolation theory
Random graph theory of gelation
Regular graph
Scale free network
Semilinear response
Stochastic block model
Lancichinetti–Fortunato–Radicchi benchmark
References
Graph theory | Random graph | [
"Mathematics"
] | 1,973 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Random graphs"
] |
358,488 | https://en.wikipedia.org/wiki/Desargues%27s%20theorem | In projective geometry, Desargues's theorem, named after Girard Desargues, states:
Two triangles are in perspective axially if and only if they are in perspective centrally.
Denote the three vertices of one triangle by a, b and c, and those of the other by A, B and C. Axial perspectivity means that lines ab and AB meet in a point, lines ac and AC meet in a second point, and lines bc and BC meet in a third point, and that these three points all lie on a common line called the axis of perspectivity. Central perspectivity means that the three lines Aa, Bb and Cc are concurrent, at a point called the center of perspectivity.
This intersection theorem is true in the usual Euclidean plane but special care needs to be taken in exceptional cases, as when a pair of sides are parallel, so that their "point of intersection" recedes to infinity. Commonly, to remove these exceptions, mathematicians "complete" the Euclidean plane by adding points at infinity, following Jean-Victor Poncelet. This results in a projective plane.
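The use of homogeneous coordinates makes it easy to check the theorem numerically for a particular configuration, because points at infinity need no special treatment. The following sketch (ours; the coordinates are an arbitrary example, not from the article) represents points and lines as 3-vectors, uses the cross product for joins and meets, and tests collinearity with a determinant:

```python
import numpy as np

# Centre of perspectivity O and a triangle abc; A, B, C are chosen on the lines Oa, Ob, Oc.
O = np.array([0.0, 0.0, 1.0])
a, b, c = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])
A, B, C = O + 3 * (a - O), O + 2 * (b - O), O + 4 * (c - O)   # centrally perspective from O

# The cross product of two points gives the line through them, and the cross product of
# two lines gives their intersection point, so each P, Q, R is a meet of corresponding sides.
P = np.cross(np.cross(a, b), np.cross(A, B))
Q = np.cross(np.cross(a, c), np.cross(A, C))
R = np.cross(np.cross(b, c), np.cross(B, C))

# The three meets are collinear iff the determinant of their homogeneous coordinates vanishes.
print(abs(np.linalg.det(np.vstack([P, Q, R]))) < 1e-9)        # True: axial perspectivity
```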
Desargues's theorem is true for the real projective plane and for any projective space defined arithmetically from a field or division ring; that includes any projective space of dimension greater than two or in which Pappus's theorem holds. However, there are many "non-Desarguesian planes", in which Desargues's theorem is false.
History
Desargues never published this theorem, but it appeared in an appendix entitled Universal Method of M. Desargues for Using Perspective (Manière universelle de M. Desargues pour practiquer la perspective) to a practical book on the use of perspective published in 1648 by his friend and pupil Abraham Bosse (1602–1676).
Coordinatization
The importance of Desargues's theorem in abstract projective geometry is due especially to the fact that a projective space satisfies that theorem if and only if it is isomorphic to a projective space defined over a field or division ring.
Projective versus affine spaces
In an affine space such as the Euclidean plane a similar statement is true, but only if one lists various exceptions involving parallel lines. Desargues's theorem is therefore one of the simplest geometric theorems whose natural home is in projective rather than affine space.
Self-duality
By definition, two triangles are perspective if and only if they are in perspective centrally (or, equivalently according to this theorem, in perspective axially). Note that perspective triangles need not be similar.
Under the standard duality of plane projective geometry (where points correspond to lines and collinearity of points corresponds to concurrency of lines), the statement of Desargues's theorem is self-dual: axial perspectivity is translated into central perspectivity and vice versa. The Desargues configuration (below) is a self-dual configuration.
This self-duality in the statement is due to the usual modern way of writing the theorem. Historically, the theorem only read, "In a projective space, a pair of centrally perspective triangles is axially perspective" and the dual of this statement was called the converse of Desargues's theorem and was always referred to by that name.
Proof of Desargues's theorem
Desargues's theorem holds for projective space of any dimension over any field or division ring, and also holds for abstract projective spaces of dimension at least 3. In dimension 2 the planes for which it holds are called Desarguesian planes and are the same as the planes that can be given coordinates over a division ring. There are also many non-Desarguesian planes where Desargues's theorem does not hold.
Three-dimensional proof
Desargues's theorem is true for any projective space of dimension at least 3, and more generally for any projective space that can be embedded in a space of dimension at least 3.
Desargues's theorem can be stated as follows:
If lines Aa, Bb and Cc are concurrent (meet at a point), then
the points AB ∩ ab, AC ∩ ac and BC ∩ bc are collinear.
The points A, B, a and b are coplanar (lie in the same plane) because of the assumed concurrency of Aa and Bb. Therefore, the lines AB and ab belong to the same plane and must intersect. Further, if the two triangles lie on different planes, then the point AB ∩ ab belongs to both planes. By a symmetric argument, the points AC ∩ ac and BC ∩ bc also exist and belong to the planes of both triangles. Since these two planes intersect in more than one point, their intersection is a line that contains all three points.
This proves Desargues's theorem if the two triangles are not contained in the same plane. If they are in the same plane, Desargues's theorem can be proved by choosing a point not in the plane, using this to lift the triangles out of the plane so that the argument above works, and then projecting back into the plane.
The last step of the proof fails if the projective space has dimension less than 3, as in this case it is not possible to find a point not in the plane.
Monge's theorem also asserts that three points lie on a line, and has a proof using the same idea of considering it in three rather than two dimensions and writing the line as an intersection of two planes.
Two-dimensional proof
As there are non-Desarguesian projective planes in which Desargues's theorem is not true, some extra conditions need to be met in
order to prove it. These conditions usually take the form of assuming the existence of sufficiently many collineations of a certain type, which in turn leads to showing that the underlying algebraic coordinate system must be a division ring (skewfield).
Relation to Pappus's theorem
Pappus's hexagon theorem states that, if a hexagon AbCaBc is drawn in such a way that vertices a, b and c lie on a line and vertices A, B and C lie on a second line, then each two opposite sides of the hexagon lie on two lines that meet in a point and the three points constructed in this way are collinear. A plane in which Pappus's theorem is universally true is called Pappian.
Hessenberg showed that Desargues's theorem can be deduced from three applications of Pappus's theorem.
The converse of this result is not true, that is, not all Desarguesian planes are Pappian. Satisfying Pappus's theorem universally is equivalent to having the underlying coordinate system be commutative. A plane defined over a non-commutative division ring (a division ring that is not a field) would therefore be Desarguesian but not Pappian. However, due to Wedderburn's little theorem, which states that all finite division rings are fields, all finite Desarguesian planes are Pappian. There is no known completely geometric proof of this fact, although there is a proof that uses only "elementary" algebraic facts (rather than the full strength of Wedderburn's little theorem).
The Desargues configuration
The ten lines involved in Desargues's theorem (six sides of triangles, the three lines Aa, Bb and Cc, and the axis of perspectivity) and the ten points involved (the six vertices, the three points of intersection on the axis of perspectivity, and the center of perspectivity) are so arranged that each of the ten lines passes through three of the ten points, and each of the ten points lies on three of the ten lines. Those ten points and ten lines make up the Desargues configuration, an example of a projective configuration. Although Desargues's theorem chooses different roles for these ten lines and points, the Desargues configuration itself is more symmetric: any of the ten points may be chosen to be the center of perspectivity, and that choice determines which six points will be the vertices of triangles and which line will be the axis of perspectivity.
The little Desargues theorem
This restricted version states that if two triangles are perspective from a point on a given line, and two pairs of corresponding sides also meet on this line, then the third pair of corresponding sides meet on the line as well. Thus, it is the specialization of Desargues's Theorem to only the cases in which the center of perspectivity lies on the axis of perspectivity.
A Moufang plane is a projective plane in which the little Desargues theorem is valid for every line.
See also
Pascal's theorem
Notes
References
External links
Desargues Theorem at MathWorld
Desargues's Theorem at cut-the-knot
Monge via Desargues at cut-the-knot
Proof of Desargues's theorem at PlanetMath
Desargues's Theorem at Dynamic Geometry Sketches
Theorems in projective geometry
Proof without words
Theorems about triangles
Euclidean plane geometry | Desargues's theorem | [
"Mathematics"
] | 1,813 | [
"Theorems in projective geometry",
"Euclidean plane geometry",
"Proof without words",
"Theorems in geometry",
"Planes (geometry)"
] |
358,601 | https://en.wikipedia.org/wiki/Science%20studies | Science studies is an interdisciplinary research area that seeks to situate scientific expertise in broad social, historical, and philosophical contexts. It uses various methods to analyze the production, representation and reception of scientific knowledge and its epistemic and semiotic role.
Similarly to cultural studies, science studies are defined by the subject of their research and encompass a large range of different theoretical and methodological perspectives and practices. The interdisciplinary approach may include and borrow methods from the humanities, natural and formal sciences, from scientometrics to ethnomethodology or cognitive science.
Science studies have a certain importance for evaluation and science policy. Overlapping with the field of science, technology and society, practitioners study the relationship between science and technology, and the interaction of expert and lay knowledge in the public realm.
Scope
The field started with a tendency toward navel-gazing: it was extremely self-conscious in its genesis and applications. From early concerns with scientific discourse, practitioners soon started to deal with the relation of scientific expertise to politics and lay people. Practical examples include bioethics, bovine spongiform encephalopathy (BSE), pollution, global warming, biomedical sciences, physical sciences, natural hazard predictions, the (alleged) impact of the Chernobyl disaster in the UK, generation and review of science policy and risk governance and its historical and geographic contexts. While staying a discipline with multiple metanarratives, the fundamental concern is about the role of the perceived expert in providing governments and local authorities with information from which they can make decisions.
The approach poses various important questions about what makes an expert and how experts and their authority are to be distinguished from the lay population and interacts with the values and policy making process in liberal democratic societies.
Practitioners examine the forces within and through which scientists investigate specific phenomena such as
technological milieus, epistemic instruments and cultures and laboratory life (compare Karin Knorr-Cetina, Bruno Latour, Hans-Jörg Rheinberger)
science and technology (e.g. Wiebe Bijker, Trevor Pinch, Thomas P. Hughes)
science, technology and society (e.g. Peter Weingart, Ulrike Felt, Helga Nowotny and Reiner Grundmann)
language and rhetoric of science (e.g. Charles Bazerman, Alan G. Gross, Greg Myers)
aesthetics of science and visual culture in science (among others, Peter Geimer), the role of aesthetic criteria in scientific practice (compare mathematical beauty) and the relation between emotion, cognition and rationality in the development of science.
semiotic studies of creative processes, as in the discovery, conceptualization, and realization of new ideas, or the interaction and management of different forms of knowledge in cooperative research.
large-scale research and research institutions, e.g. particle colliders (Sharon Traweek)
research ethics, science policy, and the role of the university.
History of the field
In 1935, in a celebrated paper, the Polish sociologist couple Maria Ossowska and Stanisław Ossowski proposed the founding of a "science of science" to study the scientific enterprise, its practitioners, and the factors influencing their work. Earlier, in 1923, the Polish sociologist Florian Znaniecki had made a similar proposal.
Fifty years before Znaniecki, in 1873, Aleksander Głowacki, better known in Poland by his pen name "Bolesław Prus", had delivered a public lecture – later published as a booklet – On Discoveries and Inventions, in which he said:
It is striking that, while early 20th-century sociologist proponents of a discipline to study science and its practitioners wrote in general theoretical terms, Prus had already half a century earlier described, with many specific examples, the scope and methods of such a discipline.
Thomas Kuhn's Structure of Scientific Revolutions (1962) increased interest both in the history of science and in science's philosophical underpinnings. Kuhn posited that the history of science was less a linear succession of discoveries than a succession of paradigms within the philosophy of science. Paradigms are broader, socio-intellectual constructs that determine which types of truth claims are permissible.
Science studies seeks to identify key dichotomies – such as those between science and technology, nature and culture, theory and experiment, and science and fine art – leading to the differentiation of scientific fields and practices.
The sociology of scientific knowledge arose at the University of Edinburgh, where David Bloor and his colleagues developed what has been termed "the strong programme". It proposed that both "true" and "false" scientific theories should be treated the same way. Both are informed by social factors such as cultural context and self-interest.
Human knowledge, abiding as it does within human cognition, is ineluctably influenced by social factors.
It proved difficult, however, to address natural-science topics with sociological methods, as was abundantly evidenced by the US science wars. Use of a deconstructive approach (as in relation to works on arts or religion) to the natural sciences risked endangering not only the "hard facts" of the natural sciences, but the objectivity and positivist tradition of sociology itself. The view on scientific knowledge production as a (at least partial) social construct was not easily accepted. Latour and others identified a dichotomy crucial for modernity, the division between nature (things, objects) as being transcendent, allowing to detect them, and society (the subject, the state) as immanent as being artificial, constructed. The dichotomy allowed for mass production of things (technical-natural hybrids) and large-scale global issues that endangered the distinction as such. E.g. We Have Never Been Modern asks to reconnect the social and natural worlds, returning to the pre-modern use of "thing"—addressing objects as hybrids made and scrutinized by the public interaction of people, things, and concepts.
Science studies scholars such as Trevor Pinch and Steve Woolgar started already in the 1980s to involve "technology", and called their field "science, technology and society". This "turn to technology" brought science studies into communication with academics in science, technology, and society programs.
More recently, a novel approach known as mapping controversies has been gaining momentum among science studies practitioners, and was introduced as a course for students in engineering, and architecture schools. In 2002 Harry Collins and Robert Evans asked for a third wave of science studies (a pun on The Third Wave), namely studies of expertise and experience answering to recent tendencies to dissolve the boundary between experts and the public.
Application to natural and man-made hazards
Sheepfarming after Chernobyl
A showcase of the rather complex problems of scientific information and its interaction with lay persons is Brian Wynne's study of Sheepfarming in Cumbria after the Chernobyl disaster. He elaborated on the responses of sheep farmers in Cumbria, who had been subjected to administrative restrictions because of radioactive contamination, allegedly caused by the nuclear accident at Chernobyl in 1986. The sheep farmers suffered economic losses, and their resistance against the imposed regulation was being deemed irrational and inadequate. It turned out that the source of radioactivity was actually the Sellafield nuclear reprocessing complex; thus, the experts who were responsible for the duration of the restrictions were completely mistaken. The example led to attempts to better involve local knowledge and lay-persons' experience and to assess its often highly geographically and historically defined background.
Science studies on volcanology
Donovan et al. (2012) used social studies of volcanology to investigate the generation of knowledge and expert advice on various active volcanoes. It contains a survey of volcanologists carried out during 2008 and 2009 and interviews with scientists in the UK, Montserrat, Italy and Iceland during fieldwork seasons. Donovan et al. (2012) asked the experts about the felt purpose of volcanology and what they considered the most important eruptions in historical time. The survey tries to identify eruptions that had an influence on volcanology as a science and to assess the role of scientists in policymaking.
A main focus was on the impact of the Montserrat eruption of 1997. The eruption, a classical example of the black swan theory, directly killed (only) 19 persons. However, the eruption had major impacts on the local society and destroyed important infrastructure, such as the island's airport. About 7,000 people, or two-thirds of the population, left Montserrat; 4,000 went to the United Kingdom.
The Montserrat case put immense pressure on volcanologists, as their expertise suddenly became the primary driver of various public policy approaches. The science studies approach provided valuable insights in that situation. There were various miscommunications among scientists. Matching scientific uncertainty (typical of volcanic unrest) with the request for a single unified voice of political advice was a challenge. The Montserrat volcanologists began to use statistical elicitation models to estimate the probabilities of particular events, a rather subjective method, but one that allows consensus and experience-based expertise to be synthesized step by step. It also involved local knowledge and experience.
Volcanology as a science currently faces a shift in its epistemological foundations. The science has started to involve more research into risk assessment and risk management. It requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries but combine qualitative and quantitative outcomes in a structured whole.
Experts and democracy
Science has become a major force in Western democratic societies, which depend on innovation and technology (compare Risk society) to address its risks. Beliefs about science can be very different from those of the scientists themselves, for reasons of e.g. moral values, epistemology or political motivations. The designation of expertise as authoritative in the interaction with lay people and decision makers of all kind is nevertheless challenged in contemporary risk societies, as suggested by scholars who follow Ulrich Beck's theorisation. The role of expertise in contemporary democracies is an important theme for debate among science studies scholars. Some argue for a more widely distributed, pluralist understanding of expertise (Sheila Jasanoff and Brian Wynne, for example), while others argue for a more nuanced understanding of the idea of expertise and its social functions (Collins and Evans, for example).
See also
Logology (study of science)
Merton thesis
Public awareness of science
Science and technology studies
Science and technology studies in India
Social construction of technology
Sociology of scientific knowledge
Sokal affair
References
Bibliography
Science studies, general
Bauchspies, W., Jennifer Croissant and Sal Restivo: Science, Technology, and Society: A Sociological Perspective (Oxford: Blackwell, 2005).
Biagioli, Mario, ed. The Science Studies Reader (New York: Routledge, 1999).
Bloor, David; Barnes, Barry & Henry, John, Scientific knowledge: a sociological analysis (Chicago: University Press, 1996).
Gross, Alan. Starring the Text: The Place of Rhetoric in Science Studies. Carbondale: SIU Press, 2006.
Fuller, Steve, The Philosophy of Science and Technology Studies (New York: Routledge, 2006).
Hess, David J. Science Studies: An Advanced Introduction (New York: NYU Press, 1997).
Jasanoff, Sheila, ed. Handbook of science and technology studies (Thousand Oaks, Calif.: SAGE Publications, 1995).
Latour, Bruno, "The Last Critique," Harper's Magazine (April 2004): 15–20.
Latour, Bruno. Science in Action. Cambridge. 1987.
Latour, Bruno, "Do You Believe in Reality: News from the Trenches of the Science Wars," in Pandora's Hope (Cambridge: Harvard University Press, 1999)
Vinck, Dominique. The Sociology of Scientific Work. The Fundamental Relationship between Science and Society (Cheltenham: Edward Elgar, 2010).
Wyer, Mary; Donna Cookmeyer; Mary Barbercheck, eds. Women, Science and Technology: A Reader in Feminist Science Studies, Routledge 200
Haraway, Donna J. "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," in Simians, Cyborgs, and Women: the Reinvention of Nature (New York: Routledge, 1991), 183–201. Originally published in Feminist Studies, Vol. 14, No. 3 (Autumn, 1988), pp. 575–599. (available online)
Foucault, Michel, "Truth and Power," in Power/Knowledge (New York: Pantheon Books, 1997), 109–133.
Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton: Princeton University Press, 1995).
Restivo, Sal: "Science, Society, and Values: Toward a Sociology of Objectivity" (Lehigh PA: Lehigh University Press, 1994).
Medicine and biology
Media, culture, society and technology
Hancock, Jeff. Deception and design: the impact of communication technology on lying behavior
Lessig, Lawrence. Free Culture. Penguin USA, 2004.
MacKenzie, Donald. The Social Shaping of Technology Open University Press: 2nd ed. 1999.
Mitchell, William J. Rethinking Media Change Thorburn and Jennings eds. Cambridge, Massachusetts : MIT Press, 2003.
Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Penguin USA, 1985.
Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge: Mass., Perseus Publishing. 2002.
External links
Sociology of Science, an introductory article by Joseph Ben-David & Teresa A. Sullivan, Annual Review of Sociology, 1975
The Incommensurability of Scientific and Poetic Knowledge
University of Washington Science Studies Network
Historiography of science
Philosophy of science
Pedagogy
Science and technology studies | Science studies | [
"Technology"
] | 2,832 | [
"Science and technology studies"
] |
358,677 | https://en.wikipedia.org/wiki/Obedience | Obedience, in human behavior, is a form of "social influence in which a person yields to explicit instructions or orders from an authority figure". Obedience is generally distinguished from compliance, which some authors define as behavior influenced by peers while others use it as a more general term for positive responses to another individual's request, and from conformity, which is behavior intended to match that of the majority. Depending on context, obedience can be seen as moral, immoral, or amoral. For example, in psychological research, individuals are usually confronted with immoral demands designed to elicit an internal conflict. If individuals still choose to submit to the demand, they are acting obediently.
Humans have been shown to be obedient in the presence of perceived legitimate authority figures, as shown by the Milgram experiment in the 1960s, which was carried out by Stanley Milgram to find out how the Nazis managed to get ordinary people to take part in the mass murders of the Holocaust. The experiment showed that obedience to authority was the norm, not the exception. Regarding obedience, Milgram said that "Obedience is as basic an element in the structure of social life as one can point to. Some system of authority is a requirement of all communal living, and it is only the man dwelling in isolation who is not forced to respond, through defiance or submission, to the commands of others." A similar conclusion was reached in the Stanford prison experiment.
Experimental studies
Classical methods and results
Although other fields have studied obedience, social psychology has been primarily responsible for the advancement of research on obedience. It has been studied experimentally in several different ways.
Milgram's experiment
In one classical study, Stanley Milgram (as part of the Milgram experiment) created a highly controversial yet often replicated study. Like many other experiments in psychology, Milgram's setup involved deception of the participants. In the experiment, subjects were told they were going to take part in a study of the effects of punishment on learning. In reality, the experiment focuses on people's willingness to obey malevolent authority. Each subject served as a teacher of associations between arbitrary pairs of words. After meeting the "teacher" at the beginning of the experiment, the "learner" (an accomplice of the experimenter) sat in another room and could be heard, but not seen. Teachers were told to give the "learner" electric shocks of increasing severity for each wrong answer. If subjects questioned the procedure, the "researcher" (again, an accomplice of Milgram) would encourage them to continue. Subjects were told to ignore the agonized screams of the learner, his desire to be untied and stop the experiment, and his pleas that his life was at risk and that he suffered from a heart condition. The experiment, the "researcher" insisted, had to go on. The dependent variable in this experiment was the voltage amount of shocks administered.
Zimbardo's experiment
The other classical study on obedience was conducted at Stanford University during the 1970s. Philip Zimbardo was the main psychologist responsible for the experiment. In the Stanford Prison Experiment, college-age students were put into a pseudo-prison environment in order to study the impact of "social forces" on participants' behavior. Unlike the Milgram study, in which each participant underwent the same experimental conditions, here, using random assignment, half the participants were prison guards and the other half were prisoners. The experimental setting was made to physically resemble a prison while simultaneously inducing "a psychological state of imprisonment".
Results
The Milgram study found that most participants would obey orders even when obedience posed severe harm to others. With encouragement from a perceived authority figure, about two-thirds of the participants were willing to administer the highest level of shock to the learner. This result was surprising to Milgram because he thought that "subjects have learned from childhood that it is a fundamental breach of moral conduct to hurt another person against his will". Milgram attempted to explain how ordinary people were capable of performing potentially lethal acts against other human beings by suggesting that participants may have entered into an agentic state, where they allowed the authority figure to take responsibility for their own actions. Another unanticipated discovery was the tension that the procedure caused. Subjects expressed signs of tension and emotional strain especially after administering the powerful shocks. 3 of the subjects had full-blown uncontrollable seizures, and on one occasion the experiment was stopped.
Zimbardo obtained similar results as the guards in the study obeyed orders and turned aggressive. Prisoners likewise were hostile to and resented their guards. The cruelty of the "guards" and the consequent stress of the "prisoners" forced Zimbardo to terminate the experiment prematurely, after six days.
Modern methods and results
The previous two studies greatly influenced how modern psychologists think about obedience. Milgram's study in particular generated a large response from the psychology community. In a modern study, Jerry Burger replicated Milgram's method with a few alterations. Burger's method was identical to Milgram's except that when the shocks reached 150 volts, participants decided whether or not they wanted to continue, and the experiment then ended (base condition). To ensure the safety of the participants, Burger added a two-step screening process; this was to rule out any participants who might react negatively to the experiment. In the modeled refusal condition, two confederates were used, where one confederate acted as the learner and the other was the teacher. The teacher stopped after going up to 90 volts, and the participant was asked to continue where the confederate left off. This methodology was considered more ethical because many of the adverse psychological effects seen in previous studies' participants occurred after moving past 150 volts. Additionally, since Milgram's study only used men, Burger tried to determine if there were differences between genders in his study and randomly assigned equal numbers of men and women to the experimental conditions.
Using data from his previous study, Burger probed participants' thoughts about obedience. Participants' comments from the previous study were coded for the number of times they mentioned "personal responsibility and the learner's well being". The number of prods the participants used in the first experiment was also measured.
Another study that used a partial replication of Milgram's work changed the experimental setting. In one of the Utrecht University studies on obedience, participants were instructed to make a confederate who was taking an employment test feel uncomfortable. Participants were told to make all of the instructed stress remarks to the confederate that ultimately made him fail in the experimental condition, but in the control condition they were not told to make stressful remarks. The dependent measurements were whether or not the participant made all of the stress remarks (measuring absolute obedience) and the number of stress remarks (relative obedience).
Following the Utrecht studies, another study used the stress remarks method to see how long participants would obey authority. The dependent measures for this experiment were the number of stress remarks made and a separate measure of personality designed to measure individual differences.
Neuroscience has only recently begun to approach the question of obedience, bringing novel but complementary perspectives on how obeying or issuing commands impacts brain functioning, fostering conditions for moral transgressions. The experimental protocol, inspired by Milgram, does not rely on deception and involves real behaviors. A participant assigned the role of agent must either freely decide or receive orders from the experimenter to deliver or withhold a mildly painful electric shock to another participant (the "victim") in exchange for €0.05. In a study conducted in 2020, fMRI results indicated that seeing the shock delivered to the victim triggered activations in the anterior cingulate cortex (ACC) and the anterior insula (AI), key brain regions associated with empathy. However, such activations were lower in the coerced condition compared to the free-choice condition, consistent with participants' subjective perception of the victim’s pain. Activity in brain regions associated with the interpersonal feeling of guilt was also reduced when participants obeyed orders compared to acting freely. Other studies showed that the sense of agency, as measured through the implicit task of time perception, was reduced in the coerced compared to the free-choice condition, suggesting that the sense of agency diminishes when individuals obey orders compared to acting freely. These neuroscience studies highlight how obeying orders alters our natural aversion to hurting others.
Results
Burger's first study had results similar to the ones found in Milgram's previous study. The rates of obedience were very similar to those found in the Milgram study, showing that participants' tendency to obey has not declined over time. Additionally, Burger found that both genders exhibited similar behavior, suggesting that obedience will occur in participants independent of gender.
In Burger's follow-up study, he found that participants that worried about the well-being of the learner were more hesitant to continue the study. He also found that the more the experimenter prodded the participant to continue, the more likely they were to stop the experiment.
The Utrecht University study also replicated Milgram's results. They found that although participants indicated they did not enjoy the task, over 90% of them completed the experiment.
The Bocchiaro and Zimbardo study had similar levels of obedience compared to the Milgram and Utrecht studies. They also found that participants would either stop the experiment at the first sign of the learner's pleas or would continue until the end of the experiment (called "the foot in the door scenario").
In addition to the above studies, additional research using participants from different cultures (including Spain, Australia, and Jordan) also found participants to be obedient.
Implications
One of the major assumptions of obedience research is that the effect is caused only by the experimental conditions; Thomas Blass' research contests this point, arguing that in some cases participant factors involving personality could influence the results.
In one of Blass' reviews on obedience, he found that participants' personalities can affect how they respond to authority, as people who were high in authoritarian submission were more likely to obey. He replicated this finding in his own research: in one of his experiments, he found that, when watching portions of the original Milgram studies on film, participants who scored high on measures of authoritarianism placed less responsibility on those punishing the learner.
In addition to personality factors, participants who were resistant to obeying authority had high levels of social intelligence.
Other research
Obedience can also be studied outside of the Milgram paradigm in fields such as economics or political science. One economics study that compared obedience to a tax authority in the lab versus at home found that participants were much more likely to pay participation tax when confronted in the lab. This finding implies that even outside of experimental settings, people will forgo potential financial gain to obey authority.
Another study involving political science measured public opinion before and after a Supreme Court case debating whether or not states can legalize physician-assisted suicide. They found that participants' tendency to obey authorities was not as important to public opinion polling numbers as religious and moral beliefs. Although prior research has demonstrated that the tendency to obey persists across settings, this finding suggests that personal factors like religion and morality can limit how much people obey authority.
Other experiments
The Hofling hospital experiment
Both the Milgram and Stanford experiments were conducted in research settings. In 1966, psychiatrist Charles K. Hofling published the results of a field experiment on obedience in the nurse–physician relationship in its natural hospital setting. Nurses, unaware they were taking part in an experiment, were ordered by unknown doctors to administer dangerous doses of a (fictional) drug to their patients. Although several hospital rules disallowed administering the drug under the circumstances, 21 out of the 22 nurses would have given the patient an overdose.
Cultural attitudes
Many traditional cultures regard obedience as a virtue; historically, societies have expected children to obey their elders (compare patriarchy or matriarchy), slaves their owners, serfs their lords in feudal society, lords their king, and everyone God. Even long after slavery ended in the United States, the Black codes required black people to obey and submit to whites, on pain of lynching. Compare the religious ideal of surrender and its importance in Islam (the word Islam can literally mean "surrender").
In some Christian weddings, obedience was formally included along with honor and love as part of the bride's (but not the bridegroom's) marriage vow. This came under attack with women's suffrage and the feminist movement; the inclusion of this promise to obey has become optional in some denominations.
In the Catholic Church, obedience is seen as one of the evangelical counsels, "undertaken in a spirit of faith and love in the following of Christ".
Learning to obey adult rules is a major part of the socialization process in childhood, and many techniques are used by adults to modify the behavior of children. Additionally, extensive training is given in armies to make soldiers capable of obeying orders in situations where an untrained person would not be willing to follow orders. Soldiers are initially ordered to do seemingly trivial things, such as picking up the sergeant's hat off the floor, marching in just the right position, or marching and standing in formation. The orders gradually become more demanding, until an order to the soldiers to place themselves into the midst of gunfire gets an instinctively obedient response.
Factors affecting obedience
Embodiment of prestige or power
When the Milgram experimenters were interviewing potential volunteers, the participant selection process itself revealed several factors that affected obedience, outside of the actual experiment.
Interviews for eligibility were conducted in an abandoned complex in Bridgeport, Connecticut. Despite the dilapidated state of the building, the researchers found that the presence of a Yale professor as stipulated in the advertisement affected the number of people who obeyed. This was not further researched to test obedience without a Yale professor because Milgram had not intentionally staged the interviews to discover factors that affected obedience. A similar conclusion was reached in the Stanford prison experiment.
In the actual experiment, prestige or the appearance of power was a direct factor in obedience—particularly the presence of men dressed in gray laboratory coats, which gave the impression of scholarship and achievement and was thought to be the main reason why people complied with administering what they thought was a painful or dangerous shock. A similar conclusion was reached in the Stanford prison experiment.
Raj Persaud, in an article in the BMJ, comments on Milgram's attention to detail in his experiment:
Despite the fact that prestige is often thought of as a separate factor, it is, in fact, merely a subset of power as a factor. Thus, the prestige conveyed by a Yale professor in a laboratory coat is only a manifestation of the experience and status associated with it and/or the social status afforded by such an image.
Agentic state and other factors
According to Milgram, "the essence of obedience consists in the fact that a person comes to view himself as the instrument for carrying out another person's wishes, and he therefore no longer sees himself as responsible for his actions. Once this critical shift of viewpoint has occurred in the person, all of the essential features of obedience follow." Thus, "the major problem for the subject is to recapture control of his own regnant processes once he has committed them to the purposes of the experimenter." Besides this hypothetical agentic state, Milgram proposed the existence of other factors accounting for the subject's obedience: politeness, awkwardness of withdrawal, absorption in the technical aspects of the task, the tendency to attribute impersonal quality to forces that are essentially human, a belief that the experiment served a desirable end, the sequential nature of the action, and anxiety.
Belief perseverance
Another explanation of Milgram's results invokes belief perseverance as the underlying cause. What "people cannot be counted on is to realize that a seemingly benevolent authority is in fact malevolent, even when they are faced with overwhelming evidence which suggests that this authority is indeed malevolent. Hence, the underlying cause for the subjects' striking conduct could well be conceptual, and not the alleged 'capacity of man to abandon his humanity ... as he merges his unique personality into larger institutional structures."'
See also
In humans:
In animals:
Animal training
Obedience training (for dogs)
Horse breaking
References
External links
Science Aid: Obedience High school level Psychology
Catholic Encyclopedia article on obedience
Authority
Human behavior
Conformity
Social influence
Virtue | Obedience | [
"Biology"
] | 3,347 | [
"Behavior",
"Conformity",
"Human behavior"
] |
358,754 | https://en.wikipedia.org/wiki/Method%20of%20complements | In mathematics and computing, the method of complements is a technique to encode a symmetric range of positive and negative integers in a way that they can use the same algorithm (or mechanism) for addition throughout the whole range. For a given number of places half of the possible representations of numbers encode the positive numbers, the other half represents their respective additive inverses. The pairs of mutually additive inverse numbers are called complements. Thus subtraction of any number is implemented by adding its complement. Changing the sign of any number is encoded by generating its complement, which can be done by a very simple and efficient algorithm. This method was commonly used in mechanical calculators and is still used in modern computers. The generalized concept of the radix complement (as described below) is also valuable in number theory, such as in Midy's theorem.
The nines' complement of a number given in decimal representation is formed by replacing each digit with nine minus that digit. To subtract a decimal number y (the subtrahend) from another number x (the minuend) two methods may be used:
In the first method, the nines' complement of x is added to y. Then the nines' complement of the result obtained is formed to produce the desired result.
In the second method, the nines' complement of y is added to x and one is added to the sum. The leftmost digit '1' of the result is then discarded. Discarding the leftmost '1' is especially convenient on calculators or computers that use a fixed number of digits: there is nowhere for it to go so it is simply lost during the calculation. The nines' complement plus one is known as the tens' complement.
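These two procedures can be checked with a short program. The following Python sketch is purely illustrative (the helper names such as nines_complement are assumptions, not part of any standard library); it works on fixed-width strings of decimal digits so that the discarded leading "1" of the second method is explicit, and it assumes the subtrahend is no larger than the minuend.

def nines_complement(digits):
    # Replace each decimal digit d with 9 - d.
    return "".join(str(9 - int(d)) for d in digits)

def subtract_first_method(x, y, width):
    # Add the nines' complement of x to y, then take the nines'
    # complement of the result: 999... - (999... - x + y) = x - y.
    x, y = x.zfill(width), y.zfill(width)
    partial = int(nines_complement(x)) + int(y)
    return nines_complement(str(partial).zfill(width))

def subtract_second_method(x, y, width):
    # Add the nines' complement of y plus 1 (the tens' complement) to x,
    # then drop the leading "1" by keeping only `width` digits.
    x, y = x.zfill(width), y.zfill(width)
    total = int(x) + int(nines_complement(y)) + 1
    return str(total)[-width:]

print(subtract_first_method("645", "178", 3))   # prints 467
print(subtract_second_method("645", "178", 3))  # prints 467

Both calls return the ordinary difference 645 - 178 = 467, obtained without ever subtracting anything larger than a single digit from 9.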
The method of complements can be extended to other number bases (radices); in particular, it is used on most digital computers to perform subtraction, represent negative numbers in base 2 or binary arithmetic and test overflow in calculation.
Numeric complements
The radix complement of an n-digit number y in radix b is defined as b^n - y. In practice, the radix complement is more easily obtained by adding 1 to the diminished radix complement, which is (b^n - 1) - y. While this seems equally difficult to calculate as the radix complement, it is actually simpler since b^n - 1 is simply the digit b - 1 repeated n times. This is because b^n - 1 = (b - 1)(b^(n-1) + b^(n-2) + ... + b + 1) (see also the geometric series formula). Knowing this, the diminished radix complement of a number can be found by complementing each digit with respect to b - 1, i.e. subtracting each digit in y from b - 1.
The subtraction of y from x using diminished radix complements may be performed as follows. Add the diminished radix complement of x to y to obtain (b^n - 1) - x + y, or equivalently (b^n - 1) - (x - y), which is the diminished radix complement of x - y. Further taking the diminished radix complement of this result yields the desired answer of x - y.
Alternatively, using the radix complement, x - y may be obtained by adding the radix complement of y to x to obtain x + (b^n - y), or (x - y) + b^n. Assuming y ≤ x, the result will be greater than or equal to b^n, and dropping the leading 1 from the result is the same as subtracting b^n, making the result (x - y) + b^n - b^n, or just x - y, the desired result.
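The same procedure works in any radix b. The sketch below (again illustrative Python; the function names and the digit-list representation are assumptions made for the example) forms the diminished radix complement digit by digit, feeds the extra 1 in as a carry into the lowest digit, and drops the leading carry, shown here for base 16.

def diminished_complement(digits, b):
    # Complement each digit with respect to b - 1.
    return [b - 1 - d for d in digits]

def radix_subtract(x_digits, y_digits, b):
    # Compute x - y (assuming y <= x); both numbers are equal-length
    # digit lists, most significant digit first.
    comp = diminished_complement(y_digits, b)
    carry = 1                     # the +1 that turns (b^n - 1) - y into b^n - y
    result = []
    for xd, cd in zip(reversed(x_digits), reversed(comp)):
        s = xd + cd + carry
        result.append(s % b)
        carry = s // b
    # The final carry is the leading "1" that is dropped when y <= x.
    return list(reversed(result))

# 0x2A3 - 0x07F = 0x224 in base 16
print(radix_subtract([2, 10, 3], [0, 7, 15], 16))   # prints [2, 2, 4]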
In the decimal numbering system, the radix complement is called the ten's complement and the diminished radix complement the nines' complement. In binary, the radix complement is called the two's complement and the diminished radix complement the ones' complement. The naming of complements in other bases is similar. Some people, notably Donald Knuth, recommend using the placement of the apostrophe to distinguish between the radix complement and the diminished radix complement. In this usage, the four's complement refers to the radix complement of a number in base four while fours' complement is the diminished radix complement of a number in base 5. However, the distinction is not important when the radix is apparent (nearly always), and the subtle difference in apostrophe placement is not common practice. Most writers use one's and nine's complement, and many style manuals leave out the apostrophe, recommending ones and nines complement.
Decimal example
The nines' complement of a decimal digit is the number that must be added to it to produce 9; the nines' complement of 3 is 6, the nines' complement of 7 is 2, and so on, see table. To form the nines' complement of a larger number, each digit is replaced by its nines' complement.
Consider the following subtraction problem:
873 [x, the minuend]
- 218 [y, the subtrahend]
First method
Compute the nines' complement of the minuend, 873. Add that to the subtrahend 218, then calculate the nines' complement of the result.
126 [nines' complement of x = 999 - x]
+ 218 [y, the subtrahend]
—————
344 [999 - x + y]
Now calculate the nines' complement of the result
344 [result]
655 [nines' complement of 344 = 999 - (999 - x + y) = x - y, the correct answer]
Second method
Compute the nines' complement of 218, which is 781. Because 218 is three digits long, this is the same as subtracting 218 from 999.
Next, the sum of x and the nines' complement of y is taken:
873 [x]
+ 781 [nines' complement of y = 999 - y]
—————
1654 [999 + x - y]
The leading "1" digit is then dropped, giving 654.
1654
-1000 [-(999 + 1)]
—————
654 [-(999 + 1) + 999 + x - y]
This is not yet correct. In the first step, 999 was added to the equation. Then 1000 was subtracted when the leading 1 was dropped. So, the answer obtained (654) is one less than the correct answer, x - y. To fix this, 1 is added to the answer:
654
+ 1
—————
655 [x - y]
Adding a 1 gives 655, the correct answer to our original subtraction problem. The last step of adding 1 could be skipped if instead the ten's complement of y was used in the first step.
Magnitude of numbers
In the following example the result of the subtraction has fewer digits than x:
123410 [x, the minuend]
- 123401 [y, the subtrahend]
Using the first method the sum of the nines' complement of x and y is
876589 [nines' complement of x]
+ 123401 [y]
————————
999990
The nines' complement of 999990 is 000009. Removing the leading zeros gives 9, the desired result.
If the subtrahend, y, has fewer digits than the minuend, x, leading zeros must be added in the second method. These zeros become leading nines when the complement is taken. For example:
48032 [x]
- 391 [y]
can be rewritten
48032 [x]
- 00391 [y with leading zeros]
Replacing 00391 with its nines' complement and adding 1 produces the sum:
48032 [x]
+ 99608 [nines' complement of y]
+ 1
———————
147641
Dropping the leading 1 gives the correct answer: 47641.
Binary method
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing '0' to '1' and vice versa). Adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:
0110 0100 [x, equals decimal 100]
- 0001 0110 [y, equals decimal 22]
becomes the sum:
0110 0100 [x]
+ 1110 1001 [ones' complement of y = 1111 1111 - y]
+ 1 [to get the two's complement = 1 0000 0000 - y]
———————————
10100 1110 [x + 1 0000 0000 - y]
Dropping the initial "1" gives the answer: 0100 1110 (equals decimal 78)
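The same bit-inversion-and-carry trick is easy to express in software. The short Python sketch below is only an illustration (the names and the fixed 8-bit width are assumptions for the example); the mask of eight ones plays the role of dropping the carry out of the top bit.

BITS = 8
MASK = (1 << BITS) - 1             # binary 1111 1111

def twos_complement_subtract(x, y):
    ones_comp = ~y & MASK          # invert every bit of y
    total = x + ones_comp + 1      # x + (2**BITS - y)
    return total & MASK            # drop the carry out of the top bit

print(bin(twos_complement_subtract(0b01100100, 0b00010110)))   # 0b1001110, decimal 78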
Negative number representations
The method of complements normally assumes that the operands are positive and that y ≤ x, logical constraints given that adding and subtracting arbitrary integers is normally done by comparing signs, adding the two or subtracting the smaller from the larger, and giving the result the correct sign.
Let's see what happens if x < y. In that case, there will not be a "1" digit to cross out after the addition, since (x - y) + b^n will be less than b^n. For example (in decimal):
185 [x]
- 329 [y]
Complementing y and adding gives:
185 [x]
+ 670 [nines' complement of y]
+ 1
—————
856
At this point, there is no simple way to complete the calculation by subtracting (1000 in this case); one cannot simply ignore a leading 1. The expected answer is −144, which isn't as far off as it seems; 856 happens to be the ten's complement of 144. This issue can be addressed in a number of ways:
Ignore the issue. This is reasonable if a person is operating a calculating device that doesn't support negative numbers since comparing the two operands before the calculation so they can be entered in the proper order, and verifying that the result is reasonable, is easy for humans to do.
Use the same method to subtract 856 from 1000, and then add a negative sign to the result.
Represent negative numbers as radix complements of their positive counterparts. Numbers less than b^n/2 are considered positive; the rest are considered negative (and their magnitude can be obtained by taking the radix complement). This works best for even radices since the sign can be determined by looking at the first digit. For example, numbers in ten's complement notation are positive if the first digit is 0, 1, 2, 3, or 4, and negative if 5, 6, 7, 8, or 9. And it works very well in binary since the first bit can be considered a sign bit: the number is positive if the sign bit is 0 and negative if it is 1. Indeed, two's complement is used in most modern computers to represent signed numbers.
Complement the result if there is no carry out of the most significant digit (an indication that x was less than y). This is easier to implement with digital circuits than comparing and swapping the operands. But since taking the radix complement requires adding 1, it is difficult to do directly. Fortunately, a trick can be used to get around this addition: Instead of always setting a carry into the least significant digit when subtracting, the carry out of the most significant digit is used as the carry input into the least significant digit (an operation called an end-around carry). So if y ≤ x, the carry from the most significant digit that would normally be ignored is added, producing the correct result. And if not, the 1 is not added and the result is one less than the radix complement of the answer, or the diminished radix complement, which does not require an addition to obtain. This method is used by computers that use sign-and-magnitude to represent signed numbers.
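A minimal sketch of the end-around-carry idea from the last point above (illustrative Python, not modeled on any particular machine): the carry out of the most significant bit is fed back in as the carry into the least significant bit, so the result is correct when y ≤ x, and is the ones' complement representation of the negative answer when x < y.

BITS = 8
MASK = (1 << BITS) - 1

def ones_complement_subtract(x, y):
    total = x + (~y & MASK)               # add the ones' complement of y
    carry_out = total >> BITS             # 1 when y <= x, 0 when x < y
    result = (total + carry_out) & MASK   # end-around carry
    is_negative = (carry_out == 0)
    return result, is_negative

print(ones_complement_subtract(100, 22))       # (78, False)
r, neg = ones_complement_subtract(22, 100)
print(~r & MASK, neg)                          # 78 True: magnitude recovered by complementing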
Practical uses
The method of complements was used in many mechanical calculators as an alternative to running the gears backwards. For example:
Pascal's calculator had two sets of result digits, a black set displaying the normal result and a red set displaying the nines' complement of this. A horizontal slat was used to cover up one of these sets, exposing the other. To subtract, the red digits were exposed and set to 0. Then the nines' complement of the minuend was entered. On some machines this could be done by dialing in the minuend using inner wheels of complements (i.e. without having to mentally determine the nines' complement of the minuend). In displaying that data in the complement window (red set), the operator could see the nines' complement of the nines' complement of the minuend, that is, the minuend. The slat was then moved to expose the black digits (which now displayed the nines' complement of the minuend) and the subtrahend was added by dialing it in. Finally, the operator had to move the slat again to read the correct answer.
The Comptometer had nines' complement digits printed in smaller type along with the normal digits on each key. To subtract, the operator was expected to mentally subtract 1 from the subtrahend and enter the result using the smaller digits. Since subtracting 1 before complementing is equivalent to adding 1 afterwards, the operator would thus effectively add the ten's complement of the subtrahend. The operator also needed to hold down the "subtraction cutoff tab" corresponding to the leftmost digit of the answer. This tab prevented the carry from being propagated past it, the Comptometer's method of dropping the initial 1 from the result.
The Curta calculator used the method of complements for subtraction, and managed to hide this from the user. Numbers were entered using digit input slides along the side of the device. The number on each slide was added to a result counter by a gearing mechanism which engaged cams on a rotating "echelon drum" (a.k.a. "step drum"). The drum was turned by use of a crank on the top of the instrument. The number of cams encountered by each digit as the crank turned was determined by the value of that digit. For example, if a slide is set to its "6" position, a row of 6 cams would be encountered around the drum corresponding to that position. For subtraction, the drum was shifted slightly before it was turned, which moved a different row of cams into position. This alternate row contained the nines' complement of the digits. Thus, the row of 6 cams that had been in position for addition now had a row with 3 cams. The shifted drum also engaged one extra cam which added 1 to the result (as required for the method of complements). The always present ten's complement "overflow 1" which carried out beyond the most significant digit of the results register was, in effect, discarded.
In computers
Use of the method of complements is ubiquitous in digital computers, regardless of the representation used for signed numbers. However, the circuitry required depends on the representation:
If two's complement representation is used, subtraction requires only inverting the bits of the subtrahend and setting a carry into the rightmost bit.
Using ones' complement representation requires inverting the bits of the subtrahend and connecting the carry out of the most significant bit to the carry in of the least significant bit (end-around carry).
Using sign-magnitude representation requires only complementing the sign bit of the subtrahend and adding, but the addition/subtraction logic needs to compare the sign bits, complement one of the inputs if they are different, implement an end-around carry, and complement the result if there was no carry from the most significant bit.
Manual uses
The method of complements was used to correct errors when accounting books were written by hand. To remove an entry from a column of numbers, the accountant could add a new entry with the ten's complement of the number to subtract. A bar was added over the digits of this entry to denote its special status. It was then possible to add the whole column of figures to obtain the corrected result.
Complementing the sum is handy for cashiers making change for a purchase from currency in a single denomination of 1 raised to an integer power of the currency's base. For decimal currencies that would be 10, 100, 1,000, etc., e.g. a $10.00 bill.
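For instance, the change due from a $10.00 bill can be read off as the ten's complement of the purchase price in cents. The following Python fragment is a toy illustration of that mental shortcut (the function name and the fixed $10.00 denomination are assumptions for the example).

def change_from_ten_dollars(price_cents):
    # Nines' complement of each digit of the 3-digit price, plus 1,
    # gives 1000 - price: the change in cents from a $10.00 bill.
    digits = str(price_cents).zfill(3)
    nines = "".join(str(9 - int(d)) for d in digits)
    return int(nines) + 1

print(change_from_ten_dollars(347))   # 653, i.e. $6.53 in change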
In grade school education
In grade schools, students are sometimes taught the method of complements as a shortcut useful in mental arithmetic. Subtraction is done by adding the ten's complement of the subtrahend, which is the nines' complement plus 1. The result of this addition is used when it is clear that the difference will be positive, otherwise the ten's complement of the addition's result is used with it marked as negative. The same technique works for subtracting on an adding machine.
See also
Curta
References
Computer arithmetic | Method of complements | [
"Mathematics"
] | 3,515 | [
"Computer arithmetic",
"Arithmetic"
] |
358,820 | https://en.wikipedia.org/wiki/Cornice | In architecture, a cornice (from the Italian cornice meaning "ledge") is generally any horizontal decorative moulding that crowns a building or furniture element—for example, the cornice over a door or window, around the top edge of a pedestal, or along the top of an interior wall. A simple cornice may be formed with a crown, as in crown moulding atop an interior wall or above kitchen cabinets or a bookcase.
A projecting cornice on a building has the function of throwing rainwater free of its walls. In residential building practice, this function is handled by projecting gable ends, roof eaves, and gutters. However, house eaves may also be called "cornices" if they are finished with decorative moulding. In this sense, while most cornices are also eaves (overhanging the sides of the building), not all eaves are usually considered cornices. Eaves are primarily functional and not necessarily decorative, while cornices have a decorative aspect.
A building's projecting cornice may appear to be heavy and hence in danger of falling, particularly on commercial buildings, but it often is actually very light and made of pressed metal.
In classical architecture
In Ancient Greek architecture and its successors using the classical orders in the tradition of classical architecture, the cornice is the topmost element of the entablature, which consists (from top to bottom) of the cornice, the frieze, and the architrave.
Where a triangular pediment is above the entablature, the cornice continues all round the triangle, the two sides being "raking cornices". The vertical space below the cornice is typically decorated by dentils (little teeth) or the larger modillions. The soffit, or horizontal space under a projecting cornice, may be elaborately carved with vegetal designs.
In modern residential architecture
Rake
A rake is an architectural term for an eave or cornice that runs along the gable of the roof of a modern residential structure. It may also be called a sloping cornice or a raking cornice. The trim and rafters at this edge are called rakes, rake board, rake fascia, verge-boards, barge-boards or verge- or barge-rafters. It is a sloped timber on the outside facing edge of a roof running between the ridge and the eave. On a typical house, any gable will have two rakes, one on each sloped side. The rakes are often supported by a series of lookouts (sometimes also called strong arms) and may be trimmed with a rake fascia board (which is not a true fascia) on the outside facing edge and a rake soffit along the bottom.
Types
The cornices of a modern residential building will usually be one of three types: a box cornice, a close or closed cornice, or an open cornice.
Box cornice
Box cornices enclose the cornice of the building with what is essentially a long, narrow box. A box cornice may further be divided into either the narrow box cornice or the wide box cornice type. A narrow box cornice is one in which "the projection of the rafter serves as a nailing surface for the soffit board as well as the fascia trim." This is possible if the slope of the roof is fairly steep and the width of the eave relatively narrow. A wide box cornice, a common practice on houses with gentle roof slopes and wide eaves, requires lookouts to support it and provide a surface to attach the soffits securely. Box cornices often have ventilation screens laid over openings cut in the soffits to allow air to circulate within the cornice.
Close cornice
A closed or snub cornice is one in which there is no projection of the rafters beyond the walls of the building and, therefore, no soffit or fascia. This type of cornice is easy to construct but provides little aid in dispersing water away from the building and is sometimes considered to lack aesthetic value.
Open cornice
In an open cornice, the shape of the cornice is similar to that of a wide box cornice, except that both the lookouts and the soffit are absent. It is a lower-cost treatment that requires fewer materials and may even have no fascia board, but it lacks the finished appearance of a box cornice.
Cavetto cornice
Ancient Egyptian architectural tradition made special use of large cavetto mouldings as a cornice, with only a short fillet (plain vertical face) above, and a torus moulding (convex semi-circle) below. This cavetto cornice is sometimes also known as an "Egyptian cornice", "hollow and roll" or "gorge cornice". It has been suggested to be a reminiscence in stone architecture of the primitive use of bound bunches of reeds as supports for buildings, the weight of the roof bending their tops out.
The cavetto cornice, often forming less than a quarter-circle, influenced Egypt's neighbours; as well as appearing in early Ancient Greek architecture, it is seen in Syria and ancient Iran, for example at the Tachara palace of Darius I at Persepolis, completed in 486 BC. Inspired by this precedent, it was then revived by Ardashir I (r. 224–41 AD), the founder of the Sasanian dynasty.
The cavetto took the place of the cymatium in many Etruscan temples, often painted with vertical "tongue" patterns, and combined with the distinctive "Etruscan round moulding", often painted with scales. A typical example may be seen at the reconstructed Etruscan temple at Villa Giulia.
Additional more obscure varieties of cornice include the architrave cornice, bracketed cornice, and modillion cornice.
Cornice return
A cornice return is an architectural detail that occurs where a roof's horizontal cornice connects to a gable's rake. It is a short horizontal extension of the cornice that occurs on each side of the gable end of the building (see picture of Härnösands rådhus with two of these). The two most common types of cornice return are the Greek return and the soffit return (also called a boxed or box soffit return). The former includes a sloped hip shape on the inside of the cornice under the eaves, which is sheathed or shingled like the rest of the roof above it and is considered very attractive; the latter is a simple return without these features.
As window treatment
The term cornice may also be used to describe a form of hard window treatment along the top edge of a window. In this context, a cornice represents a board (usually wood) placed above the window to conceal the mechanism for opening and closing drapes. If covered in a layer of cloth and given padding, it is sometimes called a soft cornice rather than a hard cornice.
Gallery
See also
Eaves
Geison
References
Columns and entablature
Architectural elements | Cornice | [
"Technology",
"Engineering"
] | 1,422 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
358,882 | https://en.wikipedia.org/wiki/Push-to-talk | Push-to-talk (PTT), also known as press-to-transmit, is a method of having conversations or talking on half-duplex communication lines, including two-way radio, using a momentary button to switch from voice reception mode to transmit mode.
History
For example, an air traffic controller usually supervises several aircraft and talks on one radio frequency to all of them. Those under the same frequency can hear others' transmissions while using procedure words such as "break", "break break" to separate transmissions during the conversation (ICAO doc 9432). In doing so, they are aware of each other's actions and intentions. Unlike in a conference call, they do not hear background noise from the ones who are not speaking. Similar considerations apply to police radio, the use of business band radios on construction sites, and other scenarios requiring coordination of several parties. Citizens Band is another example of classic push-to-talk operation.
The PTT switch is most commonly located on the radio's handheld microphone, or for small hand-held radios, directly on the radio. For heavy radio users, a PTT foot switch may be used, and also can be combined with either a boom-mounted microphone or a headset with integrated microphone.
Less commonly, a separate hand-held PTT switch may be used. This type of switch was historically called a pressel.
In situations where a user may be too busy to handle a talk switch, voice operated switches are sometimes employed. Some systems use PTT ID to identify the speaker.
Mobile phones
Push-to-talk over cellular (PTToC) is a service option for a cellular phone network that enables subscribers to use their phones as walkie-talkies with unlimited range. A typical push-to-talk connection connects almost instantly. A significant advantage of PTT is the ability for a single person to reach an active talk group with a single button press; users don't need to make several telephone calls to coordinate with a group.
Push-to-talk cellular calls similarly provide half-duplex communications – while one person transmits, the other(s) receive. This combines the operational advantages of PTT with the interference resistance and other virtues of mobile phones. Manufacturers of push-to-talk-over-cellular (PoC) hardware include ToooAir and Hytera US Inc.
Mobile push-to-talk services, offered by some mobile carriers directly as well as by independent companies, add PTT functionality to smartphones and specialized mobile handsets (hand portable and mobile/base station PTT Radio Terminals). In addition to mobile handsets, some services also work on laptop, desktop, and tablet computers.
Smartphone and computer apps
A recent development in PTT communications is the appearance of apps on smartphones, some of which can function on multiple platforms. Wireless carrier-grade PTT systems have adapted to and adopted the smartphone platform by providing downloadable apps that support their PTT systems across many mobile platforms. Over-the-top (OTT) applications do not depend on a specific carrier or type of communication network, and may be slower than carrier implementations.
See also
References
Push-to-Talk over Cellular Consortium Phase 2 Specifications and Documentation
Open Mobile Alliance - Push to talk over Cellular (PoC) - Architecture Candidate Version 2.0 – 26 February 2008
IMS services | Push-to-talk | [
"Technology"
] | 677 | [
"IMS services"
] |
358,890 | https://en.wikipedia.org/wiki/Grey%20alien | Grey aliens, also referred to as Zeta Reticulans, Roswell Greys or Greys, are purported extraterrestrial beings. They are frequent subjects of close encounters and alien abduction claims. The details of such claims vary widely. That said, Greys are typically described as being human-like with small bodies, smooth, grey-colored skin; enlarged, hairless heads; and large, black eyes. The Barney and Betty Hill abduction claim, which purportedly took place in New Hampshire in 1961, popularized Grey aliens. Precursor figures have been described in science fiction and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax.
The Grey alien has emerged as an archetypal image of an intelligent non-human creature and extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration.
Description
Appearance
Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as being shorter and jointed differently from those of humans, with limbs proportionally different from a human's.
Greys are depicted as having unusually large heads in proportion to their bodies with no hair on the body, and no noticeable outer ears or noses, sometimes with small openings or orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans.
Association with Zeta Reticuli
The association between Grey aliens and Zeta Reticuli originated with a school-teacher named Marjorie Fish, who in 1969 interpreted a map drawn by Betty Hill. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars. Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were Zeta Reticuli, a binary star system.
History
Origins
In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine.
As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley believed he had contacted the entity through a process that he called the "Amalantrah Workings," which he thought allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence," and utterly beyond human comprehension.
In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. This description would become the template upon which the popular image of grey aliens is based.
Barney and Betty Hill abduction
The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961, humanoid alien beings with grayish skin had abducted them and taken them to a flying saucer.
In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote:
Carl Sagan echoed Kottmeyer's suspicions in his 1997 book, The Demon Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration.
Diffusion into folklore
After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries.
During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings. These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features.
In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaption that starred Christopher Walken as Strieber.
In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !. Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later on became better known in French under the name: les Petits-Gris. Guieu later wrote two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper: the series "E.B.E." (for "Extraterrestrial Biological Entity"): E.B.E.: Alerte rouge (first part) (1990) and E.B.E.: L'entité noire d'Andamooka (second part) (1991).
Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories.
In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage.
Analysis
In close encounter claims and ufology
Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom. These reports include two distinct groups of Greys that differ in height.
Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress. The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events.
The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions.
Psychocultural expression of intelligence
Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence. "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence."
The "Mother Hypothesis"
In 2005, Frederick V. Malmstrom, writing in Skeptic magazine, volume 11, issue 4, presents his idea that Greys are actually residual memories of early childhood development. Malmstrom reconstructs the face of a Grey through transformation of a mother's face based on our best understanding of early-childhood sensation and perception. Malmstrom's study offers another alternative to the existence of Greys, the intense instinctive response many people experience when presented an image of a Grey, and the act of regression hypnosis and recovered-memory therapy in "recovering" memories of alien abduction experiences, along with their common themes.
Evolutionary implausibility
According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved from a world with different environmental and ecological conditions from Earth, is too physiologically similar to a human to be credible as a representation of an alien.
Other hypotheses
The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide an alternative explanation to the humanoid anatomy and behavior of these alleged beings.
In popular culture
Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth. South Park refers to them as "visitors".
During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A later example is American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories.
The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media; this is done so that if humanity came into contact with Paul's species, no immediate shock would occur as to their appearance. Child abduction by Greys is a key plot point in the 2013 film, Dark Skies.
Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien.
The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident.
See also
Alien autopsy
Budd Hopkins
Extraterrestrials in fiction
John E. Mack
Insectoid
List of alleged extraterrestrial beings
Little green men
Men in black
Mythic humanoids
Nordic aliens
Reptilians
Stan Romanek
Starchild skull
Notes
References
External links
Skeptics Dictionary: Alien abduction
Alleged extraterrestrial beings
Mythic humanoids
Roswell incident
Extraterrestrial life in popular culture
Reticulum | Grey alien | [
"Astronomy"
] | 2,494 | [
"Reticulum",
"Constellations"
] |
358,913 | https://en.wikipedia.org/wiki/Broadcast%20flag | A broadcast flag is a bit field sent in the data stream of a digital television program that indicates whether or not the data stream can be recorded, or if there are any restrictions on recorded content. Possible restrictions include the inability to save an unencrypted digital program to a hard disk or other non-volatile storage, inability to make secondary copies of recorded content (in order to share or archive), forceful reduction of quality when recording (such as reducing high-definition video to the resolution of standard TVs), and inability to skip over commercials.
In the United States, new television receivers using the ATSC standard were supposed to incorporate this functionality by July 1, 2005. The requirement was successfully contested in 2005 and rescinded in 2011.
FCC ruling
Officially called "Digital Broadcast Television Redistribution Control," the FCC's rule is in 47 CFR 73.9002(b) and the following sections, stating in part: "No party shall sell or distribute in interstate commerce a Covered Demodulator Product that does not comply with the Demodulator Compliance Requirements and Demodulator Robustness Requirements." According to the rule, hardware must "actively thwart" piracy.
The rule's Demodulator Compliance Requirements insist that all HDTV demodulators must "listen" for the flag (or assume it to be present in all signals). Flagged content must be output only to "protected outputs" (such as DVI and HDMI ports with HDCP encryption), or in degraded form through analog outputs or digital outputs with visual resolution of 720x480 pixels (EDTV) or less. Flagged content may be recorded only by "authorized" methods, which may include tethering of recordings to a single device.
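The compliance requirements above boil down to a per-output decision once the flag has been detected. The Python sketch below is only an illustration of that logic, not an implementation of any ATSC or FCC-defined interface; the function name, output labels, and parameters are all assumptions made for the example.

def output_permitted(flag_present, output, hdcp_active=False, resolution=(1920, 1080)):
    # Illustrative decision logic for a "Covered Demodulator Product".
    if not flag_present:
        return True                      # unflagged content is unrestricted
    if output in ("DVI", "HDMI"):
        return hdcp_active               # protected digital outputs only
    if output == "analog":
        return True                      # analog output allowed (degraded in practice)
    if output == "unprotected-digital":
        # constrained to 720x480 (EDTV) resolution or less
        return resolution[0] <= 720 and resolution[1] <= 480
    return False

print(output_permitted(True, "HDMI", hdcp_active=True))                        # True
print(output_permitted(True, "unprotected-digital", resolution=(1920, 1080)))  # False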
Since broadcast flags could be activated at any time, a viewer who often records a program might suddenly find that it is no longer possible to save their favorite show. This and other reasons lead many to see the flags as a direct affront to consumer rights.
The Demodulator Robustness Requirements are difficult to implement in open source systems. Devices must be "robust" against user access or modifications so that someone could not easily alter it to ignore the broadcast flags that permit access to the full digital stream. Since open-source device drivers are by design user-modifiable, a PC TV tuner card with open-source drivers would not be "robust".
The GNU Radio project has already demonstrated that purely software-based demodulators can exist, and thus that the hardware rule is not fully enforceable.
Current status
In American Library Association v. FCC, 406 F.3d 689 (D.C. Cir. 2005), the United States Court of Appeals for the D.C. Circuit ruled that the FCC had exceeded its authority in creating this rule. The court stated that the Commission could not prohibit the manufacture of computer or video hardware without copy-protection technology because the FCC only has authority to regulate transmissions, not devices that receive communications. While it is always possible that the Supreme Court could overturn this ruling, the more likely reemergence of the broadcast flag is in legislation granting such authority to the FCC.
On May 1, 2006, Sen. Ted Stevens inserted a version of the Broadcast Flag into the Communications, Consumer's Choice, and Broadband Deployment Act of 2006. On June 22, 2006 Sen. John E. Sununu offered an amendment to strike the broadcast and radio flag, but this failed and the broadcast-flag amendment was approved by the Commerce committee. Nonetheless, the overall bill was never passed, and thus died upon adjournment of the 109th Congress in December 2006.
On May 18, 2008, News.com reported that Microsoft had confirmed that current versions of Windows Media Center shipping with the Windows family of operating systems adhered to the use of the broadcast flag, following reports of users being blocked from taping specific airings of NBC programs, mainly American Gladiators and Medium. A Microsoft spokesperson said that Windows Media Center adheres to the "rules set forth by the FCC".
On August 22, 2011, the FCC officially eliminated the broadcast flag regulations.
Related technologies
Radio broadcast flag and RIAA
With the coming of digital radio, the recording industry is attempting to change the ground rules for copyright of songs played on radio. Currently, over-the-air (i.e. broadcast, not Internet) radio stations may play songs freely, but the RIAA wants Congress to mandate a radio broadcast flag. On April 26, 2006, Congress held a hearing on the radio broadcast flag. Among the witnesses were musicians Anita Baker and Todd Rundgren.
European Broadcast Flag
At present no equivalent signal is typically used in European DVB transmissions, although DVB-CPCM would provide such a set of signals, as defined by DVB-SI, usable on clear-to-air television broadcasts. How adherence to such a system would be enforced in a receiver is not yet clear.
In the UK, the BBC introduced content protection restrictions in 2010 on free-to-air content by licensing the data necessary to receive the service information for Freeview HD broadcasts. However, the BBC has stated that the highest protection applied will be to allow only one copy to be made.
ISDB
ISDB broadcasts are protected so that the broadcast can be digitally recorded once, but digital copies of that recording cannot be made. Analog recordings can be copied freely. It is possible to disallow the use of analog outputs, although this has yet to be implemented. The protection can be circumvented with the correct hardware and software.
DVB-CPCM
The Digital Video Broadcasting organization is developing DVB-CPCM which allows broadcasters (especially PayTV broadcaster) far more control over the use of content on (and beyond) home networks. The DVB standards are commonly used in Europe and around the world (for satellite, terrestrial, and cable distribution), but are also employed in the United States by Dish Network. In Europe, some entertainment companies were lobbying to legally mandate the use of DVB-CPCM. Opponents fear that mandating DVB-CPCM will kill independent receiver manufacturers that use open source operating systems (e.g., Linux-based set-top boxes.)
Pay-per-view use of broadcast flag
In the US, since April 15, 2008, pay-per-view movies on cable and satellite television have been flagged so that a recording made from a pay-per-view channel to a digital video recorder or other related device cannot be retained more than 24 hours after the ordered time of the film. This is standard film industry practice, including for digital rentals from the iTunes Store and Google Play. Movies recorded before that point would still be available without flagging and could be copied freely, though as of 2015 those pre-2008 DVR units are well out-of-date or probably non-functional. The pay-per-view concern is also moot for all but special events, as nearly all satellite and cable providers have moved to more easily restricted video-on-demand platforms, and pay-per-view films have been drawn down to non-notable content.
See also
CGMS-A
Copy Control Information
Digital Millennium Copyright Act
Digital rights management
Digital Transition Content Security Act
Family Entertainment and Copyright Act
Evil bit
Image Constraint Token
Selectable Output Control
Serial Copy Management System
References
External links
Copyright Protection of Digital Television: The “Broadcast Flag”
Electronic Frontier Foundation's Broadcast Flag page
The Broadcast Flag and "Plug & Play": The FCC's Lockdown of Digital Television
U.S. District Court shoots down broadcast flag (CNET)
Broadcast Flag: Media Industry May Try to Steal the Law - June 2005 MP3 Newswire article
Circuit Court ruling striking down (PDF format)
ATSC
Digital television
High-definition television
Digital rights management standards
Federal Communications Commission
Television terminology
History of television | Broadcast flag | [
"Technology"
] | 1,600 | [
"Computer standards",
"Digital rights management standards"
] |
359,070 | https://en.wikipedia.org/wiki/Burrowing%20owl | The burrowing owl (Athene cunicularia), also called the shoco, is a small, long-legged, primarily terrestrial—though not flightless—species of owl native to the open landscapes of North and South America. They are typically found in grasslands, rangelands, agricultural areas, deserts, or any other open, dry area with low vegetation. They nest and roost in burrows, and, despite their common name, do not construct these dwellings themselves, rather repurposing disused burrows or tunnels previously excavated and inhabited by other species, such as American badgers (Taxidea taxus), foxes (Vulpes sp.), ground squirrels or prairie dogs (Cynomys spp.), among others.
Unlike most owls, burrowing owls are often active during the day, although they tend to avoid the heat of midday. But, similar to many other species of owls, they are mostly crepuscular hunters, as they can use their night vision and attuned hearing to maximum potential during sunrise and sunset. Having evolved to live on open grasslands and prairie habitat (as well as badlands, chaparral and desert ecosystems), as opposed to dense forest, the burrowing owl has developed longer legs than other owls, a trait which enables it to sprint when pursuing its prey, not unlike the greater roadrunner, as well as providing momentum when taking flight; however, burrowing owls typically become airborne only for short bursts, such as when fleeing threats, and do not fly very high off the ground.
Taxonomy
The burrowing owl was formally described by Spanish naturalist Juan Ignacio Molina in 1782 under the binomial name Strix cunicularia from a specimen collected in Chile. The specific epithet is from the Latin cunicularius, meaning "burrower" or "miner". The burrowing owl is now placed in the genus Athene which was introduced by German zoologist Friedrich Boie in 1822.
The burrowing owl is sometimes classified in the monotypic genus Speotyto (based on an overall unique morphology and karyotype). Osteology and DNA sequence data, though, suggests that the burrowing owl is a terrestrial member of the little owls genus (Athene), thus it is placed in that group today, by most authorities.
A considerable number of subspecies have been described, though they differ little in appearance; the taxonomic validity of several is still debated. Most subspecies are found in or near the Andes and within the Antilles of the Caribbean Sea. The relationship of the Florida subspecies, for instance, to the Caribbean owls, and its distinctness from them, is not quite clear.
The 18 recognised subspecies, of which two are now extinct, are:
†A. c. amaura (Lawrence, 1878): Antiguan burrowing owl – formerly Antigua and Saint Kitts and Nevis; extinct (circa 1905)
A. c. boliviana (L. Kelso, 1939): Bolivian burrowing owl – the Bolivian altiplano
A. c. brachyptera (Richmond, 1896): Margarita Island burrowing owl – Margarita Island (may include A. c. apurensis)
A. c. carrikeri (Stone, 1922): East Colombian burrowing owl – Eastern Colombia; doubtfully distinct from A. c. tolimae
A. c. cunicularia (Molina, 1782): southern burrowing owl – lowlands of southern Bolivia and southern Brazil south to Tierra del Fuego
A. c. floridana (Ridgway, 1874): Florida burrowing owl – Florida and the Bahamas; listed as Vulnerable
A. c. grallaria (Temminck, 1822): Brazilian burrowing owl – Central and eastern Brazil
†A. c. guadeloupensis (Ridgway, 1874): Guadeloupe burrowing owl – formerly Guadeloupe and Marie-Galante islands; extinct (circa 1890)
A. c. guantanamensis (Garrido, 2001): Cuban burrowing owl – Cuba and Isla de la Juventud
A. c. hypugaea (Bonaparte, 1825): western burrowing owl – Southern Canada through the Great Plains, south to Central America; listed as Apparently Secure
A. c. juninensis (Berlepsch & Stolzmann, 1902): south Andean burrowing owl – Andes Mountains and foothills from central Perú to northwestern Argentina (may include A. c. punensis)
A. c. minor (Cory, 1918): Guyanese burrowing owl – southern Guyana and Roraima state (Brazil)
A. c. nanodes (Berlepsch & Stolzmann, 1892): Southwest Peruvian burrowing owl – southwestern Perú (may include A. c. intermedia)
A. c. pichinchae (Boetticher, 1929): West Ecuadorean burrowing owl – western Ecuador
A. c. rostrata (C. H. Townsend, 1890): Revillagigedo burrowing owl – Clarion and Revillagigedo Islands
A. c. tolimae (Stone, 1899): West Colombian burrowing owl – Western Colombia (may include A. c. carrikeri)
A. c. troglodytes (Wetmore & Swales, 1931): Hispaniolan burrowing owl – Hispaniola (Haiti and the Dominican Republic) and surrounding islands (Gonâve, Beata Island)
includes A. c. partridgei (Olrog, 1976): Corrientes burrowing owl – Corrientes Province, Argentina (probably not distinct from A. c. cunicularia)
A paleosubspecies, A. c. providentiae, has been described from fossil remains from the Pleistocene of the Bahamas. How these birds relate to the extant A. c. floridana – that is, whether they were among the ancestors of that subspecies, or whether they represented a more distant lineage that completely disappeared later – is unknown.
In addition, prehistoric fossils of similar owls have been recovered from many islands in the Caribbean (Barbuda, the Cayman Islands, Jamaica, Mona Island and Puerto Rico). These birds became extinct towards the end of the Pleistocene, probably because of ecological and sea-level changes at the end of the last ice age rather than human activity. These fossil owls differed in size from present-day burrowing owls, and their relationship to the modern taxa has not been resolved.
Description
Burrowing owls have bright eyes; their beaks can be dark yellow or gray depending on the subspecies. They lack ear tufts and have a flattened facial disc. The owls have prominent white eyebrows and a white "chin" patch which they expand and display during certain behaviors, such as a bobbing of the head when agitated.
Adults have brown heads and wings with white spotting. Their chests and abdomens are white with variable brown spotting or barring, also depending on the subspecies. Juvenile owls are similar in appearance, but they lack most of the white spotting above and brown barring below. The juveniles have a buff bar across their upper wings and their breasts may be buff-colored rather than white. Burrowing owls of all ages have grayish legs longer than those of other owls.
Males and females are similar in size and appearance, so display little sexual dimorphism. Females tend to be heavier, but males tend to have longer linear measurements (wing length, tail length, etc.). Adult males appear lighter in color than females because they spend more time outside the burrow during daylight, and their feathers become "sun-bleached". The burrowing owl measures long and spans across the wings, and weighs . As a size comparison, an average adult is slightly larger than an American robin (Turdus migratorius).
Distribution and habitat
Before European colonization, burrowing owls probably inhabited every suitable area of the New World, but in North America, they have experienced some restrictions in distribution since then. In parts of South America, they are expanding their range due to deforestation. The western burrowing owls (A. c. hypugaea) are most common in the Rocky Mountain Arsenal National Wildlife Refuge, as well as in most of the western states. Known resident populations inhabit areas of Colorado, Arizona, New Mexico, Texas and California, where their population is reportedly threatened by human encroachment and construction.
Burrowing owls range from the southern portions of the western Canadian provinces (British Columbia, Alberta, Saskatchewan, Manitoba) and all the way through Mexico to western Panamá. They are also found across the state of Florida, as well as some Caribbean islands. In South America, they are fairly common, and are known to inhabit every country on the continent, with the exception of the dense Amazon rainforest interior and the highest ranges of the Andes Mountains. Their preference is for the cooler, possibly sub-tropical coastal and temperate regions. South of the Amazon, their population seems to again rebound, as they are widely distributed from southern Brazil and the Pantanal down to Patagonia and Tierra del Fuego.
Burrowing owls are year-round residents in most of their range. Birds that breed in Canada and the northern U.S. usually migrate south to Mexico and the southern U.S. during winter months.
Behaviour and ecology
This species can live for at least 9 years in the wild and over 10 years in captivity. They are often killed by vehicles when crossing roads, and have many natural enemies, including badgers, coyotes, and snakes. They are also killed by both feral and domestic cats and dogs. Two birds studied in the Parque Nacional de La Macarena of Colombia were free of blood parasites.
Burrowing owls often nest and roost in the burrows made by ground squirrels, a strategy also used by rattlesnakes. When threatened, the owl retreats to the burrow and produces rattling and hissing sounds similar to those of a rattlesnake. The behavior is suggested to be an example of acoustic Batesian mimicry and has been observed to be an effective strategy against animals that are familiar with the dangers posed by rattlesnakes.
Breeding
The nesting season begins in late March or April in North America. Burrowing owls usually only have one mate but occasionally a male will have two mates. Pairs of owls will sometimes nest in loose colonies. Their typical breeding habitat is open grassland or prairie, but they can occasionally adapt to other open areas like airports, golf courses, and agricultural fields. Burrowing owls are slightly tolerant of human presence, often nesting near roads, farms, homes, and regularly maintained irrigation canals.
The owls nest in a burrow, hence the name burrowing owl. If burrows are unavailable and the soil is not hard or rocky, the owls may excavate their own. Burrowing owls will also nest in shallow, underground, man-made structures that have easy access to the surface.
During the nesting season, burrowing owls will collect a wide variety of materials to line their nest, some of which are left around the entrance to the burrow. The most common material is mammal dung, usually from cattle. At one time it was thought that the dung helped to mask the scent of the juvenile owls, but researchers now believe the dung helps to control the microclimate inside the burrow and to attract insects, which the owls may eat.
The female lays an egg every one or two days until she has completed a clutch, which can consist of four to 12 eggs (usually 9). She then incubates the eggs for 3–4 weeks while the male brings her food. After the eggs hatch, both parents feed the chicks. Four weeks after hatching, the chicks can make short flights and begin leaving the nest burrow. The parents still help feed the chicks for 1–3 months.
Site fidelity rates appear to vary among populations. In some locations, owls will frequently reuse a nest several years in a row. Owls in migratory northern populations are less likely to return to the same burrow every year. Also, as with many other birds, the female owls are more likely to disperse to a different site than are male owls.
Food and feeding
When hunting, they wait on a perch until they spot prey. Then, they swoop down on prey or fly up to catch insects in flight. Sometimes, they chase prey on foot across the ground. The highly variable diet includes invertebrates and small vertebrates, which make up roughly one third and two thirds of the diet, respectively. Burrowing owls mainly eat large insects and small rodents. Although burrowing owls often live close to ground squirrels (Marmotini), they rarely prey upon them. They also hunt bats. An analysis of burrowing owl diets in the Dominican Republic found the owls consumed ~53% invertebrates, ~28% other birds, ~15% reptiles, ~3% amphibians, and 1% mammals.
Rodent prey is usually dominated by locally superabundant species, like the delicate vesper mouse (Calomys tener) in southern Brazil. Among squamates and amphibians, small lizards like the tropical house gecko (Hemidactylus mabouia), snakes, frogs, and toads predominate. Generally, most vertebrate prey is in the weight class of several grams per individual. The largest prey are usually birds, such as eared doves (Zenaida auriculata) which may weigh almost as much as a burrowing owl, as well as sparrows.
Regarding invertebrates, the burrowing owl seems less of a generalist. It is extremely fond of termites such as Termitidae, and Orthoptera such as Conocephalinae and Copiphorinae katydids, Jerusalem crickets (Stenopelmatidae), true crickets (Gryllidae) and grasshoppers. Bothynus and Dichotomius anaglypticus scarab beetles (Scarabaeidae) were eaten far more often than even closely related species by many burrowing owls across central São Paulo (Brazil). Similarly, it was noted that among scorpions Bothriuridae were much preferred, among spiders Lycosidae (wolf spiders), and among millipedes (Diplopoda) certain Diplocheta. Small ground beetles (Carabidae) are eaten in quantity, while larger ones are much less popular as burrowing owl food, perhaps due to the vigorous defense the large species can put up. Earthworms are also preyed upon. Burrowing owls are also known to place the fecal matter of large herbivorous mammals around the outside of their burrows to attract dung beetles, which are used to provide a steady source of food for the owls. Burrowing owls can also predate on invertebrates attracted to artificial night lighting.
Unlike other owls, they also eat fruits and seeds, especially the fruit of tasajillo (Cylindropuntia leptocaulis) and other prickly pear and cholla cacti. On Clarion Island, where mammalian prey is lacking, they feed essentially on crickets and prickly pear fruit, adding Clarión wrens (Troglodytes tanneri) and young Clarion mourning doves (Zenaida macroura clarionensis) on occasion.
Status and conservation
The burrowing owl is endangered in Canada and threatened in Mexico. It is a state threatened species in Colorado and Florida and a California species of special concern. It is common and widespread in open regions of many Neotropical countries, where they sometimes even inhabit fields and parks in cities. In regions bordering the Amazon Rainforest they are spreading with deforestation. It is therefore listed as Least Concern on the IUCN Red List. Burrowing owls are protected under the Migratory Bird Treaty Act in Canada, the United States, and Mexico. They are also included in CITES Appendix II. NatureServe lists the species as Apparently Secure.
California Endangered Species Act Listing Petition
In March 2024, Center for Biological Diversity, Urban Bird Foundation, Defenders of Wildlife, Burrowing Owl Preservation Society, Santa Clara Valley Audubon Society, Central Valley Bird Club and San Bernardino Valley Audubon Society submitted a California Endangered Species Act listing petition to the Fish and Game Commission to get protections for five populations of the western burrowing owl.
The petition requests endangered status for burrowing owls in southwestern California, central-western California and the San Francisco Bay Area, and threatened status for burrowing owls in the Central Valley and southern desert range.
Dependency on burrowing animals
The major reasons for declining populations in North America are loss of habitat and control programs for prairie dogs. While some species of burrowing owl can dig their own burrows, most species rely on burrowing animals to dig holes that the owls can use as shelter and nesting space. There is a high correlation between the location of burrowing animal colonies, like those of ground squirrels, and the presence of burrowing owls. Rates of burrowing owl decline have also been shown to correlate with prairie dog decline. Western burrowing owls, for example, nest in burrows made by black-tailed prairie dogs since they are unable to dig their own. However, prairie dog populations have experienced a decline, one of the causes of this being prairie dog eradication programs. When prairie dogs dig burrows, they can uproot plants in the process. This is most common in agricultural areas, where burrows cause damage to existing crops, creating a problem for local farmers. In Nebraska and Montana, eradication programs have already been put in place to manage the population of prairie dogs. Eradication programs for ground squirrels have also been put in place. In California, California ground squirrels have been known to feed on crop seedlings as well as grasses meant for cattle, which prevents crop growth and decreases food supply for cattle. However, as burrowing animal populations decrease, burrowing owls become more vulnerable to predators. With fewer burrows available, burrowing owl populations will be more concentrated, with more owls occupying fewer burrows. As a result, predators will more easily detect owl populations and be capable of eliminating larger broods of owls at once. Prairie dogs and ground squirrels also act as a buffer between owls and their predators, since they become the target prey rather than the owls. Another benefit prairie dogs in particular provide burrowing owls takes the form of their alarm calls, which alert burrowing owls if predators are nearby, therefore giving the owls ample time to hide or escape. Without burrowing animals, almost every condition needed for suitable and safe living for burrowing owls is lost. Organizations have tried contributing to the conservation of burrowing owls by digging artificial burrows for these owls to occupy in areas with no active colony of burrowing animals. However, creating artificial burrows is not sustainable and is not effective as a long-term solution.
Anthropogenic impacts
Burrowing owls readily inhabit some anthropogenic landscapes, such as airport grasslands or golf courses, and are known to take advantage of artificial nest sites (plastic burrows with tubing for the entrance) and perches. Burrowing owls have demonstrated similar reproductive success in rural grasslands and urban settings. The urban-residing burrowing owls have also developed the behavior of digging their own burrows and exhibit different fear responses to human and domestic dogs compared to their rural counterparts. Research has suggested that this species has made adaptations to the rapid urbanization of their usual habitat, and conservation efforts should be considered accordingly. Genetic analysis of the two North American subspecies indicates that inbreeding is not a problem within those populations.
Relocation
Where the presence of burrowing owls conflicts with development interests, a passive relocation technique has been applied successfully: rather than capturing the birds and transporting them to a new site (which may be stressful and prone to failure), the owls are half-coerced, half-enticed to move on their own accord. The preparations need to start several months prior to the anticipated disturbance with observing the owl colony and noting especially their local movements and site preferences. After choosing a location nearby that has suitable ground and provides good burrowing owl breeding habitat, this new site is enhanced by adding burrows, perches, etc. Once the owls have accustomed to the changes and are found to be interested in the location – if possible, this should be at the onset of spring, before the breeding season starts – they are prevented from entering the old burrows. A simple one-way trapdoor design has been described that is placed over the burrow for this purpose. If everything has been correctly prepared, the owl colony will move over to the new site in the course of a few nights at most. It will need to be monitored occasionally for the following months or until the major human construction nearby has ended.
Some organizations like the Center for Biological Diversity and Urban Bird Foundation contend that removal of burrowing owls from their burrows, through either active or passive relocation, has been a factor in the extirpation of burrowing owl populations in California because of the species' high site fidelity.
References
Further reading
External links
Burrowing Owl Live Camera Feed & Fact Sheet at critterzoom.com
Rocky Mountain Arsenal National Wildlife Refuge: Burrowing Owl Study
Burrowing Owl Species Account – Cornell Lab of Ornithology
Burrowing Owl Conservation Network
Burrowing Owl Photo Essay at The Ark in Space
Urban Bird Foundation
burrowing owl
burrowing owl
Birds of the Dominican Republic
Fauna of the Sonoran Desert
Native birds of the Canadian Prairies
Native birds of the Southeastern United States
Native birds of the Western United States
Tool-using animals
Subterranean nesting birds
burrowing owl
burrowing owl
Owls of South America | Burrowing owl | [
"Biology"
] | 4,428 | [
"Ethology",
"Behavior",
"Tool-using animals"
] |
359,096 | https://en.wikipedia.org/wiki/Log-structured%20file%20system | A log-structured filesystem is a file system in which data and metadata are written sequentially to a circular buffer, called a log. The design was first proposed in 1988 by John K. Ousterhout and Fred Douglis and first implemented in 1992 by Ousterhout and Mendel Rosenblum for the Unix-like Sprite distributed operating system.
Rationale
Conventional file systems lay out files with great care for spatial locality and make in-place changes to their data structures in order to perform well on optical and magnetic disks, which tend to seek relatively slowly.
The design of log-structured file systems is based on the hypothesis that this will no longer be effective because ever-increasing memory sizes on modern computers would lead to I/O becoming write-heavy since reads would be almost always satisfied from memory cache. A log-structured file system thus treats its storage as a circular log and writes sequentially to the head of the log.
This has several important side effects:
Write throughput on optical and magnetic disks is improved because writes can be batched into large sequential runs and costly seeks are kept to a minimum.
The structure is naturally suited to media with append-only zones or pages, such as flash storage and shingled magnetic recording HDDs.
Writes create multiple, chronologically-advancing versions of both file data and meta-data. Some implementations make these old file versions nameable and accessible, a feature sometimes called time-travel or snapshotting. This is very similar to a versioning file system.
Recovery from crashes is simpler. Upon its next mount, the file system does not need to walk all its data structures to fix any inconsistencies, but can reconstruct its state from the last consistent point in the log.
Log-structured file systems, however, must reclaim free space from the tail of the log to prevent the file system from becoming full when the head of the log wraps around to meet it. The tail can release space and move forward by skipping over data for which newer versions exist further ahead in the log. If there are no newer versions, then the data is moved and appended to the head.
To reduce the overhead incurred by this garbage collection, most implementations avoid purely circular logs and divide up their storage into segments. The head of the log simply advances into non-adjacent segments which are already free. If space is needed, the least-full segments are reclaimed first. This decreases the I/O load (and decreases the write amplification) of the garbage collector, but becomes increasingly ineffective as the file system fills up and nears capacity.
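The segment mechanism can be sketched in a few lines of Python. This is a toy model, not a description of any particular file system: the segment size, the in-memory inode map, and the least-full-victim cleaning policy are simplifications chosen purely for illustration.

```python
# Toy model of a segmented, log-structured store: writes always go to the
# head segment, and a cleaner copies live blocks out of mostly-dead segments.

SEGMENT_SIZE = 4  # blocks per segment (deliberately tiny)

class LogStructuredStore:
    def __init__(self, num_segments=8):
        self.segments = [[] for _ in range(num_segments)]
        self.free = list(range(num_segments))   # segments holding no live data
        self.head = self.free.pop(0)            # segment currently being filled
        self.inode_map = {}                     # file_id -> (segment, offset) of newest version

    def _advance_head(self):
        if not self.free:
            self.clean()
        self.head = self.free.pop(0)

    def write(self, file_id, data):
        """Append a new version of file_id; the previous version becomes garbage."""
        if len(self.segments[self.head]) >= SEGMENT_SIZE:
            self._advance_head()
        seg = self.segments[self.head]
        seg.append((file_id, data))
        self.inode_map[file_id] = (self.head, len(seg) - 1)

    def read(self, file_id):
        seg, off = self.inode_map[file_id]
        return self.segments[seg][off][1]

    def _live_blocks(self, seg_idx):
        # A block is live only if the inode map still points at it.
        return [(fid, data) for off, (fid, data) in enumerate(self.segments[seg_idx])
                if self.inode_map.get(fid) == (seg_idx, off)]

    def clean(self):
        """Reclaim the non-head segment with the fewest live blocks."""
        candidates = [i for i in range(len(self.segments)) if i != self.head]
        victim = min(candidates, key=lambda i: len(self._live_blocks(i)))
        live = self._live_blocks(victim)
        self.segments[victim] = []
        self.free.append(victim)        # free the victim before re-appending its live data
        for fid, data in live:
            self.write(fid, data)

store = LogStructuredStore()
for version in range(20):
    store.write("fileA", f"contents v{version}")
print(store.read("fileA"))  # -> contents v19
```

Real implementations keep the equivalent of the inode map in the log itself (recovering it from periodic checkpoints) and use more refined cleaning policies that weigh segment age as well as utilization.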
Disadvantages
The design rationale for log-structured file systems assumes that most reads will be optimized away by ever-enlarging memory caches. This assumption does not always hold:
On magnetic media—where seeks are relatively expensive—the log structure may actually make reads much slower, since it fragments files that conventional file systems normally keep contiguous with in-place writes.
On flash memory—where seek times are usually negligible—the log structure may not confer a worthwhile performance gain, because write fragmentation has much less of an impact on write throughput. However, many flash-based devices cannot rewrite part of a block: they must first perform a (slow) erase cycle on the whole block before it can be re-written. Batching all writes into one block can therefore help performance compared with scattering writes across many blocks, each of which would have to be copied into a buffer, erased, and written back. This is a clear advantage for so-called "raw" flash memory, where the flash translation layer is bypassed. Stacking one log on top of another log (for example, a log-structured file system on top of a log-structured flash translation layer) is less attractive, as it forces multiple erases with unaligned access.
See also
Comparison of file systems
List of log-structured file systems
References
Further reading
Log-structured File Systems (2014), Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C.; Arpaci-Dusseau Books
Computer file systems
Bell Labs
Fault-tolerant computer systems | Log-structured file system | [
"Technology",
"Engineering"
] | 839 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
359,135 | https://en.wikipedia.org/wiki/Chemical%20kinetics | Chemical kinetics, also known as reaction kinetics, is the branch of physical chemistry that is concerned with understanding the rates of chemical reactions. It is different from chemical thermodynamics, which deals with the direction in which a reaction occurs but in itself tells nothing about its rate. Chemical kinetics includes investigations of how experimental conditions influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that also can describe the characteristics of a chemical reaction.
History
The pioneering work in chemical kinetics was done by German chemist Ludwig Wilhelmy in 1850. He experimentally studied the rate of inversion of sucrose and used an integrated rate law to determine the kinetics of this reaction. His work was noticed 34 years later by Wilhelm Ostwald. In 1864, Peter Waage and Cato Guldberg published the law of mass action, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances.
Van 't Hoff studied chemical dynamics and in 1884 published his famous "Études de dynamique chimique". In 1901 he was awarded the first Nobel Prize in Chemistry "in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions". After van 't Hoff, chemical kinetics dealt with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero order reactions (for which reaction rates are independent of concentration), first order reactions, and second order reactions, and can be derived for others. Elementary reactions follow the law of mass action, but the rate law of stepwise reactions has to be derived by combining the rate laws of the various elementary steps, and can become rather complex. In consecutive reactions, the rate-determining step often determines the kinetics. In consecutive first order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction.
Gorban and Yablonsky have suggested that the history of chemical dynamics can be divided into three eras. The first is the van 't Hoff wave searching for the general laws of chemical reactions and relating kinetics to thermodynamics. The second may be called the Semenov-Hinshelwood wave with emphasis on reaction mechanisms, especially for chain reactions. The third is associated with Aris and the detailed mathematical description of chemical reaction networks.
Factors affecting reaction rate
Nature of the reactants
The reaction rate varies depending upon what substances are reacting. Acid/base reactions, the formation of salts, and ion exchange are usually fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be slower.
The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products.
Physical state
The physical state (solid, liquid, or gas) of a reactant is also an important factor of the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in separate phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant, the greater its surface area per unit volume and the more contact it has with the other reactant, thus the faster the reaction. To make an analogy, when one starts a fire, one uses wood chips and small branches rather than starting with large logs right away. In organic chemistry, on-water reactions are an exception to the rule that homogeneous reactions take place faster than heterogeneous reactions (those in which the reactants are in different phases).
Surface area of solid state
In a solid, only those particles that are at the surface can be involved in a reaction. Crushing a solid into smaller parts means that more particles are present at the surface, and the frequency of collisions between these and reactant particles increases, and so reaction occurs more rapidly. For example, Sherbet (powder) is a mixture of very fine powder of malic acid (a weak organic acid) and sodium hydrogen carbonate. On contact with the saliva in the mouth, these chemicals quickly dissolve and react, releasing carbon dioxide and providing for the fizzy sensation. Also, fireworks manufacturers modify the surface area of solid reactants to control the rate at which the fuels in fireworks are oxidised, using this to create diverse effects. For example, finely divided aluminium confined in a shell explodes violently. If larger pieces of aluminium are used, the reaction is slower and sparks are seen as pieces of burning metal are ejected.
Concentration
The reactions are due to collisions of reactant species. The frequency with which the molecules or ions collide depends upon their concentrations. The more crowded the molecules are, the more likely they are to collide and react with one another. Thus, an increase in the concentrations of the reactants will usually result in the corresponding increase in the reaction rate, while a decrease in the concentrations will usually have a reverse effect. For example, combustion will occur more rapidly in pure oxygen than in air (21% oxygen).
The rate equation shows the detailed dependence of the reaction rate on the concentrations of reactants and other species present. The mathematical forms depend on the reaction mechanism. The actual rate equation for a given reaction is determined experimentally and provides information about the reaction mechanism. The mathematical expression of the rate equation is often given by
$r = k \prod_i [A_i]^{m_i}.$
Here $k$ is the reaction rate constant, $[A_i]$ is the molar concentration of reactant $i$ and $m_i$ is the partial order of reaction for this reactant. The partial order for a reactant can only be determined experimentally and is often not indicated by its stoichiometric coefficient.
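A small sketch of how such an empirical rate law can be evaluated once the rate constant and partial orders have been measured; the numbers below are invented for illustration, not data for a real reaction.

```python
# Evaluate the empirical rate law r = k * prod([A_i]**m_i) for given
# concentrations and experimentally determined partial orders.

def reaction_rate(k, concentrations, orders):
    """concentrations and orders are dicts keyed by species name."""
    rate = k
    for species, conc in concentrations.items():
        rate *= conc ** orders[species]
    return rate

# Example: a rate law that is first order in A and second order in B.
r = reaction_rate(k=0.42,
                  concentrations={"A": 0.10, "B": 0.05},  # mol/L
                  orders={"A": 1, "B": 2})
print(f"rate = {r:.2e} mol L^-1 s^-1")
```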
Temperature
Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction. Much more important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell–Boltzmann distribution of molecular energies.
The effect of temperature on the reaction rate constant usually obeys the Arrhenius equation $k = A e^{-E_a/(RT)}$, where A is the pre-exponential factor or A-factor, $E_a$ is the activation energy, R is the molar gas constant and T is the absolute temperature.
At a given temperature, the chemical rate of a reaction depends on the value of the A-factor, the magnitude of the activation energy, and the concentrations of the reactants. Usually, rapid reactions require relatively small activation energies.
The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the α (temperature coefficient) is often between 1.5 and 2.5.
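As a rough numerical check on the Arrhenius behaviour discussed above, one can compute how much the rate constant grows for a 10 K rise near room temperature; the pre-exponential factor and activation energy below are made-up values of a typical order of magnitude, not data for any particular reaction.

```python
import math

R = 8.314  # molar gas constant, J mol^-1 K^-1

def arrhenius_k(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in kelvin."""
    return A * math.exp(-Ea / (R * T))

A, Ea = 1.0e13, 75_000.0          # s^-1 and J/mol, illustrative magnitudes only
k_298 = arrhenius_k(A, Ea, 298.15)
k_308 = arrhenius_k(A, Ea, 308.15)
print(f"k(298 K) = {k_298:.3e} s^-1")
print(f"k(308 K) / k(298 K) = {k_308 / k_298:.2f}")
```

With an activation energy around 75 kJ/mol the ratio comes out near 2.7, which is why the doubling rule of thumb sometimes seems to hold; for substantially smaller or larger activation energies it does not.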
The kinetics of rapid reactions can be studied with the temperature jump method. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. A particularly useful form of temperature jump apparatus is a shock tube, which can rapidly increase a gas's temperature by more than 1000 degrees.
Catalysts
A catalyst is a substance that alters the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the rate of the reaction by providing an alternative reaction mechanism with a lower activation energy. In autocatalysis a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis–Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally.
In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation.
Pressure
Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution.
In addition to this straightforward mass-action effect, the rate coefficients themselves can change due to pressure. The rate coefficients and products of many high-temperature gas-phase reactions change if an inert gas is added to the mixture; variations on this effect are called fall-off and chemical activation. These phenomena are due to exothermic or endothermic reactions occurring faster than heat transfer, causing the reacting molecules to have non-thermal energy distributions (non-Boltzmann distribution). Increasing the pressure increases the heat transfer rate between the reacting molecules and the rest of the system, reducing this effect.
Condensed-phase rate coefficients can also be affected by pressure, although rather high pressures are required for a measurable effect because ions and molecules are not very compressible. This effect is often studied using diamond anvils.
A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium.
Absorption of light
The activation energy for a chemical reaction can be provided when one reactant molecule absorbs light of suitable wavelength and is promoted to an excited state. The study of reactions initiated by light is photochemistry, one prominent example being photosynthesis.
Experimental methods
The experimental determination of reaction rates involves measuring how the concentrations of reactants or products change over time. For example, the concentration of a reactant can be measured by spectrophotometry at a wavelength where no other reactant or product in the system absorbs light.
For reactions which take at least several minutes, it is possible to start the observations after the reactants have been mixed at the temperature of interest.
Fast reactions
For faster reactions, the time required to mix the reactants and bring them to a specified temperature may be comparable or longer than the half-life of the reaction. Special methods to start fast reactions without slow mixing step include
Stopped-flow methods, which can reduce the mixing time to the order of a millisecond. Stopped-flow methods have limitations: the time needed to mix the gases or solutions must still be considered, so they are not suitable if the half-life is less than about a hundredth of a second.
Chemical relaxation methods such as temperature jump and pressure jump, in which a pre-mixed system initially at equilibrium is perturbed by rapid heating or depressurization so that it is no longer at equilibrium, and the relaxation back to equilibrium is observed. For example, this method has been used to study the neutralization H3O+ + OH− with a half-life of 1 μs or less under ordinary conditions.
Flash photolysis, in which a laser pulse produces highly excited species such as free radicals, whose reactions are then studied.
Equilibrium
While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal (the principle of dynamic equilibrium) and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber–Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov–Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium.
Free energy
In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two products, the thermodynamically most stable one will form in general, except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin–Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a distinct product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships.
The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes.
Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and on the molar mass distribution in polymer chemistry. It also provides information used in corrosion engineering.
Applications and models
The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur.
Chemical Kinetics is frequently validated and explored through modeling in specialized packages as a function of ordinary differential equation-solving (ODE-solving) and curve-fitting.
Numerical methods
In some cases, equations are unsolvable analytically, but can be solved using numerical methods if data values are given. There are two different ways to do this, by either using software programmes or mathematical methods such as the Euler method. Examples of software for chemical kinetics are i) Tenua, a Java app which simulates chemical reactions numerically and allows comparison of the simulation to real data, ii) Python coding for calculations and estimates and iii) the Kintecus software compiler to model, regress, fit and optimize reactions.
Numerical integration: for a first-order reaction A → B,
the differential equation of the reactant A is:
$\frac{d[A]}{dt} = -k[A].$
It can also be expressed as
$\frac{d[A]}{dt} = f(t, [A]),$
which is the same as
$f(t, [A]) = -k[A].$
To solve the differential equations with the Euler and Runge–Kutta methods, the initial values are needed.
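A minimal sketch of the explicit Euler scheme for this first-order rate law, with the analytic solution $[A] = [A]_0 e^{-kt}$ printed alongside for comparison; the rate constant, initial concentration, and step size are arbitrary illustrative values.

```python
import math

def euler_first_order(k, A0, dt, t_end):
    """Integrate d[A]/dt = -k[A] with the explicit Euler method."""
    t, A = 0.0, A0
    history = [(t, A)]
    while t < t_end:
        A += -k * A * dt          # Euler update: A_{n+1} = A_n - k * A_n * dt
        t += dt
        history.append((t, A))
    return history

k, A0 = 0.5, 1.0                  # illustrative values (s^-1, mol/L)
for t, A in euler_first_order(k, A0, dt=0.5, t_end=4.0)[::2]:
    exact = A0 * math.exp(-k * t) # analytic solution for comparison
    print(f"t={t:4.1f}  Euler={A:.4f}  exact={exact:.4f}")
```

Shrinking the step size dt brings the Euler values closer to the exact exponential decay; higher-order Runge–Kutta schemes achieve the same accuracy with larger steps.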
See also
Autocatalytic reactions and order creation
Corrosion engineering
Detonation
Electrochemical kinetics
Flame speed
Heterogenous catalysis
Intrinsic low-dimensional manifold
MLAB chemical kinetics modeling package
Nonthermal surface reaction
PottersWheel Matlab toolbox to fit chemical rate constants to experimental data
Reaction progress kinetic analysis
References
External links
Chemistry applets
University of Waterloo
Chemical Kinetics of Gas Phase Reactions
Kinpy: Python code generator for solving kinetic equations
Reaction rate law and reaction profile - a question of temperature, concentration, solvent and catalyst - how fast will a reaction proceed (Video by SciFox on TIB AV-Portal)
Jacobus Henricus van 't Hoff | Chemical kinetics | [
"Chemistry"
] | 3,182 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
359,147 | https://en.wikipedia.org/wiki/Dosa%20%28food%29 | A dosa, dosey, dosai, dosha, dose, or dhosa is a thin, savoury crepe in Indian cuisine made from a fermented batter of ground black gram and rice. Dosas are served hot, often with chutney and sambar (a lentil-based vegetable stew). Dosas are a common food in Southern India and in Sri Lanka.
History
The dosa originated in South India, but its precise geographical origins are unknown. According to food historian K. T. Achaya, references in the Sangam literature suggest that dosa was already in use in the ancient Tamil country around the first century CE. However, according to historian P. Thankappan Nair, dosa originated in the town of Udupi in present-day Karnataka. Achaya states that the earliest written mention of dosa appears in the eighth-century literature of present-day Tamil Nadu, while the earliest mention of dosa in Kannada literature appears a century later.
In popular tradition outside of Southern India, the origin of the dosa is linked to Udupi, probably because of the dish's association with Udupi restaurants. The Tamil dosa is traditionally softer and thicker; the thinner and crispier version of dosa was first made in present-day Karnataka. A recipe for dosa can be found in Manasollasa, a 12th-century Sanskrit encyclopedia compiled by Someshvara III, who ruled from present-day Karnataka.
The dosa arrived in Mumbai with the opening of Udupi restaurants in the 1930s. After India's independence in 1947, South Indian cuisine became gradually popular in North India. In New Delhi, the Madras Hotel in Connaught Place became one of the first restaurants to serve South Indian cuisine.
Dosas, like many other dishes of South Indian cuisine, were introduced to Ceylon (Sri Lanka) by South Indian emigrants during British rule. Tirunelveli and Tuticorin merchants who settled there were instrumental in spreading South Indian cookery across the island by opening restaurants (vegetarian hotels), initially to meet the needs of the emigrant population. Dosa has found its way into the culinary habits of the Sri Lankan people, where it has evolved into an island-specific version which is quite distinct from the Indian dosa. In both forms, it is called those or thosai in Sinhala and in Sri Lankan Tamil.
As in Sri Lanka, dosa was introduced far abroad since the early 18th century, by the migration of the Indian Tamil diaspora to Southeast Asia and later in the Western World, and through the worldwide popularisation of Indian and South Indian cuisines since the second half of the 20th century.
Names
Dosa is the anglicised name of a variety of South Indian names for the dish, for example, dosai in Tamil, dosey in Kannada, and dosha in Malayalam.
Nutrition
Dosa is high in carbohydrates and contains no added sugars. As its key ingredients are rice and black gram, it is a good source of protein. A typical homemade plain dosa without oil contains about 112 calories, of which 84% is carbohydrate and 16% is protein. The fermentation process increases the vitamin B and vitamin C content.
Preparation
A mixture of rice and white gram that has been soaked in water for at least 4–5 hours is ground finely to form a batter. Some add a bit of soaked fenugreek seeds while grinding the batter. The proportion of rice to lentils is generally 3:1 or 4:1. After adding salt, the batter is allowed to ferment overnight, before being mixed with water to get the desired consistency. The batter is then ladled onto a hot tava or griddle greased with oil or ghee. It is spread out with the base of a ladle or a bowl to form a pancake. It can be made either thick like a pancake, or thin and crispy. A dosa is served hot, either folded in half or rolled like a wrap. It is usually served with chutney and sambar. The mixture of white grams and rice can be replaced with highly refined wheat flour or semolina.
Serving
Dosas can be stuffed with fillings of vegetables and sauces to make a quick meal. They are typically served with a vegetarian side dish, which varies according to regional and personal preferences. Common side items are:
Sambar
Chutney
Idli podi or milagaipodi: A lentil powder with spices and sometimes desiccated coconut, mixed with sesame oil or groundnut oil or ghee
Indian pickles
Variations
Masala dosa is a roasted dosa served with potato curry, chutney, and sambar, while saada (plain) dosa is prepared with a lighter texture; paper dosa is a thin and crisp version. Rava dosa is made crispier using semolina. Newer versions include Chinese dosa, cheese dosa, paneer dosa, and pizza dosa.
Though dosa is typically made with rice and lentils, other versions exist.
World record
On 16 November 2014, 29 chefs, at Hotel Daspalla in Hyderabad, India, created a dosa that was long and weighed , earning the Guinness World Record for the longest dosa.
In popular culture
In a November 2019 video promoting her campaign for presidency, United States Vice President Kamala Harris cooked masala dosa with actress and comedian Mindy Kaling.
Venba is a 2023 cooking video game that features dosa as one of the dishes that can be cooked in a long line of foods representing Tamil cuisine.
Related foods
Uttapam: a thick relatively soft crepe mostly topped with diced onions, tomatoes, cilantro or cheese, sometimes described as an Indian pizza
Pesarattu: made from green gram in Andhra Pradesh, served with a ginger and tamarind chutney
Appam: a pancake prepared from patted rice batter, served with sweet coconut milk and/or sugar
Chakuli pitha: the batter contains more black gram and less rice flour
Apam balik: made from a mixture of flour, eggs, sugar, baking soda, coconut milk and water
Jianbing: a Chinese dish
Bánh xèo: a Vietnamese dish
Lahoh: a Somali dish
Injera: an Ethiopian dish made with fermented teff batter
See also
List of fermented foods
List of Indian breads
List of pancakes
Mangalorean cuisine
Udupi cuisine
Cuisine of Kerala
South Indian cuisine
Thali
References
Andhra cuisine
Articles containing video clips
Burmese cuisine
Fermented foods
Indian breads
Indian fast food
Karnataka cuisine
Kerala cuisine
Malaysian breads
Mangalorean cuisine
Pancakes
Singaporean cuisine
South Indian cuisine
Sri Lankan pancakes
Tamil cuisine
Telangana cuisine
Vegetarian dishes of India
Indo-Caribbean cuisine
Hindu cuisine
Indian cuisine
South Asian cuisine
Vegetarian cuisine | Dosa (food) | [
"Biology"
] | 1,430 | [
"Fermented foods",
"Biotechnology products"
] |
359,175 | https://en.wikipedia.org/wiki/Serre%20duality | In algebraic geometry, a branch of mathematics, Serre duality is a duality for the coherent sheaf cohomology of algebraic varieties, proved by Jean-Pierre Serre. The basic version applies to vector bundles on a smooth projective variety, but Alexander Grothendieck found wide generalizations, for example to singular varieties. On an n-dimensional variety, the theorem says that a cohomology group $H^i$ is the dual space of another one, $H^{n-i}$. Serre duality is the analog for coherent sheaf cohomology of Poincaré duality in topology, with the canonical line bundle replacing the orientation sheaf.
The Serre duality theorem is also true in complex geometry more generally, for compact complex manifolds that are not necessarily projective complex algebraic varieties. In this setting, the Serre duality theorem is an application of Hodge theory for Dolbeault cohomology, and may be seen as a result in the theory of elliptic operators.
These two different interpretations of Serre duality coincide for non-singular projective complex algebraic varieties, by an application of Dolbeault's theorem relating sheaf cohomology to Dolbeault cohomology.
Serre duality for vector bundles
Algebraic theorem
Let X be a smooth variety of dimension n over a field k. Define the canonical line bundle $K_X$ to be the bundle of n-forms on X, the top exterior power of the cotangent bundle:
$K_X = \Omega^n_X = \bigwedge^{n} \Omega^1_X.$
Suppose in addition that X is proper (for example, projective) over k. Then Serre duality says: for an algebraic vector bundle E on X and an integer i, there is a natural isomorphism
$H^i(X, E) \cong H^{n-i}(X, K_X \otimes E^{\vee})^{*}$
of finite-dimensional k-vector spaces. Here $\otimes$ denotes the tensor product of vector bundles and $E^{\vee}$ the dual bundle. It follows that the dimensions of the two cohomology groups are equal:
$h^i(X, E) = h^{n-i}(X, K_X \otimes E^{\vee}).$
As in Poincaré duality, the isomorphism in Serre duality comes from the cup product in sheaf cohomology. Namely, the composition of the cup product with a natural trace map on $H^n(X, K_X)$ is a perfect pairing:
$H^i(X, E) \times H^{n-i}(X, K_X \otimes E^{\vee}) \to H^n(X, K_X) \to k.$
The trace map is the analog for coherent sheaf cohomology of integration in de Rham cohomology.
Differential-geometric theorem
Serre also proved the same duality statement for X a compact complex manifold and E a holomorphic vector bundle.
Here, the Serre duality theorem is a consequence of Hodge theory. Namely, on a compact complex manifold $X$ equipped with a Riemannian metric, there is a Hodge star operator
$\star : \Omega^p(X) \to \Omega^{2n-p}(X),$
where $n = \dim_{\mathbf{C}} X$, so that $X$ has real dimension $2n$. Additionally, since $X$ is complex, there is a splitting of the complex differential forms into forms of type $(p, q)$. The Hodge star operator (extended complex-linearly to complex-valued differential forms) interacts with this grading as:
$\star : \Omega^{p,q}(X) \to \Omega^{n-q,\,n-p}(X).$
Notice that the holomorphic and anti-holomorphic indices have switched places. There is a conjugation on complex differential forms which interchanges forms of type $(p,q)$ and $(q,p)$, and if one defines the conjugate-linear Hodge star operator by $\bar\star \alpha = \overline{\star \alpha}$ then we have:
$\bar\star : \Omega^{p,q}(X) \to \Omega^{n-p,\,n-q}(X).$
Using the conjugate-linear Hodge star, one may define a Hermitian -inner product on complex differential forms, by:
where now is an -form, and in particular a complex-valued -form and can therefore be integrated on with respect to its canonical orientation. Furthermore, suppose is a Hermitian holomorphic vector bundle. Then the Hermitian metric gives a conjugate-linear isomorphism between and its dual vector bundle, say . Defining , one obtains an isomorphism:
where consists of smooth -valued complex differential forms. Using the pairing between and given by and , one can therefore define a Hermitian -inner product on such -valued forms by:
where here means wedge product of differential forms and using the pairing between and given by .
The Hodge theorem for Dolbeault cohomology asserts that if we define:
where is the Dolbeault operator of and is its formal adjoint with respect to the inner product, then:
On the left is Dolbeault cohomology, and on the right is the vector space of harmonic -valued differential forms defined by:
Using this description, the Serre duality theorem can be stated as follows: The isomorphism $\bar\star_E$ induces a complex linear isomorphism:
$H^{p,q}_{\bar\partial}(X, E) \cong H^{n-p,\,n-q}_{\bar\partial}(X, E^{*})^{*}.$
This can be easily proved using the Hodge theory above. Namely, if $[\alpha]$ is a cohomology class in $H^{p,q}_{\bar\partial}(X, E)$ with unique harmonic representative $\alpha$, then:
$(\alpha, \bar\star_E \alpha) = \langle \alpha, \alpha \rangle \ge 0$
with equality if and only if $\alpha = 0$. In particular, the complex linear pairing:
$(\alpha, \beta) \mapsto \int_X \alpha \wedge \beta$
between $\mathcal{H}^{p,q}_{\Delta_{\bar\partial_E}}(X, E)$ and $\mathcal{H}^{n-p,\,n-q}_{\Delta_{\bar\partial_{E^{*}}}}(X, E^{*})$ is non-degenerate, and induces the isomorphism in the Serre duality theorem.
The statement of Serre duality in the algebraic setting may be recovered by taking $p = 0$, and applying Dolbeault's theorem, which states that:
$H^{p,q}_{\bar\partial}(X, E) \cong H^q(X, \Omega^p \otimes E),$
where on the left is Dolbeault cohomology and on the right sheaf cohomology, where $\Omega^p$ denotes the sheaf of holomorphic $p$-forms. In particular, we obtain:
$H^q(X, E) \cong H^{n-q}(X, K_X \otimes E^{*})^{*},$
where we have used that the sheaf of holomorphic $n$-forms is just the canonical bundle of $X$.
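For orientation, the chain of identifications just described can be summarized in one line (a sketch; notation as in this section, with $\dim_{\mathbb{C}} X = n$ and $E$ a holomorphic vector bundle):
```latex
% From sheaf cohomology to Dolbeault cohomology and back (sketch).
\[
  H^q(X, E)
  \;\cong\; H^{0,q}_{\bar\partial}(X, E)                   % Dolbeault's theorem
  \;\cong\; H^{n,\,n-q}_{\bar\partial}(X, E^{*})^{*}       % Serre duality via harmonic forms
  \;\cong\; H^{n-q}(X, K_X \otimes E^{*})^{*}.             % Dolbeault's theorem again
\]
```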
Algebraic curves
A fundamental application of Serre duality is to algebraic curves. (Over the complex numbers, it is equivalent to consider compact Riemann surfaces.) For a line bundle L on a smooth projective curve X over a field k, the only possibly nonzero cohomology groups are $H^0(X, L)$ and $H^1(X, L)$. Serre duality describes the $H^1$ group in terms of an $H^0$ group (for a different line bundle). That is more concrete, since $H^0$ of a line bundle is simply its space of sections.
Serre duality is especially relevant to the Riemann–Roch theorem for curves. For a line bundle L of degree d on a curve X of genus g, the Riemann–Roch theorem says that:
$h^0(X, L) - h^1(X, L) = d - g + 1.$
Using Serre duality, this can be restated in more elementary terms:
$h^0(X, L) - h^0(X, K_X \otimes L^{*}) = d - g + 1.$
The latter statement (expressed in terms of divisors) is in fact the original version of the theorem from the 19th century. This is the main tool used to analyze how a given curve can be embedded into projective space and hence to classify algebraic curves.
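As a quick check of the restated formula (a worked example added here, using only deg $K_X = 2g - 2$ and $h^0(X, \mathcal{O}_X) = 1$ for a connected projective curve), taking $L = K_X$ recovers the classical fact that a genus-g curve carries exactly g independent holomorphic 1-forms:
```latex
% Plugging L = K_X into  h^0(L) - h^0(K_X ⊗ L^{-1}) = deg(L) - g + 1:
\[
  h^0(X, K_X) - h^0(X, \mathcal{O}_X) = (2g - 2) - g + 1,
\]
% and since h^0(X, O_X) = 1,
\[
  h^0(X, K_X) = g.
\]
```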
Example: Every global section of a line bundle of negative degree is zero. Moreover, the degree of the canonical bundle is $2g - 2$. Therefore, Riemann–Roch implies that for a line bundle L of degree $d > 2g - 2$, $h^0(X, L)$ is equal to $d - g + 1$. When the genus g is at least 2, it follows by Serre duality that $h^1(X, T_X) = h^0(X, K_X^{\otimes 2}) = 3g - 3$. Here $H^1(X, T_X)$ is the first-order deformation space of X. This is the basic calculation needed to show that the moduli space of curves of genus g has dimension $3g - 3$.
Serre duality for coherent sheaves
Another formulation of Serre duality holds for all coherent sheaves, not just vector bundles. As a first step in generalizing Serre duality, Grothendieck showed that this version works for schemes with mild singularities, Cohen–Macaulay schemes, not just smooth schemes.
Namely, for a Cohen–Macaulay scheme X of pure dimension n over a field k, Grothendieck defined a coherent sheaf $\omega_X$ on X called the dualizing sheaf. (Some authors call this sheaf $K_X$.) Suppose in addition that X is proper over k. For a coherent sheaf E on X and an integer i, Serre duality says that there is a natural isomorphism:
$H^i(X, E) \cong \operatorname{Ext}^{n-i}_X(E, \omega_X)^{*}$
of finite-dimensional k-vector spaces. Here the Ext group is taken in the abelian category of $\mathcal{O}_X$-modules. This includes the previous statement, since $\operatorname{Ext}^{n-i}_X(E, \omega_X)$ is isomorphic to $H^{n-i}(X, E^{*} \otimes \omega_X)$ when E is a vector bundle.
In order to use this result, one has to determine the dualizing sheaf explicitly, at least in special cases. When X is smooth over k, $\omega_X$ is the canonical line bundle $K_X$ defined above. More generally, if X is a Cohen–Macaulay subscheme of codimension r in a smooth scheme Y over k, then the dualizing sheaf can be described as an Ext sheaf:
$\omega_X \cong \mathcal{E}xt^{\,r}_{\mathcal{O}_Y}(\mathcal{O}_X, \omega_Y).$
When X is a local complete intersection of codimension r in a smooth scheme Y, there is a more elementary description: the normal bundle of X in Y is a vector bundle of rank r, and the dualizing sheaf of X is given by:
$\omega_X \cong \omega_Y|_X \otimes \bigwedge^r N_{X/Y}.$
In this case, X is a Cohen–Macaulay scheme with $\omega_X$ a line bundle, which says that X is Gorenstein.
Example: Let X be a complete intersection in projective space $\mathbb{P}^n$ over a field k, defined by homogeneous polynomials $f_1, \ldots, f_r$ of degrees $d_1, \ldots, d_r$. (To say that this is a complete intersection means that X has dimension $n - r$.) There are line bundles O(d) on $\mathbb{P}^n$ for integers d, with the property that homogeneous polynomials of degree d can be viewed as sections of O(d). Then the dualizing sheaf of X is the line bundle:
$\omega_X = \mathcal{O}_X(d_1 + \cdots + d_r - n - 1),$
by the adjunction formula. For example, the dualizing sheaf of a plane curve X of degree d is $\mathcal{O}_X(d - 3)$.
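Combined with Serre duality on the curve, this recovers the classical degree–genus formula (a sketch added here; it uses the standard fact, not stated above, that sections of $\mathcal{O}_X(d-3)$ on a smooth plane curve of degree d are restrictions of homogeneous polynomials of degree d − 3):
```latex
% Genus of a smooth plane curve of degree d (sketch).
% g = h^1(X, O_X) = h^0(X, ω_X) by Serre duality, and ω_X = O_X(d - 3), so
\[
  g \;=\; h^0\big(X, \mathcal{O}_X(d-3)\big)
    \;=\; \binom{d-1}{2}
    \;=\; \frac{(d-1)(d-2)}{2}.
\]
```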
Complex moduli of Calabi–Yau threefolds
In particular, we can compute the number of complex deformations, equal to $\dim H^1(X, T_X)$ for a quintic threefold in $\mathbb{P}^4$, a Calabi–Yau variety, using Serre duality. Since the Calabi–Yau property ensures $K_X \cong \mathcal{O}_X$, Serre duality shows us that $H^1(X, T_X) \cong H^2(X, \Omega^1_X)^{*}$, showing that the number of complex moduli is equal to $h^{2,1}$ in the Hodge diamond. Of course, the last statement depends on the Bogomolov–Tian–Todorov theorem, which states that every deformation of a Calabi–Yau is unobstructed.
Grothendieck duality
Grothendieck's theory of coherent duality is a broad generalization of Serre duality, using the language of derived categories. For any scheme X of finite type over a field k, there is an object $\omega_X^{\bullet}$ of the bounded derived category of coherent sheaves on X, $D^{b}_{\mathrm{coh}}(X)$, called the dualizing complex of X over k. Formally, $\omega_X^{\bullet}$ is the exceptional inverse image $f^{!}(k)$, where f is the given morphism $X \to \operatorname{Spec}(k)$. When X is Cohen–Macaulay of pure dimension n, $\omega_X^{\bullet}$ is $\omega_X[n]$; that is, it is the dualizing sheaf discussed above, viewed as a complex in (cohomological) degree −n. In particular, when X is smooth over k, $\omega_X^{\bullet}$ is the canonical line bundle placed in degree −n.
Using the dualizing complex, Serre duality generalizes to any proper scheme X over k. Namely, there is a natural isomorphism of finite-dimensional k-vector spaces:
$\operatorname{Hom}_{D(X)}(E, \omega_X^{\bullet}) \cong \operatorname{Hom}_{D(X)}(\mathcal{O}_X, E)^{*}$
for any object E in $D^{b}_{\mathrm{coh}}(X)$.
More generally, for a proper scheme X over k, an object E in $D^{b}_{\mathrm{coh}}(X)$, and F a perfect complex in $D^{b}_{\mathrm{coh}}(X)$, one has the elegant statement:
$\operatorname{Hom}_{D(X)}(E, F \otimes \omega_X^{\bullet}) \cong \operatorname{Hom}_{D(X)}(F, E)^{*}.$
Here the tensor product means the derived tensor product, as is natural in derived categories. (To compare with previous formulations, note that $\operatorname{Ext}^i_X(E, \omega_X)$ can be viewed as $\operatorname{Hom}_{D(X)}(E, \omega_X[i])$.) When X is also smooth over k, every object in $D^{b}_{\mathrm{coh}}(X)$ is a perfect complex, and so this duality applies to all E and F in $D^{b}_{\mathrm{coh}}(X)$. The statement above is then summarized by saying that $F \mapsto F \otimes \omega_X^{\bullet}$ is a Serre functor on $D^{b}_{\mathrm{coh}}(X)$ for X smooth and proper over k.
Serre duality holds more generally for proper algebraic spaces over a field.
Notes
References
External links
Topological methods of algebraic geometry
Complex manifolds
Duality theories | Serre duality | [
"Mathematics"
] | 2,245 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
359,238 | https://en.wikipedia.org/wiki/Prescription%20drug | A prescription drug (also prescription medication, prescription medicine or prescription-only medication) is a pharmaceutical drug that is permitted to be dispensed only to those with a medical prescription. In contrast, over-the-counter drugs can be obtained without a prescription. The reason for this difference in substance control is the potential scope of misuse, from drug abuse to practicing medicine without a license and without sufficient education. Different jurisdictions have different definitions of what constitutes a prescription drug.
In North America, ℞, usually printed as "Rx", is used as an abbreviation of the word "prescription". It is a contraction of the Latin word "recipe" (an imperative form of "recipere") meaning "take". Prescription drugs are often dispensed together with a monograph (in Europe, a Patient Information Leaflet or PIL) that gives detailed information about the drug.
The use of prescription drugs has been increasing since the 1960s.
Regulation
Australia
In Australia, the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) governs the manufacture and supply of drugs with several categories:
Schedule 1 – Defunct Drug.
Schedule 2 – Pharmacy Medicine
Schedule 3 – Pharmacist-Only Medicine
Schedule 4 – Prescription-Only Medicine/Prescription Animal Remedy
Schedule 5 – Caution/Poison.
Schedule 6 – Poison
Schedule 7 – Dangerous Poison
Schedule 8 – Controlled Drug (Possession without authority illegal)
Schedule 9 – Prohibited Substance (Possession illegal without a license; legal only for research purposes)
Schedule 10 – Controlled Poison.
Unscheduled Substances.
As in other developed countries, the person requiring a prescription drug attends the clinic of a qualified health practitioner, such as a physician, who may write the prescription for the required drug.
Many prescriptions issued by health practitioners in Australia are covered by the Pharmaceutical Benefits Scheme, a scheme that provides subsidised prescription drugs to residents of Australia to ensure that all Australians have affordable and reliable access to a wide range of necessary medicines. When purchasing a drug under the PBS, the consumer pays no more than the patient co-payment contribution, which, as of January 1, 2022, is A$42.50 for general patients. Those covered by government entitlements (low-income earners, welfare recipients, Health Care Card holders, etc.) and or under the Repatriation Pharmaceutical Benefits Scheme (RPBS) have a reduced co-payment, which is A$6.80 in 2022. The co-payments are compulsory and can be discounted by pharmacies up to a maximum of A$1.00 at cost to the pharmacy.
United Kingdom
In the United Kingdom, the Medicines Act 1968 and the Prescription Only Medicines (Human Use) Order 1997 contain regulations that cover the supply of sale, use, prescribing and production of medicines. There are three categories of medicine:
Prescription-only medicines (POM), which may be dispensed (sold in the case of a private prescription) by a pharmacist only to those to whom they have been prescribed
Pharmacy medicines (P), which may be sold by a pharmacist without a prescription
General sales list (GSL) medicines, which may be sold without a prescription in any shop
The simple possession of a prescription-only medicine without a prescription is legal unless it is covered by the Misuse of Drugs Act 1971.
A patient visits a medical practitioner or dentist, who may prescribe drugs and certain other medical items, such as blood glucose-testing equipment for diabetics. Also, qualified and experienced nurses, paramedics and pharmacists may be independent prescribers. Both may prescribe all POMs (including controlled drugs), but may not prescribe Schedule 1 controlled drugs, nor the three listed controlled drugs for the treatment of addiction; this is similar to doctors, who require a special licence from the Home Office to prescribe Schedule 1 drugs. Schedule 1 drugs have little or no medical benefit, hence their limitations on prescribing. District nurses and health visitors have had limited prescribing rights since the mid-1990s; until then, prescriptions for dressings and simple medicines had to be signed by a doctor. Once issued, a prescription is taken by the patient to a pharmacy, which dispenses the medicine.
Most prescriptions are NHS prescriptions, subject to a standard charge that is unrelated to what is dispensed. The NHS prescription fee was increased to £9.90 for each item in England in May 2024; prescriptions are free of charge if prescribed and dispensed in Scotland, Wales and Northern Ireland, and for some patients in England, such as inpatients, children, those over 60 or with certain medical conditions, and claimants of certain benefits. The pharmacy charges the NHS the actual cost of the medicine, which may vary from a few pence to hundreds of pounds. A patient can consolidate prescription charges by using a prescription payment certificate (informally a "season ticket"), effectively capping costs at £31.25 a quarter or £111.60 for a year.
Outside the NHS, private prescriptions are issued by private medical practitioner and sometimes under the NHS for medicines that are not covered by the NHS. A patient pays the pharmacy the normal price for medicine prescribed outside the NHS.
Survey results published by Ipsos MORI in 2008 found that around 800,000 people in England were not collecting prescriptions or getting them dispensed because of the cost, the same as in 2001.
United States
In the United States, the Federal Food, Drug, and Cosmetic Act defines what substances, known as legend drugs, require a prescription for them to be dispensed by a pharmacy. The federal government authorizes physicians (of any specialty), physician assistants, nurse practitioners and other advanced practice nurses, veterinarians, dentists, and optometrists to prescribe any controlled substance. They are issued unique DEA numbers. Many other mental and physical health technicians, including basic-level registered nurses, medical assistants, emergency medical technicians, most psychologists, and social workers, are not authorized to prescribe legend drugs.
The federal Controlled Substances Act (CSA) was enacted in 1970. It regulates manufacture, importation, possession, use, and distribution of controlled substances, which are drugs with potential for abuse or addiction. The legislation classifies these drugs into five schedules, with varying qualifications for each schedule. The schedules are designated schedule I, schedule II, schedule III, schedule IV, and schedule V. Many drugs other than controlled substances require a prescription.
The safety and the effectiveness of prescription drugs in the US are regulated by the 1987 Prescription Drug Marketing Act (PDMA). The Food and Drug Administration (FDA) is charged with implementing the law.
As a general rule, over-the-counter drugs (OTC) are used to treat a condition that does not need care from a healthcare professional, if they have been proven to meet higher safety standards for self-medication by patients. Often, a lower strength of a drug will be approved for OTC use, but higher strengths require a prescription to be obtained; a notable case is ibuprofen, which has been widely available as an OTC painkiller since the mid-1980s, but is available by prescription in doses up to four times the OTC dose for severe pain that is not adequately controlled by the OTC strength.
Herbal preparations, amino acids, vitamins, minerals, and other food supplements are regulated by the FDA as dietary supplements. Because specific health claims cannot be made, the consumer must make informed decisions when purchasing such products.
By law, American pharmacies operated by "membership clubs" such as Costco and Sam's Club must allow non-members to use their pharmacy services and may not charge more for these services than they charge their members.
Physicians may legally prescribe drugs for uses other than those specified in the FDA approval, known as off-label use. Drug companies, however, are prohibited from marketing their drugs for off-label uses.
Some prescription drugs are commonly abused, particularly those marketed as analgesics, including fentanyl (Duragesic), hydrocodone (Vicodin), oxycodone (OxyContin), oxymorphone (Opana), propoxyphene (Darvon), hydromorphone (Dilaudid), meperidine (Demerol), and diphenoxylate (Lomotil).
Some prescription painkillers have been found to be addictive, and unintentional poisoning deaths in the United States have skyrocketed since the 1990s according to the National Safety Council. Prescriber education guidelines as well as patient education, prescription drug monitoring programs and regulation of pain clinics are regulatory tactics which have been used to curtail opioid use and misuse.
Expiration date
The expiration date, required in several countries, specifies the date up to which the manufacturer guarantees the full potency and safety of a drug. In the United States, expiration dates are determined by regulations established by the FDA. The FDA advises consumers not to use products after their expiration dates.
A study conducted by the U.S. Food and Drug Administration covered over 100 drugs, prescription and over-the-counter. The results showed that about 90% of them were safe and effective far past their original expiration date. At least one drug worked 15 years after its expiration date. Joel Davis, a former FDA expiration-date compliance chief, said that with a handful of exceptions—notably nitroglycerin, insulin, and some liquid antibiotics (outdated tetracyclines can cause Fanconi syndrome)—most expired drugs are probably effective.
The American Medical Association issued a report and statement on Pharmaceutical Expiration Dates. The Harvard Medical School Family Health Guide notes that, with rare exceptions, "it's true the effectiveness of a drug may decrease over time, but much of the original potency still remains even a decade after the expiration date".
The expiration date is the final day that the manufacturer guarantees the full potency and safety of a medication. Drug expiration dates exist on most medication labels, including prescription, over-the-counter and dietary supplements. U.S. pharmaceutical manufacturers are required by law to place expiration dates on prescription products prior to marketing. For legal and liability reasons, manufacturers will not make recommendations about the stability of drugs past the original expiration date.
Cost
Prices of prescription drugs vary widely around the world. Prescription costs for biosimilar and generic drugs are usually less than brand names, but the cost is different from one pharmacy to another.
As of 2022, some U.S. states have sought federal approval to buy drugs from Canada in order to lower prescription drug costs.
Generics undergo strict scrutiny to meet the equal efficacy, safety, dosage, strength, stability, and quality of brand name drugs. Generics are developed after the brand name has already been established, and so generic drug approval in many aspects has a shortened approval process because it replicates the brand name drug.
Brand name drugs cost more due to time, money, and resources that drug companies invest in them to conduct development, including clinical trials that the FDA requires for the drug to be marketed. Because drug companies have to invest more in research costs to do this, brand name drug prices are much higher when sold to consumers.
When the patent expires for a brand name drug, generic versions of that drug are produced by other companies and are sold for a lower price. By switching to generic prescription drugs, patients can save significant amounts of money: for example, one study by the FDA found savings of more than 52% in a consumer's overall prescription drug costs.
Strategies to limit drug prices in the United States
In the United States there are many resources available to patients to lower the costs of medication. These include copayments, coinsurance, and deductibles. The Medicaid Drug Rebate Program is another example.
Generic drug programs lower the amount of money patients have to pay when picking up their prescription at the pharmacy. As their name implies, they only cover generic drugs.
Co-pay assistance programs are programs that help patients lower the costs of specialty medications; i.e., medications that are on restricted formularies, have limited distribution, and/or have no generic version available. These medications can include drugs for HIV, hepatitis C, and multiple sclerosis. Patient Assistance Program Center (RxAssist) has a list of foundations that provide co-pay assistance programs. Co-pay assistance programs are for under-insured patients. Patients without insurance are not eligible for this resource; however, they may be eligible for patient assistance programs.
Patient assistance programs are funded by the manufacturer of the medication. Patients can often apply to these programs through the manufacturer's website. This type of assistance program is one of the few options available to uninsured patients.
The out-of-pocket cost for patients enrolled in co-pay assistance or patient assistance programs is $0. It is a major resource to help lower costs of medications; however, many providers and patients are not aware of these resources.
Environment
Traces of prescription drugs, including antibiotics, anti-convulsants, mood stabilizers and sex hormones, have been detected in drinking water. Pharmaceutically active compounds (PhACs) discarded from human therapy and their metabolites may not be eliminated entirely by sewage treatment plants and have been detected at low concentrations in surface waters downstream from those plants. The continuous discarding of incompletely treated water may interact with other environmental chemicals and lead to uncertain ecological effects. Due to most pharmaceuticals being highly soluble, fish and other aquatic organisms are susceptible to their effects. The long-term effects of pharmaceuticals in the environment may affect survival and reproduction of such organisms. Levels of medical drug waste in the water are, however, low enough that they are not a direct concern to human health, although processes such as biomagnification are potential human health concerns.
On the other hand, there is clear evidence of harm to aquatic animals and fauna. Recent advancements in technology have allowed scientists to detect smaller, trace quantities of pharmaceuticals in the ng/ml range. Despite being found at low concentrations, female hormonal contraceptives may cause feminizing effects on male vertebrate species, such as fish, frogs and crocodiles.
The FDA established guidelines in 2007 to inform consumers how they should dispose of prescription drugs. When medications do not include specific disposal instructions, patients should not flush medications down the toilet, but instead use medication take-back programs to reduce the amount of pharmaceutical waste in sewage and landfills. If no take-back programs are available, prescription drugs can be discarded in household trash after they are crushed or dissolved and then mixed in a separate container or sealable bag with undesirable substances like cat litter or other unappealing material (to discourage consumption).
See also
U.S. Controlled Substances Act
Co-pay card
Classification of Pharmaco-Therapeutic Referrals
Drug policy – policy regulating drugs considered dangerous, rather than only medicinal
Inverse benefit law
List of pharmaceutical companies
Package insert
Pharmacy (shop)
Pharmacy automation
Pill splitting
Prescription drug prices in the United States
Regulation of therapeutic goods
References
Pharmaceuticals policy
Prescription of drugs
Pharmacy | Prescription drug | [
"Chemistry"
] | 3,139 | [
"Pharmacology",
"Pharmacy"
] |
359,250 | https://en.wikipedia.org/wiki/Human-readable%20medium%20and%20data | In computing, a human-readable medium or human-readable format is any encoding of data or information that can be naturally read by humans, resulting in human-readable data. It is often encoded as ASCII or Unicode text, rather than as binary data.
In most contexts, the alternative to a human-readable representation is a machine-readable format or medium of data primarily designed for reading by electronic, mechanical or optical devices, or computers. For example, Universal Product Code (UPC) barcodes are very difficult to read for humans, but very effective and reliable with the proper equipment, whereas the strings of numerals that commonly accompany the label are the human-readable form of the barcode information. Since any type of data encoding can be parsed by a suitably programmed computer, the decision to use binary encoding rather than text encoding is usually made to conserve storage space. Encoding data in a binary format typically requires fewer bytes of storage and increases efficiency of access (input and output) by eliminating format parsing or conversion.
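As a rough illustration of that storage trade-off (a hypothetical record, not taken from any cited source), the same three fields can be serialized as human-readable JSON text or as a fixed-layout binary struct; the binary form is smaller and needs no text parsing, while the JSON form can be read directly by a person:
```python
import json
import struct

# A small record: product id, price in cents, quantity on hand (hypothetical data).
record = {"id": 4711, "price_cents": 1999, "qty": 12}

# Human-readable encoding: JSON text (ASCII/Unicode).
text_form = json.dumps(record).encode("utf-8")

# Machine-oriented encoding: three little-endian unsigned 32-bit integers.
binary_form = struct.pack("<III", record["id"], record["price_cents"], record["qty"])

print(text_form)             # b'{"id": 4711, "price_cents": 1999, "qty": 12}'
print(len(text_form))        # ~44 bytes, readable as-is
print(binary_form.hex())     # 12 bytes of packed data; needs a known layout to decode
print(struct.unpack("<III", binary_form))  # (4711, 1999, 12)
```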
With the advent of standardized, highly structured markup languages, such as Extensible Markup Language (XML), the decreasing costs of data storage, and faster and cheaper data communication networks, compromises between human-readability and machine-readability are now more common-place than they were in the past. This has led to humane markup languages and modern configuration file formats that are far easier for humans to read.
In addition, these structured representations can be compressed very effectively for transmission or storage.
Human-readable protocols greatly reduce the cost of debugging.
Various organizations have standardized the definition of human-readable and machine-readable data and how they are applied in their respective fields of application, e.g., the Universal Postal Union.
Often the term human-readable is also used to describe shorter names or strings that are easier to comprehend or to remember than long, complex syntax notations, such as some Uniform Resource Locator strings.
Occasionally "human-readable" is used to describe ways of encoding an arbitrary integer into a long series of English words.
Compared to decimal or other compact binary-to-text encoding systems,
English words are easier for humans to read, remember, and type in.
See also
Self-documenting code – source code that is both machine-readable and human-readable
Human-readable code
Machine-Readable Documents
Machine-readable data
Data (computing)
Data conversion
Hellschreiber
Human–computer interaction
Human factors
Plain text
Quoted printable
References
Optical character recognition | Human-readable medium and data | [
"Technology"
] | 531 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
359,281 | https://en.wikipedia.org/wiki/Phase%20detector | A phase detector or phase comparator is a frequency mixer, analog multiplier or logic circuit that generates a signal which represents the difference in phase between two signal inputs.
The phase detector is an essential element of the phase-locked loop (PLL). Detecting phase difference is important in other applications, such as motor control, radar and telecommunication systems, servo mechanisms, and demodulators.
Types
Phase detectors for phase-locked loop circuits may be classified in two types. A Type I detector is designed to be driven by analog signals or square-wave digital signals and produces an output pulse at the difference frequency. The Type I detector always produces an output waveform, which must be filtered to control the phase-locked loop voltage-controlled oscillator (VCO). A Type II detector is sensitive only to the relative timing of the edges of the input and reference pulses and produces a constant output proportional to phase difference when both signals are at the same frequency. This output will tend not to produce ripple in the control voltage of the VCO.
Analog phase detector
The phase detector needs to compute the phase difference of its two input signals. Let α be the phase of the first input and β be the phase of the second. The actual input signals to the phase detector, however, are not α and β, but rather sinusoids such as sin(α) and cos(β). In general, computing the phase difference would involve computing the arcsine and arccosine of each normalized input (to get an ever-increasing phase) and doing a subtraction. Such an analog calculation is difficult. Fortunately, the calculation can be simplified by using some approximations.
Assume that the phase differences will be small (much less than 1 radian, for example). The small-angle approximation for the sine function and the sine angle addition formula yield:
$\alpha - \beta \approx \sin(\alpha - \beta) = \sin\alpha\,\cos\beta - \cos\alpha\,\sin\beta.$
The expression suggests a quadrature phase detector can be made by summing the outputs of two multipliers. The quadrature signals may be formed with phase shift networks. Two common implementations for multipliers are the double-balanced diode ring mixer and the four-quadrant multiplier (Gilbert cell).
Instead of using two multipliers, a more common phase detector uses a single multiplier and a different trigonometric identity:
$\sin\alpha\,\cos\beta = \tfrac{1}{2}\big[\sin(\alpha - \beta) + \sin(\alpha + \beta)\big].$
The first term provides the desired phase difference. The second term is a sinusoid at twice the reference frequency, so it can be filtered out. In the case of general waveforms the phase detector output is described with the phase detector characteristic.
A mixer-based detector (e.g., a Schottky diode-based double-balanced mixer) provides "the ultimate in phase noise floor performance" and "in system sensitivity", since it does not create finite pulse widths at the phase detector output. Another advantage of a mixer-based PD is its relative simplicity. Both the quadrature and simple multiplier phase detectors have an output that depends on the input amplitudes as well as the phase difference. In practice, the amplitudes of the input signals are normalized prior to input into the detector to remove the amplitude dependency.
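A small numerical sketch of the multiplier behaviour described above (illustrative only; the signal names and values are made up): multiplying two equal-frequency sinusoids and then low-pass filtering (here, simply averaging over whole cycles) leaves a DC term proportional to the cosine of the phase difference.
```python
import numpy as np

f_ref = 1_000.0                      # reference frequency in Hz (arbitrary choice)
fs = 1_000_000.0                     # sample rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)     # 10 ms of samples = 10 whole reference cycles

def multiplier_pd_output(phase_error_rad: float) -> float:
    """DC output of an ideal multiplier phase detector followed by averaging."""
    ref = np.sin(2 * np.pi * f_ref * t)
    sig = np.sin(2 * np.pi * f_ref * t + phase_error_rad)
    product = ref * sig              # contains 0.5*cos(dphi) plus a 2*f_ref term
    return product.mean()            # averaging whole cycles removes the 2*f_ref term

for dphi_deg in (0, 45, 90, 135, 180):
    dphi = np.deg2rad(dphi_deg)
    print(dphi_deg, round(multiplier_pd_output(dphi), 4),
          round(0.5 * np.cos(dphi), 4))  # measured vs expected 0.5*cos(dphi)
```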
Digital phase detector
A phase detector suitable for square wave signals can be made from an exclusive-OR (XOR) logic gate. When the two signals being compared are completely in-phase, the XOR gate's output will have a constant level of zero. When the two signals differ in phase by 1°, the XOR gate's output will be high for 1/180th of each cycle — the fraction of a cycle during which the two signals differ in value. When the signals differ by 180° — that is, one signal is high when the other is low, and vice versa — the XOR gate's output remains high throughout each cycle. This phase detector requires inputs that are symmetrical square waves, or nearly so.
The XOR detector compares well to the analog mixer in that it locks near a 90° phase difference and has a pulse wave output at twice the reference frequency. The output changes duty cycle in proportion to the phase difference. Applying the XOR gate's output to a low-pass filter results in an analog voltage that is proportional to the phase difference between the two signals. The remainder of its characteristics are very similar to the analog mixer for capture range, lock time, reference spurious and low-pass filter requirements.
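A companion sketch for the XOR detector (again illustrative; the function and parameter names are made up): with two symmetrical square waves, the fraction of each cycle for which the XOR output is high grows linearly from 0 at 0° to 1 at 180°, so its average tracks the phase difference.
```python
import numpy as np

def xor_pd_duty_cycle(phase_deg: float, samples_per_cycle: int = 3600) -> float:
    """Average (duty cycle) of the XOR of two symmetrical square waves vs. phase offset."""
    phase = np.arange(samples_per_cycle) / samples_per_cycle      # one full cycle, [0, 1)
    a = (phase % 1.0) < 0.5                                       # reference square wave
    b = ((phase + phase_deg / 360.0) % 1.0) < 0.5                 # phase-shifted square wave
    return float(np.mean(a ^ b))                                  # fraction of time XOR is high

for deg in (0, 1, 45, 90, 180):
    print(deg, xor_pd_duty_cycle(deg))   # 0 deg -> 0.0, 1 deg -> ~1/180, 90 deg -> 0.5, 180 deg -> 1.0
```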
Digital phase detectors can also be based on a sample and hold circuit, a charge pump, or a logic circuit consisting of flip-flops. When a phase detector based on logic gates is used in a PLL, it can quickly force the VCO to synchronize with an input signal, even when the frequency of the input signal differs substantially from the initial frequency of the VCO. Such phase detectors also have other desirable properties, such as better accuracy when there are only small phase differences between the two signals being compared and superior pull-in range.
Phase frequency detector
A phase frequency detector (PFD) is an asynchronous circuit originally made of four flip-flops (i.e., the phase-frequency detectors found in both the RCA CD4046 and the Motorola MC4344 ICs introduced in the 1970s). The logic determines which of the two signals has a zero-crossing earlier or more often. When used in a PLL application, lock can be achieved even when it is off frequency.
The PFD improves the pull-in range and lock time over simpler phase detector designs such as multipliers or XOR gates. Those designs work well when the two input phases are already near lock or in lock, but perform poorly when the phase difference is too large. When the phase difference is too large (which will happen when the instantaneous frequency difference is large), then the sign of the loop gain can reverse and start driving the VCO away from lock. The PFD has the advantage of producing an output even when the two signals being compared differ not only in phase but in frequency. A phase frequency detector prevents a false lock condition in PLL applications, in which the PLL synchronizes with the wrong phase of the input signal or with the wrong frequency (e.g., a harmonic of the input signal).
A bang-bang charge pump phase frequency detector supplies current pulses with fixed total charge, either positive or negative, to the capacitor acting as an integrator. A phase detector for a bang-bang charge pump must always have a dead band where the phases of inputs are close enough that the detector fires either both or neither of the charge pumps, for no total effect. Bang-bang phase detectors are simple but are associated with significant minimum peak-to-peak jitter, because of drift within the dead band.
In 1976 it was shown that by using a three-state phase frequency detector configuration (using only two flip-flops) instead of the original RCA/Motorola four flip-flops configurations, this problem could be elegantly overcome. For other types of phase-frequency detectors other, though possibly less-elegant, solutions exist to the dead zone phenomenon. Other solutions are necessary since the three-state phase-frequency detector does not work for certain applications involving randomized signal degradation, which can be found on the inputs to some signal regeneration systems (e.g., clock recovery designs).
A proportional phase detector employs a charge pump that supplies charge amounts in proportion to the phase error detected. Some have dead bands and some do not. Specifically, some designs produce both up and down control pulses even when the phase difference is zero. These pulses are small, nominally the same duration, and cause the charge pump to produce equal-charge positive and negative current pulses when the phase is perfectly matched. Phase detectors with this kind of control system don't exhibit a dead band and typically have lower minimum peak-to-peak jitter when used in PLLs.
In PLL applications it is frequently required to know when the loop is out of lock. The more complex digital phase-frequency detectors usually have an output that allows a reliable indication of an out-of-lock condition.
Electronic phase detector
Some signal processing techniques such as those used in radar may require both the amplitude and the phase of a signal, to recover all the information encoded in that signal. One technique is to feed an amplitude-limited signal into one port of a product detector and a reference signal into the other port; the output of the detector will represent the phase difference between the signals.
Optical phase detectors
In optics phase detectors are also known as interferometers. For pulsed (amplitude modulated) light, it is said to measure the phase between the carriers. It is also possible to measure the delay between the envelopes of two short optical pulses by means of cross correlation in a nonlinear crystal. And it is possible to measure the phase between the envelope and the carrier of an optical pulse, by sending a pulse into a nonlinear crystal. There the spectrum gets wider and at the edges the shape depends significantly on the phase.
See also
Carrier recovery
Differential amplifier
References
Further reading
External links
Chapter 8 Modulators and Demodulators
Phase-Lock Loop Applications Using the MAX9382
Phase-Lock Loop Phase Detectors
Electronic circuits
Communication circuits
Analog circuits | Phase detector | [
"Engineering"
] | 1,897 | [
"Telecommunications engineering",
"Analog circuits",
"Electronic circuits",
"Electronic engineering",
"Communication circuits"
] |
359,377 | https://en.wikipedia.org/wiki/Abstract%20structure | An abstract structure is an abstraction that might be of geometric spaces or of a set structure, or a hypostatic abstraction that is defined by a set of mathematical theorems and laws, properties and relationships in a way that is logically if not always historically independent of the structure of contingent experiences, for example, those involving physical objects. Abstract structures are studied not only in logic and mathematics but in the fields that apply them, such as computer science and computer graphics, and in the studies that reflect on them, such as philosophy (especially the philosophy of mathematics). Indeed, modern mathematics has been defined in a very general sense as the study of abstract structures (by the Bourbaki group: see discussion there, at algebraic structure and also structure).
An abstract structure may be represented (perhaps with some degree of approximation) by one or more physical objects; this is called an implementation or instantiation of the abstract structure. But the abstract structure itself is defined in a way that is not dependent on the properties of any particular implementation.
An abstract structure has a richer structure than a concept or an idea. An abstract structure must include precise rules of behaviour which can be used to determine whether a candidate implementation actually matches the abstract structure in question, and it must be free from contradictions. Thus we may debate how well a particular government fits the concept of democracy, but there is no room for debate over whether a given sequence of moves is or is not a valid game of chess (for example Kasparovian approaches).
Examples
A sorting algorithm is an abstract structure, but a recipe is not, because it depends on the properties and quantities of its ingredients.
A simple melody is an abstract structure, but an orchestration is not, because it depends on the properties of particular instruments.
Euclidean geometry is an abstract structure, but the theory of continental drift is not, because it depends on the geology of the Earth.
A formal language is an abstract structure, but a natural language is not, because its rules of grammar and syntax are open to debate and interpretation.
Notes
See also
Abstraction in computer science
Abstraction in general
Abstraction in mathematics
Abstract object
Deductive apparatus
Formal sciences
Mathematical structure
Abstraction
Mathematical terminology
Structure | Abstract structure | [
"Mathematics"
] | 451 | [
"nan"
] |
359,586 | https://en.wikipedia.org/wiki/List%20of%20mathematical%20examples | This page will attempt to list examples in mathematics. To qualify for inclusion, an article should be about a mathematical object with a fair amount of concreteness. Usually a definition of an abstract concept, a theorem, or a proof would not be an "example" as the term should be understood here (an elegant proof of an isolated but particularly striking fact, as opposed to a proof of a general theorem, could perhaps be considered an "example"). The discussion page for list of mathematical topics has some comments on this. Eventually this page may have its own discussion page. This page links to itself in order that edits to this page will be included among related changes when the user clicks on that button.
The concrete example within the article titled Rao-Blackwell theorem is perhaps one of the best ways for a probabilist ignorant of statistical inference to get a quick impression of the flavor of that subject.
Uncategorized examples, alphabetized
Alexander horned sphere
All horses are the same color
Cantor function
Cantor set
Checking if a coin is biased
Concrete illustration of the central limit theorem
Differential equations of mathematical physics
Dirichlet function
Discontinuous linear map
Efron's non-transitive dice
Example of a game without a value
Examples of contour integration
Examples of differential equations
Examples of generating functions
List of space groups
Examples of Markov chains
Examples of vector spaces
Fano plane
Frieze group
Gray graph
Hall–Janko graph
Higman–Sims graph
Hilbert matrix
Illustration of a low-discrepancy sequence
Illustration of the central limit theorem
An infinitely differentiable function that is not analytic
Leech lattice
Lewy's example on PDEs
List of finite simple groups
Long line
Normally distributed and uncorrelated does not imply independent
Pairwise independence of random variables need not imply mutual independence.
Petersen graph
Sierpinski space
Simple example of Azuma's inequality for coin flips
Proof that 22/7 exceeds π
Solenoid (mathematics)
Sorgenfrey plane
Stein's example
Three cards and a top hat
Topologist's sine curve
Tsirelson space
Tutte eight cage
Weierstrass function
Wilkinson's polynomial
Wallpaper group
Uses of trigonometry (The "examples" in that article are not mathematical objects, i.e., numbers, functions, equations, sets, etc., but applications of trigonometry or scientific fields to which trigonometry is applied.)
Specialized lists of mathematical examples
List of algebraic surfaces
List of curves
List of complexity classes
List of examples in general topology
List of finite simple groups
List of Fourier-related transforms
List of mathematical functions
List of knots
List of mathematical knots and links
List of manifolds
List of mathematical shapes
List of matrices
List of numbers
List of polygons, polyhedra and polytopes
List of prime numbers —not merely a numerical table, but a list of various kinds of prime numbers, each with a table
List of regular polytopes
List of surfaces
List of small groups
Table of Lie groups
Sporadic groups
See also list of finite simple groups.
Baby Monster group
Conway group
Fischer groups
Harada–Norton group
Held group
Higman–Sims group
Janko groups
Lyons group
The Mathieu groups
McLaughlin group
Monster group
O'Nan group
Rudvalis group
Suzuki sporadic group
Thompson group
See also
Counterexample
List of examples in general topology
Examples | List of mathematical examples | [
"Mathematics"
] | 675 | [
"nan"
] |
359,626 | https://en.wikipedia.org/wiki/Second%20Industrial%20Revolution | The Second Industrial Revolution, also known as the Technological Revolution, was a phase of rapid scientific discovery, standardisation, mass production and industrialisation from the late 19th century into the early 20th century. The First Industrial Revolution, which ended in the middle of the 19th century, was punctuated by a slowdown in important inventions before the Second Industrial Revolution in 1870. Though a number of its events can be traced to earlier innovations in manufacturing, such as the establishment of a machine tool industry, the development of methods for manufacturing interchangeable parts, as well as the invention of the Bessemer process and open hearth furnace to produce steel, later developments heralded the Second Industrial Revolution, which is generally dated between 1870 and 1914 (the beginning of World War I).
Advancements in manufacturing and production technology enabled the widespread adoption of technological systems such as telegraph and railroad networks, gas and water supply, and sewage systems, which had earlier been limited to a few select cities. The enormous expansion of rail and telegraph lines after 1870 allowed unprecedented movement of people and ideas, which culminated in a new wave of colonialism and globalization. In the same time period, new technological systems were introduced, most significantly electrical power and telephones. The Second Industrial Revolution continued into the 20th century with early factory electrification and the production line; it ended at the beginning of World War I.
The Information Age, which began around 1947, is sometimes also called the Third Industrial Revolution.
Overview
The Second Industrial Revolution was a period of rapid industrial development, primarily in the United Kingdom, Germany, and the United States, but also in France, the Low Countries, Italy and Japan. It followed on from the First Industrial Revolution that began in Britain in the late 18th century that then spread throughout Western Europe. It came to an end with the start of the World War I. While the First Revolution was driven by limited use of steam engines, interchangeable parts and mass production, and was largely water-powered, especially in the United States, the Second was characterized by the build-out of railroads, large-scale iron and steel production, widespread use of machinery in manufacturing, greatly increased use of steam power, widespread use of the telegraph, use of petroleum and the beginning of electrification. It also was the period during which modern organizational methods for operating large-scale businesses over vast areas came into use.
The concept was introduced by Patrick Geddes, Cities in Evolution (1910), and was being used by economists such as Erich Zimmermann (1951), but David Landes' use of the term in a 1966 essay and in The Unbound Prometheus (1972) standardized scholarly definitions of the term, which was most intensely promoted by Alfred Chandler (1918–2007). However, some continue to express reservations about its use. In 2003, Landes stressed the importance of new technologies, especially the internal combustion engine, petroleum, new materials and substances, including alloys and chemicals, electricity and communication technologies, such as the telegraph, telephone, and radio.
One author has called the period from 1867 to 1914, during which most of the great innovations were developed, "The Age of Synergy" since the inventions and innovations were engineering and science-based.
Industry and technology
A synergy between iron and steel, railroads and coal developed at the beginning of the Second Industrial Revolution. Railroads allowed cheap transportation of materials and products, which in turn led to cheap rails to build more roads. Railroads also benefited from cheap coal for their steam locomotives. This synergy led to the laying of 75,000 miles of track in the U.S. in the 1880s, the largest amount anywhere in world history.
Iron
The hot blast technique, in which the hot flue gas from a blast furnace is used to preheat combustion air blown into a blast furnace, was invented and patented by James Beaumont Neilson in 1828 at Wilsontown Ironworks in Scotland. Hot blast was the single most important advance in fuel efficiency of the blast furnace as it greatly reduced the fuel consumption for making pig iron, and was one of the most important technologies developed during the Industrial Revolution. Falling costs for producing wrought iron coincided with the emergence of the railway in the 1830s.
The early technique of hot blast used iron for the regenerative heating medium. Iron caused problems with expansion and contraction, which stressed the iron and caused failure. Edward Alfred Cowper developed the Cowper stove in 1857. This stove used firebrick as a storage medium, solving the expansion and cracking problem. The Cowper stove was also capable of producing high heat, which resulted in very high throughput of blast furnaces. The Cowper stove is still used in today's blast furnaces.
With the greatly reduced cost of producing pig iron with coke using hot blast, demand grew dramatically and so did the size of blast furnaces.
Steel
The Bessemer process, invented by Sir Henry Bessemer, allowed the mass-production of steel, increasing the scale and speed of production of this vital material, and decreasing the labor requirements. The key principle was the removal of excess carbon and other impurities from pig iron by oxidation with air blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten.
The "acid" Bessemer process had a serious limitation in that it required relatively scarce hematite ore which is low in phosphorus. Sidney Gilchrist Thomas developed a more sophisticated process to eliminate the phosphorus from iron. Collaborating with his cousin, Percy Gilchrist a chemist at the Blaenavon Ironworks, Wales, he patented his process in 1878; Bolckow Vaughan & Co. in Yorkshire was the first company to use his patented process. His process was especially valuable on the continent of Europe, where the proportion of phosphoric iron was much greater than in England, and both in Belgium and in Germany the name of the inventor became more widely known than in his own country. In America, although non-phosphoric iron largely predominated, an immense interest was taken in the invention.
The next great advance in steel making was the Siemens–Martin process. Sir Charles William Siemens developed his regenerative furnace in the 1850s, for which he claimed in 1857 to be able to recover enough heat to save 70–80% of the fuel. The furnace operated at a high temperature by using regenerative preheating of fuel and air for combustion. Through this method, an open-hearth furnace can reach temperatures high enough to melt steel, but Siemens did not initially use it in that manner.
French engineer Pierre-Émile Martin was the first to take out a license for the Siemens furnace and apply it to the production of steel in 1865. The Siemens–Martin process complemented rather than replaced the Bessemer process. Its main advantages were that it did not expose the steel to excessive nitrogen (which would cause the steel to become brittle), it was easier to control, and that it permitted the melting and refining of large amounts of scrap steel, lowering steel production costs and recycling an otherwise troublesome waste material. It became the leading steel making process by the early 20th century.
The availability of cheap steel allowed building larger bridges, railroads, skyscrapers, and ships. Other important steel products—also made using the open hearth process—were steel cable, steel rod and sheet steel which enabled large, high-pressure boilers and high-tensile strength steel for machinery which enabled much more powerful engines, gears and axles than were previously possible. With large amounts of steel it became possible to build much more powerful guns and carriages, tanks, armored fighting vehicles and naval ships.
Rail
The increase in steel production from the 1860s meant that railways could finally be made from steel at a competitive cost. Being a much more durable material, steel steadily replaced iron as the standard for railway rail, and due to its greater strength, longer lengths of rails could now be rolled. Wrought iron was soft and contained flaws caused by included dross. Iron rails could also not support heavy locomotives and were damaged by hammer blow. The first to make durable rails of steel rather than wrought iron was Robert Forester Mushet at the Darkhill Ironworks, Gloucestershire in 1857.
The first of Mushet's steel rails was sent to Derby Midland railway station. The rails were laid at part of the station approach where the iron rails had to be renewed at least every six months, and occasionally every three. Six years later, in 1863, the rail seemed as perfect as ever, although some 700 trains had passed over it daily. This provided the basis for the accelerated construction of railways throughout the world in the late nineteenth century.
The first commercially available steel rails in the US were manufactured in 1867 at the Cambria Iron Works in Johnstown, Pennsylvania.
Steel rails lasted over ten times longer than did iron, and with the falling cost of steel, heavier weight rails were used. This allowed the use of more powerful locomotives, which could pull longer trains, and longer rail cars, all of which greatly increased the productivity of railroads. Rail became the dominant form of transport infrastructure throughout the industrialized world, producing a steady decrease in the cost of shipping seen for the rest of the century.
Electrification
The theoretical and practical basis for the harnessing of electric power was laid by the scientist and experimentalist Michael Faraday. Through his research on the magnetic field around a conductor carrying a direct current, Faraday established the basis for the concept of the electromagnetic field in physics. His inventions of electromagnetic rotary devices were the foundation of the practical use of electricity in technology.
In 1881, Sir Joseph Swan, inventor of the first feasible incandescent light bulb, supplied about 1,200 Swan incandescent lamps to the Savoy Theatre in the City of Westminster, London, which was the first theatre, and the first public building in the world, to be lit entirely by electricity. Swan's lightbulb had already been used in 1879 to light Mosley Street, in Newcastle upon Tyne, the first electrical street lighting installation in the world. This set the stage for the electrification of industry and the home. The first large scale central distribution supply plant was opened at Holborn Viaduct in London in 1882 and later at Pearl Street Station in New York City.
The first modern power station in the world was built by the English electrical engineer Sebastian de Ferranti at Deptford. Built on an unprecedented scale and pioneering the use of high voltage (10,000V) alternating current, it generated 800 kilowatts and supplied central London. On its completion in 1891 it supplied high-voltage AC power that was then "stepped down" with transformers for consumer use on each street. Electrification allowed the final major developments in manufacturing methods of the Second Industrial Revolution, namely the assembly line and mass production.
Electrification was called "the most important engineering achievement of the 20th century" by the National Academy of Engineering. Electric lighting in factories greatly improved working conditions, eliminating the heat and pollution caused by gas lighting, and reducing the fire hazard to the extent that the cost of electricity for lighting was often offset by the reduction in fire insurance premiums. Frank J. Sprague developed the first successful DC motor in 1886. By 1889, 110 electric street railways were either using his equipment or in planning. The electric street railway became a major infrastructure before 1920. The AC motor (induction motor) was developed in the 1890s and soon began to be used in the electrification of industry. Household electrification did not become common until the 1920s, and then only in cities. Fluorescent lighting was commercially introduced at the 1939 World's Fair.
Electrification also allowed the inexpensive production of electro-chemicals, such as aluminium, chlorine, sodium hydroxide, and magnesium.
Machine tools
The use of machine tools began with the onset of the First Industrial Revolution. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron—and hand working lacked precision and was a slow and expensive process. One of the first machine tools was John Wilkinson's boring machine, that bored a precise hole in James Watt's first steam engine in 1774. Advances in the accuracy of machine tools can be traced to Henry Maudslay and refined by Joseph Whitworth. Standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity.
In 1841, Joseph Whitworth created a design that, through its adoption by many British railway companies, became the world's first national machine tool standard called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards.
The importance of machine tools to mass production is shown by the fact that production of the Ford Model T used 32,000 machine tools, most of which were powered by electricity. Henry Ford is quoted as saying that mass production would not have been possible without electricity because it allowed placement of machine tools and other equipment in the order of the work flow.
Paper making
The first paper making machine was the Fourdrinier machine, built by Sealy and Henry Fourdrinier, stationers in London. In 1800, Matthias Koops, working in London, investigated the idea of using wood to make paper, and began his printing business a year later. However, his enterprise was unsuccessful due to the prohibitive cost at the time.
It was in the 1840s that Charles Fenerty in Nova Scotia and Friedrich Gottlob Keller in Saxony both invented a successful machine which extracted the fibres from wood (as with rags) and from it made paper. This started a new era for paper making and, together with the invention of the fountain pen and the mass-produced pencil of the same period, and in conjunction with the advent of the steam driven rotary printing press, wood based paper caused a major transformation of the 19th century economy and society in industrialized countries. With the introduction of cheaper paper, schoolbooks, fiction, non-fiction, and newspapers became gradually available by 1900. Cheap wood based paper also allowed keeping personal diaries or writing letters and so, by 1850, the clerk, or writer, ceased to be a high-status job. By the 1880s chemical processes for paper manufacture were in use, becoming dominant by 1900.
Petroleum
The petroleum industry, both production and refining, began in 1848 with the first oil works in Scotland. The chemist James Young set up a tiny business refining the crude oil in 1848. Young found that by slow distillation he could obtain a number of useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. In 1850 Young built the first truly commercial oil-works and oil refinery in the world at Bathgate, using oil extracted from locally mined torbanite, shale, and bituminous coal to manufacture naphtha and lubricating oils; paraffin for fuel use and solid paraffin were not sold till 1856.
Cable tool drilling was developed in ancient China and was used for drilling brine wells. The salt domes also held natural gas, which some wells produced and which was used for evaporation of the brine. Chinese well drilling technology was introduced to Europe in 1828.
Although there were many efforts in the mid-19th century to drill for oil, Edwin Drake's 1859 well near Titusville, Pennsylvania, is considered the first "modern oil well". Drake's well touched off a major boom in oil production in the United States. Drake learned of cable tool drilling from Chinese laborers in the U. S. The first primary product was kerosene for lamps and heaters. Similar developments around Baku fed the European market.
Kerosene lighting was much more efficient and less expensive than vegetable oils, tallow and whale oil. Although town gas lighting was available in some cities, kerosene produced a brighter light until the invention of the gas mantle. Both were replaced by electricity for street lighting following the 1890s and for households during the 1920s. Gasoline was an unwanted byproduct of oil refining until automobiles were mass-produced after 1914, and gasoline shortages appeared during World War I. The invention of the Burton process for thermal cracking doubled the yield of gasoline, which helped alleviate the shortages.
Chemical
Synthetic dye was discovered by English chemist William Henry Perkin in 1856. At the time, chemistry was still in a quite primitive state; it was still a difficult proposition to determine the arrangement of the elements in compounds and chemical industry was still in its infancy. Perkin's accidental discovery was that aniline could be partly transformed into a crude mixture which when extracted with alcohol produced a substance with an intense purple colour. He scaled up production of the new "mauveine", and commercialized it as the world's first synthetic dye.
After the discovery of mauveine, many new aniline dyes appeared (some discovered by Perkin himself), and factories producing them were constructed across Europe. Towards the end of the century, Perkin and other British companies found their research and development efforts increasingly eclipsed by the German chemical industry which became world dominant by 1914.
Maritime technology
This era saw the birth of the modern ship as disparate technological advances came together.
The screw propeller was introduced in 1835 by Francis Pettit Smith, who discovered a new way of building propellers by accident. Up to that time, propellers were literally screws, of considerable length. But during the testing of a boat propelled by one, the screw snapped off, leaving a fragment shaped much like a modern boat propeller. The boat moved faster with the broken propeller. The superiority of screw over paddles was taken up by navies. Trials with Smith's SS Archimedes, the first steam-driven screw ship, led to the famous tug-of-war competition in 1845 between the screw-driven HMS Rattler and the paddle steamer HMS Alecto, with the former pulling the latter backward at 2.5 knots (4.6 km/h).
The first seagoing iron steamboat was built by Horseley Ironworks and named the Aaron Manby. It also used an innovative oscillating engine for power. The boat was built at Tipton using temporary bolts, disassembled for transportation to London, and reassembled on the Thames in 1822, this time using permanent rivets.
Other technological developments followed, including the invention of the surface condenser, which allowed boilers to run on purified water rather than salt water, eliminating the need to stop to clean them on long sea journeys. The Great Western, built by engineer Isambard Kingdom Brunel, was the longest ship in the world and the first to prove that transatlantic steamship services were viable. The ship was constructed mainly from wood, but Brunel added bolts and iron diagonal reinforcements to maintain the keel's strength. In addition to its steam-powered paddle wheels, the ship carried four masts for sails.
Brunel followed this up with the Great Britain, launched in 1843 and considered the first modern ship built of metal rather than wood, powered by an engine rather than wind or oars, and driven by propeller rather than paddle wheel. Brunel's vision and engineering innovations made the building of large-scale, propeller-driven, all-metal steamships a practical reality, but the prevailing economic and industrial conditions meant that it would be several decades before transoceanic steamship travel emerged as a viable industry.
Highly efficient multiple expansion steam engines began being used on ships, allowing them to carry less coal than freight. The oscillating engine was first built by Aaron Manby and Joseph Maudslay in the 1820s as a type of direct-acting engine that was designed to achieve further reductions in engine size and weight. Oscillating engines had the piston rods connected directly to the crankshaft, dispensing with the need for connecting rods. To achieve this aim, the engine cylinders were not immobile as in most engines, but secured in the middle by trunnions which allowed the cylinders themselves to pivot back and forth as the crankshaft rotated, hence the term oscillating.
It was John Penn, engineer for the Royal Navy, who perfected the oscillating engine. One of his earliest engines was the grasshopper beam engine. In 1844 he replaced the engines of the Admiralty yacht with oscillating engines of double the power, without increasing either the weight or space occupied, an achievement which broke the naval supply dominance of Boulton & Watt and Maudslay, Son & Field. Penn also introduced the trunk engine for driving screw propellers in vessels of war. The first ships to be fitted with such engines were launched in 1846 and 1848, and such was their efficacy that by the time of Penn's death in 1878 the engines had been fitted in 230 ships and were the first mass-produced, high-pressure and high-revolution marine engines.
The revolution in naval design led to the first modern battleships in the 1870s, evolved from the ironclad design of the 1860s. The Devastation-class turret ships were built for the British Royal Navy as the first class of ocean-going capital ship that did not carry sails, and the first whose entire main armament was mounted on top of the hull rather than inside it.
Rubber
The vulcanization of rubber, by American Charles Goodyear and Englishman Thomas Hancock in the 1840s, paved the way for a growing rubber industry, especially the manufacture of rubber tyres.
John Boyd Dunlop developed the first practical pneumatic tyre in 1887 in South Belfast. Willie Hume demonstrated the supremacy of Dunlop's newly invented pneumatic tyres in 1889, winning the tyre's first ever races in Ireland and then England.
Dunlop's development of the pneumatic tyre arrived at a crucial time in the development of road transport and commercial production began in late 1890.
Bicycles
The modern bicycle was designed by the English engineer Harry John Lawson in 1876, although it was John Kemp Starley who produced the first commercially successful safety bicycle a few years later. Its popularity soon grew, causing the bike boom of the 1890s.
Road networks improved greatly in the period, using the Macadam method pioneered by Scottish engineer John Loudon McAdam, and hard surfaced roads were built around the time of the bicycle craze of the 1890s. Modern tarmac was patented by British civil engineer Edgar Purnell Hooley in 1901.
Automobile
German inventor Karl Benz patented the world's first automobile in 1886. It featured wire wheels (unlike carriages' wooden ones) with a four-stroke engine of his own design between the rear wheels, with a very advanced coil ignition and evaporative cooling rather than a radiator. Power was transmitted by means of two roller chains to the rear axle. It was the first automobile entirely designed as such to generate its own power, not simply a motorized stagecoach or horse carriage.
Benz began to sell the vehicle, advertising it as the Benz Patent Motorwagen, in the late summer of 1888, making it the first commercially available automobile in history.
Henry Ford built his first car in 1896 and worked as a pioneer in the industry, with others who would eventually form their own companies, until the founding of Ford Motor Company in 1903. Ford and others at the company struggled with ways to scale up production in keeping with Henry Ford's vision of a car designed and manufactured on a scale so as to be affordable by the average worker. The solution that Ford Motor developed was a completely redesigned factory with machine tools and special purpose machines that were systematically positioned in the work sequence. All unnecessary human motions were eliminated by placing all work and tools within easy reach, and where practical on conveyors, forming the assembly line, the complete process being called mass production. This was the first time in history when a large, complex product consisting of 5000 parts had been produced on a scale of hundreds of thousands per year. The savings from mass production methods allowed the price of the Model T to decline from $780 in 1910 to $360 in 1916. In 1924, 2 million Model T Fords were produced and retailed at $290 each.
Applied science
Applied science opened many opportunities. By the middle of the 19th century there was a scientific understanding of chemistry and a fundamental understanding of thermodynamics and by the last quarter of the century both of these sciences were near their present-day basic form. Thermodynamic principles were used in the development of physical chemistry. Understanding chemistry greatly aided the development of basic inorganic chemical manufacturing and the aniline dye industries.
The science of metallurgy was advanced through the work of Henry Clifton Sorby and others. Sorby pioneered metallography, the study of metals under the microscope, which paved the way for a scientific understanding of metal and the mass-production of steel. In 1863 he used etching with acid to study the microscopic structure of metals and was the first to understand that a small but precise quantity of carbon gave steel its strength. This paved the way for Henry Bessemer and Robert Forester Mushet to develop the method for mass-producing steel.
Other processes were developed for purifying various elements such as chromium, molybdenum, titanium, vanadium and nickel which could be used for making alloys with special properties, especially with steel. Vanadium steel, for example, is strong and fatigue resistant, and was used in half the automotive steel. Alloy steels were used for ball bearings which were used in large scale bicycle production in the 1880s. Ball and roller bearings also began being used in machinery. Other important alloys are used in high temperatures, such as steam turbine blades, and stainless steels for corrosion resistance.
The work of Justus von Liebig and August Wilhelm von Hofmann laid the groundwork for modern industrial chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient and went on to establish Liebig's Extract of Meat Company which produced the Oxo meat extract. Hofmann headed a school of practical chemistry in London, under the style of the Royal College of Chemistry, introduced modern conventions for molecular modeling and taught Perkin who discovered the first synthetic dye.
The science of thermodynamics was developed into its modern form by Sadi Carnot, William Rankine, Rudolf Clausius, William Thomson, James Clerk Maxwell, Ludwig Boltzmann and J. Willard Gibbs. These scientific principles were applied to a variety of industrial concerns, including improving the efficiency of boilers and steam turbines. The work of Michael Faraday and others was pivotal in laying the foundations of the modern scientific understanding of electricity.
Scottish scientist James Clerk Maxwell was particularly influential—his discoveries ushered in the era of modern physics. His most prominent achievement was to formulate a set of equations that described electricity, magnetism, and optics as manifestations of the same phenomenon, namely the electromagnetic field. The unification of light and electrical phenomena led to the prediction of the existence of radio waves and was the basis for the future development of radio technology by Hughes, Marconi and others.
Maxwell himself developed the first durable colour photograph in 1861 and published the first scientific treatment of control theory. Control theory is the basis for process control, which is widely used in automation, particularly for process industries, and for controlling ships and airplanes. Control theory was developed to analyze the functioning of centrifugal governors on steam engines. These governors came into use in the late 18th century on wind and water mills to correctly position the gap between mill stones, and were adapted to steam engines by James Watt. Improved versions were used to stabilize automatic tracking mechanisms of telescopes and to control speed of ship propellers and rudders. However, those governors were sluggish and oscillated about the set point. James Clerk Maxwell wrote a paper mathematically analyzing the actions of governors, which marked the beginning of the formal development of control theory. The science was continually improved and evolved into an engineering discipline.
Fertilizer
Justus von Liebig was the first to understand the importance of ammonia as fertilizer, and promoted the importance of inorganic minerals to plant nutrition. In England, he attempted to implement his theories commercially through a fertilizer created by treating phosphate of lime in bone meal with sulfuric acid. Another pioneer was John Bennet Lawes who began to experiment on the effects of various manures on plants growing in pots in 1837, leading to a manure formed by treating phosphates with sulphuric acid; this was to be the first product of the nascent artificial manure industry.
The discovery of coprolites in commercial quantities in East Anglia led Fisons and Edward Packard to develop one of the first large-scale commercial fertilizer plants, at Bramford and Snape, in the 1850s. By the 1870s superphosphates produced in those factories were being shipped around the world from the port at Ipswich.
The Birkeland–Eyde process was developed by Norwegian industrialist and scientist Kristian Birkeland along with his business partner Sam Eyde in 1903, but was soon replaced by the much more efficient Haber process,
developed by the Nobel Prize-winning chemists Carl Bosch of BASF (later part of IG Farben) and Fritz Haber in Germany. The process used molecular nitrogen (N2) and hydrogen (H2), the latter typically obtained from methane (CH4) gas, in an economically sustainable synthesis of ammonia (NH3). The ammonia produced in the Haber process is the main raw material for production of nitric acid.
Engines and turbines
The steam turbine was developed by Sir Charles Parsons in 1884. His first model was connected to a dynamo that generated 7.5 kW (10 hp) of electricity. The invention of Parson's steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare. By the time of Parson's death, his turbine had been adopted for all major world power stations. Unlike earlier steam engines, the turbine produced rotary power rather than reciprocating power which required a crank and heavy flywheel. The large number of stages of the turbine allowed for high efficiency and reduced size by 90%. The turbine's first application was in shipping followed by electric generation in 1903.
The first widely used internal combustion engine was the Otto type of 1876. From the 1880s until electrification it was successful in small shops because small steam engines were inefficient and required too much operator attention. The Otto engine soon began being used to power automobiles, and remains as today's common gasoline engine.
The diesel engine was independently designed by Rudolf Diesel and Herbert Akroyd Stuart in the 1890s using thermodynamic principles with the specific intention of being highly efficient. It took several years to perfect and become popular, but found application in shipping before powering locomotives. It remains the world's most efficient prime mover.
Telecommunications
The first commercial telegraph system was installed by Sir William Fothergill Cooke and Charles Wheatstone in May 1837 between Euston railway station and Camden Town in London.
The rapid expansion of telegraph networks took place throughout the century, with the first undersea telegraph cable being built by John Watkins Brett between France and England.
The Atlantic Telegraph Company was formed in London in 1856 to undertake construction of a commercial telegraph cable across the Atlantic Ocean. This was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. From the 1850s until 1911, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line.
The telephone was patented in 1876 by Alexander Graham Bell, and like the early telegraph, it was used mainly to speed business transactions.
As mentioned above, one of the most important scientific advancements in all of history was the unification of light, electricity and magnetism through Maxwell's electromagnetic theory. A scientific understanding of electricity was necessary for the development of efficient electric generators, motors and transformers. David Edward Hughes and Heinrich Hertz both demonstrated and confirmed the phenomenon of electromagnetic waves that had been predicted by Maxwell.
It was Italian inventor Guglielmo Marconi who successfully commercialized radio at the turn of the century. He founded The Wireless Telegraph & Signal Company in Britain in 1897 and in the same year transmitted Morse code across Salisbury Plain, sent the first ever wireless communication over open sea and made the first transatlantic transmission in 1901 from Poldhu, Cornwall to Signal Hill, Newfoundland. Marconi built high-powered stations on both sides of the Atlantic and began a commercial service to transmit nightly news summaries to subscribing ships in 1904.
The key development of the vacuum tube by Sir John Ambrose Fleming in 1904 underpinned the development of modern electronics and radio broadcasting. Lee De Forest's subsequent invention of the triode allowed the amplification of electronic signals, which paved the way for radio broadcasting in the 1920s.
Modern business management
Railroads are credited with creating the modern business enterprise by scholars such as Alfred Chandler. Previously, the management of most businesses had consisted of individual owners or groups of partners, some of whom often had little daily hands-on operations involvement. Centralized expertise in the home office was not enough. A railroad required expertise available across the whole length of its trackage, to deal with daily crises, breakdowns and bad weather. A collision in Massachusetts in 1841 led to a call for safety reform. This led to the reorganization of railroads into different departments with clear lines of management authority. When the telegraph became available, companies built telegraph lines along the railroads to keep track of trains.
Railroads involved complex operations and employed extremely large amounts of capital and ran a more complicated business compared to anything previous. Consequently, they needed better ways to track costs. For example, to calculate rates they needed to know the cost of a ton-mile of freight. They also needed to keep track of cars, which could go missing for months at a time. This led to what was called "railroad accounting", which was later adopted by steel and other industries, and eventually became modern accounting.
Later in the Second Industrial Revolution, Frederick Winslow Taylor and others in America developed the concept of scientific management or Taylorism. Scientific management initially concentrated on reducing the steps taken in performing work (such as bricklaying or shoveling) by using analysis such as time-and-motion studies, but the concepts evolved into fields such as industrial engineering, manufacturing engineering, and business management that helped to completely restructure the operations of factories, and later entire segments of the economy.
Taylor's core principles included:
replacing rule-of-thumb work methods with methods based on a scientific study of the tasks
scientifically selecting, training, and developing each employee rather than passively leaving them to train themselves
providing "detailed instruction and supervision of each worker in the performance of that worker's discrete task"
dividing work nearly equally between managers and workers, such that the managers apply scientific-management principles to planning the work and the workers actually perform the tasks
Socio-economic impacts
The period from 1870 to 1890 saw the greatest increase in economic growth of any such short span in previous history. Living standards improved significantly in the newly industrialized countries as the prices of goods fell dramatically due to the increases in productivity. This caused unemployment and great upheavals in commerce and industry, with many laborers being displaced by machines and many factories, ships and other forms of fixed capital becoming obsolete in a very short time span.
"The economic changes that have occurred during the last quarter of a century -or during the present generation of living men- have unquestionably been more important and more varied than during any period of the world's history".
Crop failures no longer resulted in starvation in areas connected to large markets through transport infrastructure.
Massive improvements in public health and sanitation resulted from public health initiatives, such as the construction of the London sewerage system in the 1860s and the passage of laws that regulated filtered water supplies—(the Metropolis Water Act introduced regulation of the water supply companies in London, including minimum standards of water quality for the first time in 1852). This greatly reduced the infection and death rates from many diseases.
By 1870 the work done by steam engines exceeded that done by animal and human power. Horses and mules remained important in agriculture until the development of the internal combustion tractor near the end of the Second Industrial Revolution.
Improvements in steam efficiency, like triple-expansion steam engines, allowed ships to carry much more freight than coal, resulting in greatly increased volumes of international trade. Higher steam engine efficiency caused the number of steam engines to increase several fold, leading to an increase in coal usage, the phenomenon being called the Jevons paradox.
By 1890 there was an international telegraph network allowing orders to be placed by merchants in England or the US to suppliers in India and China for goods to be transported in efficient new steamships. This, plus the opening of the Suez Canal, led to the decline of the great warehousing districts in London and elsewhere, and the elimination of many middlemen.
The tremendous growth in productivity, transportation networks, industrial production and agricultural output lowered the prices of almost all goods. This led to many business failures and periods that were called depressions that occurred as the world economy actually grew. See also: Long depression
The factory system centralized production in separate buildings funded and directed by specialists (as opposed to work at home). The division of labor made both unskilled and skilled labor more productive, and led to a rapid growth of population in industrial centers. The shift away from agriculture toward industry had occurred in Britain by the 1730s, when the percentage of the working population engaged in agriculture fell below 50%, a development that would only happen elsewhere (the Low Countries) in the 1830s and '40s. By 1890, the figure had fallen to under 10% and the vast majority of the British population was urbanized. This milestone was reached by the Low Countries and the US in the 1950s.
Like the first industrial revolution, the second supported population growth and saw most governments protect their national economies with tariffs. Britain retained its belief in free trade throughout this period. The wide-ranging social impact of both revolutions included the remaking of the working class as new technologies appeared. The changes resulted in the creation of a larger, increasingly professional, middle class, the decline of child labor and the dramatic growth of a consumer-based, material culture.
By 1900, the leading industrial producer was Britain with 24% of the world total, followed by the US (19%), Germany (13%), Russia (9%) and France (7%). Europe together accounted for 62%.
The great inventions and innovations of the Second Industrial Revolution are part of our modern life. They continued to be drivers of the economy until after WWII. Major innovations occurred in the post-war era, some of which are: computers, semiconductors, the fiber optic network and the Internet, cellular telephones, combustion turbines (jet engines) and the Green Revolution. Although commercial aviation existed before WWII, it became a major industry after the war.
United Kingdom
New products and services were introduced which greatly increased international trade. Improvements in steam engine design and the wide availability of cheap steel meant that slow sailing ships were replaced with faster steamships, which could handle more trade with smaller crews. The chemical industries also moved to the forefront. Britain invested less in technological research than the U.S. and Germany, which caught up.
The development of more intricate and efficient machines along with mass production techniques after 1910 greatly expanded output and lowered production costs. As a result, production often exceeded domestic demand. Among the new conditions, more markedly evident in Britain, the forerunner of Europe's industrial states, were the long-term effects of the severe Long Depression of 1873–1896, which had followed fifteen years of great economic instability. Businesses in practically every industry suffered from lengthy periods of low – and falling – profit rates and price deflation after 1873.
United States
The U.S. had its highest economic growth rate in the last two decades of the Second Industrial Revolution; however, population growth slowed while productivity growth peaked around the mid 20th century. The Gilded Age in America was based on heavy industry such as factories, railroads and coal mining. The iconic event was the opening of the First transcontinental railroad in 1869, providing six-day service between the East Coast and San Francisco.
During the Gilded Age, American railroad mileage tripled between 1860 and 1880, and tripled again by 1920, opening new areas to commercial farming, creating a truly national marketplace and inspiring a boom in coal mining and steel production. The voracious appetite for capital of the great trunk railroads facilitated the consolidation of the nation's financial market in Wall Street. By 1900, the process of economic concentration had extended into most branches of industry—a few large corporations, some organized as "trusts" (e.g. Standard Oil), dominated in steel, oil, sugar, meatpacking, and the manufacture of agriculture machinery. Other major components of this infrastructure were the new methods for manufacturing steel, especially the Bessemer process. The first billion-dollar corporation was United States Steel, formed by financier J. P. Morgan in 1901, who purchased and consolidated steel firms built by Andrew Carnegie and others.
Increased mechanization of industry and improvements to worker efficiency increased the productivity of factories while undercutting the need for skilled labor. Mechanical innovations such as batch and continuous processing began to become much more prominent in factories. This mechanization made some factories an assemblage of unskilled laborers performing simple and repetitive tasks under the direction of skilled foremen and engineers. In some cases, the advancement of such mechanization substituted for low-skilled workers altogether. Both the number of unskilled and skilled workers increased, as their wage rates grew. Engineering colleges were established to feed the enormous demand for expertise. Together with rapid growth of small business, a new middle class was rapidly growing, especially in northern cities.
Germany
The German Empire came to rival Britain as Europe's primary industrial nation during this period. Since Germany industrialized later, it was able to model its factories after those of Britain, thus making more efficient use of its capital and avoiding legacy methods in its leap to the envelope of technology. Germany invested more heavily than the British in research, especially in chemistry, motors and electricity. The German concern system (known as Konzerne), being significantly concentrated, was able to make more efficient use of capital. Germany was not weighted down with an expensive worldwide empire that needed defense. Following Germany's annexation of Alsace-Lorraine in 1871, it absorbed parts of what had been France's industrial base.
By 1900 the German chemical industry dominated the world market for synthetic dyes. The three major firms BASF, Bayer and Hoechst produced several hundred different dyes, along with five smaller firms. In 1913 these eight firms produced almost 90 percent of the world supply of dyestuffs, and sold about 80 percent of their production abroad. The three major firms had also integrated upstream into the production of essential raw materials and they began to expand into other areas of chemistry such as pharmaceuticals, photographic film, agricultural chemicals and electrochemicals. Top-level decision-making was in the hands of professional salaried managers, leading Chandler to call the German dye companies "the world's first truly managerial industrial enterprises". There were many spin-offs from research—such as the pharmaceutical industry, which emerged from chemical research.
Belgium
Belgium during the Belle Époque showed the value of the railways for speeding the Second Industrial Revolution. After 1830, when it broke away from the Netherlands and became a new nation, it decided to stimulate industry. It planned and funded a simple cruciform system that connected major cities, ports and mining areas, and linked to neighboring countries. Belgium thus became the railway center of the region. The system was soundly built along British lines, so that profits were low but the infrastructure necessary for rapid industrial growth was put in place.
Alternative uses
There have been other times that have been called "second industrial revolution". Industrial revolutions may be renumbered by taking earlier developments, such as the rise of medieval technology in the 12th century, or of ancient Chinese technology during the Tang dynasty, or of ancient Roman technology, as first. "Second industrial revolution" has been used in the popular press and by technologists or industrialists to refer to the changes following the spread of new technology after World War I.
Excitement and debate over the dangers and benefits of the Atomic Age were more intense and lasting than those over the Space age but both were predicted to lead to another industrial revolution. At the start of the 21st century the term "second industrial revolution" has been used to describe the anticipated effects of hypothetical molecular nanotechnology systems upon society. In this more recent scenario, they would render the majority of today's modern manufacturing processes obsolete, transforming all facets of the modern economy. Subsequent industrial revolutions include the Digital revolution and Environmental revolution.
See also
in alphabetical order
British Agricultural Revolution
Capitalism in the nineteenth century
Chemical Revolution
Digital Revolution, also known as the Third Industrial Revolution, late 1990s until present
Fourth Industrial Revolution
Green Revolution
Industrial Revolution
Information Revolution
Transport Revolution
Nanotechnology
Kondratiev wave
List of steel producers
Machine Age
Neolithic Revolution
Productivity improving technologies (historical)
Scientific Revolution
Suez Canal
Economic history of selected countries:
United Kingdom (19th century) & 1900–1945
United States (late 19th century) & Early 20th century
France (1789–1914) & 1914–1944
Economic history of Germany#Industrial Revolution & Early 20th century
Italy (1861–1918)
Japan (Meiji period) & Early 20th century
Notes
References
Atkeson, Andrew and Patrick J. Kehoe. "Modeling the Transition to a New Economy: Lessons from Two Technological Revolutions," American Economic Review, March 2007, Vol. 97 Issue 1, pp 64–88 in EBSCO
Appleby, Joyce Oldham. The Relentless Revolution: A History of Capitalism (2010) excerpt and text search
Beaudreau, Bernard C. The Economic Consequences of Mr. Keynes: How the Second Industrial Revolution Passed Great Britain (2006)
Broadberry, Stephen, and Kevin H. O'Rourke. The Cambridge Economic History of Modern Europe (2 vol. 2010), covers 1700 to present
Chandler, Jr., Alfred D. Scale and Scope: The Dynamics of Industrial Capitalism (1990).
Chant, Colin, ed. Science, Technology and Everyday Life, 1870–1950 (1989) emphasis on Britain
Hull, James O. "From Rostow to Chandler to You: How revolutionary was the second industrial revolution?" Journal of European Economic History, Spring 1996, Vol. 25 Issue 1, pp. 191–208
Kornblith, Gary. The Industrial Revolution in America (1997)
Licht, Walter. Industrializing America: The Nineteenth Century (1995)
Mokyr, Joel The Second Industrial Revolution, 1870–1914 (1998)
Mokyr, Joel. The Enlightened Economy: An Economic History of Britain 1700–1850 (2010)
Rider, Christine, ed. Encyclopedia of the Age of the Industrial Revolution, 1700–1920 (2 vol. 2007)
Roberts, Wayne. "Toronto Metal Workers and the Second Industrial Revolution, 1889–1914," Labour / Le Travail, Autumn 1980, Vol. 6, pp 49–72
Smil, Vaclav. Creating the Twentieth Century: Technical Innovations of 1867–1914 and Their Lasting Impact
External links
Industrial Revolution, 2nd
Electric power
Mass production
"Physics",
"Technology",
"Engineering"
] | 9,698 | [
"Physical quantities",
"Science and technology studies",
"Power (physics)",
"Electric power",
"History of technology",
"Electrical engineering",
"History of science and technology"
] |
359,657 | https://en.wikipedia.org/wiki/Pollux%20%28star%29 | Pollux is the brightest star in the constellation of Gemini. It has the Bayer designation β Geminorum, which is Latinised to Beta Geminorum and abbreviated Beta Gem or β Gem. This is an orange-hued, evolved red giant located at a distance of 34 light-years, making it the closest red giant (and giant star) to the Sun. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. In 2006 an exoplanet (designated Pollux b or β Geminorum b, later named Thestias) was announced to be orbiting it.
Nomenclature
β Geminorum (Latinised to Beta Geminorum) is the star's Bayer designation.
The traditional name Pollux refers to the twins Castor and Pollux in Greek and Roman mythology. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Pollux for this star.
Castor and Pollux are the two "heavenly twin" stars giving the constellation Gemini (Latin, 'the twins') its name. The stars, however, are quite different in detail. Castor is a complex sextuple system of hot, bluish-white type A stars and dim red dwarfs, while Pollux is a single, cooler yellow-orange giant. In Percy Shelley's 1818 poem Homer's Hymn to Castor and Pollux, the star is referred to as "... mild Pollux, void of blame."
Originally the planet was designated Pollux b. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Thestias for this planet. The winning name was based on that originally submitted by theSkyNet of Australia; namely Leda, Pollux's mother. At the request of the IAU, 'Thestias' (the patronym of Leda, a daughter of Thestius) was substituted. This was because 'Leda' was already attributed to an asteroid and to one of Jupiter's satellites.
In the catalogue of stars in the Calendarium of al Achsasi al Mouakket, this star was designated Muekher al Dzira, which was translated into Latin as Posterior Brachii, meaning the end in the paw.
In Chinese, the name meaning North River refers to an asterism consisting of Pollux, ρ Geminorum, and Castor. Consequently, Pollux itself is known as the Third Star of North River.
Physical characteristics
At an apparent visual magnitude of 1.14, Pollux is the brightest star in its constellation, even brighter than its neighbor Castor (α Geminorum). Pollux is 6.7 degrees north of the ecliptic, presently too far north to be occulted by the Moon. The last lunar occultation visible from Earth was on 30 September 116 BCE from high southern latitudes.
Parallax measurements by the Hipparcos astrometry satellite place Pollux at a distance of about 34 light-years from the Sun. This is close to 10 parsecs, the standard distance used for determining a star's absolute magnitude (a star's apparent magnitude as viewed from 10 parsecs). Hence, Pollux's apparent and absolute magnitudes are quite close.
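The closeness of the two magnitudes follows directly from the distance-modulus relation M = m − 5 log10(d / 10 pc). A minimal sketch of the arithmetic, using the approximate figures quoted above (apparent magnitude 1.14, distance roughly 34 light-years); the inputs are illustrative values, not precise measurements:

```python
import math

# Distance modulus: M = m - 5 * log10(d / 10 pc).
# Approximate inputs taken from the text above (treated as illustrative values).
LY_PER_PARSEC = 3.2616            # light-years in one parsec

m_apparent = 1.14                 # apparent visual magnitude of Pollux
d_parsec = 34.0 / LY_PER_PARSEC   # ~10.4 pc, close to the 10 pc reference distance

M_absolute = m_apparent - 5.0 * math.log10(d_parsec / 10.0)
print(f"d ~ {d_parsec:.1f} pc, absolute magnitude ~ {M_absolute:.2f}")   # ~1.05
```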
The star is larger than the Sun, with about two times its mass and almost nine times its radius. Once an A-type main-sequence star similar to Sirius, Pollux has exhausted the hydrogen at its core and evolved into a giant star with a stellar classification of K0 III. The effective temperature of this star's outer envelope lies in the range that produces the characteristic orange hue of K-type stars. Pollux has a low projected rotational velocity. The abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is uncertain, with estimates ranging from 85% to 155% of the Sun's abundance.
An old estimate for Pollux's diameter, obtained in 1925 by John Stanley Plaskett via interferometry, was 13 million miles (20.9 million km), significantly larger than modern estimates. More recent measurements, by the Navy Precision Optical Interferometer and from analysis of Pollux's spectral lines, give a considerably smaller radius.
Evidence for a low level of magnetic activity came from the detection of weak X-ray emission using the ROSAT orbiting telescope. The X-ray emission from this star is about 10^27 erg s^−1, which is roughly the same as the X-ray emission from the Sun. A magnetic field with a strength below 1 gauss has since been confirmed on the surface of Pollux; one of the weakest fields ever detected on a star. The presence of this field suggests that Pollux was once an Ap star with a much stronger magnetic field. The star displays small amplitude radial velocity variations, but is not photometrically variable.
Planetary system
Since 1993 scientists have suspected an exoplanet orbiting Pollux, from measured radial velocity oscillations. The existence of the planet, Pollux b, was confirmed and announced on June 16, 2006. Pollux b is calculated to have a mass at least 2.3 times that of Jupiter. The planet is orbiting Pollux with a period of about 590 days.
The existence of Pollux b has been disputed; the possibility that the observed radial velocity variations are caused by stellar magnetic activity cannot be ruled out.
References
External links
K-type giants
Suspected variables
Planetary systems with one confirmed planet
Geminorum, Beta
2990
Durchmusterung objects
Geminorum, 78
62509
037826
0286
Pollux
Castor and Pollux | Pollux (star) | [
"Astronomy"
] | 1,241 | [
"Castor and Pollux",
"Astronomical myths"
] |
359,674 | https://en.wikipedia.org/wiki/Variance%20%28land%20use%29 | A variance is a deviation from the set of rules a municipality applies to land use and land development, typically a zoning ordinance, building code or municipal code. The manner in which variances are employed can differ greatly depending on the municipality. A variance may also be known as a standards variance, referring to the development standards contained in code. A variance is often granted by a Board or Committee of adjustment.
Description
A variance is an administrative exception to land use regulations. The use and application of variances can differ considerably throughout the great number of municipalities worldwide that regulate land use on this model. The issuance of variances may be very common, or nearly unheard-of in a given municipality. This can depend on a municipality's regulations, built environment and development pattern, and even political climate. One city may view variances as a routine matter, while another city may see variances as highly unusual exceptions to the norm. Community attitudes and political climates can change within a city as well, affecting the manner in which variances are granted even when no changes are made to the regulations governing variances.
Typically, in the United States, the process for a variance must be made available to a landowner upon request, or the municipality may be in danger of committing a regulatory taking. The variance process has been described as "a constitutional safety valve" to protect the rights of landowners.
Types
Two broad categories of variances generally are used in the practice of local land use planning: area (or bulk) variances and use variances.
An area variance is the most common type. It can be requested by a builder or landowner when an odd configuration of the land, or sometimes the physical improvements (structures) on the land, requires a relaxation of the applicable regulations to avoid denying the landowner the same rights and use of the property enjoyed by owners of neighboring properties. A textbook example would be a house built on an oddly-shaped lot. If the odd shape of the lot makes it onerous for the landowner or builder to comply with the standard building setbacks specified in the code, a variance could be requested to allow a reduced setback. Another would be a house built on a sloping lot. If the slope of the lot makes it onerous to comply with the height limit—typically due to the way the municipality's code requires height to be measured—then a variance could be requested for a structure of increased height because of the special conditions on the lot.
A use variance is a variance that authorizes a land use not normally permitted by the zoning ordinance. Such a variance has much in common with a special-use permit (sometimes known as a conditional use permit). Some municipalities do not offer this process, opting to handle such situations under special use permits instead. Grant of a use variance also can be similar, in effect, to a zone change. This may, in certain cases, be considered spot zoning, which is prohibited in many jurisdictions.
In either case, the variance request is justified only if special conditions exist on the lot that create a hardship making it too difficult to comply with the code's normal requirements. Likewise, a request for a variance on a normal lot with no special conditions could judiciously be denied. The special conditions or hardship typically must arise from some physical configuration of the lot or its structures. The financial or personal situation of the applicant normally cannot be taken into consideration. Under most codes governing variances, approval of the variance must not result in a public health or safety hazard and must not grant special privilege to the property owner. In other words, when a variance is granted, any other property owner with similar site conditions should be able to obtain a similar variance; this criterion is often addressed by citing precedent.
See also
Zoning
Spot zoning
Zoning in the United States (land use)
Special use permit
Nonconforming use
References
External links
Schindler's Land Use Page (Michigan State University Extension Land Use Team)
Land Policy Institute at Michigan State University
Local government in the United States
Zoning | Variance (land use) | [
"Engineering"
] | 818 | [
"Construction",
"Zoning"
] |
359,684 | https://en.wikipedia.org/wiki/Cumulant | In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa.
The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the nth-order cumulant of their sum is equal to the sum of their nth-order cumulants. As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property.
Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants.
Definition
The cumulants κ_n of a random variable X are defined using the cumulant-generating function K(t), which is the natural logarithm of the moment-generating function:

K(t) = log E[e^{tX}].

The cumulants are obtained from a power series expansion of the cumulant generating function:

K(t) = Σ_{n≥1} κ_n t^n / n! = κ_1 t + κ_2 t²/2! + κ_3 t³/3! + ⋯

This expansion is a Maclaurin series, so the nth cumulant can be obtained by differentiating the above expansion n times and evaluating the result at zero:

κ_n = K^(n)(0).
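As an illustration of this definition, the cumulants of a familiar distribution can be recovered symbolically by differentiating log M(t). The sketch below uses sympy and a Poisson moment-generating function as a worked example; the choice of distribution is an assumption made purely for illustration:

```python
import sympy as sp

t, mu = sp.symbols('t mu', positive=True)

# Moment-generating function of a Poisson(mu) variable, used as a worked example.
M = sp.exp(mu * (sp.exp(t) - 1))
K = sp.log(M)                      # cumulant-generating function K(t) = log M(t)

# kappa_n = n-th derivative of K evaluated at t = 0
cumulants = [sp.simplify(sp.diff(K, t, n).subs(t, 0)) for n in range(1, 5)]
print(cumulants)                   # [mu, mu, mu, mu]: every Poisson cumulant equals mu
```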
If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.
Alternative definition of the cumulant generating function
Some writers prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function:

H(t) = log E[e^{itX}].

An advantage of H(t)—in some sense the function K(t) evaluated for purely imaginary arguments—is that E[e^{itX}] is well defined for all real values of t even when E[e^{tX}] is not well defined for all real values of t, such as can occur when there is "too much" probability that X has a large magnitude. Although the function H(t) will be well defined, it will nonetheless mimic K(t) in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument t, and in particular the number of cumulants that are well defined will not change. Nevertheless, even when H(t) does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.
Some basic properties
The nth cumulant κ_n(X) of (the distribution of) a random variable X enjoys the following properties:

If n > 1 and c is constant (i.e. not random) then κ_n(X + c) = κ_n(X), i.e. the cumulant is translation invariant. (If n = 1 then we have κ_1(X + c) = κ_1(X) + c.)

If c is constant (i.e. not random) then κ_n(cX) = c^n κ_n(X), i.e. the nth cumulant is homogeneous of degree n.

If random variables X_1, …, X_m are independent then κ_n(X_1 + ⋯ + X_m) = κ_n(X_1) + ⋯ + κ_n(X_m). That is, the cumulant is cumulative — hence the name.
The cumulative property follows quickly by considering the cumulant-generating function:

K_{X_1+⋯+X_m}(t) = log E[e^{t(X_1+⋯+X_m)}] = log(E[e^{tX_1}] ⋯ E[e^{tX_m}]) = K_{X_1}(t) + ⋯ + K_{X_m}(t),
so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant.
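A quick Monte Carlo check of this additivity, using scipy's unbiased cumulant estimators (k-statistics); the particular distributions and sample sizes below are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200_000)      # independent samples
y = rng.gamma(shape=3.0, scale=1.0, size=200_000)

for n in (1, 2, 3, 4):
    lhs = stats.kstat(x + y, n)                   # n-th cumulant estimate of the sum
    rhs = stats.kstat(x, n) + stats.kstat(y, n)   # sum of the individual cumulants
    print(n, round(lhs, 3), round(rhs, 3))        # agree up to sampling error
```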
A distribution with given cumulants can be approximated through an Edgeworth series.
First several cumulants as functions of the moments
All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments.
κ_1 = μ′_1, the mean.
κ_2 = μ_2 = σ², the variance, or second central moment.
κ_3 = μ_3, the third central moment.
κ_4 = μ_4 − 3μ_2², the fourth central moment minus three times the square of the second central moment. Thus this is the first case in which cumulants are not simply moments or central moments. The central moments of degree more than 3 lack the cumulative property.
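A direct numerical translation of these four identities; this is a sketch using plain population-style sample moments, so estimator bias is ignored:

```python
import numpy as np

def first_four_cumulants(data):
    """Return (kappa_1, ..., kappa_4) computed from sample central moments."""
    x = np.asarray(data, dtype=float)
    mu = x.mean()
    m2 = ((x - mu) ** 2).mean()
    m3 = ((x - mu) ** 3).mean()
    m4 = ((x - mu) ** 4).mean()
    return mu, m2, m3, m4 - 3.0 * m2 ** 2

# For normal data, kappa_3 and kappa_4 should be close to zero.
sample = np.random.default_rng(1).normal(loc=2.0, scale=3.0, size=200_000)
print([round(k, 3) for k in first_four_cumulants(sample)])
```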
Cumulants of some discrete probability distributions
The constant random variables X = μ. The cumulant generating function is K(t) = μt. The first cumulant is κ_1 = K′(0) = μ and the other cumulants are zero, κ_2 = κ_3 = ⋯ = 0.
The Bernoulli distributions (number of successes in one trial with probability p of success). The cumulant generating function is K(t) = log(1 − p + p e^t). The first cumulants are κ_1 = K′(0) = p and κ_2 = K″(0) = p(1 − p). The cumulants satisfy the recursion formula κ_{n+1} = p(1 − p) dκ_n/dp.
The geometric distributions (number of failures before one success with probability p of success on each trial). The cumulant generating function is K(t) = log(p / (1 − (1 − p)e^t)). The first cumulants are κ_1 = K′(0) = p^{−1} − 1 and κ_2 = K″(0) = κ_1 p^{−1}. Substituting p = (μ + 1)^{−1} gives κ_1 = μ and κ_2 = μ(μ + 1).
The Poisson distributions. The cumulant generating function is K(t) = μ(e^t − 1). All cumulants are equal to the parameter: κ_1 = κ_2 = κ_3 = ⋯ = μ.
The binomial distributions (number of successes in n independent trials with probability p of success on each trial). The special case n = 1 is a Bernoulli distribution. Every cumulant is just n times the corresponding cumulant of the corresponding Bernoulli distribution. The cumulant generating function is K(t) = n log(1 − p + p e^t). The first cumulants are κ_1 = K′(0) = np and κ_2 = K″(0) = κ_1(1 − p). Substituting p = μ n^{−1} gives κ_1 = μ and κ_2 = μ(1 − μ n^{−1}). The limiting case n^{−1} = 0 is a Poisson distribution.
The negative binomial distributions (number of failures before r successes with probability p of success on each trial). The special case r = 1 is a geometric distribution. Every cumulant is just r times the corresponding cumulant of the corresponding geometric distribution. The cumulant generating function is K(t) = r log(p / (1 − (1 − p)e^t)). The first cumulants are κ_1 = K′(0) = r(p^{−1} − 1) and κ_2 = K″(0) = κ_1 p^{−1}. Substituting p = (μ r^{−1} + 1)^{−1} gives κ_1 = μ and κ_2 = μ(μ r^{−1} + 1). Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The limiting case r^{−1} = 0 is a Poisson distribution.
Introducing the variance-to-mean ratio

ε = μ^{−1}σ² = κ_1^{−1}κ_2,

the above probability distributions get a unified formula for the derivative of the cumulant generating function:

K′(t) = μ (1 + ε(e^{−t} − 1))^{−1}.

The second derivative is

K″(t) = μ ε e^{−t} (1 + ε(e^{−t} − 1))^{−2},

confirming that the first cumulant is κ_1 = K′(0) = μ and the second cumulant is κ_2 = K″(0) = με.
The constant random variables have ε = 0.
The binomial distributions have ε = 1 − p so that 0 < ε < 1.
The Poisson distributions have ε = 1.
The negative binomial distributions have ε = p^{−1} so that ε > 1.
Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
Cumulants of some continuous probability distributions
For the normal distribution with expected value μ and variance σ², the cumulant generating function is K(t) = μt + σ²t²/2. The first and second derivatives of the cumulant generating function are K′(t) = μ + σ²t and K″(t) = σ². The cumulants are κ_1 = μ, κ_2 = σ², and κ_3 = κ_4 = ⋯ = 0. The special case σ² = 0 is a constant random variable X = μ.
The cumulants of the uniform distribution on the interval [−1, 0] are κ_n = B_n/n, where B_n is the nth Bernoulli number.
The cumulants of the exponential distribution with rate parameter λ are κ_n = λ^{−n} (n − 1)!.
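These closed forms can be checked symbolically. The sketch below verifies the exponential-distribution cumulants from the series of K(t) = −log(1 − t/λ); the symbol name lambda_ is only a Python-friendly stand-in for λ:

```python
import sympy as sp

t, lam = sp.symbols('t lambda_', positive=True)
K = -sp.log(1 - t / lam)   # CGF of an exponential distribution with rate lambda

for n in range(1, 6):
    derived = sp.simplify(sp.diff(K, t, n).subs(t, 0))
    closed_form = sp.factorial(n - 1) / lam**n
    assert sp.simplify(derived - closed_form) == 0
print("kappa_n = (n - 1)! / lambda**n verified for n = 1..5")
```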
Some properties of the cumulant generating function
The cumulant generating function K(t), if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is, (see Big O notation)

F(x) = O(e^{cx}) as x → −∞ and 1 − F(x) = O(e^{−dx}) as x → +∞, for some c, d > 0,

where F is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such c, if such a supremum exists, and at the supremum of such d, if such a supremum exists, otherwise it will be defined for all real numbers.
If the support of a random variable X has finite upper or lower bounds, then its cumulant-generating function K(t), if it exists, approaches asymptote(s) whose slope is equal to the supremum or infimum of the support, respectively, lying above both these lines everywhere. (The corresponding integrals of the tail probabilities yield the y-intercepts of these asymptotes, since K(0) = 0.)
For a shift of the distribution by c, K_{X+c}(t) = K_X(t) + ct. For a degenerate point mass at c, the cumulant generating function is the straight line K(t) = ct, and more generally, K_{X+Y} = K_X + K_Y if and only if X and Y are independent and their cumulant generating functions exist; (subindependence and the existence of second moments sufficing to imply independence.)
The natural exponential family of a distribution may be realized by shifting or translating K(t), and adjusting it vertically so that it always passes through the origin: if f is the pdf with cumulant generating function K(t), and f | θ is its natural exponential family, then f(x | θ) = e^{θx} f(x) / M(θ), and K(t | θ) = K(t + θ) − K(θ).
If K(t) is finite for a range t_1 < Re(t) < t_2, then if t_1 < 0 < t_2 then K(t) is analytic and infinitely differentiable for t_1 < Re(t) < t_2. Moreover for t real and t_1 < t < t_2, K(t) is strictly convex, and K′(t) is strictly increasing.
Further properties of cumulants
A negative result
Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which

κ_m = κ_{m+1} = ⋯ = 0

for some m > 3, with the lower-order cumulants (orders 3 to m − 1) being non-zero. There are no such distributions. The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.
Cumulants and moments
The moment generating function is given by:

M(t) = E[e^{tX}] = 1 + Σ_{n≥1} μ′_n t^n / n!.

So the cumulant generating function is the logarithm of the moment generating function,

K(t) = log M(t).
The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.
The moments can be recovered in terms of cumulants by evaluating the nth derivative of e^{K(t)} = M(t) at t = 0:

μ′_n = M^(n)(0).

Likewise, the cumulants can be recovered in terms of moments by evaluating the nth derivative of log M(t) = K(t) at t = 0:

κ_n = K^(n)(0).
The explicit expression for the nth moment in terms of the first n cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have

μ′_n = Σ_{k=1}^{n} B_{n,k}(κ_1, …, κ_{n−k+1})

κ_n = Σ_{k=1}^{n} (−1)^{k−1} (k − 1)! B_{n,k}(μ′_1, …, μ′_{n−k+1}),

where B_{n,k} are incomplete (or partial) Bell polynomials.
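sympy exposes the incomplete Bell polynomials directly through sympy.bell(n, k, symbols), so the moment formula can be generated mechanically; a sketch, with arbitrary symbol names:

```python
import sympy as sp

kappa = sp.symbols('kappa1:7')   # kappa1 .. kappa6 as symbols

def raw_moment(n):
    """n-th raw moment as a polynomial in the cumulants, via incomplete Bell polynomials."""
    return sp.expand(sum(sp.bell(n, k, kappa[:n - k + 1]) for k in range(1, n + 1)))

print(raw_moment(3))   # kappa1**3 + 3*kappa1*kappa2 + kappa3
print(raw_moment(4))   # kappa1**4 + 6*kappa1**2*kappa2 + 4*kappa1*kappa3 + 3*kappa2**2 + kappa4
```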
In the like manner, if the mean is given by μ, the central moment generating function is given by

C(t) = E[e^{t(X − μ)}] = e^{−μt} M(t) = exp(K(t) − μt),

and the nth central moment is obtained in terms of cumulants as

μ_n = C^(n)(0) = Σ_{k=1}^{n} B_{n,k}(0, κ_2, …, κ_{n−k+1}).

Also, for n > 1, the nth cumulant in terms of the central moments is

κ_n = Σ_{k=1}^{n} (−1)^{k−1} (k − 1)! B_{n,k}(0, μ_2, …, μ_{n−k+1}).
The nth moment μ′_n is an nth-degree polynomial in the first n cumulants. The first few expressions are:

μ′_1 = κ_1
μ′_2 = κ_2 + κ_1²
μ′_3 = κ_3 + 3κ_2κ_1 + κ_1³
μ′_4 = κ_4 + 4κ_3κ_1 + 3κ_2² + 6κ_2κ_1² + κ_1⁴

The "prime" distinguishes the moments μ′_n from the central moments μ_n. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which κ_1 appears as a factor:

μ_1 = 0
μ_2 = κ_2
μ_3 = κ_3
μ_4 = κ_4 + 3κ_2²
Similarly, the nth cumulant κ_n is an nth-degree polynomial in the first n non-central moments. The first few expressions are:

κ_1 = μ′_1
κ_2 = μ′_2 − μ′_1²
κ_3 = μ′_3 − 3μ′_2μ′_1 + 2μ′_1³
κ_4 = μ′_4 − 4μ′_3μ′_1 − 3μ′_2² + 12μ′_2μ′_1² − 6μ′_1⁴
In general, the nth cumulant can be written as the determinant of an n × n matrix built from the moments μ′_1, …, μ′_n. To express the cumulants κ_n for n > 1 as functions of the central moments, drop from these polynomials all terms in which μ′_1 appears as a factor:

κ_2 = μ_2
κ_3 = μ_3
κ_4 = μ_4 − 3μ_2²
The cumulants can be related to the moments by differentiating the relationship log M(t) = K(t) with respect to t, giving M′(t) = K′(t) M(t), which conveniently contains no exponentials or logarithms. Equating the coefficient of t^{n−1}/(n − 1)! on the left and right sides and using μ′_0 = 1 gives the following formulas for n ≥ 1:

μ′_n = Σ_{m=1}^{n} C(n − 1, m − 1) κ_m μ′_{n−m}

These allow either κ_n or μ′_n to be computed from the other using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments μ_n for n ≥ 2 are formed from these formulas by setting μ′_1 = κ_1 = 0 and replacing each μ′ with the corresponding μ.
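The recursion above turns into a few lines of code. A sketch converting cumulants to raw moments, with exact rational arithmetic used only to keep the example tidy:

```python
from math import comb
from fractions import Fraction

def moments_from_cumulants(kappas):
    """mu'_n = sum_{m=1..n} C(n-1, m-1) * kappa_m * mu'_{n-m}, with mu'_0 = 1."""
    moments = [Fraction(1)]
    for n in range(1, len(kappas) + 1):
        moments.append(sum(comb(n - 1, m - 1) * kappas[m - 1] * moments[n - m]
                           for m in range(1, n + 1)))
    return moments[1:]

# A Poisson(1) variable has every cumulant equal to 1; its raw moments
# 1, 2, 5, 15, 52 are the Bell numbers.
print(moments_from_cumulants([Fraction(1)] * 5))
```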
Cumulants and set-partitions
These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is
μ′_n = Σ_π Π_{B∈π} κ_{|B|}
where
π runs through the list of all partitions of a set of size n;
"B ∈ π" means B is one of the "blocks" into which the set is partitioned; and
|B| is the size of the set B.
Thus each monomial is a constant times a product of cumulants in which the sum of the indices is n (e.g., in the term κ_3 κ_2² κ_1, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). A partition of the integer n corresponds to each term. The coefficient in each term is the number of partitions of a set of n members that collapse to that partition of the integer n when the members of the set become indistinguishable.
Cumulants and combinatorics
Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.
Joint cumulants
The joint cumulant κ(X_1, …, X_n) of several random variables X_1, …, X_n is defined as the coefficient of t_1 ⋯ t_n in the Maclaurin series of the multivariate cumulant generating function K(t_1, …, t_n) = log E[exp(t_1 X_1 + ⋯ + t_n X_n)].
Note that
and, in particular
As with a single variable, the generating function and cumulant can instead be defined via
in which case
and
Repeated random variables and relation between the coefficients κk1, ..., kn
Observe that can also be written as
from which we conclude that
For example
and
In particular, the last equality shows that the cumulants of a single random variable are the joint cumulants of multiple copies of that random variable.
Relation with mixed moments
The joint cumulant of random variables can be expressed as an alternating sum of products of their mixed moments:
κ(X_1, …, X_n) = Σ_π (−1)^{|π|−1} (|π|−1)! Π_{B∈π} E[Π_{i∈B} X_i],
where π runs through the list of all partitions of {1, …, n}; where B runs through the list of all blocks of the partition π; and where |π| is the number of parts in the partition.
For example, the first joint cumulant κ(X) is the expected value E(X), the second joint cumulant κ(X, Y) is the covariance cov(X, Y), and the third is
κ(X, Y, Z) = E(XYZ) − E(XY)E(Z) − E(XZ)E(Y) − E(YZ)E(X) + 2E(X)E(Y)E(Z).
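A small sketch (not from the article) of the alternating sum over set partitions, estimating the mixed moments from samples with NumPy; the partition generator and function names are illustrative.

```python
# Sketch: joint cumulant kappa(X_1, ..., X_n) from sample mixed moments,
# summing over all partitions of the index set as in the formula above.
import numpy as np
from math import factorial

def partitions(indices):
    """Yield every partition of a list of indices as a list of blocks."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for smaller in partitions(rest):
        for i in range(len(smaller)):            # put `first` into an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller                # or into a new singleton block

def joint_cumulant(samples):
    """samples: list of equal-length 1-D arrays, one per random variable."""
    n = len(samples)
    total = 0.0
    for part in partitions(list(range(n))):
        term = (-1) ** (len(part) - 1) * factorial(len(part) - 1)
        for block in part:
            term *= np.mean(np.prod([samples[i] for i in block], axis=0))
        total += term
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2 * x + rng.normal(size=100_000)
# For two variables the joint cumulant is the covariance E[XY] - E[X]E[Y] (about 2 here).
print(joint_cumulant([x, y]))
```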
For zero-mean random variables X_1, …, X_n, any product of mixed moments of the form Π_{B∈π} E(Π_{i∈B} X_i) vanishes if π is a partition of {1, …, n} which contains a singleton block.
Hence, the expression of their joint cumulant in terms of mixed moments simplifies.
For example, if X,Y,Z,W are zero mean random variables, we have
More generally, any coefficient of the Maclaurin series can also be expressed in terms of mixed moments, although there are no concise formulae.
Indeed, as noted above, one can write it as a joint cumulant by repeating random variables appropriately, and then apply the above formula to express it in terms of mixed moments. For example
If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero.
The combinatorial meaning of the expression of mixed moments in terms of cumulants is easier to understand than that of cumulants in terms of mixed moments, see Equation (3.2.6) in:
For example:
Further properties
Another important property of joint cumulants is multilinearity: κ(X + Y, Z_1, Z_2, …) = κ(X, Z_1, Z_2, …) + κ(Y, Z_1, Z_2, …).
Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity
var(X + Y) = var(X) + 2 cov(X, Y) + var(Y)
generalizes to cumulants:
κ_n(X + Y) = Σ_{j=0}^{n} C(n, j) κ(X, …, X, Y, …, Y), with j copies of X and n − j copies of Y.
Conditional cumulants and the law of total cumulance
The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case , expressed in the language of (central) moments rather than that of cumulants, says
In general,
where
the sum is over all partitions π of the set of indices, and
B_1, …, B_b are all of the "blocks" of the partition π; the expression κ(X_i : i ∈ B_m) indicates the joint cumulant of the random variables whose indices are in that block of the partition.
Conditional cumulants and the conditional expectation
For certain settings, a derivative identity can be established between the conditional cumulant and the conditional expectation. For example, suppose that where is standard normal independent of , then for any it holds that
The results can also be extended to the exponential family.
Relation to statistical physics
In statistical physics many extensive quantities – that is, quantities that are proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.
A system in equilibrium with a thermal bath at temperature T has a fluctuating internal energy E, which can be considered a random variable drawn from a distribution E ~ p(E). The partition function of the system is
Z(β) = ⟨exp(−βE)⟩,
where β = 1/(kT), k is the Boltzmann constant, and the notation ⟨·⟩ has been used rather than E[·] for the expectation value to avoid confusion with the energy E. Hence the first and second cumulants of the energy give the average energy and the heat capacity.
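As an illustration (not part of the original article), the first two energy cumulants of a simple two-level system with energy gap ε can be worked out symbolically; the two-level choice and the symbol names are assumptions made for the sketch.

```python
# Sketch: energy cumulants from log Z for an assumed two-level system.
import sympy as sp

beta, eps, kB, T = sp.symbols('beta epsilon k_B T', positive=True)
Z = 1 + sp.exp(-beta * eps)        # partition function of a two-level system
K = sp.log(Z)                      # log Z generates the cumulants of the energy

mean_E = -sp.diff(K, beta)         # first cumulant: average energy
var_E = sp.diff(K, beta, 2)        # second cumulant: energy fluctuations

# Heat capacity C = d<E>/dT with beta = 1/(k_B T); it equals var_E / (k_B T**2).
C = sp.simplify(sp.diff(mean_E.subs(beta, 1 / (kB * T)), T))
print(sp.simplify(C))
print(sp.simplify((var_E / (kB * T**2)).subs(beta, 1 / (kB * T))))
# The two printed expressions coincide, illustrating C = Var(E) / (k_B T^2).
```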
The Helmholtz free energy expressed in terms of the partition function,
F(β) = −β^{−1} log Z(β),
further connects thermodynamic quantities with the cumulant generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as internal energy, entropy, and specific heat capacity, can all be readily expressed in terms of these cumulants. Other free energies can be functions of other variables such as the magnetic field or chemical potential μ, e.g.
Ω = −β^{−1} log⟨exp(−βE + βμN)⟩,
where N is the number of particles and Ω is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of E and N.
History
The history of cumulants is discussed by Anders Hald.
Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them semi-invariants. They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also notes previous published citations of Thiele brought to Fisher's attention. Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them cumulative moment functions.
The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions relating to a publication in 1927.
Cumulants in generalized settings
Formal cumulants
More generally, the cumulants of a sequence {m_n : n = 1, 2, 3, …}, not necessarily the moments of any probability distribution, are, by definition,
1 + Σ_{n≥1} m_n t^n / n! = exp(Σ_{n≥1} κ_n t^n / n!),
where the values of κ_n for n = 1, 2, 3, … are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.
Bell numbers
In combinatorics, the n-th Bell number is the number of partitions of a set of size n. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
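A quick numerical check (illustrative, not from the article): feeding cumulants that are all equal to 1 into the moment–cumulant recursion from the "Cumulants and moments" section reproduces the Bell numbers.

```python
# Sketch: with every cumulant equal to 1 (Poisson with mean 1), the raw
# moments reproduce the Bell numbers 1, 2, 5, 15, 52, 203, ...
from math import comb

def moments_from_cumulants(kappa):
    mu = []
    for n in range(1, len(kappa) + 1):
        total = kappa[n - 1]
        for m in range(1, n):
            total += comb(n - 1, m - 1) * kappa[m - 1] * mu[n - m - 1]
        mu.append(total)
    return mu

print(moments_from_cumulants([1] * 6))   # [1, 2, 5, 15, 52, 203]
```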
Cumulants of a polynomial sequence of binomial type
For any sequence of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial
make a new polynomial in these plus one additional variable :
and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on . Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.
This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.
Free cumulants
In the above moment-cumulant formula
for joint cumulants, one sums over all partitions of the set . If instead, one sums only over the noncrossing partitions, then, by solving these formulae for the in terms of the moments, one gets free cumulants rather than conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.
The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
See also
Entropic value at risk
Cumulant generating function from a multiset
Cornish–Fisher expansion
Edgeworth expansion
Polykay
k-statistic, a minimum-variance unbiased estimator of a cumulant
Ursell function
Total position spread tensor as an application of cumulants to analyse the electronic wave function in quantum chemistry.
References
External links
cumulant on the Earliest known uses of some of the words of mathematics
Moment (mathematics) | Cumulant | [
"Physics",
"Mathematics"
] | 4,488 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Moment (physics)"
] |
359,884 | https://en.wikipedia.org/wiki/Aristolochic%20acid | Aristolochic acids () are a family of carcinogenic, mutagenic, and nephrotoxic phytochemicals commonly found in the flowering plant family Aristolochiaceae (birthworts). Aristolochic acid (AA) I is the most abundant one. The family Aristolochiaceae includes the genera Aristolochia and Asarum (wild ginger), which are commonly used in Chinese herbal medicine. Although these compounds are widely associated with kidney problems, liver and urothelial cancers, the use of AA-containing plants for medicinal purposes has a long history. The FDA has issued warnings regarding consumption of AA-containing supplements.
History
Early medical uses
Birthwort plants, and the aristolochic acids they contain, were quite common in ancient Greek and Roman medical texts, well-established as an herb there by the fifth century BC. Birthworts appeared in Ayurvedic texts by 400 AD, and in Chinese texts later in the fifth century. In these ancient times, it was used to treat kidney and urinary problems, as well as gout, snakebites, and a variety of other ailments. It was also considered to be an effective contraceptive. In many of these cases, birthworts were just some of the many ingredients used to create ointments or salves. In the early first century, in Roman texts, aristolochic acids are first mentioned as a component of frequently ingested medicines to treat things such as asthma, hiccups, spasms, pains, and expulsion of afterbirth.
Discovery of toxicity
Kidney damage
Aristolochic acid poisoning was first diagnosed at a clinic in Brussels, Belgium, when cases of nephritis leading to rapid kidney failure were seen in a group of women who had all taken the same weight-loss supplement, Aristolochia fangchi, which contained aristolochic acid. This nephritis was termed “Chinese herbs nephropathy” (CHN) due to the origin of the weight-loss supplement. A similar condition previously known as Balkan endemic nephropathy (BEN), first characterized in the 1950s in southeastern Europe, was later discovered to be also the result of aristolochic acid (AA) consumption. BEN is more slowly progressive than the nephritis that is seen in CHN, but is likely caused by low-level AA exposure, possibly from contamination of wheat flour seeds by a plant of the birthwort family, Aristolochia clematitis. CHN and BEN fall under the umbrella of what is now known as aristolochic acid nephropathy, the prevalent symptom of AA poisoning.
Liver cancer
A study reported in the Science Translational Medicine journal in October 2017 reported high incidents of liver cancer in Asia, particularly Taiwan, which bore the "well-defined mutational signature" of aristolochic acids. The same link was found in Vietnam and other South-east Asian countries. This was compared with much lower rates found in Europe and North America.
Biosynthesis
The herbal drug known as aristolochic acid contains a mixture of numerous structurally related nitrophenanthrene carboxylic acids generally consisting of two major compounds: aristolochic acid I (AA-I) and aristolochic acid II (AA-II). The biosynthesis of these compounds has been of considerable interest due in large part to the inclusion of both an aryl carboxylic acid and an aryl nitro functionality (uncommon in natural products) within their structures, which suggested an apparent biogenetic relationship to the well-known aporphine alkaloids. Furthermore, this association thereby suggested a biosynthetic relationship with norlaudanosoline (tetrahydropapaveroline) or related benzylisoquinoline precursors, which in turn are derived from tyrosine (2). Feeding studies (Aristolochia sipho) independently using uniquely 14C-labeled compounds [3-14C]-tyrosine, [2-14C]-dopamine and [2-14C]-dihydroxyphenylalanine resulted in the isolation of [14C]-AA-I in each case, which illustrated that the aporphine alkaloid stephanine (11) could be a precursor of AA-I since tyrosine, L-DOPA (3) and dopamine (4) were known precursors of norlaudanosoline: tyrosine (2) is metabolized to L-DOPA (3) which is converted into dopamine (4) which is metabolized to 3,4-dihydroxyphenylacetaldehyde (DOPAL); cyclization of these two compounds results in the formation of norlaudanosoline via a Pictet-Spengler like condensation catalyzed by norlaudanosoline synthetase.
Subsequent feeding studies that used (±)‑[4‑14C]-norlaudanosoline also resulted in the formation of 14C‑labeled-AAI, further suggesting that norlaudanosoline and stephanine (11) could have a possible intermediacy in the biosynthesis of AA-I. Degradation studies of the isolated 14C-labeled AA-I demonstrated that the carbon atom at ring position C4 of the benzyltetrahydroisoquinoline norlaudanosoline was incorporated exclusively in the carboxylic acid moiety of AAI. When this study was repeated but using [4‑14C]-tetrahydropapaverine no labeled AAI was isolated; this observation established that a phenol oxidative reaction was required for the biosynthesis of AA-I from norlaudanosoline, further supporting the intermediacy of aporphine intermediates. The results of a feeding experiment (A. sipho) with (±)‑[3‑14C, 15N]-tyrosine followed by degradation of the isolated doubly labeled AA-I provided evidence that the nitro group of AA-I originates from the amino group of tyrosine.
Confirmation of the involvement of aporphine intermediates in the biogenetic route from norlaudanosoline to AA-I was obtained some two decades later through a series of feeding studies (Aristolochia bracteata) using several labeled hypothetical benzyltetrahydroisoquinoline and aporphine precursors. Feeding experiments with (±)‑[5’,8‑3H2; 6-methoxy‑14C]-nororientaline resulted in the isolation of the doubly labeled AA-I. Cleavage of the methylenedioxy group with trapping of the resulting 14C‑labeled formaldehyde confirmed that this functionality was formed from the o‑methoxyphenol segment of the tetrahydroisoquinoline ring of nororientaline. (±)‑[5’,8‑3H2]‑Orientaline was also incorporated into AA-I. These observations implied that the aporphine prestephanine (10) would be an obligatory intermediate in the biosynthesis, which would involve the intermediacy of the proaporphines orientalinone (8) and orientalinol (9) via the known intramolecular dienone-dienol-phenol sequence for the transformation of benzyltetrahydroisoquinolines to aporphines. A potential role for CYP80G2, a cytochrome P450 that has been demonstrated to catalyze the intramolecular C-C phenol coupling of several benzyltetrahydroisoquinolines, in this orientaline (7) to prestephanine (10) transformation has been suggested. (±)‑[aryl‑3H]‑Prestephanine was incorporated into AA-I confirming its intermediacy in the biosynthesis; and also (±)‑[aryl‑3H]‑stephanine was incorporated into AA-I. This final transformation, that is stephanine (11) to AA-I (12), involves an uncommon oxidative cleavage of the B ring of the aporphine structure to give a nitro substituted phenanthrene carboxylic acid. Hence, taken together these experiments support the sequence outlined for the biosynthesis of aristolochic acid I from norlaudanosoline.
Symptoms and diagnosis
Exposure to aristolochic acid is associated with a high incidence of uroepithelial tumorigenesis, and is linked to urothelial cancer. Since aristolochic acid is a mutagen, it does damage over time. Patients are often first diagnosed with aristolochic acid nephropathy (AAN), which is a rapidly progressive nephropathy and puts them at risk for renal failure and urothelial cancer. However, urothelial cancer is only observed long after consumption. One study estimated, on average, detectable cancer develops ten years from the start of daily aristolochic acid consumption.
A patient thought to have AAN can be confirmed through phytochemical analysis of plant products consumed and detection of aristolactam DNA adducts in the renal cells. (Aristolochic acid is metabolised into aristolactam.) Additionally, mutated proteins in renal cancers as a result of transversion of A:T pairings to T:A are characteristically seen in aristolochic acid-induced mutations. In some cases, early detection resulting in cessation of aristolochia-product consumption can lead to reverse of the kidney damage.
Pharmacology
Absorption, distribution, metabolism, and excretion
Once orally ingested, aristolochic acid I is absorbed through the gastrointestinal tract into the blood stream. It is distributed throughout the body via the blood stream.
Aristolochic acids are metabolized by oxidation and reduction pathways, or phase I metabolism. Reduction of aristolochic acid I produces aristolactam I which has been observed in the urine. Further processing of aristolactam I by O-demethylation results in aristolactam Ia, the primary metabolite. Additionally, nitroreduction results in an N-acylnitrenium ion, which can form DNA-base adducts, thus giving aristolochic acid I its mutagenic properties.
Aristolactam I adducts that are bound to DNA are extremely stable; they have been detected in patient biopsy samples taken 20 years after exposure to plants containing aristolochic acid.
Excretion of aristolochic acids and their metabolites is through the urine.
Mechanism of action
The exact mechanism of action of aristolochic acid is not known, especially in regards to nephropathy. The carcinogenic effects of aristolochic acids are thought to be a result of mutation of the tumor suppressor gene TP53, which seems to be unique to aristolochic acid-associated carcinogenesis. Nephropathy caused by aristolochic acid consumption is not mechanistically understood, but DNA adducts characteristic of aristolochic acid-induced mutations are found in the kidneys of AAN patients, indicating these might play a role.
Regulation
In April 2001, the Food and Drug Administration issued a consumer health alert warning against consuming botanical products, sold as "traditional medicines" or as ingredients in dietary supplements, containing aristolochic acid. The agency warned that consumption of aristolochic acid-containing products was associated with "permanent kidney damage, sometimes resulting in kidney failure that has required kidney dialysis or kidney transplantation. In addition, some patients have developed certain types of cancers, most often occurring in the urinary tract."
In August 2013, two studies identified an aristolochic acid mutational signature in upper urinary tract cancer patients from Taiwan. The carcinogenic effect is the most potent found thus far, exceeding the amount of mutations in smoking-induced lung cancer and UV-exposed melanoma. Exposure to aristolochic acid may also cause certain types of liver cancer.
See also
List of herbs with known adverse effects
Piperolactam A
Stephania tetrandra
References
Further reading
External links
Complete list of warnings from the US Food and Drug Administration
FDA Concerned About Botanical Products, Including Dietary Supplements, Containing Aristolochic Acid May 2000.
Plants Containing Aristolochic Acid
Herbal medicines causing kidney failure, bladder cancer in India, Times of India, Mar 19, 2013
Benzodioxoles
Benzoic acids
IARC Group 1 carcinogens
Nephrotoxins
Nitroarenes
Phenanthrenes
Phenol ethers
Plant toxins | Aristolochic acid | [
"Chemistry"
] | 2,733 | [
"Chemical ecology",
"Plant toxins"
] |
359,954 | https://en.wikipedia.org/wiki/Book%20of%20Abraham | The Book of Abraham is a religious text of the Latter Day Saint movement, first published in 1842 by Joseph Smith. Smith said the book was a translation from several Egyptian scrolls discovered in the early 19th century during an archeological expedition by Antonio Lebolo, and purchased by members of the Church of Jesus Christ of Latter-day Saints (LDS Church) from a traveling mummy exhibition on July 3, 1835. According to Smith, the book was "a translation of some ancient records... purporting to be the writings of Abraham, while he was in Egypt, called the Book of Abraham, written by his own hand, upon papyrus". The Book of Abraham is about Abraham's early life, his travels to Canaan and Egypt, and his vision of the cosmos and its creation.
The Latter-day Saints believe the work is divinely inspired scripture, published as part of the Pearl of Great Price since 1880. It thus forms a doctrinal foundation for the LDS Church and Mormon fundamentalist denominations, though other groups, such as the Community of Christ, do not consider it a sacred text. The book contains several doctrines that are particular to Mormonism, such as the idea that God organized eternal elements to create the universe (instead of creating it ex nihilo), the potential exaltation of humanity, a pre-mortal existence, the first and second estates, and the plurality of gods.
The Book of Abraham papyri were thought to have been lost in the 1871 Great Chicago Fire. However, in 1966 several fragments of the papyri were found in the archives of the Metropolitan Museum of Art in New York and in the LDS Church archives. They are now referred to as the Joseph Smith Papyri. Upon examination by professional Egyptologists (both Mormon and otherwise), these fragments were identified as Egyptian funerary texts, including the "Breathing Permit of Hôr" and the "Book of the Dead", among others. Although some Mormon apologists defend the authenticity of the Book of Abraham, no scholars regard it as an ancient text.
Origin
Eleven mummies and several papyri were discovered near the ancient Egyptian city of Thebes by Antonio Lebolo between 1818 and 1822. Following Lebolo's death in 1830, the mummies and assorted objects were sent to New York with instructions that they should be sold in order to benefit the heirs of Lebolo. Michael H. Chandler eventually came into possession of the mummies and artifacts and began displaying them, starting in Philadelphia. Over the next two years Chandler toured the eastern United States, displaying and selling some of the mummies as he traveled.
In late June or early July 1835, Chandler exhibited his collection in Kirtland, Ohio. A promotional flyer created by Chandler states that the mummies "may have lived in the days of Jacob, Moses, or David". At the time, Kirtland was the home of the Latter Day Saints, led by Joseph Smith. In 1830 Smith published the Book of Mormon which he said he translated from ancient golden plates that had been inscribed with "reformed Egyptian" text. He took an immediate interest in the papyri and soon offered Chandler a preliminary translation of the scrolls. Smith said that the scrolls contained the writings of Abraham and Joseph, as well as a short history of an Egyptian princess named "Katumin". He wrote:
Smith, Joseph Coe, and Simeon Andrews soon purchased the four mummies and at least five papyrus documents for $2,400.
Translation process
During Smith's lifetime, the recent decoding of Ancient Egyptian writing systems with the Rosetta Stone was not widely known in the Americas. Between July and November 1835 Smith began "translating an alphabet to the Book of Abraham, and arranging a grammar of the Egyptian language as practiced by the ancients." In so doing, Smith worked closely with Cowdery and Phelps. The result of this effort was a collection of documents and manuscripts now known as the Kirtland Egyptian papers. One of these manuscripts was a bound book titled simply "Grammar & A[l]phabet of the Egyptian Language", which contained Smith's interpretations of the Egyptian glyphs. The first part of the book focuses almost entirely on deciphering Egyptian characters, and the second part deals with a form of astronomy that was supposedly practiced by the ancient Egyptians. Most of the writing in the book was written not by Smith but rather by a scribe taking down what Smith said.
The "Egyptian Alphabet" manuscript is particularly important because it illustrates how Smith attempted to translate the papyri. First, the characters on the papyri were transcribed onto the left-hand side of the book. Next, a postulation as to what the symbols sounded like was devised. Finally, an English interpretation of the symbol was provided. Smith's subsequent translation of the papyri takes on the form of five "degrees" of interpretation, each degree representing a deeper and more complex level of interpretation.
In translating the book, Smith dictated, and Phelps, Warren Parrish, and Frederick G. Williams acted as scribes. The complete work was first published serially in the Latter Day Saint movement newspaper Times and Seasons in 1842, and was later canonized in 1880 by the LDS Church as part of its Pearl of Great Price.
Eyewitness accounts of how the Papyri were translated are few and vague. Warren Parrish, who was Joseph Smith's scribe at the time of the translation, wrote in 1838 after he had left the church: "I have set by his side and penned down the translation of the Egyptian Hieroglyphicks [sic] as he claimed to receive it by direct inspiration from Heaven." Wilford Woodruff and Parley P. Pratt intimated second hand that the Urim and Thummim were used in the translation.
A non-church member who saw the mummies in Kirtland spoke about the state of the papyri, and the translation process:
Content
Book of Abraham text
The Book of Abraham's narrative tells of Abraham's life, travels to Canaan and Egypt, and a vision he received concerning the universe, a pre-mortal existence, and the creation of the world.
The book has five chapters:
Nearly half of the Book of Abraham shows a dependence on the King James Version of the Book of Genesis. According to H. Michael Marquardt, "It seems clear that Smith had the Bible open to Genesis as he dictated this section [i.e., Chapter 2] of the 'Book of Abraham.'" Smith explained the similarities by reasoning that when Moses penned Genesis, he used the Book of Abraham as a guide, abridging and condensing where he saw fit. As such, since Moses was recalling Abraham's lifetime, his version was in the third person, whereas the Book of Abraham, being written by its eponymous author, was composed in the first person.
The Book of Abraham was incomplete when Joseph Smith died in 1844. It is unknown how long the text would be, but Oliver Cowdery gave an indication in 1835 that it could be quite large:
A visitor to Kirtland saw the mummies, and noted, "They say that the mummies were Egyptian, but the records are those of Abraham and Joseph...and a larger volume than the Bible will be required to contain them."
Distinct doctrines
The Book of Abraham text is a source of some distinct Latter Day Saint doctrines, which Mormon author Randal S. Chase calls "truths of the gospel of Jesus Christ that were previously unknown to Church members of Joseph Smith's day." Examples include the nature of the priesthood, an understanding of the cosmos, the exaltation of humanity, a pre-mortal existence, the first and second estates, and the plurality of gods.
The Book of Abraham expands upon the nature of the priesthood in the Latter Day Saint movement, and it is suggested in the work that those who are foreordained to the priesthood earned this right by valor or nobility in the pre-mortal life. In a similar vein, the book explicitly denotes that Pharaoh was a descendant of Ham and thus "of that lineage by which he could not have the right of Priesthood". This passage is the only one found in any Mormon scripture that bars a particular lineage of people from holding the priesthood. Even though nothing in the Book of Abraham explicitly connects the line of Pharaoh and Ham to black Africans, this passage was used as a scriptural basis for withholding the priesthood from black individuals. An 1868 Juvenile Instructor article points to the Pearl of Great Price as the "source of racial attitudes in church doctrine", and in 1900, First Presidency member George Q. Cannon began using the story of Pharaoh as a scriptural basis for the ban. In 1912, the First Presidency responded to an inquiry about the priesthood ban by using the story of Pharaoh. By the early 1900s, it became the foundation of church policy in regards to the priesthood ban. The 2002 Doctrine and Covenants Student Manual points to Abraham 1:21–27 as the reasoning behind not giving black people the priesthood until 1978.
Chapter 3 of the Book of Abraham describes a unique (and purportedly Egyptian) understanding of the hierarchy of heavenly bodies, each with different movements and measurements of time. In regard to this chapter, Randal S. Chase notes, "With divine help, Abraham was able to gain greater comprehension of the order of the galaxies, stars, and planets than he could have obtained from earthly sources." At the pinnacle of the cosmos is the slowest-rotating body, Kolob, which, according to the text, is the star closest to where God lives. The Book of Abraham is the only work in the Latter Day Saint canon to mention the star Kolob. According to the Book:
Based on this verse, the LDS Church claims that "Kolob is the star nearest to the presence of God [and] the governing star in all the universe." Time moves slowly on the celestial body; one Kolob-day corresponds to 1,000 earth-years. The Church also notes: "Kolob is also symbolic of Jesus Christ, the central figure in God's plan of salvation."
The Book of Abraham also explores pre-mortal existence. The LDS Church website explains: "Life did not begin at birth, as is commonly believed. Prior to coming to earth, individuals existed as spirits." These spirits are eternal and of different intelligences. Prior to mortal existence, spirits exist in the "first estate". Once certain spirits (i.e., those who choose to follow the plan of salvation offered by God the Father of their own accord) take on a mortal form, they enter into what is called the "second estate". The doctrine of the second estate is explicitly named only in this book. The purpose of earthly life, therefore, is for humans to prepare for a meeting with God; the Church, citing , notes: "All who accept and obey the saving principles and ordinances of the gospel of Jesus Christ will receive eternal life, the greatest gift of God, and will have 'glory added upon their heads for ever and ever'."
Also notable is the Book of Abraham's description of a plurality of gods, and that "the gods" created the Earth, not ex nihilo, but rather from pre-existing, eternal matter. This shift away from monotheism and towards henotheism occurred , when Smith was imprisoned in the Liberty Jail in Clay County, Missouri (this was after the majority of the Book of Abraham had been supposedly translated, but prior to its publication). Smith noted that there would be "a time come in the which nothing shall be with held whither there be one god or many gods they shall be manifest all thrones and dominions, principalities and powers shall be revealed and set forth upon all who have indured valiently for the gospel of Jesus Christ" and that all will be revealed "according to that which was ordained in the midst of the councyl of the eternal God of all other Gods before this world was."
Facsimiles
Three images (facsimiles of vignettes on the papyri) and Joseph Smith's explanations of them were printed in the 1842 issues of the Times and Seasons. These three illustrations were prepared by Smith and an engraver named Reuben Hedlock. The facsimiles and their respective explanations were later included with the text of the Pearl of Great Price in a re-engraved format. According to Smith's explanations, Facsimile No. 1 portrays Abraham fastened to an altar, with the idolatrous priest of Elkenah attempting to sacrifice him. Facsimile No. 2 contains representations of celestial objects, including the heavens and earth, fifteen other planets or stars, the sun and moon, the number 1,000 and God revealing the grand key-words of the holy priesthood. Facsimile No. 3 portrays Abraham in the court of Pharaoh "reasoning upon the principles of Astronomy".
Interpretations and contributions to the LDS movement
The Church of Jesus Christ of Latter-day Saints
The Book of Abraham was canonized in 1880 by the LDS Church, and it remains a part of the larger scriptural work, the Pearl of Great Price. For Latter-day Saints, the book links Old and New Testament covenants into a universal narrative of Christian salvation, expands on premortal existence, depicts ex materia cosmology, and informed Smith's developing understanding of temple theology, making the scripture "critical to understanding the totality of his gospel conception".
Church leadership traditionally described the Book of Abraham straightforwardly as "translated by the Prophet [Joseph Smith] from a papyrus record taken from the catacombs of Egypt", and "Some have assumed that hieroglyphs adjacent to and surrounding facsimile 1 must be a source for the text of the book of Abraham". However, modern Egyptological translations of papyrus fragments reveal the surviving Egyptian text matches the Breathing Permit of Hôr, an Egyptian funerary text, and does not mention Abraham. The church acknowledges this, and its members have adopted a range of interpretations of the Book of Abraham to accommodate the seeming disconnect between the surviving papyrus and Smith's Book of Abraham revelation. The two most common interpretations are sometimes called the "missing scroll theory" and the "catalyst theory", though the relative popularity of these theories among Latter-day Saints is unclear.
The "missing scroll theory" holds that Smith may have translated the Book of Abraham from a now-lost portion of papyri, with the text of Breathing Permit of Hôr having nothing to do with Smith's translation. John Gee, an Egyptologist and Latter-day Saint, and the apologetic organization FAIR (Faithful Answers, Informed Response; formerly FairMormon) favor this view.
Other Latter-day Saints hold to the "catalyst theory," which hypothesizes that Smith's "study of the papyri may have led to a revelation about key events and teachings in the life of Abraham", allowing him to "translate" the Book of Abraham from the Breathing Permit of Hôr papyrus by inspiration without actually relying on the papyrus' textual meaning. This theory draws theological basis from Smith's "New Translation" of the Bible, wherein in the course of rereading the first few chapters of Genesis, he dictated as a revelatory translation the much longer Book of Moses.
FAIR has claimed the church "favors the missing scroll theory". However, in 2019, the Joseph Smith Papers' documentary research on the Book of Abraham and Egyptian papyri makes it "clear that Joseph Smith and/or his clerks associated the characters from the [surviving Breathing Permit of Hôr] papyri with the English Book of Abraham text".
Community of Christ
The Community of Christ, formerly known as the Reorganized Church of Jesus Christ of Latter Day Saints, does not include the Book of Abraham in its scriptural canon, although it was referenced in early church publications.
Church of Jesus Christ of Latter Day Saints (Strangite)
The Strangite branch of the movement does not take an official position on the Book of Abraham. The branch notes, "We know that 'The Book of Abraham' was published in an early periodical as a text 'purporting to be the writings of Abraham' with no indication of its translation process (see Times and Seasons, March 1, 1842), and therefore have no authorized position on it."
Fundamentalist Church of Jesus Christ of Latter-Day Saints
The Fundamentalist Church of Jesus Christ of Latter-Day Saints holds to the canonicity of the Book of Abraham.
Loss and rediscovery of the papyrus
After Joseph Smith's death, the Egyptian artifacts were in the possession of his mother, Lucy Mack Smith, and she and her son William Smith continued to exhibit the four mummies and associated papyri to visitors. Two weeks after Lucy's death in May 1856, Smith's widow, Emma Hale Smith Bidamon, her second husband Lewis C. Bidamon, and her son Joseph Smith III, sold "four Egyptian mummies with the records with them" to Abel Combs on May 26, 1856. Combs later sold two of the mummies, along with some papyri, to the St. Louis Museum in 1856. Upon the closing of the St. Louis Museum, these artifacts were purchased by Joseph H. Wood and found their way to the Chicago Museum in about 1863, and were promptly put on display. The museum and all its contents were burned in 1871 during the Great Chicago Fire. Today it is presumed that the papyri that formed the basis for Facsimiles 2 and 3 were lost in the conflagration.
After the fire, however, it was believed that all the sources for the book had been lost. Despite this belief, Abel Combs still owned several papyri fragments and two mummies. While the fate of the mummies is unknown, the fragments were passed to Combs' nurse Charlotte Benecke Weaver, who gave them to her daughter, Alice Heusser. In 1918 Heusser approached the New York Metropolitan Museum of Art (MMA) about purchasing the items; at the time, the museum curators were not interested, but in 1947 they changed their mind, and the museum bought the papyri from Heusser's widower husband, Edward. In the 1960s the MMA decided to raise money by selling some of its items which were considered "less unique". Among these were the papyri that Heusser had sold to the museum several decades earlier. In May 1966, Aziz S. Atiya, a Coptic scholar from the University of Utah, was looking through the MMA's collection when he came across the Heusser fragments; upon examining them, he recognized one as the vignette known as Facsimile 1 from The Pearl of Great Price. He informed LDS Church leaders, and several months later, on November 27, 1967, the LDS Church was able to procure the fragments, and according to Henry G. Fischer, curator of the Egyptian Collection at the MMA, an anonymous donation to the MMA made it possible for the LDS Church to acquire the papyri. The subsequent transfer included ten pieces of papyri, including the original of Facsimile 1. The eleventh fragment had been given to Brigham Young (then church president) previously by Chief Banquejappa of the Pottawatomie tribe in 1846.
Three of these fragments were designated Joseph Smith Papyrus (JSP) I, X, and XI. Other fragments, designated JSP II, IV, V, VI, VII, and VIII, are thought by critics to be the Book of Joseph to which Smith had referred. Egyptologist John A. Wilson stated that the recovered fragments indicated the existence of at least six to eight separate documents. The twelfth fragment was discovered in the LDS Church Historian's office and was dubbed the "Church Historian's Fragment". Disclosed by the church in 1968, the fragment was designated JSP IX. Although there is some debate about how much of the papyrus collection is missing, there is broad agreement that the recovered papyri are portions of Smith's original purchase, partly based on the fact that they were pasted onto paper which had "drawings of a temple and maps of the Kirtland, Ohio area" on the back, as well as the fact that they were accompanied by an affidavit by Emma Smith stating that they had been in the possession of Joseph Smith.
Controversy and criticism
Since its publication in 1842, the Book of Abraham has been a source of controversy. Egyptologists, beginning in the late 19th century, have disagreed with Joseph Smith's explanations of the facsimiles. They have also asserted that damaged portions of the papyri have been reconstructed incorrectly. In 1912, the book 'Joseph Smith, Jr., As a Translator' was published, containing refutations to Smith's translations. Refuters included Archibald Sayce, Flinders Petrie, James Henry Breasted, Arthur Cruttenden Mace (refutation below), John Punnett Peters, C. Mercer, Eduard Meyer, and Friedrich Wilhelm von Bissing.
I return herewith, under separate cover, the 'Pearl of Great Price.' The 'Book of Abraham,' it is hardly necessary to say, is a pure fabrication. Cuts 1 and 3 are inaccurate copies of well known scenes on funeral papyri, and cut 2 is a copy of one of the magical discs which in the late Egyptian period were placed under the heads of mummies. There were about forty of these latter known in museums and they are all very similar in character. Joseph Smith's interpretation of these cuts is a farrago of nonsense from beginning to end. Egyptian characters can now be read almost as easily as Greek, and five minutes' study in an Egyptian gallery of any museum should be enough to convince any educated man of the clumsiness of the imposture.
The controversy intensified in the late 1960s when portions of the Joseph Smith Papyri were located. The translation of the papyri by both Mormon and non-Mormon Egyptologists does not match the text of the Book of Abraham as purportedly translated by Joseph Smith. Indeed, the transliterated text from the recovered papyri and facsimiles published in the Book of Abraham contain no direct references, either historical or textual, to Abraham, and Abraham's name does not appear anywhere in the papyri or the facsimiles. Edward Ashment notes, "The sign that Smith identified with Abraham [...] is nothing more than the hieratic version of [...] a 'w' in Egyptian. It has no phonetic or semantic relationship to [Smith's] 'Ah-broam.'" University of Chicago Egyptologist Robert K. Ritner concluded in 2014 that the source of the Book of Abraham "is the 'Breathing Permit of Hôr,' misunderstood and mistranslated by Joseph Smith", and that the other papyri are common Egyptian funerary documents like the Book of the Dead.
Original manuscripts of the Book of Abraham, microfilmed in 1966 by Jerald Tanner, show portions of the Joseph Smith Papyri and their purported translations into the Book of Abraham. Ritner concludes, contrary to the LDS position, due to the microfilms being published prior to the rediscovery of the Joseph Smith Papyri, that "it is not true that 'no eyewitness account of the translation survives'," that the Book of Abraham is "confirmed as a perhaps well-meaning, but erroneous invention by Joseph Smith", and "despite its inauthenticity as a genuine historical narrative, the Book of Abraham remains a valuable witness to early American religious history and to the recourse to ancient texts as sources of modern religious faith and speculation".
Book of Joseph
As noted above, a second untranslated work was identified by Joseph Smith after scrutinizing the original papyri. He said that one scroll contained "the writings of Joseph of Egypt". Based on descriptions by Oliver Cowdery, some, including Charles M. Larson, believe that the fragments Joseph Smith Papyri II, IV, V, VI, VII, and VIII are the source of this work.
See also
Kirtland Egyptian Papers
Mormon cosmology
Scrolls of Abraham
Testament of Abraham
Breathing Permit of Hôr
Decipherment of ancient Egyptian scripts
Notes
References
Footnotes
Bibliography
External links
Translation and Historicity of the Book of Abraham, from the LDS Church website
The Pearl of Great Price (containing the Book of Abraham), from the LDS Church website
Book of Abraham manuscript materials from The Joseph Smith Papers
1835 books
1835 in Christianity
Creation myths
Egyptology
Human sacrifice in folklore and mythology
Idolatry
Mormonism and other religions
Mormonism-related controversies
Pearl of Great Price (Mormonism)
Polytheism
Unfinished books
Works based on the Old Testament
Works in the style of the King James Version
Works originally published in Times and Seasons | Book of Abraham | [
"Astronomy"
] | 5,170 | [
"Cosmogony",
"Creation myths"
] |
359,967 | https://en.wikipedia.org/wiki/Unicoherent%20space | In mathematics, a unicoherent space is a topological space that is connected and in which the following property holds:
For any closed, connected with , the intersection is connected.
For example, any closed interval on the real line is unicoherent, but a circle is not.
If a unicoherent space is more strongly hereditarily unicoherent (meaning that every subcontinuum is unicoherent) and arcwise connected, then it is called a dendroid. If in addition it is locally connected then it is called a dendrite. The Phragmen–Brouwer theorem states that, for locally connected spaces, unicoherence is equivalent to a separation property of the closed sets of the space.
References
External links
General topology
Trees (topology) | Unicoherent space | [
"Mathematics"
] | 167 | [
"Topology stubs",
"General topology",
"Topology",
"Trees (topology)"
] |
359,970 | https://en.wikipedia.org/wiki/Proof%20by%20exhaustion | Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds. This is a method of direct proof. A proof by exhaustion typically contains two stages:
A proof that the set of cases is exhaustive; i.e., that each instance of the statement to be proved matches the conditions of (at least) one of the cases.
A proof of each of the cases.
The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results.
In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching.
Example
Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9.
Proof:
Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive:
Case 1: If n = 3p, then n³ = 27p³, which is a multiple of 9.
Case 2: If n = 3p + 1, then n³ = 27p³ + 27p² + 9p + 1, which is 1 more than a multiple of 9. For instance, if n = 4 then n³ = 64 = 9×7 + 1.
Case 3: If n = 3p − 1, then n³ = 27p³ − 27p² + 9p − 1, which is 1 less than a multiple of 9. For instance, if n = 5 then n³ = 125 = 9×14 − 1. Q.E.D.
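The same three cases can also be checked exhaustively by machine: since n³ mod 9 depends only on n mod 9, checking the nine residue classes covers every integer. A minimal sketch (not from the article):

```python
# Exhaustive check: every cube is congruent to 0, 1, or 8 (i.e. -1) modulo 9.
residues = {(n ** 3) % 9 for n in range(9)}
print(residues)                      # {0, 1, 8}
assert residues <= {0, 1, 8}
```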
Elegance
Mathematicians prefer to avoid proofs by exhaustion with large numbers of cases, which are viewed as inelegant. An illustration as to how such proofs might be inelegant is to look at the following proofs that all modern Summer Olympic Games are held in years which are divisible by 4:
Proof: The first modern Summer Olympics were held in 1896, and then every 4 years thereafter (neglecting exceptional situations such as when the games' schedule was disrupted by World War I, World War II and the COVID-19 pandemic). Since 1896 = 474 × 4 is divisible by 4, the next Olympics would be in year 474 × 4 + 4 = (474 + 1) × 4, which is also divisible by four, and so on (this is a proof by mathematical induction). Therefore, the statement is proved.
The statement can also be proved by exhaustion by listing out every year in which the Summer Olympics were held, and checking that every one of them can be divided by four. With 28 total Summer Olympics as of 2016, this is a proof by exhaustion with 28 cases.
In addition to being less elegant, the proof by exhaustion will also require an extra case each time a new Summer Olympics is held. This is to be contrasted with the proof by mathematical induction, which proves the statement indefinitely into the future.
Number of cases
There is no upper limit to the number of cases allowed in a proof by exhaustion. Sometimes there are only two or three cases. Sometimes there may be thousands or even millions. For example, rigorously solving a chess endgame puzzle might involve considering a very large number of possible positions in the game tree of that problem.
The first proof of the four colour theorem was a proof by exhaustion with 1834 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the four colour theorem today still has over 600 cases.
In general the probability of an error in the whole proof increases with the number of cases. A proof with a large number of cases leaves an impression that the theorem is only true by coincidence, and not because of some underlying principle or connection. Other types of proofs—such as proof by induction (mathematical induction)—are considered more elegant. However, there are some important theorems for which no other method of proof has been found, such as
The proof that there is no finite projective plane of order 10.
The classification of finite simple groups.
The Kepler conjecture.
The Boolean Pythagorean triples problem.
See also
British Museum algorithm
Computer-assisted proof
Enumerative induction
Mathematical induction
Proof by contradiction
Disjunction elimination
Notes
Mathematical proofs
Methods of proof
Problem solving methods
de:Beweis (Mathematik)#Vollständige Fallunterscheidung | Proof by exhaustion | [
"Mathematics"
] | 1,084 | [
"Methods of proof",
"nan",
"Proof theory"
] |
360,030 | https://en.wikipedia.org/wiki/Question%20answering | Question answering (QA) is a computer science discipline within the fields of information retrieval and natural language processing (NLP) that is concerned with building systems that automatically answer questions that are posed by humans in a natural language.
Overview
A question-answering implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base. More commonly, question-answering systems can pull answers from an unstructured collection of natural language documents.
Some examples of natural language document collections used for question answering systems include:
a collection of reference texts
internal organization documents and web pages
compiled newswire reports
a set of Wikipedia pages
a subset of World Wide Web pages
Types of question answering
Question-answering research attempts to develop ways of answering a wide range of question types, including fact, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.
Answering questions related to an article in order to evaluate reading comprehension is one of the simpler form of question answering, since a given article is relatively short compared to the domains of other types of question-answering problems. An example of such a question is "What did Albert Einstein win the Nobel Prize for?" after an article about this subject is given to the system.
Closed-book question answering is when a system has memorized some facts during training and can answer questions without explicitly being given a context. This is similar to humans taking closed-book exams.
Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance) and can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, "closed-domain" might refer to a situation where only a limited type of questions are accepted, such as questions asking for descriptive rather than procedural information. Question answering systems in the context of machine reading applications have also been constructed in the medical domain, for instance for Alzheimer's disease.
Open-domain question answering deals with questions about nearly anything and can only rely on general ontologies and world knowledge. Systems designed for open-domain question answering usually have much more data available from which to extract the answer. An example of an open-domain question is "What did Albert Einstein win the Nobel Prize for?" while no article about this subject is given to the system.
Another way to categorize question-answering systems is by the technical approach used. There are a number of different types of QA systems, including
rule-based systems,
statistical systems, and
hybrid systems.
Rule-based systems use a set of rules to determine the correct answer to a question. Statistical systems use statistical methods to find the most likely answer to a question. Hybrid systems use a combination of rule-based and statistical methods.
History
Two early question answering systems were BASEBALL and LUNAR. BASEBALL answered questions about Major League Baseball over a period of one year. LUNAR answered questions about the geological analysis of rocks returned by the Apollo Moon missions. Both question answering systems were very effective in their chosen domains. LUNAR was demonstrated at a lunar science convention in 1971 and it was able to answer 90% of the questions in its domain that were posed by people untrained on the system. Further restricted-domain question answering systems were developed in the following years. The common feature of all these systems is that they had a core database or knowledge system that was hand-written by experts of the chosen domain. The language abilities of BASEBALL and LUNAR used techniques similar to ELIZA and DOCTOR, the first chatterbot programs.
SHRDLU was a successful question-answering program developed by Terry Winograd in the late 1960s and early 1970s. It simulated the operation of a robot in a toy world (the "blocks world"), and it offered the possibility of asking the robot questions about the state of the world. The strength of this system was the choice of a very specific domain and a very simple world with rules of physics that were easy to encode in a computer program.
In the 1970s, knowledge bases were developed that targeted narrower domains of knowledge. The question answering systems developed to interface with these expert systems produced valid responses to questions within an area of knowledge. These expert systems closely resembled modern question answering systems except in their internal architecture. Expert systems rely heavily on expert-constructed and organized knowledge bases, whereas many modern question answering systems rely on statistical processing of a large, unstructured, natural language text corpus.
The 1970s and 1980s saw the development of comprehensive theories in computational linguistics, which led to the development of ambitious projects in text comprehension and question answering. One example was the Unix Consultant (UC), developed by Robert Wilensky at U.C. Berkeley in the late 1980s. The system answered questions pertaining to the Unix operating system. It had a comprehensive, hand-crafted knowledge base of its domain, and it aimed at phrasing the answer to accommodate various types of users. Another project was LILOG, a text-understanding system that operated on the domain of tourism information in a German city. The systems developed in the UC and LILOG projects never went past the stage of simple demonstrations, but they helped the development of theories on computational linguistics and reasoning.
Specialized natural-language question answering systems have been developed, such as EAGLi for health and life scientists.
Applications
QA systems are used in a variety of applications, including
fact-checking, verifying whether a fact is true by posing a question such as: is fact X true or false?
customer service,
technical support,
market research,
generating reports or conducting research.
Architecture
Question-answering systems typically included a question classifier module that determined the type of question and the type of answer.
Different types of question-answering systems employ different architectures. For example, modern open-domain question answering systems may use a retriever-reader architecture. The retriever is aimed at retrieving relevant documents related to a given question, while the reader is used to infer the answer from the retrieved documents. Systems such as GPT-3, T5, and BART use an end-to-end architecture in which a transformer-based architecture stores large-scale textual data in the underlying parameters. Such models can answer questions without accessing any external knowledge sources.
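The following minimal sketch illustrates the retriever-reader pattern described above on a toy corpus; the corpus, the word-overlap scoring, and the function names are illustrative assumptions rather than any particular system's API (real systems use dense retrieval and trained reader models).

```python
# Toy retriever-reader pipeline: the "retriever" ranks documents by word overlap
# with the question, and the "reader" returns the best-matching sentence from
# the retrieved documents.
def retrieve(question, corpus, k=1):
    q = set(question.lower().split())
    return sorted(corpus, key=lambda doc: -len(q & set(doc.lower().split())))[:k]

def read(question, documents):
    q = set(question.lower().split())
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

corpus = [
    "Albert Einstein won the Nobel Prize in Physics in 1921 for the photoelectric effect.",
    "The national day of China is celebrated on 1 October.",
]
question = "What did Albert Einstein win the Nobel Prize for?"
print(read(question, retrieve(question, corpus)))  # prints the Einstein sentence
```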
Question answering methods
Question answering is dependent on a good search corpus; without documents containing the answer, there is little any question answering system can do. Larger collections generally mean better question answering performance, unless the question domain is orthogonal to the collection. Data redundancy in massive collections, such as the web, means that nuggets of information are likely to be phrased in many different ways in differing contexts and documents, leading to two benefits:
If the right information appears in many forms, the question answering system needs to perform fewer complex NLP techniques to understand the text.
Correct answers can be filtered from false positives because the system can rely on versions of the correct answer appearing more times in the corpus than incorrect ones.
Some question answering systems rely heavily on automated reasoning.
Open domain question answering
In information retrieval, an open-domain question answering system tries to return an answer in response to the user's question. The returned answer is in the form of short texts rather than a list of relevant documents. The system finds answers by using a combination of techniques from computational linguistics, information retrieval, and knowledge representation.
The system takes a natural language question as an input rather than a set of keywords, for example: "When is the national day of China?" It then transforms this input sentence into a query in its logical form. Accepting natural language questions makes the system more user-friendly, but harder to implement, as there are a variety of question types and the system will have to identify the correct one in order to give a sensible answer. Assigning a question type to the question is a crucial task; the entire answer extraction process relies on finding the correct question type and hence the correct answer type.
Keyword extraction is the first step in identifying the input question type. In some cases, words clearly indicate the question type, e.g., "Who", "Where", "When", or "How many"—these words might suggest to the system that the answers should be of type "Person", "Location", "Date", or "Number", respectively. POS (part-of-speech) tagging and syntactic parsing techniques can also determine the answer type. In the example above, the subject is "Chinese National Day", the predicate is "is" and the adverbial modifier is "when", therefore the answer type is "Date". Unfortunately, some interrogative words like "Which", "What", or "How" do not correspond to unambiguous answer types: Each can represent more than one type. In situations like this, other words in the question need to be considered. A lexical dictionary such as WordNet can be used for understanding the context.
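As a rough illustration of the interrogative-word heuristics just described, the sketch below maps leading question words to expected answer types; the type labels and the fallback behaviour are assumptions made for the example, not a standard taxonomy.

```python
# Rule-based question-type classification from the leading interrogative word.
ANSWER_TYPES = {
    "who": "Person",
    "where": "Location",
    "when": "Date",
    "how many": "Number",
}

def answer_type(question: str) -> str:
    q = question.lower()
    for cue, label in ANSWER_TYPES.items():
        if q.startswith(cue):
            return label
    # "Which", "What" and "How" are ambiguous; a real system would consult the
    # rest of the sentence (e.g. with POS tagging or a lexical resource like WordNet).
    return "Unknown"

print(answer_type("When is the national day of China?"))                 # Date
print(answer_type("What did Albert Einstein win the Nobel Prize for?"))  # Unknown
```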
Once the system identifies the question type, it uses an information retrieval system to find a set of documents that contain the correct keywords. A tagger and NP/Verb Group chunker can verify whether the correct entities and relations are mentioned in the found documents. For questions such as "Who" or "Where", a named-entity recogniser finds relevant "Person" and "Location" names from the retrieved documents.
A vector space model can classify the candidate answers. The system then checks whether each candidate is of the correct type, as determined in the question type analysis stage. An inference technique can validate the candidate answers. A score is then given to each of these candidates according to the number of question words it contains and how close these words are to the candidate—the more and the closer the better. The answer is then translated by parsing into a compact and meaningful representation. In the previous example, the expected output answer is "1st Oct."
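A simple version of the scoring step just described (counting question words near a candidate answer and weighting them by distance) might look like the sketch below; the window size and weighting scheme are illustrative assumptions.

```python
# Score a candidate answer by how many question words appear near it in the passage,
# giving closer words more weight.
def score_candidate(candidate_pos, question_words, passage_tokens, window=10):
    lo, hi = max(0, candidate_pos - window), candidate_pos + window + 1
    nearby = passage_tokens[lo:hi]
    distances = [abs(i + lo - candidate_pos)
                 for i, tok in enumerate(nearby) if tok.lower() in question_words]
    # more matching words and smaller distances both raise the score
    return sum(1.0 / (1 + d) for d in distances)

tokens = "the national day of china is celebrated on 1 october every year".split()
q_words = {"national", "day", "china"}
print(score_candidate(tokens.index("october"), q_words, tokens))  # about 0.40
```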
Mathematical question answering
An open-source, math-aware, question answering system called MathQA, based on Ask Platypus and Wikidata, was published in 2018. MathQA takes an English or Hindi natural language question as input and returns a mathematical formula retrieved from Wikidata as a succinct answer, translated into a computable form that allows the user to insert values for the variables. The system retrieves names and values of variables and common constants from Wikidata if those are available. It is claimed that the system outperforms a commercial computational mathematical knowledge engine on a test set. MathQA is hosted by Wikimedia at https://mathqa.wmflabs.org/. In 2022, it was extended to answer 15 math question types.
MathQA methods need to combine natural and formula language. One possible approach is to perform supervised annotation via Entity Linking. The "ARQMath Task" at CLEF 2020 was launched to address the problem of linking newly posted questions from the platform Math Stack Exchange to existing ones that were already answered by the community. Providing hyperlinks to already answered, semantically related questions helps users to get answers earlier but is a challenging problem because semantic relatedness is not trivial. The lab was motivated by the fact that 20% of mathematical queries in general-purpose search engines are expressed as well-formed questions. The challenge contained two separate sub-tasks. Task 1: "Answer retrieval" matching old post answers to newly posed questions, and Task 2: "Formula retrieval" matching old post formulae to new questions. Starting with the domain of mathematics, which involves formula language, the goal is to later extend the task to other domains (e.g., STEM disciplines, such as chemistry, biology, etc.), which employ other types of special notation (e.g., chemical formulae).
The inverse of mathematical question answering—mathematical question generation—has also been researched. The PhysWikiQuiz physics question generation and test engine retrieves mathematical formulae from Wikidata together with semantic information about their constituting identifiers (names and values of variables). The formulae are then rearranged to generate a set of formula variants. Subsequently, the variables are substituted with random values to generate a large number of different questions suitable for individual student tests. PhysWikiquiz is hosted by Wikimedia at https://physwikiquiz.wmflabs.org/.
Progress
Question answering systems have been extended in recent years to encompass additional domains of knowledge. For example, systems have been developed to automatically answer temporal and geospatial questions, questions of definition and terminology, biographical questions, multilingual questions, and questions about the content of audio, images, and video. Current question answering research topics include:
interactivity—clarification of questions or answers
answer reuse or caching
semantic parsing
answer presentation
knowledge representation and semantic entailment
social media analysis with question answering systems
sentiment analysis
utilization of thematic roles
Image captioning for visual question answering
Embodied question answering
In 2011, Watson, a question answering computer system developed by IBM, competed in two exhibition matches of Jeopardy! against Brad Rutter and Ken Jennings, winning by a significant margin.
Facebook Research made their DrQA system available under an open source license. This system uses Wikipedia as its knowledge source. The open source framework Haystack by deepset combines open-domain question answering with generative question answering.
Large language models (LLMs) such as GPT-4 and Gemini are examples of successful QA systems that enable more sophisticated understanding and generation of text. When coupled with multimodal QA systems, which can process and understand information from various modalities such as text, images, and audio, LLMs significantly improve the capabilities of QA systems.
References
Further reading
Dragomir R. Radev, John Prager, and Valerie Samn. Ranking suspected answers to natural language questions using predictive annotation . In Proceedings of the 6th Conference on Applied Natural Language Processing, Seattle, WA, May 2000.
John Prager, Eric Brown, Anni Coden, and Dragomir Radev. Question-answering by predictive annotation . In Proceedings, 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece, July 2000.
L. Fortnow, Steve Homer (2002/2003). A Short History of Computational Complexity. In D. van Dalen, J. Dawson, and A. Kanamori, editors, The History of Mathematical Logic. North-Holland, Amsterdam.
External links
Question Answering Evaluation at TREC
Question Answering Evaluation at CLEF
Natural language processing
Computational linguistics
Information retrieval genres
Tasks of natural language processing
Deep learning | Question answering | [
"Technology"
] | 3,011 | [
"Natural language and computing",
"Computational linguistics"
] |
360,033 | https://en.wikipedia.org/wiki/Ordnance%20Survey%20National%20Grid | The Ordnance Survey National Grid reference system (OSGB), also known as British National Grid (BNG), is a system of geographic grid references, distinct from latitude and longitude, whereby any location in Great Britain can be described in terms of its distance from the origin (0, 0), which lies to the west of the Isles of Scilly.
The Ordnance Survey (OS) devised the national grid reference system, and it is heavily used in its survey data, and in maps based on those surveys, whether published by the Ordnance Survey or by commercial map producers. Grid references are also commonly quoted in other publications and data sources, such as guide books and government planning documents.
A number of different systems exist that can provide grid references for locations within the British Isles: this article describes the system created solely for Great Britain and its outlying islands (including the Isle of Man). The Irish grid reference system is a similar system created by the Ordnance Survey of Ireland and the Ordnance Survey of Northern Ireland for the island of Ireland. The Irish Transverse Mercator (ITM) coordinate reference system was adopted in 2001 and is now the preferred coordinate reference system across Ireland. ITM is based on the Universal Transverse Mercator coordinate system (UTM), used to provide grid references for worldwide locations, and this is the system commonly used for the Channel Islands. European-wide agencies also use UTM when mapping locations, or may use the Military Grid Reference System (MGRS), or variants of it.
Grid letters
The first letter of the British National Grid is derived from a larger set of 25 squares of size 500 km by 500 km, labelled A to Z, omitting one letter (I) (see diagram below), previously used as a military grid. Four of these largest squares contain significant land area within Great Britain: S, T, N and H. The O square contains a tiny area of North Yorkshire, Beast Cliff, almost all of which lies below mean high tide.
For the second letter, each 500 km square is subdivided into 25 squares of size 100 km by 100 km, each with a letter code from A to Z (again omitting I) starting with A in the north-west corner to Z in the south-east corner. These squares are outlined in light grey on the "100km squares" map, with those containing land lettered. The central (2° W) meridian is shown in red.
Grid digits
Within each square, eastings and northings from the south west corner of the square are given numerically. For example, NH0325 means a 1 km square whose south-west corner is 3 km east and 25 km north from the south-west corner of square NH. A location can be indicated to varying resolutions numerically, usually from two digits in each coordinate (for a 1 km square) through to five (for a 1 m square); in each case the first half of the digits is for the first coordinate and the second half for the other. The most common usage is the six figure grid reference, employing three digits in each coordinate to determine a 100 m square. For example, the grid reference of the 100 m square containing the summit of Ben Nevis is NN 166 712. (Grid references may be written with or without spaces; e.g., also NN166712.) NN has an easting of 200 km and northing of 700 km, so the OSGB36 National Grid location for Ben Nevis is at 216600, 771200.
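As a concrete illustration of how the letter pair and the digits combine, the sketch below expands an alphanumeric reference into full numeric eastings and northings. The two hard-coded square origins come from the examples in this section; a complete implementation would derive the offsets from the two-letter scheme described above, and the function name is an assumption.

```python
# Expand an alphanumeric grid reference into numeric eastings/northings (metres).
SQUARE_ORIGINS_M = {
    "NN": (200_000, 700_000),  # covers Ben Nevis, as quoted above
    "NH": (200_000, 800_000),  # used in the NH0325 example
}

def expand(grid_ref: str) -> tuple:
    letters, digits = grid_ref[:2].upper(), grid_ref[2:]
    if len(digits) % 2:
        raise ValueError("digits must split evenly into easting and northing")
    half = len(digits) // 2
    cell = 10 ** (5 - half)            # size of the referenced square in metres
    e0, n0 = SQUARE_ORIGINS_M[letters]
    return e0 + int(digits[:half]) * cell, n0 + int(digits[half:]) * cell

print(expand("NN166712"))  # (216600, 771200), matching the Ben Nevis figures above
print(expand("NH0325"))    # (203000, 825000): 3 km east, 25 km north of the NH corner
```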
All-numeric grid references
Grid references may also be quoted as a pair of numbers: eastings then northings in metres, measured from the southwest corner of the SV square. 13 digits may be required for locations in Orkney and further north. For example, the grid reference for Sullom Voe Oil Terminal in the Shetland islands may be given as 439668,1175316.
Another, distinct, form of all-numeric grid reference is an abbreviated alphanumeric reference where the letters are simply omitted, e.g. 166712 for the summit of Ben Nevis. Unlike the numeric references described above, this abbreviated grid reference is incomplete; it gives the location relative to an OS 100×100 km square, but does not specify which square. It is often used informally when the context identifies the OS 2-letter square. For example, within the context of a location known to be on OS Landranger sheet 41 (which extends from NN000500 in the south-west to NN400900 in the north-east) the abbreviated grid reference 166712 is equivalent to NN166712. If working with more than one Landranger sheet, this may also be given as 41/166712.
Alternatively, sometimes numbers instead of the two-letter combinations are used for the 100×100 km squares. The numbering follows a grid index where the tens denote the progress from West to East and the units from South to North. In the north of Scotland, the numbering is modified: the 100 km square to the north of 39 is numbered N30; the square to the north of 49 is N40, etc.
Compatibility with related systems
The grid is based on the OSGB36 datum (Ordnance Survey Great Britain 1936, based on the Airy 1830 ellipsoid), and was introduced after the retriangulation of 1936–1962. It replaced the Cassini Grid which had previously been the standard projection for Ordnance Survey maps.
The Airy ellipsoid is a regional best fit for Britain; more modern mapping tends to use the GRS80 ellipsoid used by the Global Positioning System (the Airy ellipsoid assumes the Earth to be about 1 km smaller in diameter than the GRS80 ellipsoid, and to be slightly less flattened). The British maps adopt a transverse Mercator projection with an origin (the "true" origin) at 49° N, 2° W (an offshore point in the English Channel which lies between the island of Jersey and the French port of St. Malo). Over the Airy ellipsoid a straight line grid, the National Grid, is placed with a new false origin to eliminate negative numbers, creating a 700 km by 1300 km grid. This false origin is located south-west of the Isles of Scilly.
In order to minimize the overall scale error, a factor of about 2499/2500 is applied. This creates two lines of longitude about 180 km east and west of the central meridian along which the local scale factor equals 1, i.e. map scale is correct. Inside these lines the local scale factor is less than 1, reaching about 0.04% too small at the central meridian. Outside these lines the local scale factor is greater than 1, and is about 0.04% too large near the east and west coasts. Grid north and true north are only aligned on the central meridian (400 km easting) of the grid, which is 2° W in OSGB36 (the corresponding WGS 84 longitude is very slightly different).
A geodetic transformation between OSGB 36 and other terrestrial reference systems (like ITRF2000, ETRS89, or WGS 84) can become quite tedious if attempted manually. The most common transformation is called the Helmert datum transformation, which results in a typical 7 m error from true. The definitive transformation from ETRS89 that is published by the Ordnance Survey is called the National Grid Transformation OSTN15. This models the detailed distortions in the 1936–1962 retriangulation, and achieves backwards compatibility in grid coordinates to sub-metre accuracy.
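The single-step Helmert transformation mentioned above can be sketched as follows. The seven parameter values shown are the commonly quoted WGS 84 to OSGB36 set and are included only to illustrate the metre-level accuracy of this approach; they are assumptions for the example and should be checked against the Ordnance Survey guide before any real use.

```python
import numpy as np

def helmert(xyz, tx, ty, tz, s_ppm, rx_as, ry_as, rz_as):
    """Apply a 7-parameter (small-angle) Helmert transformation to geocentric
    Cartesian coordinates. Rotations are in arc-seconds, scale in parts per million."""
    as_to_rad = np.pi / (180 * 3600)
    rx, ry, rz = np.array([rx_as, ry_as, rz_as]) * as_to_rad
    t = np.array([tx, ty, tz])
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return t + (1.0 + s_ppm * 1e-6) * (R @ np.asarray(xyz, dtype=float))

# Commonly quoted WGS 84 -> OSGB36 parameters (illustrative; metre-level accuracy only).
params = dict(tx=-446.448, ty=125.157, tz=-542.060,
              s_ppm=20.4894, rx_as=-0.1502, ry_as=-0.2470, rz_as=-0.8421)
# A rough geocentric point over southern Britain, purely for demonstration.
print(helmert([3_980_000.0, -12_000.0, 4_970_000.0], **params))
```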
Datum shift between OSGB 36 and WGS 84
The difference between the coordinates on different datums varies from place to place. The longitude and latitude positions on OSGB 36 are the same as for WGS 84 at a point in the Atlantic Ocean well to the west of Great Britain. In Cornwall, the WGS 84 longitude lines are about 70 metres east of their OSGB 36 equivalents, this value rising gradually to about 120 m east on the east coast of East Anglia. The WGS 84 latitude lines are about 70 m south of the OSGB 36 lines in South Cornwall, the difference diminishing to zero in the Scottish Borders, and then increasing to about 50 m north on the north coast of Scotland. (If the lines are further east, then the longitude value of any given point is further west. Similarly, if the lines are further south, the values will give the point a more northerly latitude.) The smallest datum shift is on the west coast of Scotland and the greatest in Kent.
Datum shift between OSGB 36 and ED 50
These two datums are not both in general use in any one place, but for a point in the English Channel halfway between Dover and Calais, the ED50 longitude lines are about 20 m east of the OSGB36 equivalents, and the ED50 latitude lines are about 150 m south of the OSGB36 ones.
Summary parameters of the coordinate system
Datum: OSGB36
Map projection: Transverse Mercator projection using Redfearn series
True origin: 49°N, 2°W
False origin: 400 km west, 100 km north of True Origin
Scale factor: 0.9996012717
EPSG Code: EPSG:27700
Ellipsoid: Airy 1830
Semi-major axis a: 6 377 563.396 m
Semi-minor axis b: 6 356 256.909 m
Flattening (derived constant): 1/299.3249646
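In practice these parameters are rarely typed in by hand; libraries such as pyproj expose the grid through the EPSG code listed above. A minimal sketch, assuming pyproj is installed; the exact output depends on which transformation grids (e.g. OSTN15) are available to the underlying PROJ library, and the Ben Nevis coordinates used are approximate.

```python
# Convert WGS 84 longitude/latitude (EPSG:4326) to National Grid eastings/northings
# (EPSG:27700, the code listed above).
from pyproj import Transformer

to_osgb = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
easting, northing = to_osgb.transform(-5.0035, 56.7969)  # approximate Ben Nevis lon/lat
print(round(easting), round(northing))  # expected to be near 216600, 771200
```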
See also
Ordnance Datum Newlyn
Irish grid reference system
Maidenhead Locator System
United States National Grid
World Geodetic System
Custom units of measure
Tetrad
Hectad
Myriad
Notes
References
External links
Ordnance Survey A guide to coordinate systems in Great Britain: An introduction to mapping coordinate systems and the use of GPS datasets with Ordnance Survey mapping; Version 3.6, 2020 [Retrieved 19 February 2022].
Ordnance Survey's Grid script: a brief introduction to the National Grid Reference; Version November 2011 [Retrieved 13 February 2014].
- Multiple-format co-ordinate transformer for Great Britain & Channel Islands
(JavaScript source code)
Web utility to find a UK grid reference
LatLong <> OS Grid Ref converts & presents in many formats, generates specific links to that location for several useful map web pages - 1840–present. LatLong WGS84 <> GB, Ireland (inc NI) and Channel Islands (30U) GR formats recognised. Distance measure for dog-leg routes & area calculations.
Open source dataset (in GeoPackage format) of the British National Grids at various resolutions, available for download from Ordnance Survey's GitHub.
Geography of the United Kingdom
Maps from Ordnance Survey
Geographic coordinate systems
Land surveying systems
Geodesy
Geocodes
Surveying of the United Kingdom | Ordnance Survey National Grid | [
"Mathematics"
] | 2,227 | [
"Geographic coordinate systems",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
360,036 | https://en.wikipedia.org/wiki/Free%20product | In mathematics, specifically group theory, the free product is an operation that takes two groups G and H and constructs a new The result contains both G and H as subgroups, is generated by the elements of these subgroups, and is the “universal” group having these properties, in the sense that any two homomorphisms from G and H into a group K factor uniquely through a homomorphism from to K. Unless one of the groups G and H is trivial, the free product is always infinite. The construction of a free product is similar in spirit to the construction of a free group (the universal group with a given set of generators).
The free product is the coproduct in the category of groups. That is, the free product plays the same role in group theory that disjoint union plays in set theory, or that the direct sum plays in module theory. Even if the groups are commutative, their free product is not, unless one of the two groups is the trivial group. Therefore, the free product is not the coproduct in the category of abelian groups.
The free product is important in algebraic topology because of van Kampen's theorem, which states that the fundamental group of the union of two path-connected topological spaces whose intersection is also path-connected is always an amalgamated free product of the fundamental groups of the spaces. In particular, the fundamental group of the wedge sum of two spaces (i.e. the space obtained by joining two spaces together at a single point) is, under certain conditions given in the Seifert–van Kampen theorem, the free product of the fundamental groups of the spaces.
Free products are also important in Bass–Serre theory, the study of groups acting by automorphisms on trees. Specifically, any group acting with finite vertex stabilizers on a tree may be constructed from finite groups using amalgamated free products and HNN extensions. Using the action of the modular group on a certain tessellation of the hyperbolic plane, it follows from this theory that the modular group is isomorphic to the free product of cyclic groups of orders 4 and 6 amalgamated over a cyclic group of order 2.
Construction
If G and H are groups, a word on G and H is a sequence of the form
s1 s2 ⋯ sn,
where each si is either an element of G or an element of H. Such a word may be reduced using the following operations:
Remove an instance of the identity element (of either G or H).
Replace a pair of the form g1g2 by its product in G, or a pair h1h2 by its product in H.
Every reduced word is either the empty sequence, contains exactly one element of G or H, or is an alternating sequence of elements of G and elements of H, e.g.
g1 h1 g2 h2 ⋯ gk hk.
The free product G ∗ H is the group whose elements are the reduced words in G and H, under the operation of concatenation followed by reduction.
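A minimal computational sketch of this definition represents elements as alternating lists of (factor, element) pairs and multiplies them by concatenation followed by reduction. The two factors used here are the cyclic groups of orders 4 and 5 from the Examples section below; the representation is an illustrative assumption, not a library interface.

```python
# Elements of G * H as reduced words; G = Z/4 and H = Z/5, written additively as ints.
ORDERS = {"G": 4, "H": 5}

def reduce_word(word):
    out = []
    for factor, g in word:
        g %= ORDERS[factor]
        if g == 0:                        # drop identity elements
            continue
        if out and out[-1][0] == factor:  # merge adjacent letters from the same factor
            merged = (out[-1][1] + g) % ORDERS[factor]
            out.pop()
            if merged:
                out.append((factor, merged))
        else:
            out.append((factor, g))
    return out

def multiply(w1, w2):
    return reduce_word(w1 + w2)

x = [("G", 1)]  # generator of G
y = [("H", 1)]  # generator of H
print(multiply(x, y))                               # [('G', 1), ('H', 1)]
print(multiply([("G", 3)], [("G", 1), ("H", 2)]))   # G-letters cancel (3+1=0 mod 4): [('H', 2)]
```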
For example, if G is the infinite cyclic group generated by x, and H is the infinite cyclic group generated by y, then every element of G ∗ H is an alternating product of powers of x with powers of y. In this case, G ∗ H is isomorphic to the free group generated by x and y.
Presentation
Suppose that
G = ⟨SG | RG⟩
is a presentation for G (where SG is a set of generators and RG is a set of relations), and suppose that
H = ⟨SH | RH⟩
is a presentation for H. Then
G ∗ H = ⟨SG ∪ SH | RG ∪ RH⟩.
That is, G ∗ H is generated by the generators for G together with the generators for H, with relations consisting of the relations from G together with the relations from H (assume here no notational clashes so that these are in fact disjoint unions).
Examples
For example, suppose that G is a cyclic group of order 4,
G = ⟨x | x⁴ = 1⟩,
and H is a cyclic group of order 5,
H = ⟨y | y⁵ = 1⟩.
Then G ∗ H is the infinite group
G ∗ H = ⟨x, y | x⁴ = y⁵ = 1⟩.
Because there are no relations in a free group, the free product of free groups is always a free group. In particular,
Fm ∗ Fn ≅ Fm+n,
where Fn denotes the free group on n generators.
Another example is the modular group PSL(2, Z). It is isomorphic to the free product of two cyclic groups:
PSL(2, Z) ≅ (Z/2Z) ∗ (Z/3Z).
Generalization: Free product with amalgamation
The more general construction of free product with amalgamation is correspondingly a special kind of pushout in the same category. Suppose G and H are given as before, along with monomorphisms (i.e. injective group homomorphisms):
φ : F → G
and
ψ : F → H,
where F is some arbitrary group. Start with the free product G ∗ H and adjoin as relations
φ(f)ψ(f)⁻¹ = 1
for every f in F. In other words, take the smallest normal subgroup N of G ∗ H containing all elements on the left-hand side of the above equation, which are tacitly being considered in G ∗ H by means of the inclusions of G and H in their free product. The free product with amalgamation of G and H, with respect to φ and ψ, is the quotient group
(G ∗ H) / N.
The amalgamation has forced an identification between φ(F) in G with ψ(F) in H, element by element. This is the construction needed to compute the fundamental group of two connected spaces joined along a path-connected subspace, with F taking the role of the fundamental group of the subspace. See: Seifert–van Kampen theorem.
Karrass and Solitar have given a description of the subgroups of a free product with amalgamation. For example, the homomorphisms from G and H to the quotient group (G ∗ H) / N that are induced by φ and ψ are both injective, as is the induced homomorphism from F.
Free products with amalgamation and a closely related notion of HNN extension are basic building blocks in Bass–Serre theory of groups acting on trees.
In other branches
One may similarly define free products of other algebraic structures than groups, including algebras over a field. Free products of algebras of random variables play the same role in defining "freeness" in the theory of free probability that Cartesian products play in defining statistical independence in classical probability theory.
See also
Direct product of groups
Coproduct
Graph of groups
Kurosh subgroup theorem
Normal form for free groups and free product of groups
Universal property
References
Group products
Algebraic topology
Free algebraic structures | Free product | [
"Mathematics"
] | 1,227 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Algebraic structures",
"Free algebraic structures"
] |
360,039 | https://en.wikipedia.org/wiki/Georges-Louis%20Leclerc%2C%20Comte%20de%20Buffon | Georges-Louis Leclerc, Comte de Buffon (; 7 September 1707 – 16 April 1788) was a French naturalist, mathematician, and cosmologist. He held the position of intendant (director) at the Jardin du Roi, now called the Jardin des plantes.
Buffon's works influenced the next two generations of naturalists, including two prominent French scientists Jean-Baptiste Lamarck and Georges Cuvier. Buffon published thirty-six quarto volumes of his Histoire Naturelle during his lifetime, with additional volumes based on his notes and further research being published in the two decades following his death.
Ernst Mayr wrote that "Truly, Buffon was the father of all thought in natural history in the second half of the 18th century". Credited with being one of the first naturalists to recognize ecological succession, he was later forced by the theology committee at the University of Paris to recant his theories about geological history and animal evolution because they contradicted the biblical narrative of Creation.
Early life
Georges Louis Leclerc (later Comte de Buffon) was born at Montbard, in the province of Burgundy, to Benjamin François Leclerc, a minor local official in charge of the salt tax, and Anne-Christine Marlin, also from a family of civil servants. Georges was named after his mother's uncle (his godfather) Georges Blaisot, the tax-farmer of the Duke of Savoy for all of Sicily. In 1714 Blaisot died childless, leaving a considerable fortune to his seven-year-old godson. Benjamin Leclerc then purchased an estate containing the nearby village of Buffon and moved the family to Dijon, acquiring various offices there as well as a seat in the Dijon Parlement.
Georges attended the Jesuit College of Godrans in Dijon from the age of ten onwards. From 1723 to 1726 he then studied law in Dijon, the prerequisite for continuing the family tradition in civil service. In 1728 Georges left Dijon to study mathematics and medicine at the University of Angers in France. At Angers in 1730 he made the acquaintance of the young English Duke of Kingston, who was on his grand tour of Europe, and traveled with him on a large and expensive entourage for a year and a half through southern France and parts of Italy.
There are persistent but completely undocumented rumors from this period about duels, abductions and secret trips to England. In 1732 after the death of his mother and before the impending remarriage of his father, Georges left Kingston and returned to Dijon to secure his inheritance. Having added 'de Buffon' to his name while traveling with the Duke, he repurchased the village of Buffon, which his father had meanwhile sold off. With a fortune of about 80,000 livres (at the time, worth nearly 27 kilograms of gold), Buffon set himself up in Paris to pursue science, at first primarily mathematics and mechanics, and the increase of his fortune.
Career
In 1732 he moved to Paris, where he made the acquaintance of Voltaire and other intellectuals. He lived in the Faubourg Saint-Germain, with Gilles-François Boulduc, first apothecary of the King, professor of chemistry at the Royal Garden of Plants, member of the Academy of Sciences. He first made his mark in the field of mathematics and, in his memoir on the game of fair-square (franc-carreau), introduced differential and integral calculus into probability theory; the problem of Buffon's needle in probability theory is named after him. In 1734 he was admitted to the French Academy of Sciences. During this period he corresponded with the Swiss mathematician Gabriel Cramer.
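The needle problem referred to here admits a short simulation: a needle of length L dropped on a floor ruled with parallel lines a distance D apart (with L ≤ D) crosses a line with probability 2L/(πD), so counting crossings gives a Monte Carlo estimate of π. The sketch below is purely illustrative and not part of Buffon's own treatment.

```python
# Monte Carlo estimate of pi via Buffon's needle.
import math
import random

def estimate_pi(drops: int = 1_000_000, needle: float = 1.0, spacing: float = 1.0) -> float:
    crossings = 0
    for _ in range(drops):
        centre = random.uniform(0.0, spacing / 2)   # distance from needle centre to nearest line
        angle = random.uniform(0.0, math.pi / 2)    # acute angle between needle and the lines
        if centre <= (needle / 2) * math.sin(angle):
            crossings += 1
    return 2 * needle * drops / (spacing * crossings)

print(estimate_pi())  # typically prints a value close to 3.14
```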
His protector Maurepas had asked the Academy of Sciences to do research on wood for the construction of ships in 1733. Soon afterward, Buffon began a long-term study, performing some of the most comprehensive tests to date on the mechanical properties of wood. Included were a series of tests to compare the properties of small specimens with those of large members. After carefully testing more than a thousand small specimens without knots or other defects, Buffon concluded that it was not possible to extrapolate to the properties of full-size timbers, and he began a series of tests on full-size structural members.
In 1739 he was appointed head of the Parisian Jardin du Roi with the help of Maurepas; he held this position to the end of his life. Buffon was instrumental in transforming the Jardin du Roi into a major research center and museum. He also enlarged it, arranging the purchase of adjoining plots of land and acquiring new botanical and zoological specimens from all over the world.
Thanks to his talent as a writer, he was invited to join Paris's second great academy, the Académie Française, in 1753, and then in 1768 he was elected to the American Philosophical Society. In his Discours sur le style ("Discourse on Style"), pronounced before the Académie française, he said, "Writing well consists of thinking, feeling and expressing well, of clarity of mind, soul and taste ... The style is the man himself" ("Le style est l'homme même"). Unfortunately for him, Buffon's reputation as a literary stylist also gave ammunition to his detractors: the mathematician Jean le Rond d'Alembert, for example, called him "the great phrase-monger".
In 1752 Buffon married Marie-Françoise de Saint-Belin-Malain, the daughter of an impoverished noble family from Burgundy, who had been enrolled in the convent school run by his sister. Madame de Buffon's second child, a son born in 1764, survived childhood; she herself died in 1769. When in 1772 Buffon became seriously ill and the promise that his son (then only 8) should succeed him as director of the Jardin became clearly impracticable and was withdrawn, the King raised Buffon's estates in Burgundy to the status of a county – and thus Buffon (and his son) became a count. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1782. Buffon died in Paris in 1788.
He was buried in a chapel adjacent to the church of Sainte-Urse Montbard; during the French Revolution, his tomb was broken into and the lead that covered the coffin was ransacked to produce bullets. His son, Georges-Louis-Marie Buffon (often called Buffonet), was guillotined on 10 July 1794. Buffon's heart was initially saved, as it was guarded by Suzanne Necker (wife of Jacques Necker), but was later lost. Today, only Buffon's cerebellum remains, as it is kept in the base of the statue by Pajou that Louis XVI had commissioned in his honor in 1776, located at the Museum of Natural History in Paris.
His Histoire naturelle was also a source of inspiration for the painters of the Sèvres factory, giving rise to porcelain services called Buffon. The name of the different species, faithfully reproduced, is inscribed on the back of each piece. Several "Buffon services" were produced during the reign of Louis XVI; the first was intended for the Count of Artois, in 1782.
Histoire Naturelle
Buffon's Histoire naturelle, générale et particulière (1749–1788: in 36 volumes; an additional volume based on his notes appeared in 1789) was originally intended to cover all three "kingdoms" of nature but the Histoire naturelle ended up being limited to the animal and mineral kingdoms, and the animals covered were only the birds and quadrupeds. "Written in a brilliant style, this work was read ... by every educated person in Europe". Those who assisted him in the production of this great work included Louis Jean-Marie Daubenton, Philibert Guéneau de Montbeillard, and Gabriel-Léopold Bexon, along with numerous artists. Buffon's Histoire naturelle was translated into many different languages, making him one of the most widely read authors of the day, a rival to Montesquieu, Rousseau, and Voltaire.
In the opening volumes of the Histoire naturelle Buffon questioned the usefulness of mathematics, criticized Carl Linnaeus's taxonomical approach to natural history, outlined a history of the Earth with little relation to the Biblical account, and proposed a theory of reproduction that ran counter to the prevailing theory of pre-existence. The early volumes were condemned by the Faculty of Theology at the Sorbonne. Buffon published a retraction, but he continued publishing the offending volumes without any change.
In the course of his examination of the animal world, Buffon noted that different regions have distinct plants and animals despite similar environments, a concept later known as Buffon's Law. This is considered to be the first principle of biogeography. He made the suggestion that species may have both "improved" and "degenerated" after dispersing from a center of creation. In volume 14 he argued that all the world's quadrupeds had developed from an original set of just thirty-eight quadrupeds. On this basis, he is sometimes considered a "transformist" and a precursor of Darwin. He also asserted that climate change may have facilitated the worldwide spread of species from their centers of origin. Still, interpreting his ideas on the subject is not simple, for he returned to topics many times in the course of his work.
Buffon originally held that “the animals common both to the old and new world are smaller in the latter,” ascribing this to environmental conditions. Upon meeting Buffon, Thomas Jefferson attempted “to convince him of his error,” noting that “the reindeer could walk under the belly of our moose.” Buffon, who was “absolutely unacquainted” with the moose, asked for a specimen. Jefferson dispatched twenty soldiers to the New Hampshire woods to find a bull moose for Buffon as proof of the "stature and majesty of American quadrupeds". According to Jefferson, the specimen “convinced Mr. Buffon. He promised in his next volume to set these things right."
In Les époques de la nature (1778) Buffon discussed the origins of the Solar System, speculating that the planets had been created by a comet's collision with the Sun. He also suggested that the Earth originated much earlier than 4004 BC, the date determined by Archbishop James Ussher. Basing his figures on the cooling rate of iron tested at his laboratory, the Petit Fontenet, at Montbard, he calculated that the Earth was at least 75,000 years old. Once again, his ideas were condemned by the Sorbonne, and once again he issued a retraction to avoid further problems.
Buffon knew of the existence of extinct species such as mammoths and European rhinos, and some of his assumptions have inspired current models, such as continental drift.
Publications
Histoire naturelle, générale et particulière, 1749–1767. Paris: Imprimerie Royale. Volumes 3, 4, 5, 6, 7, 10, 11, 13, 14, 15.
Anthropological studies
Buffon believed in monogenism, the concept that all humanity has a single origin, and that physical differences arose from adaptation to environmental factors, including climate and diet. He speculated on the possibility that the first humans were dark-skinned Africans, but did not pinpoint the area of human origin beyond delineating it as “the most temperate climate [that] lies between the 40th and 50th degree of latitude.” This geophysical band encompasses portions of Europe, North America, North Africa, Mongolia, and China.
Controversially for a European of his era, Buffon did not believe that Europe was the cradle of human civilization. Instead he stated that Japanese and Chinese culture were “of a very ancient date,” and that Europe “only much later received the light from the East…it is thus in the northern countries of Asia that the stem of human knowledge grew."
Buffon thought that skin color could change in a single lifetime, depending on the conditions of climate and diet. Clarence Glacken suggests that "The environmental changes through human agency described by Buffon were those which were familiar and traditional in the history of Western civilization". However, Buffon also challenged Carl Linnaeus's conception of race as a fixed division. In this sense, Buffon extended his monogenist perspective by grouping dissimilar traits and features into one larger category rather than into fixed divisions. This led him to distinguish race in a broad and a narrow sense: in the broad sense, a race is a large group of people inhabiting a vast region such as a continent, while in the narrow sense it is roughly equivalent to a "nation". This reflects his ambivalence about defining race: he looked at specific traits to differentiate groups while rejecting the idea of categorizing race into fixed divisions, favoring generalization and an emphasis on similarities rather than differences in racial categorization.
Relevance to modern biology
Charles Darwin wrote in his preliminary historical sketch added to the third edition of On the Origin of Species: "Passing over ... Buffon, with whose writings I am not familiar". Then, from the fourth edition onwards, he amended this to say that "the first author who in modern times has treated it [evolution] in a scientific spirit was Buffon. But as his opinions fluctuated greatly at different periods, and as he does not enter on the causes or means of the transformation of species, I need not here enter on details". Buffon's work on degeneration, however, was immensely influential on later scholars but was overshadowed by strong moral overtones.
The paradox of Buffon is that, according to Ernst Mayr:
Buffon wrote about the concept of struggle for existence. He developed a system of heredity which was similar to Darwin's hypothesis of pangenesis. Commenting on Buffon's views, Darwin stated, "If Buffon had assumed that his organic molecules had been formed by each separate unit throughout the body, his view and mine would have been very closely similar."
“Buffon asked most all of the questions that science has since been striving to answer,” the historian Otis Fellows wrote in 1970.
Eponyms of Buffon
Buffon (crater), a lunar impact crater located on the southern hemisphere of the far side of the Moon.
Lycée Buffon, a secondary school in the 15th arrondissement of Paris.
Rue Buffon, a street bordering the Jardin des Plantes in the 5th arrondissement of Paris.
Rue Buffon, a street in Dijon, France.
An asteroid was named (7420) Buffon.
Works
Buffon, Œuvres, ed. S. Schmitt and C. Crémière, Paris: Gallimard, 2007.
Complete works
Vol 1. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roy. Tome I (1749). Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière, Paris: Honoré Champion, 2007, 1376 p. ()
Vol 2. Histoire naturelle, générale et particulière avec la participation du Cabinet du Roy. Tome II. Texte établi, introduit et annoté par Stéphane Schmitt, avec la collaboration de Cédric Crémière, Paris: Honoré Champion, 2008, 808 p. ()
Vol 3. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roy. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome III (1749), Paris: Honoré Champion, 2009, 776 p. ()
Vol 4. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome IV (1753), Paris: Honoré Champion, 2010. 1 vol., 864 p. ()
Vol 5. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome V (1755), Paris: Honoré Champion, 2010. 1 vol., 536 p. ()
Vol 6. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome VI (1756), Paris: Honoré Champion, 2011. 1 vol., 504 p. ()
Vol. 7. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome VII (1758), Paris: Honoré Champion, 2011. 1 vol., 544 p. ()
Vol. 8. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome VIII (1760), Paris: Honoré Champion, 2014, 640 p. ()
Vol. 9. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome IX (1761), Paris: Honoré Champion, 2016, 720 p. ()
Vol. 10. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome X (1763), Paris: Honoré Champion, 2017, 814 p. ()
Vol. 11. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome XI (1764), Paris: Honoré Champion, 2018, 724 p. ()
Vol. 12. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome XII (1764), Paris: Honoré Champion, 2018, 810 p. ()
Vol. 13. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome XIII (1765), Paris: Honoré Champion, 2019, 887 p.
Vol. 14. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome XIV (1768), Paris: Honoré Champion, 2020, 605 p.
Vol. 15. Histoire naturelle, générale et particulière, avec la description du Cabinet du Roi. Texte établi, introduit et annoté par Stéphane Schmitt avec la collaboration de Cédric Crémière. Tome XV (1767), Paris: Honoré Champion, 2021, 764 p.
See also
Buffon's needle
Rejection sampling
Scientific Revolution
Suites à Buffon
References
External links
The Buffon project :
The same, in English: L'histoire naturelle
Buffon's Hypothesis about the Origin of the Earth
Buffon's View of Domestic Cats
Digital text Kyoto University
Buffon's American Degeneracy, from The Academy of Natural Sciences
William Smellie's English Translation of Buffon's Natural History, General and Particular, 3rd Edition
Discours sur le Style – at athena.unige.ch
Gaedike, R.; Groll, E. K. & Taeger, A. 2012: Bibliography of the entomological literature from the beginning until 1863 : online database - version 1.0 - Senckenberg Deutsches Entomologisches Institut.
A collection of high-resolution scans of animal illustrations from several books by Buffon , from the Linda Hall Library
Buffon's Histoire naturelle des époches de la nature , (this ed. published as Histoire naturelle, générale et particulière, avec la description du cabinet du roy, suppl. vol. 5. in 1778) - digital facsimile from the Linda Hall Library
1707 births
1788 deaths
People from Montbard
French naturalists
French ornithologists
18th-century French zoologists
French Roman Catholics
University of Angers (pre-1793) alumni
Proto-evolutionary biologists
Members of the Académie Française
Members of the French Academy of Sciences
Contributors to the Encyclopédie (1751–1772)
Counts of Buffon
French science writers
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
18th-century French male writers
18th-century French mathematicians
Founder fellows of the Royal Society of Edinburgh
French male non-fiction writers
National Museum of Natural History (France) people | Georges-Louis Leclerc, Comte de Buffon | [
"Biology"
] | 4,513 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
360,066 | https://en.wikipedia.org/wiki/Robopsychology | Robopsychology is the study of the personalities and behavior of intelligent machines. The term was coined by Isaac Asimov in the short stories collected in I, Robot, which featured robopsychologist Dr. Susan Calvin, and whose plots largely revolved around the protagonist solving problems connected with intelligent robot behaviour. The term has been also used in some academic studies from the field of psychology and human–computer interactions, and it refers to the study of the psychological consequences of living in societies where the application of robotics is becoming increasingly common.
In real life
Andrea Kuszewski, a self-described robopsychologist, gives the following examples of potential responsibilities for a robopsychologist in Discover.
"Assisting in the design of cognitive architectures
Developing appropriate lesson plans for teaching AI targeted skills
Create guides to help the AI through the learning process
Address any maladaptive machine behaviors
Research the nature of ethics and how it can be taught and/or reinforced
Create new and innovative therapy approaches for the domain of computer-based intelligences"
There is a robopsychology research division at Ars Electronica Futurelab.
The term "robopsychology" has been proposed to indicate a "sub-discipline in psychology to systematically study the psychological corollaries of living in societies where the application of robotic and artificial intelligence (AI) technologies is becoming increasingly common." According to proponents of robopsychology, such a discipline does not currently exist: a systematic review of scientific literature shows that in 2022 there was no psychological sub-discipline dedicated to the study of the effects robots have on people's lives.
A.V. Libin and E.V. Libin define the term as follows: "[it is] a systematic study of compatibility between people and artificial creatures on many different levels [...]. Robotic psychology studies individual differences in people’s interactions with various robots, as well as the diversity of the robots themselves, applying principles of differential psychology to the traditional fields of human factors and human–computer interactions. Moreover, robopsychologists study psychological mechanisms of the animation of the technological entity which result in a unique phenomenon defined as a robot’s “personality.”"
In fiction
As described by Asimov, robopsychology appears to be a mixture of detailed mathematical analysis and traditional psychology, applied to robots. Human psychology is also a part, covering human interaction with robots. This includes the "Frankenstein complex" – the irrational fear that robots (or other creations) will turn on their creator.
See also
Cybernetics
Human-robot interaction
Psychohistory
Three Laws of Robotics
References
Foundation universe
Fictional science
Cybernetics
Robotics
Human–computer interaction | Robopsychology | [
"Engineering"
] | 538 | [
"Human–computer interaction",
"Human–machine interaction",
"Robotics",
"Automation"
] |
360,113 | https://en.wikipedia.org/wiki/Constructive%20proof | In mathematics, a constructive proof is a method of proof that demonstrates the existence of a mathematical object by creating or providing a method for creating the object. This is in contrast to a non-constructive proof (also known as an existence proof or pure existence theorem), which proves the existence of a particular kind of object without providing an example. For avoiding confusion with the stronger concept that follows, such a constructive proof is sometimes called an effective proof.
A constructive proof may also refer to the stronger concept of a proof that is valid in constructive mathematics.
Constructivism is a mathematical philosophy that rejects all proof methods that involve the existence of objects that are not explicitly built. This excludes, in particular, the use of the law of the excluded middle, the axiom of infinity, and the axiom of choice, and induces a different meaning for some terminology (for example, the term "or" has a stronger meaning in constructive mathematics than in classical).
Some non-constructive proofs show that if a certain proposition is false, a contradiction ensues; consequently the proposition must be true (proof by contradiction). However, the principle of explosion (ex falso quodlibet) has been accepted in some varieties of constructive mathematics, including intuitionism.
Constructive proofs can be seen as defining certified mathematical algorithms: this idea is explored in the Brouwer–Heyting–Kolmogorov interpretation of constructive logic, the Curry–Howard correspondence between proofs and programs, and such logical systems as Per Martin-Löf's intuitionistic type theory, and Thierry Coquand and Gérard Huet's calculus of constructions.
A historical example
Until the end of the 19th century, all mathematical proofs were essentially constructive. The first non-constructive constructions appeared with Georg Cantor’s theory of infinite sets, and the formal definition of real numbers.
The first use of non-constructive proofs for solving previously considered problems seems to be Hilbert's Nullstellensatz and Hilbert's basis theorem. From a philosophical point of view, the former is especially interesting, as implying the existence of a well specified object.
The Nullstellensatz may be stated as follows: If f1, ..., fk are polynomials in n indeterminates with complex coefficients, which have no common complex zeros, then there are polynomials g1, ..., gk such that
f1g1 + ⋯ + fkgk = 1.
Such a non-constructive existence theorem was such a surprise for mathematicians of that time that one of them, Paul Gordan, wrote: "this is not mathematics, it is theology".
Twenty-five years later, Grete Hermann provided an algorithm for computing g1, ..., gk, which is not a constructive proof in the strong sense, as she used Hilbert's result. She proved that, if g1, ..., gk exist, they can be found with degrees bounded by a double exponential in the number of indeterminates.
This provides an algorithm, as the problem is reduced to solving a system of linear equations, by considering as unknowns the finite number of coefficients of the gi.
Examples
Non-constructive proofs
First consider the theorem that there are an infinitude of prime numbers. Euclid's proof is constructive. But a common way of simplifying Euclid's proof postulates that, contrary to the assertion in the theorem, there are only a finite number of them, in which case there is a largest one, denoted n. Then consider the number n! + 1 (1 + the product of the first n numbers). Either this number is prime, or all of its prime factors are greater than n. Without establishing a specific prime number, this proves that one exists that is greater than n, contrary to the original postulate.
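Read constructively, the argument above actually yields a procedure: factor n! + 1, and any prime factor found must exceed n. A minimal sketch (inefficient, but explicit):

```python
# Produce an explicit prime greater than n by trial division of n! + 1.
import math

def prime_greater_than(n: int) -> int:
    m = math.factorial(n) + 1
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d          # d divides n! + 1, so d cannot be <= n
        d += 1
    return m                  # m itself is prime

print(prime_greater_than(6))  # 6! + 1 = 721 = 7 * 103, so this prints 7
```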
Now consider the theorem "there exist irrational numbers a and b such that a^b is rational." This theorem can be proven by using both a constructive proof, and a non-constructive proof.
The following 1953 proof by Dov Jarden has been widely used as an example of a non-constructive proof since at least 1970:
CURIOSA
339. A Simple Proof That a Power of an Irrational Number to an Irrational Exponent May Be Rational.
√2^√2 is either rational or irrational. If it is rational, our statement is proved. If it is irrational, (√2^√2)^√2 = 2 proves our statement.
Dov Jarden Jerusalem
In a bit more detail:
Recall that √2 is irrational, and 2 is rational. Consider the number q = √2^√2. Either it is rational or it is irrational.
If q is rational, then the theorem is true, with a and b both being √2.
If q is irrational, then the theorem is true, with a being √2^√2 and b being √2, since
(√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2.
At its core, this proof is non-constructive because it relies on the statement "Either q is rational or it is irrational"—an instance of the law of excluded middle, which is not valid within a constructive proof. The non-constructive proof does not construct an example a and b; it merely gives a number of possibilities (in this case, two mutually exclusive possibilities) and shows that one of them—but does not show which one—must yield the desired example.
As it turns out, √2^√2 is irrational because of the Gelfond–Schneider theorem, but this fact is irrelevant to the correctness of the non-constructive proof.
Constructive proofs
A constructive proof of the theorem that a power of an irrational number to an irrational exponent may be rational gives an actual example, such as a = √2 and b = log₂ 9, for which a^b = √2^(log₂ 9) = 3.
The square root of 2 is irrational, and 3 is rational. log₂ 9 is also irrational: if it were equal to m/n (with m and n integers), then, by the properties of logarithms, 9ⁿ would be equal to 2ᵐ, but the former is odd, and the latter is even.
A more substantial example is the graph minor theorem. A consequence of this theorem is that a graph can be drawn on the torus if, and only if, none of its minors belong to a certain finite set of "forbidden minors". However, the proof of the existence of this finite set is not constructive, and the forbidden minors are not actually specified. They are still unknown.
Brouwerian counterexamples
In constructive mathematics, a statement may be disproved by giving a counterexample, as in classical mathematics. However, it is also possible to give a Brouwerian counterexample to show that the statement is non-constructive. This sort of counterexample shows that the statement implies some principle that is known to be non-constructive. If it can be proved constructively that the statement implies some principle that is not constructively provable, then the statement itself cannot be constructively provable.
For example, a particular statement may be shown to imply the law of the excluded middle. An example of a Brouwerian counterexample of this type is Diaconescu's theorem, which shows that the full axiom of choice is non-constructive in systems of constructive set theory, since the axiom of choice implies the law of excluded middle in such systems. The field of constructive reverse mathematics develops this idea further by classifying various principles in terms of "how nonconstructive" they are, by showing they are equivalent to various fragments of the law of the excluded middle.
Brouwer also provided "weak" counterexamples. Such counterexamples do not disprove a statement, however; they only show that, at present, no constructive proof of the statement is known. One weak counterexample begins by taking some unsolved problem of mathematics, such as Goldbach's conjecture, which asks whether every even natural number larger than 4 is the sum of two primes. Define a sequence a(n) of rational numbers as follows:
a(n) = (1/2)ⁿ if every even natural number m with 4 < m ≤ n is the sum of two primes, and a(n) = (1/2)ᵏ otherwise, where k is the least even natural number larger than 4 that is not the sum of two primes.
For each n, the value of a(n) can be determined by exhaustive search, and so a is a well defined sequence, constructively. Moreover, because a is a Cauchy sequence with a fixed rate of convergence, a converges to some real number α, according to the usual treatment of real numbers in constructive mathematics.
Several facts about the real number α can be proved constructively. However, based on the different meaning of the words in constructive mathematics, if there is a constructive proof that "α = 0 or α ≠ 0" then this would mean that there is a constructive proof of Goldbach's conjecture (in the former case) or a constructive proof that Goldbach's conjecture is false (in the latter case). Because no such proof is known, the quoted statement must also not have a known constructive proof. However, it is entirely possible that Goldbach's conjecture may have a constructive proof (as we do not know at present whether it does), in which case the quoted statement would have a constructive proof as well, albeit one that is unknown at present. The main practical use of weak counterexamples is to identify the "hardness" of a problem. For example, the counterexample just shown shows that the quoted statement is "at least as hard to prove" as Goldbach's conjecture. Weak counterexamples of this sort are often related to the limited principle of omniscience.
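The finite search that determines each a(n) can be made concrete. The Python sketch below is an illustration only; the helper names is_prime and violates_goldbach, and the specific normalization by powers of 2, are choices made here to match the formulation given above.

def is_prime(m):
    # trial division; adequate for the small inputs used here
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def violates_goldbach(e):
    # True if the even number e is NOT a sum of two primes
    return not any(is_prime(p) and is_prime(e - p) for p in range(2, e // 2 + 1))

def a(n):
    # a(n) = 2**-k for the first even k with 4 < k <= n violating Goldbach,
    # and a(n) = 2**-n if no such k exists; each value needs only a finite search.
    for k in range(6, n + 1, 2):
        if violates_goldbach(k):
            return 2.0 ** -k
    return 2.0 ** -n

print([a(n) for n in range(1, 11)])   # 2**-n for every n checked so far

Each a(n) is computed by this finite procedure, so the sequence is constructively well defined; what is not available, absent a proof or refutation of Goldbach's conjecture, is a constructive decision between "α = 0" and "α ≠ 0" for the limit α.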
See also
Constructivism (philosophy of mathematics)
Errett Bishop - author of the book "Foundations of Constructive Analysis".
Non-constructive algorithm existence proofs
Probabilistic method
References
Further reading
J. Franklin and A. Daoud (2011) Proof in Mathematics: An Introduction. Kew Books, ch. 4
Hardy, G. H. & Wright, E. M. (1979) An Introduction to the Theory of Numbers (Fifth Edition). Oxford University Press.
Anne Sjerp Troelstra and Dirk van Dalen (1988) "Constructivism in Mathematics: Volume 1" Elsevier Science.
External links
Weak counterexamples by Mark van Atten, Stanford Encyclopedia of Philosophy
Mathematical proofs
Constructivism (mathematics) | Constructive proof | [
"Mathematics"
] | 1,977 | [
"Mathematical logic",
"Constructivism (mathematics)",
"nan"
] |
360,136 | https://en.wikipedia.org/wiki/Maximal%20torus | In the mathematical theory of compact Lie groups a special role is played by torus subgroups, in particular by the maximal torus subgroups.
A torus in a compact Lie group G is a compact, connected, abelian Lie subgroup of G (and therefore isomorphic to the standard torus $T^n$). A maximal torus is one which is maximal among such subgroups. That is, T is a maximal torus if for any torus T′ containing T we have T = T′. Every torus is contained in a maximal torus simply by dimensional considerations. A noncompact Lie group need not have any nontrivial tori (e.g. $\mathbb{R}^n$).
The dimension of a maximal torus in G is called the rank of G. The rank is well-defined since all maximal tori turn out to be conjugate. For semisimple groups the rank is equal to the number of nodes in the associated Dynkin diagram.
Examples
The unitary group U(n) has as a maximal torus the subgroup of all diagonal matrices. That is, $T = \left\{ \operatorname{diag}\left(e^{i\theta_1}, \dots, e^{i\theta_n}\right) : \theta_1, \dots, \theta_n \in \mathbb{R} \right\}.$
T is clearly isomorphic to the product of n circles, so the unitary group U(n) has rank n. A maximal torus in the special unitary group SU(n) ⊂ U(n) is just the intersection of T and SU(n) which is a torus of dimension n − 1.
A maximal torus in the special orthogonal group SO(2n) is given by the set of all simultaneous rotations in any fixed choice of n pairwise orthogonal planes (i.e., two-dimensional vector spaces). Concretely, one maximal torus consists of all block-diagonal matrices with $2 \times 2$ diagonal blocks, where each diagonal block is a rotation matrix.
This is also a maximal torus in the group SO(2n+1) where the action fixes the remaining direction. Thus both SO(2n) and SO(2n+1) have rank n. For example, in the rotation group SO(3) the maximal tori are given by rotations about a fixed axis.
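In matrix form, the block-diagonal description above reads as follows (a sketch added for illustration; $R(\theta)$ denotes the standard $2 \times 2$ rotation matrix):

$$T = \left\{ \operatorname{diag}\big(R(\theta_1), \dots, R(\theta_n)\big) : \theta_1, \dots, \theta_n \in \mathbb{R} \right\},
\qquad
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$

with an extra diagonal entry equal to 1 appended in the case of SO(2n+1).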
The symplectic group Sp(n) has rank n. A maximal torus is given by the set of all diagonal matrices whose entries all lie in a fixed complex subalgebra of H.
Properties
Let G be a compact, connected Lie group and let $\mathfrak{g}$ be the Lie algebra of G. The first main result is the torus theorem, which may be formulated as follows:
Torus theorem: If T is one fixed maximal torus in G, then every element of G is conjugate to an element of T.
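For the unitary group this conjugacy can be checked numerically. The sketch below (using NumPy and SciPy; an illustration added here, not part of the original statement) draws a random element of U(3) and exhibits a unitary conjugation taking it into the diagonal maximal torus:

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(A)                      # Q is a random element of U(3)

T, V = schur(Q, output='complex')           # Schur decomposition: Q = V T V*, with V unitary
print(np.allclose(T, np.diag(np.diag(T))))  # True: T is (numerically) diagonal, i.e. a torus element
print(np.allclose(np.abs(np.diag(T)), 1))   # True: the diagonal entries lie on the unit circle
print(np.allclose(V @ T @ V.conj().T, Q))   # True: Q is conjugate, within U(3), to a torus element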
This theorem has the following consequences:
All maximal tori in G are conjugate.
All maximal tori have the same dimension, known as the rank of G.
A maximal torus in G is a maximal abelian subgroup, but the converse need not hold.
The maximal tori in G are exactly the Lie subgroups corresponding to the maximal abelian subalgebras of $\mathfrak{g}$ (cf. Cartan subalgebra).
Every element of G lies in some maximal torus; thus, the exponential map for G is surjective.
If G has dimension n and rank r then n − r is even.
Root system
If T is a maximal torus in a compact Lie group G, one can define a root system as follows. The roots are the weights for the adjoint action of T on the complexified Lie algebra of G. To be more explicit, let $\mathfrak{t}$ denote the Lie algebra of T, let $\mathfrak{g}$ denote the Lie algebra of G, and let $\mathfrak{g}_{\mathbb{C}}$ denote the complexification of $\mathfrak{g}$. Then we say that an element $\alpha \in \mathfrak{t}$ is a root for G relative to T if $\alpha \neq 0$ and there exists a nonzero $X \in \mathfrak{g}_{\mathbb{C}}$ such that
$$\operatorname{Ad}_{e^{H}}(X) = e^{i\langle \alpha, H \rangle} X$$
for all $H \in \mathfrak{t}$. Here $\langle \cdot, \cdot \rangle$ is a fixed inner product on $\mathfrak{g}$ that is invariant under the adjoint action of connected compact Lie groups.
The root system, as a subset of the Lie algebra of T, has all the usual properties of a root system, except that the roots may not span . The root system is a key tool in understanding the classification and representation theory of G.
Weyl group
Given a torus T (not necessarily maximal), the Weyl group of G with respect to T can be defined as the normalizer of T modulo the centralizer of T. That is,
Fix a maximal torus in G; then the corresponding Weyl group is called the Weyl group of G (it depends up to isomorphism on the choice of T).
The first two major results about the Weyl group are as follows.
The centralizer of T in G is equal to T, so the Weyl group is equal to N(T)/T.
The Weyl group is generated by reflections about the roots of the associated Lie algebra. Thus, the Weyl group of T is isomorphic to the Weyl group of the root system of the Lie algebra of G.
We now list some consequences of these main results.
Two elements in T are conjugate if and only if they are conjugate by an element of W. That is, each conjugacy class of G intersects T in exactly one Weyl orbit. In fact, the space of conjugacy classes in G is homeomorphic to the orbit space T/W.
The Weyl group acts by (outer) automorphisms on T (and its Lie algebra).
The identity component of the normalizer of T is also equal to T. The Weyl group is therefore equal to the component group of N(T).
The Weyl group is finite.
The representation theory of G is essentially determined by T and W.
As an example, consider the case $G = U(n)$, with $T$ being the diagonal subgroup of $G$. Then $x \in G$ belongs to $N(T)$ if and only if $x$ maps each standard basis element $e_i$ to a multiple of some other standard basis element $e_j$, that is, if and only if $x$ permutes the standard basis elements, up to multiplication by some constants. The Weyl group in this case is then the permutation group on $n$ elements.
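For instance, in the case $n = 2$ the permutation matrix below lies in $N(T)$, and conjugation by it interchanges the two diagonal entries, so the Weyl group is the symmetric group $S_2$ (a small worked check added for illustration):

$$w = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
w \begin{pmatrix} e^{i\theta_1} & 0 \\ 0 & e^{i\theta_2} \end{pmatrix} w^{-1}
= \begin{pmatrix} e^{i\theta_2} & 0 \\ 0 & e^{i\theta_1} \end{pmatrix}.$$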
Weyl integral formula
Suppose f is a continuous function on G. Then the integral over G of f with respect to the normalized Haar measure dg may be computed as follows:
$$\int_G f(g)\,dg = \frac{1}{|W|} \int_T \left|\Delta(t)\right|^{2} \left( \int_{G/T} f\!\left(y t y^{-1}\right) d[y] \right) dt,$$
where $d[y]$ is the normalized volume measure on the quotient manifold $G/T$ and $dt$ is the normalized Haar measure on T. Here Δ is given by the Weyl denominator formula and $|W|$ is the order of the Weyl group. An important special case of this result occurs when f is a class function, that is, a function invariant under conjugation. In that case, we have
$$\int_G f(g)\,dg = \frac{1}{|W|} \int_T f(t)\left|\Delta(t)\right|^{2} dt.$$
Consider as an example the case , with being the diagonal subgroup. Then the Weyl integral formula for class functions takes the following explicit form:
Here , the normalized Haar measure on is , and denotes the diagonal matrix with diagonal entries and .
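Assuming the intended example is $G = \mathrm{SU}(2)$ with $T$ its diagonal subgroup (the standard textbook case; the specific group above is not stated here), the class-function formula works out as follows, where $f(\theta)$ abbreviates $f(\operatorname{diag}(e^{i\theta}, e^{-i\theta}))$:

$$\int_{\mathrm{SU}(2)} f(g)\,dg
= \frac{1}{2}\cdot\frac{1}{2\pi}\int_0^{2\pi} f(\theta)\,\bigl|e^{i\theta} - e^{-i\theta}\bigr|^{2}\,d\theta
= \frac{2}{\pi}\int_0^{\pi} f(\theta)\,\sin^{2}\theta\, d\theta,$$

since here $|W| = 2$ and $\bigl|e^{i\theta} - e^{-i\theta}\bigr|^{2} = 4\sin^{2}\theta$.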
See also
Compact group
Cartan subgroup
Cartan subalgebra
Toral Lie algebra
Bruhat decomposition
Weyl character formula
Representation theory of a connected compact Lie group
References
Lie groups
Representation theory of Lie groups | Maximal torus | [
"Mathematics"
] | 1,427 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
360,243 | https://en.wikipedia.org/wiki/Function%20of%20several%20complex%20variables | The theory of functions of several complex variables is the branch of mathematics dealing with functions defined on the complex coordinate space $\mathbb{C}^n$, that is, $n$-tuples of complex numbers. The field dealing with the properties of these functions is called several complex variables (and analytic space), which the Mathematics Subject Classification has as a top-level heading.
As in complex analysis of functions of one variable, which is the case n = 1, the functions studied are holomorphic or complex analytic, so that, locally, they are power series in the variables $z_i$. Equivalently, they are locally uniform limits of polynomials, or locally square-integrable solutions to the n-dimensional Cauchy–Riemann equations. For one complex variable, every domain ($D \subset \mathbb{C}$) is the domain of holomorphy of some function; in other words, every domain has a function for which it is the domain of holomorphy. For several complex variables this is not the case: there exist domains ($D \subset \mathbb{C}^n$, $n \ge 2$) that are not the domain of holomorphy of any function, so a domain is not always a domain of holomorphy, and the domain of holomorphy is one of the themes in this field. Patching the local data of meromorphic functions, i.e. the problem of creating a global meromorphic function from zeros and poles, is called the Cousin problem. Also, the interesting phenomena that occur in several complex variables are fundamentally important to the study of compact complex manifolds and complex projective varieties, and this has a different flavour from complex analytic geometry in $\mathbb{C}^n$ or on Stein manifolds; the former is much more similar to the study of algebraic varieties, that is, to algebraic geometry, than to complex analytic geometry.
Historical perspective
Many examples of such functions were familiar in nineteenth-century mathematics: abelian functions, theta functions, and some hypergeometric series, and also, as an example of an inverse problem, the Jacobi inversion problem. Naturally, the same function of one variable considered as depending on some complex parameter is also a candidate. The theory, however, for many years did not become a full-fledged field of mathematical analysis, since its characteristic phenomena had not been uncovered. The Weierstrass preparation theorem would now be classed as commutative algebra; it did justify the local picture, ramification, which addresses the generalization of the branch points of Riemann surface theory.
With work of Friedrich Hartogs, Pierre Cousin, E. E. Levi, and of Kiyoshi Oka in the 1930s, a general theory began to emerge; others working in the area at the time were Heinrich Behnke, Peter Thullen, Karl Stein, Wilhelm Wirtinger and Francesco Severi. Hartogs proved some basic results, such as: every isolated singularity of an analytic function is removable whenever $n > 1$. Naturally the analogues of contour integrals will be harder to handle: when $n = 2$, an integral surrounding a point should be over a three-dimensional manifold (since we are in four real dimensions), while iterating contour (line) integrals over two separate complex variables should come to a double integral over a two-dimensional surface. This means that the residue calculus will have to take a very different character.
After 1945 important work in France, in the seminar of Henri Cartan, and in Germany with Hans Grauert and Reinhold Remmert, quickly changed the picture of the theory. A number of issues were clarified, in particular that of analytic continuation. Here a major difference is evident from the one-variable theory: while for every open connected set D in $\mathbb{C}$ we can find a function that will nowhere continue analytically over the boundary, that cannot be said for $n > 1$. In fact the D of that kind are rather special in nature (especially in complex coordinate spaces $\mathbb{C}^n$ and Stein manifolds, satisfying a condition called pseudoconvexity). The natural domains of definition of functions, continued to the limit, are called Stein manifolds, and their nature is to make sheaf cohomology groups vanish; on the other hand, the Grauert–Riemenschneider vanishing theorem is known as a similar result for compact complex manifolds, and the Grauert–Riemenschneider conjecture is a special case of the conjecture of Narasimhan. In fact it was the need to put (in particular) the work of Oka on a clearer basis that led quickly to the consistent use of sheaves for the formulation of the theory (with major repercussions for algebraic geometry, in particular from Grauert's work).
From this point onwards there was a foundational theory, which could be applied to analytic geometry, automorphic forms of several variables, and partial differential equations. The deformation theory of complex structures and complex manifolds was described in general terms by Kunihiko Kodaira and D. C. Spencer. The celebrated paper GAGA of Serre pinned down the crossover point from géometrie analytique to géometrie algébrique.
C. L. Siegel was heard to complain that the new theory of functions of several complex variables had few functions in it, meaning that the special function side of the theory was subordinated to sheaves. The interest for number theory, certainly, is in specific generalizations of modular forms. The classical candidates are the Hilbert modular forms and Siegel modular forms. These days these are associated to algebraic groups (respectively the Weil restriction from a totally real number field of , and the symplectic group), for which it happens that automorphic representations can be derived from analytic functions. In a sense this doesn't contradict Siegel; the modern theory has its own, different directions.
Subsequent developments included the hyperfunction theory, and the edge-of-the-wedge theorem, both of which had some inspiration from quantum field theory. There are a number of other fields, such as Banach algebra theory, that draw on several complex variables.
The complex coordinate space
The complex coordinate space $\mathbb{C}^n$ is the Cartesian product of $n$ copies of $\mathbb{C}$, and when $\mathbb{C}^n$ is a domain of holomorphy, it can be regarded as a Stein manifold, and as a more generalized Stein space. $\mathbb{C}^n$ is also considered to be a complex projective variety, a Kähler manifold, etc. It is also an $n$-dimensional vector space over the complex numbers, which gives its dimension $2n$ over $\mathbb{R}$. Hence, as a set and as a topological space, $\mathbb{C}^n$ may be identified with the real coordinate space $\mathbb{R}^{2n}$ and its topological dimension is thus $2n$.
In coordinate-free language, any vector space over complex numbers may be thought of as a real vector space of twice as many dimensions, where a complex structure is specified by a linear operator (such that ) which defines multiplication by the imaginary unit .
Any such space, as a real space, is oriented. On the complex plane thought of as a Cartesian plane, multiplication by a complex number $w = a + bi$ may be represented by the real matrix
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix},$$
with determinant
$$a^2 + b^2 = |w|^2.$$
Likewise, if one expresses any finite-dimensional complex linear operator as a real matrix (which will be composed from 2 × 2 blocks of the aforementioned form), then its determinant equals to the square of absolute value of the corresponding complex determinant. It is a non-negative number, which implies that the (real) orientation of the space is never reversed by a complex operator. The same applies to Jacobians of holomorphic functions from to .
Holomorphic functions
Definition
A function f defined on a domain and with values in is said to be holomorphic at a point if it is complex-differentiable at this point, in the sense that there exists a complex linear map such that
The function f is said to be holomorphic if it is holomorphic at all points of its domain of definition D.
If f is holomorphic, then all the partial maps :
are holomorphic as functions of one complex variable: we say that f is holomorphic in each variable separately. Conversely, if f is holomorphic in each variable separately, then f is in fact holomorphic: this is known as Hartogs's theorem, or as Osgood's lemma under the additional hypothesis that f is continuous.
Cauchy–Riemann equations
In one complex variable, a function defined on the plane is holomorphic at a point if and only if its real part and its imaginary part satisfy the so-called Cauchy-Riemann equations at :
In several variables, a function is holomorphic if and only if it is holomorphic in each variable separately, and hence if and only if its real part and its imaginary part satisfy the Cauchy–Riemann equations:
Using the formalism of Wirtinger derivatives, this can be reformulated as :
or even more compactly using the formalism of complex differential forms, as :
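For reference, the conditions referred to in this section can be written out as follows (a standard rendering, reconstructed here; write $f = u + iv$ and $z_j = x_j + i y_j$):

$$\frac{\partial u}{\partial x_j} = \frac{\partial v}{\partial y_j},
\qquad
\frac{\partial u}{\partial y_j} = -\frac{\partial v}{\partial x_j}
\qquad (j = 1, \dots, n),$$

or, with Wirtinger derivatives and complex differential forms,

$$\frac{\partial f}{\partial \bar z_j} = 0 \quad (j = 1, \dots, n),
\qquad\text{i.e.}\qquad \bar\partial f = 0.$$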
Cauchy's integral formula I (Polydisc version)
Prove the sufficiency of the two conditions (A) and (B). Let f meet the conditions of being continuous and separately holomorphic on the domain D. Each disk has a rectifiable boundary curve that is a piecewise smooth, class $\mathcal{C}^1$ Jordan closed curve. Let the domain surrounded by each such curve be taken, with the Cartesian product of the closures contained in D; also, take the closed polydisc so that it is contained in D, and let the centre of each disk be given. Using Cauchy's integral formula of one variable repeatedly,
Because the boundary is a rectifiable Jordan closed curve and f is continuous, the order of products and sums can be exchanged, so the iterated integral can be calculated as a multiple integral. Therefore,
Cauchy's evaluation formula
Because the order of products and sums is interchangeable, from () we get
f is class -function.
From (2), if f is holomorphic, on polydisc and , the following evaluation equation is obtained.
Therefore, Liouville's theorem holds.
Power series expansion of holomorphic functions on polydisc
If function f is holomorphic, on polydisc , from the Cauchy's integral formula, we can see that it can be uniquely expanded to the next power series.
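In a standard form (reconstructed here, with $a$ denoting the centre of the polydisc), the expansion reads:

$$f(z) = \sum_{k_1,\dots,k_n \ge 0} c_{k_1 \cdots k_n}\,(z_1 - a_1)^{k_1} \cdots (z_n - a_n)^{k_n},
\qquad
c_{k_1 \cdots k_n} = \frac{1}{k_1! \cdots k_n!}\,
\frac{\partial^{\,k_1 + \cdots + k_n} f}{\partial z_1^{k_1} \cdots \partial z_n^{k_n}}(a).$$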
In addition, f that satisfies the following conditions is called an analytic function.
For each point , is expressed as a power series expansion that is convergent on D :
We have already explained that holomorphic functions on polydisc are analytic. Also, from the theorem derived by Weierstrass, we can see that the analytic function on polydisc (convergent power series) is holomorphic.
If a sequence of holomorphic functions converges uniformly on compacta inside a domain D, then the limit function f is also holomorphic on D. Also, the respective partial derivatives also converge compactly on the domain D to the corresponding derivatives of f.
Radius of convergence of power series
It is possible to define a combination of positive real numbers such that the power series converges uniformly at and does not converge uniformly at .
In this way it is possible to have a combination of radii of convergence similar to the radius of convergence for one complex variable. This combination is generally not unique and there are an infinite number of such combinations.
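Two simple examples (added for illustration) show how the domain of convergence depends on the series and need not be a polydisc:

$$\sum_{j,k \ge 0} z_1^{\,j} z_2^{\,k} \ \text{ converges exactly on } \ \{|z_1| < 1,\ |z_2| < 1\},
\qquad
\sum_{k \ge 0} (z_1 z_2)^{k} \ \text{ converges exactly on } \ \{|z_1 z_2| < 1\};$$

both are complete Reinhardt domains (and, in the terminology introduced below, logarithmically convex), but only the first is a polydisc.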
Laurent series expansion
Let be holomorphic in the annulus and continuous on their circumference, then there exists the following expansion ;
The integral in the second term of the right-hand side is performed so as to see the zero on the left in every plane; also, this integrated series is uniformly convergent in the annulus, where and , and so it is possible to integrate term by term.
Bochner–Martinelli formula (Cauchy's integral formula II)
The Cauchy integral formula holds only for polydiscs, and in the domain of several complex variables, polydiscs are only one of many possible domains, so we introduce the Bochner–Martinelli formula.
Suppose that f is a continuously differentiable function on the closure of a domain D on with piecewise smooth boundary , and let the symbol denotes the exterior or wedge product of differential forms. Then the Bochner–Martinelli formula states that if z is in the domain D then, for , z in the Bochner–Martinelli kernel is a differential form in of bidegree , defined by
In particular if f is holomorphic the second term vanishes, so
Identity theorem
Holomorphic functions of several complex variables satisfy an identity theorem, as in one variable: two holomorphic functions defined on the same connected open set D and which coincide on an open subset N of D are equal on the whole open set D. This result can be proven from the fact that holomorphic functions have power series expansions, and it can also be deduced from the one-variable case. Contrary to the one-variable case, it is possible that two different holomorphic functions coincide on a set which has an accumulation point; for instance, two maps may coincide on a whole complex line of $\mathbb{C}^2$ defined by an equation such as $z_1 = 0$.
The maximal principle, inverse function theorem, and implicit function theorems also hold. For a generalized version of the implicit function theorem to complex variables, see the Weierstrass preparation theorem.
Biholomorphism
From the establishment of the inverse function theorem, the following mapping can be defined.
For domains U, V of the n-dimensional complex space $\mathbb{C}^n$, given a bijective holomorphic function $\varphi : U \to V$, the inverse mapping $\varphi^{-1}$ is also holomorphic. At this time, $\varphi$ is called a biholomorphism of U onto V; also, we say that U and V are biholomorphically equivalent or that they are biholomorphic.
The Riemann mapping theorem does not hold
When , open balls and open polydiscs are not biholomorphically equivalent, that is, there is no biholomorphic mapping between the two. This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups. However, even in the case of several complex variables, there are some results similar to the results of the theory of uniformization in one complex variable.
Analytic continuation
Let U, V be domains on $\mathbb{C}^n$, such that $f \in \mathcal{O}(U)$ and $g \in \mathcal{O}(V)$ ($\mathcal{O}(U)$ is the set/ring of holomorphic functions on U); assume that $U \cap V \neq \emptyset$ and that W is a connected component of $U \cap V$. If $f = g$ on W, then f is said to be connected to V, and g is said to be the analytic continuation of f. From the identity theorem, if g exists, for each way of choosing W it is unique. When $n \geq 2$, the following phenomenon occurs depending on the shape of the boundary: there exist domains U, V such that all holomorphic functions over the domain U have an analytic continuation to V. In other words, there may not exist a function $f \in \mathcal{O}(U)$ with the boundary of U as its natural boundary. This is called Hartogs's phenomenon. Therefore, researching when domain boundaries become natural boundaries has become one of the main research themes of several complex variables. In addition, when $n \geq 2$, it would be that the above V has a part of its intersection with U other than W. This contributed to the advancement of the notion of sheaf cohomology.
Reinhardt domain
In polydisks, the Cauchy's integral formula holds and the power series expansion of holomorphic functions is defined, but polydisks and open unit balls are not biholomorphic mapping because the Riemann mapping theorem does not hold, and also, polydisks was possible to separation of variables, but it doesn't always hold for any domain. Therefore, in order to study of the domain of convergence of the power series, it was necessary to make additional restriction on the domain, this was the Reinhardt domain. Early knowledge into the properties of field of study of several complex variables, such as Logarithmically-convex, Hartogs's extension theorem, etc., were given in the Reinhardt domain.
Let () to be a domain, with centre at a point , such that, together with each point , the domain also contains the set
A domain D is called a Reinhardt domain if it satisfies the following conditions:
Let is a arbitrary real numbers, a domain D is invariant under the rotation: .
The Reinhardt domains which are defined by the following condition; Together with all points of , the domain contains the set
A Reinhardt domain D is called a complete Reinhardt domain with centre at a point a if together with all point it also contains the polydisc
A complete Reinhardt domain D is star-like with regard to its centre a. Therefore, the complete Reinhardt domain is simply connected, also when the complete Reinhardt domain is the boundary line, there is a way to prove the Cauchy's integral theorem without using the Jordan curve theorem.
Logarithmically-convex
When a some complete Reinhardt domain to be the domain of convergence of a power series, an additional condition is required, which is called logarithmically-convex.
A Reinhardt domain D is called logarithmically convex if the image of the set
under the mapping
is a convex set in the real coordinate space .
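In the standard formulation (reconstructed here), the set in question is $D^* = \{ z \in D : z_1 \cdots z_n \neq 0 \}$ and the mapping is

$$\lambda(z_1, \dots, z_n) = \left( \log|z_1|, \dots, \log|z_n| \right) \in \mathbb{R}^n.$$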
Every such domain in is the interior of the set of points of absolute convergence of some power series in , and conversely; The domain of convergence of every power series in is a logarithmically-convex Reinhardt domain with centre .
But, there is an example of a complete Reinhardt domain D which is not logarithmically convex.
Some results
Hartogs's extension theorem and Hartogs's phenomenon
When examining the domain of convergence on the Reinhardt domain, Hartogs found the Hartogs's phenomenon in which holomorphic functions in some domain on the were all connected to larger domain.
On the polydisk consisting of two disks when .
Internal domain of
Hartogs's extension theorem (1906): let f be a holomorphic function on a set $G \setminus K$, where G is a bounded (surrounded by a rectifiable closed Jordan curve) domain on $\mathbb{C}^n$ ($n \geq 2$) and K is a compact subset of G. If the complement $G \setminus K$ is connected, then every holomorphic function f, regardless of how it is chosen, can be extended to a unique holomorphic function on G.
It is also called the Osgood–Brown theorem: for holomorphic functions of several complex variables, the singularities are accumulation points, never isolated points. This means that various properties that hold for holomorphic functions of one complex variable do not hold for holomorphic functions of several complex variables. The nature of these singularities is also derived from the Weierstrass preparation theorem. A generalization of this theorem using the same method as Hartogs was proved in 2007.
From Hartogs's extension theorem the domain of convergence extends from to . Looking at this from the perspective of the Reinhardt domain, is the Reinhardt domain containing the center z = 0, and the domain of convergence of has been extended to the smallest complete Reinhardt domain containing .
Thullen's classic results
Thullen's classical result says that a 2-dimensional bounded Reinhardt domain containing the origin is biholomorphic to one of the following domains provided that the orbit of the origin by the automorphism group has positive dimension:
(polydisc);
(unit ball);
(Thullen domain).
Sunada's results
Toshikazu Sunada (1978) established a generalization of Thullen's result:
Two n-dimensional bounded Reinhardt domains $D_1$ and $D_2$ are mutually biholomorphic if and only if there exists a transformation $\varphi : \mathbb{C}^n \to \mathbb{C}^n$ given by $z_i \mapsto r_i z_{\sigma(i)}$ ($r_i > 0$, $\sigma$ being a permutation of the indices), such that $\varphi(D_1) = D_2$.
Natural domain of the holomorphic function (domain of holomorphy)
When moving from the theory of one complex variable to the theory of several complex variables, depending on the shape of the domain, it may not be possible to define a holomorphic function such that the boundary of the domain becomes a natural boundary. Considering the domains whose boundaries are natural boundaries (in the complex coordinate space $\mathbb{C}^n$, called domains of holomorphy), the first result on the domain of holomorphy was the holomorphic convexity of H. Cartan and Thullen. Levi's problem shows that the pseudoconvex domain is a domain of holomorphy (first for $n = 2$, later extended to arbitrary $n$). Kiyoshi Oka's notion of idéal de domaines indéterminés was interpreted in the theory of sheaf cohomology by
H. Cartan, and developed further by Serre. In sheaf cohomology, the domain of holomorphy has come to be interpreted as the theory of Stein manifolds. The notion of the domain of holomorphy is also considered in other complex manifolds, and furthermore in complex analytic spaces, which are its generalization.
Domain of holomorphy
When a function f is holomorphic on the domain D and cannot directly be continued to a domain outside D, including any point of the domain boundary, the domain D is called the domain of holomorphy of f and the boundary is called the natural boundary of f. In other words, the domain of holomorphy D is the largest domain on which the holomorphic function f is holomorphic, and the domain D cannot be enlarged any further. For several complex variables, i.e. a domain $D \subset \mathbb{C}^n$ ($n \geq 2$), the boundary may not be a natural boundary. Hartogs' extension theorem gives an example of a domain whose boundary is not a natural boundary.
Formally, a domain D in the n-dimensional complex coordinate space is called a domain of holomorphy if there do not exist non-empty domain and , and such that for every holomorphic function f on D there exists a holomorphic function g on V with on U.
For the case $n = 1$, every domain ($D \subset \mathbb{C}$) is a domain of holomorphy; we can define a holomorphic function with zeros accumulating everywhere on the boundary of the domain, which must then be a natural boundary for the domain of definition of its reciprocal.
Properties of the domain of holomorphy
If are domains of holomorphy, then their intersection is also a domain of holomorphy.
If is an increasing sequence of domains of holomorphy, then their union is also a domain of holomorphy (see Behnke–Stein theorem).
If and are domains of holomorphy, then is a domain of holomorphy.
The first Cousin problem is always solvable in a domain of holomorphy, also Cartan showed that the converse of this result was incorrect for . this is also true, with additional topological assumptions, for the second Cousin problem.
Holomorphically convex hull
Let $G \subset \mathbb{C}^n$ be a domain, or alternatively, for a more general definition, let G be an $n$-dimensional complex analytic manifold. Further let $\mathcal{O}(G)$ stand for the set of holomorphic functions on G. For a compact set $K \subset G$, the holomorphically convex hull of K is
$$\hat{K}_G := \left\{ z \in G : \left| f(z) \right| \le \sup_{w \in K} \left| f(w) \right| \ \text{for all } f \in \mathcal{O}(G) \right\}.$$
One obtains a narrower concept of polynomially convex hull by taking instead to be the set of complex-valued polynomial functions on G. The polynomially convex hull contains the holomorphically convex hull.
The domain is called holomorphically convex if for every compact subset is also compact in G. Sometimes this is just abbreviated as holomorph-convex.
When $n = 1$, every domain G is holomorphically convex, since then $\hat{K}_G$ is the union of K with the relatively compact components of $G \setminus K$.
When , if f satisfies the above holomorphic convexity on D it has the following properties. for every compact subset K in D, where
denotes the distance between K and . Also, at this time, D is a domain of holomorphy. Therefore, every convex domain is domain of holomorphy.
Pseudoconvexity
Hartogs showed that
If such a relation holds in the domain of holomorphy of several complex variables, it looks like a more manageable condition than holomorphic convexity. The subharmonic function looks like a kind of convex function, so it was named by Levi as a pseudoconvex domain (Hartogs's pseudoconvexity). Pseudoconvex domains (boundaries of pseudoconvexity) are important, as they allow for the classification of domains of holomorphy. Being a domain of holomorphy is a global property; by contrast, pseudoconvexity is a local analytic or local geometric property of the boundary of a domain.
Definition of plurisubharmonic function
A function
with domain
is called plurisubharmonic if it is upper semi-continuous, and for every complex line
with
the function is a subharmonic function on the set
In full generality, the notion can be defined on an arbitrary complex manifold or even a Complex analytic space as follows. An upper semi-continuous function
is said to be plurisubharmonic if and only if for any holomorphic map
the function
is subharmonic, where denotes the unit disk.
In one complex variable, a necessary and sufficient condition for a real-valued function $u(z)$, twice differentiable with respect to the one complex variable z, to be subharmonic is $\frac{\partial^2 u}{\partial z \partial \bar z} \ge 0$. Therefore, if $u$ is of class $\mathcal{C}^2$, then $u$ is plurisubharmonic if and only if the hermitian matrix $\left( \frac{\partial^2 u}{\partial z_j \partial \bar z_k} \right)_{j,k}$ is positive semidefinite.
Equivalently, a -function u is plurisubharmonic if and only if is a positive (1,1)-form.
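A simple example (added for illustration): the function $u(z) = \|z\|^2$ satisfies the criterion above, since

$$u(z) = \sum_{j=1}^{n} z_j \bar z_j,
\qquad
\frac{\partial^2 u}{\partial z_j\, \partial \bar z_k} = \delta_{jk},$$

so its complex Hessian is the identity matrix, which is positive definite; hence u is plurisubharmonic (indeed strictly so, in the sense of the next subsection).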
Strictly plurisubharmonic function
When the hermitian matrix of u is positive-definite and class , we call u a strict plurisubharmonic function.
(Weakly) pseudoconvex (p-pseudoconvex)
Weak pseudoconvexity is defined as follows: let $X \subset \mathbb{C}^n$ be a domain. One says that X is pseudoconvex if there exists a continuous plurisubharmonic function $\varphi$ on X such that the set $\{ z \in X : \varphi(z) < x \}$ is a relatively compact subset of X for all real numbers x, i.e. there exists a smooth plurisubharmonic exhaustion function. Often, the definition of pseudoconvexity is used here and written as: let X be a complex n-dimensional manifold. Then X is said to be weakly pseudoconvex if there exists a smooth plurisubharmonic exhaustion function.
Strongly (Strictly) pseudoconvex
Let X be a complex n-dimensional manifold. Strongly (or Strictly) pseudoconvex if there exists a smooth strictly plurisubharmonic exhaustion function , i.e., is positive definite at every point. The strongly pseudoconvex domain is the pseudoconvex domain. Strongly pseudoconvex and strictly pseudoconvex (i.e. 1-convex and 1-complete) are often used interchangeably, see Lempert for the technical difference.
Levi form
(Weakly) Levi(–Krzoska) pseudoconvexity
If boundary , it can be shown that D has a defining function; i.e., that there exists which is so that , and . Now, D is pseudoconvex iff for every and in the complex tangent space at p, that is,
, we have
If D does not have a boundary, the following approximation result can be useful.
Proposition 1 If D is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains with class $\mathcal{C}^\infty$-boundary which are relatively compact in D and exhaust D.
This is because once we have a $\varphi$ as in the definition, we can actually find a $\mathcal{C}^\infty$ exhaustion function.
Strongly (or Strictly) Levi (–Krzoska) pseudoconvex (a.k.a. Strongly (Strictly) pseudoconvex)
When the Levi (–Krzoska) form is positive-definite, it is called strongly Levi (–Krzoska) pseudoconvex or often called simply strongly (or strictly) pseudoconvex.
Levi total pseudoconvex
If for every boundary point of D, there exists an analytic variety passing which lies entirely outside D in some neighborhood around , except the point itself. Domain D that satisfies these conditions is called Levi total pseudoconvex.
Oka pseudoconvex
Family of Oka's disk
Let n functions be continuous on , holomorphic in when the parameter t is fixed in [0, 1], and assume that they are not all zero at any point on . Then the set is called an analytic disc depending on a parameter t, and is called its shell. If and , Q(t) is called a family of Oka's discs.
Definition
When the condition holds on every family of Oka's discs, D is called Oka pseudoconvex. In Oka's proof of Levi's problem, it was proved that an unramified Riemann domain over $\mathbb{C}^n$ is a domain of holomorphy (holomorphically convex) if and only if each boundary point of the domain is Oka pseudoconvex.
Locally pseudoconvex (a.k.a. locally Stein, Cartan pseudoconvex, local Levi property)
For every point there exist a neighbourhood U of x and f holomorphic. ( i.e. be holomorphically convex.) such that f cannot be extended to any neighbourhood of x. i.e., let be a holomorphic map, if every point has a neighborhood U such that admits a -plurisubharmonic exhaustion function (weakly 1-complete), in this situation, we call that X is locally pseudoconvex (or locally Stein) over Y. As an old name, it is also called Cartan pseudoconvex. In the locally pseudoconvex domain is itself a pseudoconvex domain and it is a domain of holomorphy. For example, Diederich–Fornæss found local pseudoconvex bounded domains with smooth boundary on non-Kähler manifolds such that is not weakly 1-complete.
Conditions equivalent to domain of holomorphy
For a domain the following conditions are equivalent:
D is a domain of holomorphy.
D is holomorphically convex.
D is the union of an increasing sequence of analytic polyhedrons in D.
D is pseudoconvex.
D is Locally pseudoconvex.
The implications between these conditions are standard results. The difficult step is showing that pseudoconvexity implies the domain-of-holomorphy property, i.e. constructing a global holomorphic function which admits no extension from non-extendable functions defined only locally. This is called the Levi problem (after E. E. Levi) and was solved for unramified Riemann domains over $\mathbb{C}^n$ by Kiyoshi Oka (but, for ramified Riemann domains, pseudoconvexity does not characterize holomorphic convexity), and then by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the $\bar\partial$-problem (equation) with L2 methods).
Sheaves
The introduction of sheaves into several complex variables allowed the reformulation of and solution to several important problems in the field.
Idéal de domaines indéterminés (The predecessor of the notion of the coherent (sheaf))
Oka introduced the notion which he termed "idéal de domaines indéterminés" or "ideal of indeterminate domains". Specifically, it is a set of pairs , holomorphic on a non-empty open set , such that
If and is arbitrary, then .
For each , then
The origin of indeterminate domains comes from the fact that domains change depending on the pair . Cartan translated this notion into the notion of the coherent (sheaf) (Especially, coherent analytic sheaf) in sheaf cohomology. This name comes from
H. Cartan. Also, Serre (1955) introduced the notion of the coherent sheaf into algebraic geometry, that is, the notion of the coherent algebraic sheaf. The notion of coherent (coherent sheaf cohomology) helped solve the problems in several complex variables.
Coherent sheaf
Definition
The definition of the coherent sheaf is as follows.
A quasi-coherent sheaf on a ringed space is a sheaf of -modules which has a local presentation, that is, every point in has an open neighborhood in which there is an exact sequence
for some (possibly infinite) sets and .
A coherent sheaf on a ringed space is a sheaf satisfying the following two properties:
is of finite type over , that is, every point in has an open neighborhood in such that there is a surjective morphism for some natural number ;
for each open set , integer , and arbitrary morphism of -modules, the kernel of is of finite type.
Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of -modules.
Also, Jean-Pierre Serre (1955) proves that
If in an exact sequence of sheaves of -modules two of the three sheaves are coherent, then the third is coherent as well.
(Oka–Cartan) coherent theorem
(Oka–Cartan) coherent theorem says that each sheaf that meets the following conditions is a coherent.
the sheaf of germs of holomorphic functions on , or the structure sheaf of complex submanifold or every complex analytic space
the ideal sheaf of an analytic subset A of an open subset of . (Cartan 1950)
the normalization of the structure sheaf of a complex analytic space
From the above Serre(1955) theorem, is a coherent sheaf, also, (i) is used to prove Cartan's theorems A and B.
Cousin problem
In the case of one-variable complex functions, Mittag-Leffler's theorem was able to create a global meromorphic function from given poles and principal parts (Cousin I problem), and the Weierstrass factorization theorem was able to create a global meromorphic function from given zeros or a zero-locus (Cousin II problem). However, these theorems do not hold in several complex variables because the singularities of analytic functions in several complex variables are not isolated points; these problems are called the Cousin problems and are formulated in terms of sheaf cohomology. They were first introduced in special cases by Pierre Cousin in 1895. It was Oka who showed the conditions for solving the first Cousin problem for the domain of holomorphy on the complex coordinate space, also solving the second Cousin problem with additional topological assumptions. The Cousin problem is a problem related to the analytical properties of complex manifolds, but the only obstructions to solving problems of a complex analytic property are purely topological; Serre called this the Oka principle. They are now posed, and solved, for an arbitrary complex manifold M, in terms of conditions on M. M, which satisfies these conditions, is one way to define a Stein manifold. The study of the Cousin problems made it clear that in several complex variables it is possible to study global properties by patching local data; this has developed into the theory of sheaf cohomology. (e.g. Cartan seminar.)
First Cousin problem
Without the language of sheaves, the problem can be formulated as follows. On a complex manifold M, one is given several meromorphic functions along with domains where they are defined, and where each difference is holomorphic (wherever the difference is defined). The first Cousin problem then asks for a meromorphic function on M such that is holomorphic on ; in other words, that shares the singular behaviour of the given local function.
Now, let K be the sheaf of meromorphic functions and O the sheaf of holomorphic functions on M. The first Cousin problem can always be solved if the following map is surjective:
$$H^0(M, \mathbf{K}) \to H^0(M, \mathbf{K}/\mathbf{O}).$$
By the long exact cohomology sequence,
$$H^0(M, \mathbf{K}) \to H^0(M, \mathbf{K}/\mathbf{O}) \to H^1(M, \mathbf{O})$$
is exact, and so the first Cousin problem is always solvable provided that the first cohomology group H1(M,O) vanishes. In particular, by Cartan's theorem B, the Cousin problem is always solvable if M is a Stein manifold.
Second Cousin problem
The second Cousin problem starts with a similar set-up to the first, specifying instead that each ratio $f_i/f_j$ is a non-vanishing holomorphic function (where said ratio is defined). It asks for a meromorphic function f on M such that each $f/f_i$ is holomorphic and non-vanishing.
Let be the sheaf of holomorphic functions that vanish nowhere, and the sheaf of meromorphic functions that are not identically zero. These are both then sheaves of abelian groups, and the quotient sheaf is well-defined. If the following map is surjective, then Second Cousin problem can be solved:
The long exact sheaf cohomology sequence associated to the quotient is
so the second Cousin problem is solvable in all cases provided that
The cohomology group $H^1(M, \mathbf{O}^*)$ for the multiplicative structure on $\mathbf{O}^*$ can be compared with the cohomology group $H^1(M, \mathbf{O})$ with its additive structure by taking a logarithm. That is, there is an exact sequence of sheaves
$$0 \to 2\pi i\,\mathbb{Z} \to \mathbf{O} \xrightarrow{\exp} \mathbf{O}^* \to 0,$$
where the leftmost sheaf is the locally constant sheaf with fiber $2\pi i\,\mathbb{Z}$. The obstruction to defining a logarithm at the level of H1 is in $H^2(M, 2\pi i\,\mathbb{Z})$, from the long exact cohomology sequence
$$H^1(M, \mathbf{O}) \to H^1(M, \mathbf{O}^*) \to H^2(M, 2\pi i\,\mathbb{Z}) \to H^2(M, \mathbf{O}).$$
When M is a Stein manifold, the middle arrow is an isomorphism because $H^q(M, \mathbf{O}) = 0$ for $q > 0$, so that a necessary and sufficient condition in that case for the second Cousin problem to be always solvable is that $H^2(M, \mathbb{Z}) = 0$. (This condition is called the Oka principle.)
Manifolds and analytic varieties with several complex variables
Stein manifold (non-compact Kähler manifold)
Since a non-compact (open) Riemann surface always has a non-constant single-valued holomorphic function, and satisfies the second axiom of countability, the open Riemann surface is in fact a 1-dimensional complex manifold possessing a holomorphic mapping into the complex plane . (In fact, Gunning and Narasimhan have shown (1967) that every non-compact Riemann surface actually has a holomorphic immersion into the complex plane. In other words, there is a holomorphic mapping into the complex plane whose derivative never vanishes.) The Whitney embedding theorem tells us that every smooth n-dimensional manifold can be embedded as a smooth submanifold of , whereas it is "rare" for a complex manifold to have a holomorphic embedding into . For example, for an arbitrary compact connected complex manifold X, every holomorphic function on it is constant by Liouville's theorem, and so it cannot have any embedding into complex n-space. That is, for several complex variables, arbitrary complex manifolds do not always have holomorphic functions that are not constants. So, consider the conditions under which a complex manifold has a holomorphic function that is not a constant. Now if we had a holomorphic embedding of X into , then the coordinate functions of would restrict to nonconstant holomorphic functions on X, contradicting compactness, except in the case that X is just a point. Complex manifolds that can be holomorphic embedded into are called Stein manifolds. Also Stein manifolds satisfy the second axiom of countability.
A Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after Karl Stein (1951). A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry. If the univalent domain on is connection to a manifold, can be regarded as a complex manifold and satisfies the separation condition described later, the condition for becoming a Stein manifold is to satisfy the holomorphic convexity. Therefore, the Stein manifold is the properties of the domain of definition of the (maximal) analytic continuation of an analytic function.
Definition
Suppose X is a paracompact complex manifold of complex dimension $n$ and let $\mathcal{O}(X)$ denote the ring of holomorphic functions on X. We call X a Stein manifold if the following conditions hold:
X is holomorphically convex, i.e. for every compact subset , the so-called holomorphically convex hull,
is also a compact subset of X.
X is holomorphically separable, i.e. if $x \neq y$ are two points in X, then there exists $f \in \mathcal{O}(X)$ such that $f(x) \neq f(y)$.
The open neighborhood of every point on the manifold has a holomorphic chart to the .
Note that condition (3) can be derived from conditions (1) and (2).
Every non-compact (open) Riemann surface is a Stein manifold
Let X be a connected, non-compact (open) Riemann surface. A deep theorem of Behnke and Stein (1948) asserts that X is a Stein manifold.
Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so . The exponential sheaf sequence leads to the following exact sequence:
Now Cartan's theorem B shows that , therefore .
This is related to the solution of the second (multiplicative) Cousin problem.
Levi problems
Cartan extended Levi's problem to Stein manifolds.
If a relatively compact open subset $D \subset X$ of a Stein manifold X is locally pseudoconvex, then D is a Stein manifold; conversely, if D is a Stein manifold, then D is locally pseudoconvex. That is, D is a Stein manifold if and only if D is locally pseudoconvex (locally Stein).
This was proved by Bremermann by embedding it in a sufficiently high dimensional , and reducing it to the result of Oka.
Also, Grauert proved for arbitrary complex manifolds M.
If the relative compact subset of a arbitrary complex manifold M is a strongly pseudoconvex on M, then M is a holomorphically convex (i.e. Stein manifold). Also, D is itself a Stein manifold.
And Narasimhan extended Levi's problem to complex analytic space, a generalized in the singular case of complex manifolds.
A complex analytic space which admits a continuous strictly plurisubharmonic exhaustion function (i.e. strongly pseudoconvex) is a Stein space.
Levi's problem remains unresolved in the following cases;
Suppose that X is a singular Stein space, . Suppose that for all there is an open neighborhood so that is Stein space. Is D itself Stein?
more generalized
Suppose that N be a Stein space and f an injective, and also a Riemann unbranched domain, such that map f is a locally pseudoconvex map (i.e. Stein morphism). Then M is itself Stein ?
and also,
Suppose that X be a Stein space and an increasing union of Stein open sets. Then D is itself Stein ?
This means that Behnke–Stein theorem, which holds for Stein manifolds, has not found a conditions to be established in Stein space.
K-complete
Grauert introduced the concept of K-complete in the proof of Levi's problem.
Let X is complex manifold, X is K-complete if, to each point , there exist finitely many holomorphic map of X into , , such that is an isolated point of the set . This concept also applies to complex analytic space.
Properties and examples of Stein manifolds
The standard complex space is a Stein manifold.
Every domain of holomorphy in is a Stein manifold.
It can be shown quite easily that every closed complex submanifold of a Stein manifold is a Stein manifold, too.
The embedding theorem for Stein manifolds states the following: Every Stein manifold X of complex dimension n can be embedded into by a biholomorphic proper map.
These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic).
Every Stein manifold of (complex) dimension n has the homotopy type of an n-dimensional CW-Complex.
In one complex dimension the Stein condition can be simplified: a connected Riemann surface is a Stein manifold if and only if it is not compact. This can be proved using a version of the Runge theorem for Riemann surfaces, due to Behnke and Stein.
Every Stein manifold X is holomorphically spreadable, i.e. for every point , there are n holomorphic functions defined on all of X which form a local coordinate system when restricted to some open neighborhood of x.
The first Cousin problem can always be solved on a Stein manifold.
Being a Stein manifold is equivalent to being a (complex) strongly pseudoconvex manifold. The latter means that it has a strongly pseudoconvex (or plurisubharmonic) exhaustive function, i.e. a smooth real function on X (which can be assumed to be a Morse function) with , such that the subsets are compact in X for every real number c. This is a solution to the so-called Levi problem, named after E. E. Levi (1911). The function invites a generalization of Stein manifold to the idea of a corresponding class of compact complex manifolds with boundary called Stein domain. A Stein domain is the preimage . Some authors call such manifolds therefore strictly pseudoconvex manifolds.
Related to the previous item, another equivalent and more topological definition in complex dimension 2 is the following: a Stein surface is a complex surface X with a real-valued Morse function f on X such that, away from the critical points of f, the field of complex tangencies to the preimage is a contact structure that induces an orientation on Xc agreeing with the usual orientation as the boundary of That is, is a Stein filling of Xc.
Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology.
In the GAGA set of analogies, Stein manifolds correspond to affine varieties.
Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory".
Complex projective varieties (compact complex manifold)
Meromorphic functions of one complex variable were studied on
compact (closed) Riemann surfaces, since the Riemann–Roch theorem (Riemann's inequality) holds for compact Riemann surfaces (therefore the theory of compact Riemann surfaces can be regarded as the theory of (smooth (non-singular) projective) algebraic curves over $\mathbb{C}$). In fact, a compact Riemann surface has a non-constant single-valued meromorphic function, and indeed has enough meromorphic functions. A simply connected compact one-dimensional complex manifold is the Riemann sphere. However, the abstract notion of a compact Riemann surface is always algebraizable (Riemann's existence theorem, Kodaira embedding theorem), but it is not easy to verify which compact complex analytic spaces are algebraizable. In fact, Hopf found a class of compact complex manifolds without nonconstant meromorphic functions. However, there is a result of Siegel that gives the necessary conditions for compact complex manifolds to be algebraic. The generalization of the Riemann–Roch theorem to several complex variables was first extended to compact analytic surfaces by Kodaira, who also extended the theorem to three-dimensional and n-dimensional Kähler varieties. Serre formulated the Riemann–Roch theorem as a problem of dimension of coherent sheaf cohomology, and also proved Serre duality. Cartan and Serre proved the following property: the cohomology group is finite-dimensional for a coherent sheaf on a compact complex manifold M. The Riemann–Roch theorem on a Riemann surface for a vector bundle was proved by Weil in 1938.
Hirzebruch generalized the theorem to compact complex manifolds in 1954, and Grothendieck generalized it to a relative version (relative statements about morphisms). Next came the generalization to higher dimensions of the result that compact Riemann surfaces are projective. In particular, consider the conditions under which a compact complex submanifold X embeds into the complex projective space. The vanishing theorem (first introduced by Kodaira in 1953) gives conditions under which the sheaf cohomology groups vanish; the condition is a kind of positivity. As an application of this theorem, the Kodaira embedding theorem says that for a compact Kähler manifold M with a Hodge metric, there is a complex-analytic embedding of M into complex projective space of sufficiently high dimension N. In addition, Chow's theorem shows that a closed complex analytic subspace (subvariety) of complex projective space is algebraic, that is, the common zero set of some homogeneous polynomials; such a relationship is one example of what is called Serre's GAGA principle. The complex analytic subspace (subvariety) of complex projective space has both algebraic and analytic properties. Combined with Kodaira's result, a compact Kähler manifold M with a Hodge metric embeds as an algebraic variety. This result gives an example of a complex manifold with enough meromorphic functions. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory. Also, the deformation theory of compact complex manifolds has developed as Kodaira–Spencer theory. However, there are counterexamples of compact complex manifolds that cannot be embedded in projective space and are not algebraic. Analogues of the Levi problem on complex projective space were studied by Takeuchi.
See also
Bicomplex number
Complex geometry
CR manifold
Dolbeault cohomology
Harmonic maps
Harmonic morphisms
Infinite-dimensional holomorphy
Oka–Weil theorem
Annotation
References
Inline citations
Textbooks
Encyclopedia of Mathematics
Further reading
External links
Tasty Bits of Several Complex Variables open source book by Jiří Lebl
Complex Analytic and Differential Geometry (OpenContent book See B2)
Victor Guillemin. 18.117 Topics in Several Complex Variables. Spring 2005. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
Multivariable calculus | Function of several complex variables | [
"Mathematics"
] | 10,556 | [
"Functions and mappings",
"Calculus",
"Several complex variables",
"Mathematical objects",
"Mathematical relations",
"Multivariable calculus"
] |
360,374 | https://en.wikipedia.org/wiki/Young%20Scientist%20and%20Technology%20Exhibition | The BT Young Scientist and Technology Exhibition, commonly called the Young Scientist Exhibition, is an Irish annual school students' science competition that has been held in the Royal Dublin Society, Dublin, Ireland, every January since the competition was founded by Tom Burke and Tony Scott in 1965.
The competition
The purpose of the competition is to encourage interest in science in secondary schools. For the 51st year of the competition in 2016, there were over 2,000 entries, from 396 schools which was the highest number ever, 550 of which were selected for the Exhibition at the RDS.
Students apply to participate in the competition. Their science project entries are evaluated by judges and about one-third of applicants are accepted to participate in the public exhibition. Students are allocated exhibition stands in an exhibition hall where they set up their projects for viewing by the public. Competing projects are judged during the three days of the exhibition, and prizes are awarded.
Projects are entered in one of five categories: biology, physics, social and behavioural sciences, health and wellbeing, and technology. Health and wellbeing is the newest category, added in 2023 to celebrate the 60th anniversary and to reduce the number of entries in social and behavioural sciences. Three levels of entry are accepted: junior, intermediate and senior. In each category three main prizes are awarded; other prizes include a display award, highly commended rosettes, and a cancer awareness award. The winners of the BT Young Scientist and Technology Exhibition advance to participate in prestigious international events such as the European Union Contest for Young Scientists.
John Monahan was the inaugural winner of the Young Scientist Exhibition in 1965; then a student of Newbridge College, his project was an explanation of the process of digestion in the human stomach. He went on to establish a NASDAQ-listed biotech company in California after attending University College Dublin.
Aer Lingus sponsored the competition for the first 33 years. 2021 marked the 21st year in which the Exhibition was sponsored by BT Ireland. It has produced at least one author, Sarah Flannery, and one billionaire, Patrick Collison. Many of the past winners have gone on to establish international companies in the technology they developed. One of the most notable was Baltimore Technologies.
Tom Burke, who co-founded the exhibition with physicist Tony Scott, died in March 2008. An award at the event (a bursary offered to senior participants) was named in his memory.
Due to the COVID-19 pandemic, the first ever virtual Young Scientist & Technology Exhibition was held in January 2021 with over 1,000 students representing more than 200 schools taking part.
Overall winners by year
Winners by age
The youngest winners are listed first.
See also
Education in the Republic of Ireland
Science Week Ireland
References
External links
Official archive
List of past winners
News article about 1999 project
News article about 1999 project
Slashdot Article on Adnan Osmani's Project
BT Group
1965 establishments in Ireland
Competitions in Ireland
Education in the Republic of Ireland
Recurring events established in 1965
Science competitions
Science and technology in the Republic of Ireland
Youth science
Youth in the Republic of Ireland
Science events in Ireland | Young Scientist and Technology Exhibition | [
"Technology"
] | 623 | [
"Science and technology awards",
"Science competitions"
] |
360,501 | https://en.wikipedia.org/wiki/Aliquot%20stringing | Aliquot stringing is the use of extra, un-struck strings in a piano for the purpose of enriching the tone. Aliquot systems use an additional (hence fourth) string in each note of the top three piano octaves. This string is positioned slightly above the other three strings so that it is not struck by the hammer. Whenever the hammer strikes the three conventional strings, the aliquot string vibrates sympathetically. Aliquot stringing broadens the vibrational energy throughout the instrument, and creates an unusually complex and colorful tone.
Etymology
The word aliquot ultimately comes from a Latin word meaning 'some, several'. In mathematics, aliquot means 'an exact part or divisor', reflecting the fact that the length of an aliquot string forms an exact division of the length of longer strings with which it vibrates sympathetically.
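As a simple illustration of this "exact division" (using the idealized approximation that a string's fundamental frequency is inversely proportional to its sounding length, other factors such as tension and mass per unit length being equal):

```latex
% Ideal-string approximation: fundamental frequency inversely proportional to sounding length
f \propto \frac{1}{L}
\quad\Longrightarrow\quad
L_{\text{aliquot}} = \tfrac{1}{2}\,L \;\Rightarrow\; f_{\text{aliquot}} = 2f .
```

Under this approximation, a string whose length is exactly half that of a longer string sounds an octave higher, and its fundamental coincides with the longer string's second harmonic, which is what allows strong sympathetic vibration.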
History
Julius Blüthner invented the aliquot stringing system in 1873. The Blüthner aliquot system uses an additional (hence fourth) string in each note of the top three piano octaves. This string is positioned slightly above the other three strings so that it is not struck by the hammer. Whenever the hammer strikes the three conventional strings, the aliquot string vibrates sympathetically. This string resonance also occurs when other notes are played that are harmonically related to the pitch of an aliquot string, though only when the related notes' dampers are raised. Many piano-makers enrich the tone of the piano through sympathetic vibration, but use a different method known as duplex scaling (see piano). Confusingly, the portions of the strings used in duplex scaling are sometimes called "aliquot strings", and the contact points used in duplex scales are called aliquots. Aliquot stringing and the duplex scale, even if they use "aliquots", are not equivalent.
Because they are tuned an octave above their constituent pitch, true aliquot strings transmit strong vibrations to the soundboard. Duplex scaling, which typically is tuned a double octave or more above the speaking length, does not. And because aliquot strings are so active, they require dampers or they would sustain uncontrollably and muddy the sound. Aliquot stringing broadens the vibrational energy throughout the instrument, and creates an unusually complex and colorful tone. This results from hammers striking their respective three strings, followed by an immediate transfer of energy into their sympathetic strings. The noted piano authority Larry Fine observes that the Blüthner tone is "refined" and "delicate", particularly "at a low level of volume". The Blüthner company, however, claims that the effect of aliquot stringing is equally apparent in loud playing.
Tunable aliquots
Theodore Steinway of Steinway & Sons patented tunable aliquots in 1872. Short lengths of non-speaking wire were bridged by an aliquot throughout much of the upper range of the piano, always in locations that caused them to vibrate in conformity with their respective overtones—typically in doubled octaves and twelfths. This enhanced the power and sustain of the instrument's treble. Because it was time-consuming to correctly position each aliquot, Steinway abandoned individual aliquots for continuous cast-metal bars, each comprising an entire section of duplex bridge points. The company trusted that with an accurately templated bridge and carefully located duplex bar, the same result would be achieved with less fuss.
Mason & Hamlin, established in Boston in 1854, continued to use individual aliquots. They felt that the tuning of these short lengths of string was more accurate with an aliquot than what could be attained with a duplex bar. With the fixed points of a duplex bar, small variations in casting or bridge-pin positioning are liable to produce imperfections in the duplex string lengths. Furthermore, since variations in humidity can cause duplex scales to move in pitch more rapidly than the speaking scale, readjustments of aliquot positioning is more feasible than duplex bar re-positioning.
A modern piano manufacturer, Fazioli (Sacile, Italy), has blended Steinway's original ideas by creating a stainless-steel track, fixed to the cast-iron plate, on which individual aliquots slide.
Other musical instruments
Makers of other string instruments sometimes use aliquot parts of the scale length to enhance the timbre. Examples of such instruments include the viola d'amore, and the sitar.
Notes
External links
A figure from the Blüthner company showing how their Patented Aliquot System is arranged
Blüthner—Photos and Aliquot-patent
Acoustics
Keyboard instruments
String instruments | Aliquot stringing | [
"Physics"
] | 972 | [
"Classical mechanics",
"Acoustics"
] |
360,507 | https://en.wikipedia.org/wiki/Gimel%20function | In axiomatic set theory, the gimel function is the following function mapping cardinal numbers to cardinal numbers: ℷ(κ) = κ^cf(κ), where cf denotes the cofinality function; the gimel function is used for studying the continuum function and the cardinal exponentiation function. The symbol ℷ is a serif form of the Hebrew letter gimel.
Values of the gimel function
The gimel function has the property ℷ(κ) > κ for all infinite cardinals κ, by König's theorem.
For regular cardinals κ, ℷ(κ) = 2^κ, and Easton's theorem says that not much can be proved in ZFC about the values of this function on regular cardinals. For singular κ, upper bounds for ℷ(κ) can be found from Shelah's PCF theory.
The gimel hypothesis
The gimel hypothesis states that ℷ(κ) = max(2^cf(κ), κ⁺) for every infinite cardinal κ. In essence, this means that for singular κ the value of ℷ(κ) is the smallest value allowed by the axioms of Zermelo–Fraenkel set theory (assuming consistency).
Under this hypothesis cardinal exponentiation is simplified, though not to the extent of the continuum hypothesis (which implies the gimel hypothesis).
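The sense in which this is the smallest possible value follows from two general constraints; a short sketch of the inequalities involved:

```latex
\gimel(\kappa) \;=\; \kappa^{\operatorname{cf}(\kappa)} \;\ge\; 2^{\operatorname{cf}(\kappa)}
\qquad\text{and}\qquad
\gimel(\kappa) \;>\; \kappa \quad\text{(K\"onig's theorem)} ,
```

so ℷ(κ) ≥ max(2^cf(κ), κ⁺) holds for every infinite cardinal κ; the gimel hypothesis asserts that this lower bound is always attained.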
Reducing the exponentiation function to the gimel function
It can be shown that all cardinal exponentiation is determined (recursively) by the gimel function, as follows; a symbolic sketch of these rules is given after the list.
If is an infinite regular cardinal (in particular any infinite successor) then
If is infinite and singular and the continuum function is eventually constant below then
If is a limit and the continuum function is not eventually constant below then
The remaining rules hold whenever and are both infinite:
If then
If for some then
If and for all and then
If and for all and then
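One standard symbolic formulation of such a recursion, following the usual presentation (see, e.g., Jech's Set Theory, cited below), is sketched here; the notation 2^{<κ} = sup of 2^λ over λ < κ is assumed, and the grouping into cases is an illustration rather than a verbatim restatement of the list above:

```latex
% Continuum function (kappa infinite):
2^{\kappa} = \gimel(\kappa)                  \quad (\kappa \text{ regular})
2^{\kappa} = 2^{<\kappa}\cdot\gimel(\kappa)  \quad (\kappa \text{ singular, } \lambda \mapsto 2^{\lambda} \text{ eventually constant below } \kappa)
2^{\kappa} = \gimel\bigl(2^{<\kappa}\bigr)   \quad (\kappa \text{ a limit, } \lambda \mapsto 2^{\lambda} \text{ not eventually constant below } \kappa)

% Exponentiation for infinite kappa and lambda:
\kappa^{\lambda} = 2^{\lambda}     \quad (\kappa \le \lambda)
\kappa^{\lambda} = \mu^{\lambda}   \quad (\text{some } \mu < \kappa \text{ has } \mu^{\lambda} \ge \kappa)
\kappa^{\lambda} = \kappa          \quad (\lambda < \operatorname{cf}(\kappa) \text{ and } \mu^{\lambda} < \kappa \text{ for all } \mu < \kappa)
\kappa^{\lambda} = \gimel(\kappa)  \quad (\operatorname{cf}(\kappa) \le \lambda < \kappa \text{ and } \mu^{\lambda} < \kappa \text{ for all } \mu < \kappa)
```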
See also
Aleph number
Beth number
References
Thomas Jech, Set Theory, 3rd millennium ed., 2003, Springer Monographs in Mathematics, Springer, .
Cardinal numbers | Gimel function | [
"Mathematics"
] | 339 | [
"Cardinal numbers",
"Mathematical objects",
"Numbers",
"Infinity"
] |
360,581 | https://en.wikipedia.org/wiki/Tur%C3%A1n%20graph | The Turán graph, denoted by T(n, r), is a complete multipartite graph; it is formed by partitioning a set of n vertices into r subsets, with sizes as equal as possible, and then connecting two vertices by an edge if and only if they belong to different subsets. Where q and s are the quotient and remainder of dividing n by r (so n = qr + s), the graph is of the form K_{q+1, q+1, ..., q, q}, and the number of edges is
(1 − 1/r)(n² − s²)/2 + s(s − 1)/2.
When r divides n, this edge count can be more succinctly stated as (1 − 1/r)n²/2. The graph has s subsets of size q + 1, and r − s subsets of size q; each vertex has degree n − q − 1 or n − q. It is a regular graph if n is divisible by r (i.e. when s = 0).
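Because every non-edge lies inside a single part, the edge count can be computed directly from the partition into nearly equal parts. The following Python sketch (function names are illustrative) counts the edges of T(n, r) that way and checks the result against the closed-form expression above:

```python
from math import comb

def turan_edges_by_partition(n, r):
    """Edges of the Turán graph T(n, r), counted from its vertex partition."""
    q, s = divmod(n, r)                          # s parts of size q + 1, r - s parts of size q
    parts = [q + 1] * s + [q] * (r - s)
    non_edges = sum(comb(p, 2) for p in parts)   # pairs inside the same part are non-adjacent
    return comb(n, 2) - non_edges

def turan_edges_formula(n, r):
    """Closed-form count: (1 - 1/r)(n^2 - s^2)/2 + C(s, 2), with s = n mod r."""
    s = n % r
    return (r - 1) * (n * n - s * s) // (2 * r) + comb(s, 2)

if __name__ == "__main__":
    assert all(turan_edges_by_partition(n, r) == turan_edges_formula(n, r)
               for n in range(1, 40) for r in range(1, n + 1))
    print(turan_edges_by_partition(6, 3))        # 12, the octahedral graph K_{2,2,2}
```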
Turán's theorem
Turán graphs are named after Pál Turán, who used them to prove Turán's theorem, an important result in extremal graph theory.
By the pigeonhole principle, every set of r + 1 vertices in the Turán graph includes two vertices in the same partition subset; therefore, the Turán graph does not contain a clique of size r + 1. According to Turán's theorem, the Turán graph has the maximum possible number of edges among all (r + 1)-clique-free graphs with n vertices. It has also been shown that the Turán graph is the only (r + 1)-clique-free graph of order n in which every subset of αn vertices spans at least a certain minimum number of edges, provided α is sufficiently close to 1. The Erdős–Stone theorem extends Turán's theorem by bounding the number of edges in a graph that does not have a fixed Turán graph as a subgraph. Via this theorem, similar bounds in extremal graph theory can be proven for any excluded subgraph, depending on the chromatic number of the subgraph.
Special cases
Several choices of the parameter r in a Turán graph lead to notable graphs that have been independently studied.
The Turán graph T(2n,n) can be formed by removing a perfect matching from a complete graph K2n. As Roberts showed, this graph has boxicity exactly n; it is sometimes known as the Roberts graph. This graph is also the 1-skeleton of an n-dimensional cross-polytope; for instance, the graph T(6,3) = K2,2,2 is the octahedral graph, the graph of the regular octahedron. If n couples go to a party, and each person shakes hands with every person except his or her partner, then this graph describes the set of handshakes that take place; for this reason, it is also called the cocktail party graph.
The Turán graph T(n,2) is a complete bipartite graph and, when n is even, a Moore graph. When r is a divisor of n, the Turán graph is symmetric and strongly regular, although some authors consider Turán graphs to be a trivial case of strong regularity and therefore exclude them from the definition of a strongly regular graph.
The class of Turán graphs can have exponentially many maximal cliques, meaning this class does not have few cliques. For example, the Turán graph T(n, ⌈n/3⌉) has 3^a·2^b maximal cliques, where 3a + 2b = n and b ≤ 2; each maximal clique is formed by choosing one vertex from each partition subset. This is the largest number of maximal cliques possible among all n-vertex graphs regardless of the number of edges in the graph; these graphs are sometimes called Moon–Moser graphs.
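In any complete multipartite graph, a maximal clique consists of exactly one vertex from each part, so the number of maximal cliques is the product of the part sizes. The short Python sketch below (the function name is made up for illustration) maximizes that product over partitions of n into parts of size 2 and 3, reproducing the 3^a·2^b count:

```python
def most_maximal_cliques(n):
    """Largest product of part sizes over partitions of n into parts of size 2 and 3.

    Each maximal clique of a complete multipartite graph picks one vertex per part,
    so this product is the graph's number of maximal cliques.
    """
    best = 0
    for b in range(n // 2 + 1):                  # b parts of size 2
        remainder = n - 2 * b
        if remainder % 3 == 0:                   # a = remainder // 3 parts of size 3
            best = max(best, 3 ** (remainder // 3) * 2 ** b)
    return best

print([most_maximal_cliques(n) for n in range(2, 10)])   # [2, 3, 4, 6, 9, 12, 18, 27]
```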
Other properties
Every Turán graph is a cograph; that is, it can be formed from individual vertices by a sequence of disjoint union and complement operations. Specifically, such a sequence can begin by forming each of the independent sets of the Turán graph as a disjoint union of isolated vertices. Then, the overall graph is the complement of the disjoint union of the complements of these independent sets.
It has been shown that the Turán graphs are chromatically unique: no other graphs have the same chromatic polynomials. Nikiforov (2005) uses Turán graphs to supply a lower bound for the sum of the kth eigenvalues of a graph and its complement.
An efficient algorithm has been developed for finding clusters of orthologous groups of genes in genome data, by representing the data as a graph and searching for large Turán subgraphs.
Turán graphs also have some interesting properties related to geometric graph theory. A lower bound of Ω((rn)^(3/4)) has been given on the volume of any three-dimensional grid embedding of the Turán graph, and it has been conjectured that the maximum sum of squared distances, among n points with unit diameter in R^d, is attained for a configuration formed by embedding a Turán graph onto the vertices of a regular simplex.
An n-vertex graph G is a subgraph of a Turán graph T(n,r) if and only if G admits an equitable coloring with r colors. The partition of the Turán graph into independent sets corresponds to the partition of G into color classes. In particular, the Turán graph is the unique maximal n-vertex graph with an r-color equitable coloring.
Notes
References
External links
Parametric families of graphs
Extremal graph theory | Turán graph | [
"Mathematics"
] | 1,089 | [
"Mathematical relations",
"Graph theory",
"Extremal graph theory"
] |
360,601 | https://en.wikipedia.org/wiki/Tur%C3%A1n%27s%20theorem | In graph theory, Turán's theorem bounds the number of edges that can be included in an undirected graph that does not have a complete subgraph of a given size. It is one of the central results of extremal graph theory, an area studying the largest or smallest graphs with given properties, and is a special case of the forbidden subgraph problem on the maximum number of edges in a graph that does not have a given subgraph.
An example of an n-vertex graph that does not contain any (r + 1)-vertex clique may be formed by partitioning the set of vertices into r parts of equal or nearly equal size, and connecting two vertices by an edge whenever they belong to two different parts. The resulting graph is the Turán graph T(n, r). Turán's theorem states that the Turán graph has the largest number of edges among all K_{r+1}-free n-vertex graphs.
Turán's theorem, and the Turán graphs giving its extreme case, were first described and studied by Hungarian mathematician Pál Turán in 1941. The special case of the theorem for triangle-free graphs is known as Mantel's theorem; it was stated in 1907 by Willem Mantel, a Dutch mathematician.
Statement
Turán's theorem states that every graph with n vertices that does not contain K_{r+1} as a subgraph has at most as many edges as the Turán graph T(n, r). For a fixed value of r, this graph has (1 − 1/r)n²/2 + o(n²) edges, using little-o notation. Intuitively, this means that as n gets larger, the fraction of edges included in T(n, r) gets closer and closer to 1 − 1/r. Many of the following proofs only give the upper bound of (1 − 1/r)n²/2.
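In symbols, the weaker bound that most of the proofs below establish can be written as follows (a restatement for illustration):

```latex
% Turán's bound for a K_{r+1}-free graph G on n vertices
|E(G)| \;\le\; \Bigl(1 - \frac{1}{r}\Bigr)\frac{n^{2}}{2},
\qquad\text{with equality when } r \mid n \text{ and } G = T(n,r).
```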
Proofs
Aigner and Ziegler list five different proofs of Turán's theorem.
Many of the proofs involve reducing to the case where the graph is a complete multipartite graph, and showing that the number of edges is maximized when there are parts of size as close as possible to equal.
Induction
This was Turán's original proof. Take a K_{r+1}-free graph on n vertices with the maximal number of edges. Find a copy of K_r (which exists by maximality, as otherwise an edge could be added without creating a K_{r+1}), and partition the vertices into the set A of the vertices in this K_r and the set B of the other vertices.
Now, one can bound edges above as follows:
There are exactly r(r − 1)/2 edges within A.
There are at most (r − 1)(n − r) edges between A and B, since no vertex in B can connect to all of A.
The number of edges within B is at most the number of edges of T(n − r, r), by the inductive hypothesis.
Adding these bounds gives the result.
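These three bounds add up exactly to the number of edges of the Turán graph; writing e(·) for edge counts, the identity behind the induction (a sketch using the sets A and B above) is:

```latex
e(G) \;\le\; \binom{r}{2} \;+\; (r-1)(n-r) \;+\; e\bigl(T(n-r,\,r)\bigr) \;=\; e\bigl(T(n,\,r)\bigr).
```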
Maximal Degree Vertex
This proof is due to Paul Erdős. Take the vertex v of largest degree. Consider the set A of vertices not adjacent to v and the set B of vertices adjacent to v.
Now, delete all edges within A and draw all edges between A and B. This does not decrease the number of edges (since v has the largest degree) and keeps the graph K_{r+1}-free. Now, the subgraph induced on B is K_r-free, so the same argument can be repeated on B.
Repeating this argument eventually produces a graph in the same form as a Turán graph, which is a collection of independent sets, with edges between each two vertices from different independent sets. A simple calculation shows that the number of edges of this graph is maximized when all independent set sizes are as close to equal as possible.
Complete Multipartite Optimization
This proof, as well as the Zykov Symmetrization proof, involves reducing to the case where the graph is a complete multipartite graph, and showing that the number of edges is maximized when there are r independent sets with sizes as close as possible to equal. This step can be done as follows:
Let n_1, …, n_r be the sizes of the independent sets of the multipartite graph. Since two vertices have an edge between them if and only if they are not in the same independent set, the number of edges is
∑_{i<j} n_i n_j = (n² − (n_1² + ⋯ + n_r²))/2,
where the left hand side follows from direct counting, and the right hand side follows from complementary counting. To show the bound, applying the Cauchy–Schwarz inequality to the term n_1² + ⋯ + n_r² on the right hand side suffices, since n_1 + ⋯ + n_r = n.
To prove the Turán graph is optimal, one can argue that no two independent sets differ by more than one in size. In particular, supposing that n_i ≥ n_j + 2 for some i and j, moving one vertex from the larger set to the smaller one (and adjusting edges accordingly) would increase the number of edges. This can be seen by examining the changes to either side of the above expression for the number of edges, or by noting that the degree of the moved vertex increases.
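The bound in this argument comes from the following computation (a sketch, with n_1, …, n_r the part sizes summing to n):

```latex
\sum_{i<j} n_i n_j
\;=\; \frac{n^{2} - \sum_{i} n_i^{2}}{2}
\;\le\; \frac{n^{2} - n^{2}/r}{2}
\;=\; \Bigl(1 - \frac{1}{r}\Bigr)\frac{n^{2}}{2},
```

where ∑_i n_i² ≥ n²/r is the Cauchy–Schwarz inequality applied to the part sizes.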
Lagrangian
This proof is due to Motzkin and Straus. They begin by considering a K_{r+1}-free graph with vertices labelled 1, …, n, and considering the problem of maximizing the function ∑_{(i,j)∈E} x_i x_j over all nonnegative x_1, …, x_n with sum 1. This function is known as the Lagrangian of the graph and its edges.
The idea behind their proof is that if x_i and x_j are both nonzero while the vertices i and j are not adjacent in the graph, the function is linear as weight is shifted between x_i and x_j. Hence, one can replace (x_i, x_j) with either (x_i + x_j, 0) or (0, x_i + x_j) without decreasing the value of the function. Hence, there is a point with at most r nonzero variables where the function is maximized.
Now, the Cauchy–Schwarz inequality gives that the maximal value is at most (1 − 1/r)/2. Plugging in x_i = 1/n for all i gives that the maximal value is at least e(G)/n², where e(G) is the number of edges of the graph, giving the desired bound e(G) ≤ (1 − 1/r)n²/2.
Probabilistic Method
The key claim in this proof was independently found by Caro and Wei. This proof is due to Noga Alon and Joel Spencer, from their book The Probabilistic Method. The proof shows that every graph with degrees d_1, …, d_n has an independent set of size at least ∑_i 1/(d_i + 1). The proof attempts to find such an independent set as follows:
Consider a random permutation of the vertices of the given graph
Select every vertex that is adjacent to none of the vertices before it.
A vertex of degree d_i is included in this set with probability 1/(d_i + 1), so this process gives an average of ∑_i 1/(d_i + 1) vertices in the chosen set.
Applying this fact to the complement graph and bounding the size of the chosen set using the Cauchy–Schwarz inequality proves Turán's theorem. See the references for more detail.
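The expectation in this argument is easy to check numerically. The Python sketch below (the example graph and trial count are arbitrary choices made for illustration) runs the random-permutation selection on a 5-cycle and compares the average size of the chosen independent set with the bound ∑ 1/(d_i + 1) = 5/3:

```python
import random

def random_permutation_selection(adj, rng):
    """Scan a random vertex order and keep each vertex with no earlier neighbour."""
    order = list(adj)
    rng.shuffle(order)
    seen, chosen = set(), []
    for v in order:
        if not (adj[v] & seen):      # v precedes all of its neighbours
            chosen.append(v)
        seen.add(v)
    return len(chosen)               # the chosen vertices form an independent set

adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # the 5-cycle, every degree is 2
rng = random.Random(0)
trials = 20000
average = sum(random_permutation_selection(adj, rng) for _ in range(trials)) / trials
bound = sum(1 / (len(neighbours) + 1) for neighbours in adj.values())
print(round(average, 3), round(bound, 3))                 # both close to 5/3
```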
Zykov Symmetrization
Aigner and Ziegler call the final one of their five proofs "the most beautiful of them all". Its origins are unclear, but the approach is often referred to as Zykov Symmetrization as it was used in Zykov's proof of a generalization of Turán's theorem. This proof goes by taking a K_{r+1}-free graph, and applying steps to make it more similar to the Turán graph while increasing its edge count.
In particular, given a K_{r+1}-free graph, the following steps are applied:
If u and v are non-adjacent vertices and v has a higher degree than u, replace u with a copy of v. Repeat this until all non-adjacent vertices have the same degree.
If u, v, and w are vertices such that v is non-adjacent to both u and w, but u and w are adjacent, then replace both u and w with copies of v.
All of these steps keep the graph K_{r+1}-free while increasing the number of edges.
Now, non-adjacency forms an equivalence relation. The equivalence classes give any maximal graph the same form as a Turán graph. As in the maximal degree vertex proof, a simple calculation shows that the number of edges is maximized when all independent set sizes are as close to equal as possible.
Mantel's theorem
The special case of Turán's theorem for r = 2 is Mantel's theorem: the maximum number of edges in an n-vertex triangle-free graph is ⌊n²/4⌋. In other words, one must delete nearly half of the edges in K_n to obtain a triangle-free graph.
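For very small n, Mantel's bound can be verified exhaustively. The Python sketch below (pure brute force, only practical for tiny n, and given purely as an illustration) enumerates every graph on n labelled vertices and reports the largest edge count among the triangle-free ones, which matches ⌊n²/4⌋:

```python
from itertools import combinations

def max_triangle_free_edges(n):
    """Maximum edges of a triangle-free graph on n labelled vertices, by brute force."""
    vertices = range(n)
    possible_edges = list(combinations(vertices, 2))
    triples = list(combinations(vertices, 3))
    best = 0
    for mask in range(1 << len(possible_edges)):
        edges = {e for i, e in enumerate(possible_edges) if mask >> i & 1}
        if any({(a, b), (a, c), (b, c)} <= edges for a, b, c in triples):
            continue                              # this edge set contains a triangle
        best = max(best, len(edges))
    return best

for n in range(2, 6):
    print(n, max_triangle_free_edges(n), n * n // 4)   # the last two columns agree
```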
A strengthened form of Mantel's theorem states that any Hamiltonian graph with at least n²/4 edges must either be the complete bipartite graph K_{n/2,n/2} or it must be pancyclic: not only does it contain a triangle, it must also contain cycles of all other possible lengths up to the number of vertices in the graph.
Another strengthening of Mantel's theorem states that the edges of every n-vertex graph may be covered by at most ⌊n²/4⌋ cliques which are either edges or triangles. As a corollary, the graph's intersection number (the minimum number of cliques needed to cover all its edges) is at most ⌊n²/4⌋.
Generalizations
Other Forbidden Subgraphs
Turán's theorem shows that the largest number of edges in a K_{r+1}-free graph is the number of edges of the Turán graph T(n, r). The Erdős–Stone theorem finds the largest number of edges up to an o(n²) error when any other subgraph is forbidden: (Erdős–Stone) Suppose H is a graph with chromatic number r + 1. The largest possible number of edges in an n-vertex graph in which H does not appear as a subgraph is (1 − 1/r)n²/2 + o(n²), where the o(n²) term depends only on H. One can see that the Turán graph T(n, r) cannot contain any copies of H, so the Turán graph establishes the lower bound. As K_{r+1} has chromatic number r + 1, Turán's theorem is the special case in which H is K_{r+1}.
The general question of how many edges can be included in a graph without a copy of some given graph H is the forbidden subgraph problem.
Maximizing Other Quantities
Another natural extension of Turán's theorem is the following question: if a graph has no copies of K_{r+1}, how many copies of K_s can it have? Turán's theorem is the case where s = 2. Zykov's theorem answers this question: (Zykov's Theorem) The graph on n vertices with no copies of K_{r+1} and the largest possible number of copies of K_s is the Turán graph T(n, r). This was first shown by Zykov (1949) using Zykov symmetrization. Since the Turán graph contains r parts with size around n/r, the number of copies of K_s in T(n, r) is around C(r, s)·(n/r)^s.
A paper by Alon and Shikhelman in 2016 gives the following generalization, which is similar to the Erdős–Stone generalization of Turán's theorem: (Alon–Shikhelman, 2016) Let H be a graph with chromatic number r + 1. The largest possible number of copies of K_s in an n-vertex graph with no copy of H is, up to an error of o(n^s), the number of copies of K_s in the Turán graph T(n, r). As in Erdős–Stone, the Turán graph attains the desired number of copies of K_s.
Edge-Clique region
Turán's theorem states that if a graph has edge homomorphism density strictly above 1 − 1/r, it has a nonzero number of copies of K_{r+1}. One could ask the far more general question: if you are given the edge density of a graph, what can you say about the density of copies of K_{r+1}?
An issue with answering this question is that for a given density, there may be some bound not attained by any graph, but approached by some infinite sequence of graphs. To deal with this, weighted graphs or graphons are often considered. In particular, graphons contain the limit of any infinite sequence of graphs.
For a given edge density d, the construction for the largest clique density is as follows: take a number of vertices approaching infinity, pick a set consisting of a √d fraction of the vertices, and connect two vertices if and only if they are both in the chosen set. This gives a K_{r+1} density of d^((r+1)/2). The construction for the smallest clique density is as follows: take a number of vertices approaching infinity, let k be the integer such that 1 − 1/k ≤ d < 1 − 1/(k + 1), and take a complete (k + 1)-partite graph in which all parts but the unique smallest part have the same size, with the sizes of the parts chosen so that the total edge density is d. For d ≤ 1 − 1/r, this gives a graph that is (at most) r-partite and hence contains no copies of K_{r+1}.
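For the first construction, the stated density can be computed directly; writing c for the fraction of vertices placed in the clique (so c = √d in the limiting, graphon-style sense), the edge and clique densities are:

```latex
d \;=\; c^{2},
\qquad
d_{K_{r+1}} \;=\; c^{\,r+1} \;=\; d^{\,(r+1)/2} .
```

For triangles (r + 1 = 3) this gives d^{3/2}, matching the Kruskal–Katona upper bound mentioned below.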
The lower bound was proven by Razborov (2008) for the case of triangles, and was later generalized to all cliques by Reiher (2016). The upper bound is a consequence of the Kruskal–Katona theorem.
See also
Erdős–Stone theorem, a generalization of Turán's theorem from forbidden cliques to forbidden Turán graphs
References
Extremal graph theory
Theorems in graph theory
Articles containing proofs | Turán's theorem | [
"Mathematics"
] | 2,337 | [
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Extremal graph theory",
"Articles containing proofs",
"Theorems in graph theory"
] |
360,725 | https://en.wikipedia.org/wiki/Acetylcysteine | N-acetylcysteine, also known as Acetylcysteine and NAC, is a medication that is used to treat paracetamol (acetaminophen) overdose and to loosen thick mucus in individuals with chronic bronchopulmonary disorders, such as pneumonia and bronchitis. It has been used to treat lactobezoar in infants. It can be taken intravenously, orally (swallowed by mouth), or inhaled as a mist. It is also sometimes used as a dietary supplement.
Common side effects include nausea and vomiting when taken orally. The skin may occasionally become red and itchy with any route of administration. A non-immune type of anaphylaxis may also occur. It appears to be safe in pregnancy. For paracetamol overdose, it works by increasing the level of glutathione, an antioxidant that can neutralize the toxic breakdown products of paracetamol. When inhaled, it acts as a mucolytic by decreasing the thickness of mucus.
Acetylcysteine was initially patented in 1960 and came into medical use in 1968. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
The sulfur-containing amino acids cysteine and methionine are more easily oxidized than the other amino acids.
Uses
Medical uses
Paracetamol overdose antidote
Intravenous and oral formulations of acetylcysteine are available for the treatment of paracetamol (acetaminophen) overdose. When paracetamol is taken in large quantities, a minor metabolite called N-acetyl-p-benzoquinone imine (NAPQI) accumulates within the body. It is normally conjugated by glutathione, but when taken in excess, the body's glutathione reserves are not sufficient to deactivate the toxic NAPQI. This metabolite is then free to react with key hepatic enzymes, thereby damaging liver cells. This may lead to severe liver damage and even death by acute liver failure.
In the treatment of paracetamol (acetaminophen) overdose, acetylcysteine acts to maintain or replenish depleted glutathione reserves in the liver and enhance non-toxic metabolism of acetaminophen. These actions serve to protect liver cells from NAPQI toxicity. It is most effective in preventing or lessening hepatic injury when administered within 8–10 hours after overdose. Research suggests that the rate of liver toxicity is approximately 3% when acetylcysteine is administered within 10 hours of overdose.
Although IV and oral acetylcysteine are equally effective for this indication, oral administration is generally poorly tolerated due to the higher dosing required to overcome its low oral bioavailability, its foul taste and odor, and a higher incidence of adverse effects when taken orally, particularly nausea and vomiting. Prior pharmacokinetic studies of acetylcysteine did not consider acetylation as a reason for the low bioavailability of acetylcysteine. Oral acetylcysteine is identical in bioavailability to cysteine precursors. However, 3% to 6% of people given intravenous acetylcysteine show a severe, anaphylaxis-like allergic reaction, which may include extreme breathing difficulty (due to bronchospasm), a decrease in blood pressure, rash, angioedema, and sometimes also nausea and vomiting. Repeated doses of intravenous acetylcysteine will cause these allergic reactions to progressively worsen in these people.
Several studies have found this anaphylaxis-like reaction to occur more often in people given intravenous acetylcysteine despite serum levels of paracetamol not high enough to be considered toxic.
Mucolytic agent
Acetylcysteine exhibits mucolytic properties, meaning it reduces the viscosity and adhesiveness of mucus. This therapeutic effect is achieved through the cleavage of disulfide bonds within mucoproteins (strongly cross-linked mucins), thereby decreasing the mucus viscosity and facilitating its clearance from the respiratory tract. This mechanism is particularly beneficial in conditions characterized by excessive or thickened mucus, such as chronic obstructive pulmonary disease (COPD), cystic fibrosis, rhinitis or sinusitis. Acetylcysteine can be administered as a part of a complex molecule, Thiamphenicol glycinate acetylcysteine, which also contains thiamphenicol, an antibiotic.
Lungs
Inhaled acetylcysteine has been used for mucolytic therapy in addition to other therapies in respiratory conditions with excessive and/or thick mucus production. It is also used post-operatively, as a diagnostic aid, and in tracheotomy care. It may be considered ineffective in cystic fibrosis. A 2013 Cochrane review in cystic fibrosis found no evidence of benefit.
Acetylcysteine is used in the treatment of obstructive lung disease as an adjuvant treatment.
Other uses
Acetylcysteine has been used to complex palladium, to help it dissolve in water. This helps to remove palladium from drugs or precursors synthesized by palladium-catalyzed coupling reactions. N-acetylcysteine can be used to protect the liver.
Microbiological use
Acetylcysteine can be used in Petroff's method of liquefaction and decontamination of sputum, in preparation for recovery of mycobacterium. It also displays significant antiviral activity against influenza A viruses.
Acetylcysteine has bactericidal properties and breaks down bacterial biofilms of clinically relevant pathogens including Pseudomonas aeruginosa, Staphylococcus aureus, Enterococcus faecalis, Enterobacter cloacae, Staphylococcus epidermidis, and Klebsiella pneumoniae.
Side effects
The most commonly reported adverse effects for IV formulations of acetylcysteine are rash, urticaria, and itchiness.
Adverse effects for inhalational formulations of acetylcysteine include nausea, vomiting, stomatitis, fever, rhinorrhea, drowsiness, clamminess, chest tightness, and bronchoconstriction. Although infrequent, bronchospasm has been reported to occur unpredictably in some patients.
Adverse effects for oral formulations of acetylcysteine have been reported to include nausea, vomiting, rash, and fever.
Large doses in a mouse model showed that acetylcysteine could potentially cause damage to the heart and lungs. They found that acetylcysteine was metabolized to S-nitroso-N-acetylcysteine (SNOAC), which increased blood pressure in the lungs and right ventricle of the heart (pulmonary artery hypertension) in mice treated with acetylcysteine. The effect was similar to that observed following a 3-week exposure to an oxygen-deprived environment (chronic hypoxia). The authors also found that SNOAC induced a hypoxia-like response in the expression of several important genes both in vitro and in vivo. The implications of these findings for long-term treatment with acetylcysteine have not yet been investigated. The dose used by Palmer and colleagues was dramatically higher than that used in humans, the equivalent of about 20 grams per day. In humans, much lower dosages (600 mg per day) have been observed to counteract some age-related decline in the hypoxic ventilatory response as tested by inducing prolonged hypoxia.
Although N-acetylcysteine prevented liver damage in mice when taken before alcohol, when taken four hours after alcohol it made liver damage worse in a dose-dependent fashion.
Pharmacology
Pharmacodynamics
Acetylcysteine serves as a prodrug to L-cysteine, a precursor to the biologic antioxidant glutathione. Hence administration of acetylcysteine replenishes glutathione stores.
Glutathione, along with oxidized glutathione (GSSG) and S-nitrosoglutathione (GSNO), have been found to bind to the glutamate recognition site of the NMDA and AMPA receptors (via their γ-glutamyl moieties), and may be endogenous neuromodulators. At millimolar concentrations, they may also modulate the redox state of the NMDA receptor complex. In addition, glutathione has been found to bind to and activate ionotropic receptors that are different from any other excitatory amino acid receptor, and which may constitute glutathione receptors, potentially making it a neurotransmitter. As such, since N-acetylcysteine is a prodrug of glutathione, it may modulate all of the aforementioned receptors as well.
Glutathione also modulates the NMDA receptor by acting at the redox site.
L-cysteine also serves as a precursor to cystine, which in turn serves as a substrate for the cystine-glutamate antiporter on astrocytes; hence there is increasing glutamate release into the extracellular space. This glutamate in turn acts on mGluR2/3 receptors, and at higher doses of acetylcysteine, mGluR5. Acetylcysteine may have other biological functions in the brain, such as the modulation of dopamine release and the reduction in inflammatory cytokine formation possibly via inhibiting NF-κB and modulating cytokine synthesis. These properties, along with the reduction of oxidative stress and the re-establishment of glutamatergic balance, would lead to an increase in growth factors, such as brain-derived neurotrophic factor (BDNF), and the regulation of neuronal cell death through B-cell lymphoma 2 expression (BLC-2).
Pharmacokinetics
The oral bioavailability of acetylcysteine is relatively low due to extensive first-pass metabolism in the gut wall and liver. It ranges between 6% and 10%.
Intravenous administration of acetylcysteine bypasses the first-pass metabolism, resulting in higher bioavailability compared to oral administration. Intravenous administration of acetylcysteine ensures nearly 100% bioavailability as it directly enters the bloodstream.
Acetylcysteine is extensively metabolized in the liver, with minimal CYP450 involvement; 22–30% is excreted in the urine, and the elimination half-life is 5.6 hours in adults and 11 hours in newborns.
Acetylcysteine is the N-acetyl derivative of the amino acid L-cysteine, and is a precursor in the formation of the antioxidant glutathione in the body. The thiol (sulfhydryl) group confers antioxidant effects and is able to reduce free radicals.
Chemistry
Pure acetylcysteine is in a solid state at room temperature, appearing as a white crystalline powder or granules. The solid form of acetylcysteine is stable under normal conditions, but it can undergo oxidation if exposed to air or moisture over time, leading to the formation of its dimeric form, diacetylcysteine, which can have different properties. Acetylcysteine is highly hygroscopic, i.e., it absorbs moisture if exposed to open air.
Acetylcysteine can sometimes appear as a light yellow cast powder instead of pure white due to oxidation. The sulfur-containing amino acids, like cysteine, are more easily oxidized than other amino acids. When exposed to air or moisture, acetylcysteine can oxidize, leading to a slight yellowish tint.
Acetylcysteine in a form of a white or white with light yellow cast powder has a pKa of 9.5 at 30 °C.
N-acetyl-L-cysteine is soluble in water and alcohol, and practically insoluble in chloroform and ether.
Acetylcysteine is highly soluble in water, dissolving readily to form a clear solution. The pH of a saturated acetylcysteine solution typically ranges between 6.0 and 7.5, depending on temperature, the purity of the compound, the presence of other ions (which can affect the pH by interacting with acetylcysteine or altering the overall ionic strength of the solution), and the concentration of acetylcysteine itself: higher concentrations can lead to a lower pH due to the increased presence of the acetylcysteine molecule. This pH range of 6.0 to 7.5 means the solution is neither too acidic nor too alkaline, making it suitable for various medical applications. Aqueous solutions of acetylcysteine are compatible with 0.9% sodium chloride solution; compatibility with 5% and 10% glucose solutions is also good.
Regarding stability in sunlight, acetylcysteine in dry powder form is relatively stable and does not degrade quickly when exposed to sunlight, but in aqueous solution it can degrade on exposure to sunlight, especially if the solution is not stored in a dark, cool place. In addition, acetylcysteine in aqueous solution can undergo hydrolysis, leading to breakdown of the amide bond in the molecule. Nevertheless, aqueous solutions of acetylcysteine are generally stable when stored properly: they should be kept in tightly sealed containers at controlled room temperature to prolong stability.
Society and culture
Acetylcysteine was first studied as a drug in 1963. Amazon removed acetylcysteine for sale in the US in 2021, due to claims by the FDA of it being classified as a drug rather than a supplement. In April 2022, the FDA released draft guidance on FDA's policy regarding products labeled as dietary supplements that contain N-acetyl-L-cysteine. Amazon subsequently re-listed NAC products as of August 2022.
Research
While many antioxidants have been researched to treat a large number of diseases by reducing the negative effect of oxidative stress, acetylcysteine is one of the few that has yielded promising results, and is currently already approved for the treatment of paracetamol overdose.
In mouse mdx models of Duchenne's muscular dystrophy, treatment with 1–2% acetylcysteine in drinking water significantly reduces muscle damage and improves strength.
It is being studied in conditions such as autism, where cysteine and related sulfur amino acids may be depleted due to multifactorial dysfunction of methylation pathways involved in methionine catabolism.
Animal studies have also demonstrated its efficacy in reducing the damage associated with moderate traumatic brain or spinal injury, and also ischaemia-induced brain injury. In particular, it has been demonstrated to reduce neuronal losses and to improve cognitive and neurological outcomes associated with these traumatic events.
Research on acetylcysteine usage seems to show a positive efficiency in treating androgenetic alopecia (male baldness), with or without adjacent treatments such as the use of topical minoxidil solution.
Research on mouse models also shows that acetylcysteine could be "used as an efficient and safe therapeutic option for hair loss induced by chemotherapy".
It has been suggested that acetylcysteine may help people with aspirin-exacerbated respiratory disease by increasing levels of glutathione allowing faster breakdown of salicylates, although there is no evidence that it is of benefit.
Small studies have shown acetylcysteine to be of benefit to people with blepharitis. It has been shown to reduce ocular soreness caused by Sjögren's syndrome.
Research has found that acetylcysteine may have otoprotective properties and could potentially be useful for preventing hearing loss and tinnitus in some cases. A 2011 study showed that N-acetylcysteine may protect the human cochlea from subclinical hearing loss caused by loud noises such as impulse noise. In animal models, it reduced age-related hearing loss.
It has been shown effective in the treatment of Unverricht-Lundborg disease in an open trial in four patients. A marked decrease in myoclonus and some normalization of somatosensory evoked potentials with acetylcysteine treatment has been documented.
Addiction to certain addictive drugs (including cocaine, heroin, alcohol, and nicotine) is correlated with a persistent reduction in the expression of excitatory amino acid transporter 2 (EAAT2) in the nucleus accumbens (NAcc); the reduced expression of EAAT2 in this region is implicated in addictive drug-seeking behavior. In particular, the long-term dysregulation of glutamate neurotransmission in the NAcc of long-term, drug-dependent users is associated with an increase in vulnerability to relapse after re-exposure to the addictive drug or its associated drug cues. Drugs that help to normalize the expression of EAAT2 in this region, such as N-acetylcysteine, have been proposed as an adjunct therapy for the treatment of addiction to cocaine, nicotine, alcohol, and other drugs.
It has been tested for the reduction of hangover symptoms, though the overall results indicate very limited efficacy.
A double-blind placebo controlled trial of 262 patients has shown NAC treatment was well-tolerated and resulted in a significant decrease in the frequency of influenza-like episodes, severity, and length of time confined to bed.
Kidney and bladder
N-acetylcysteine has been widely believed to prevent adverse effects of long-term ketamine use on the bladder and kidneys, and there is a growing body of evidence to support this.
Evidence for the benefit of acetylcysteine to prevent radiocontrast induced kidney disease is mixed.
Acetylcysteine has been used for cyclophosphamide-induced haemorrhagic cystitis, although mesna is generally preferred due to the ability of acetylcysteine to diminish the effectiveness of cyclophosphamide.
Psychiatry
Acetylcysteine has been studied for major psychiatric disorders, including bipolar disorder, major depressive disorder, and schizophrenia.
Tentative evidence exists for N-acetylcysteine also in the treatment of Alzheimer's disease, autism, obsessive-compulsive disorder, specific drug addictions (cocaine), drug-induced neuropathy, trichotillomania, excoriation disorder, and a certain form of epilepsy (progressive myoclonic). Preliminary evidence showed efficacy in anxiety disorder, attention deficit hyperactivity disorder and mild traumatic brain injury although confirmatory studies are required. Tentative evidence also supports use in cannabis use disorder.
It is also being studied for use as a treatment of body-focused repetitive behavior.
Addiction
Evidence to date does not support the efficacy for N-acetylcysteine in treating addictions to gambling, methamphetamine, or nicotine. Based upon limited evidence, NAC appears to normalize glutamate neurotransmission in the nucleus accumbens and other brain structures, in part by upregulating the expression of excitatory amino acid transporter 2 (EAAT2), glutamate transporter 1 (GLT1), in individuals with addiction. While NAC has been demonstrated to modulate glutamate neurotransmission in adult humans who are addicted to cocaine, NAC does not appear to modulate glutamate neurotransmission in healthy adult humans. NAC has been hypothesized to exert beneficial effects through its modulation of glutamate and dopamine neurotransmission as well as its antioxidant properties.
Bipolar disorder
In bipolar disorder, N-acetylcysteine has been repurposed as an augmentation strategy for depressive episodes in light of the possible role of inflammation in the pathogenesis of mood disorders. Nonetheless, meta-analytic evidence shows that add-on N-acetylcysteine was more effective than placebo only in reducing depression scales scores (low quality evidence), without positive effects on response and remission outcomes, limiting its possible role in clinical practice to date.
COVID-19
Acetylcysteine has been studied as a possible treatment for COVID-19.
A combination of guanfacine and N-acetylcysteine has been found to lift the "brain fog" of eight patients with long COVID, according to researchers, but the results are inconclusive and have not been confirmed by other studies.
A combination of glycine and N-acetylcysteine is suspected to have potential to safely replenish depleted glutathione levels in COVID-19 patients.
External links
References
Acetamides
Alpha-Amino acids
Amino acid derivatives
Antidotes
Antioxidants
Excipients
Excitatory amino acid receptor ligands
Ophthalmology drugs
Sulfur compounds
Thiols
Treatment of bipolar disorder
Treatment of obsessive–compulsive disorder
Wikipedia medicine articles ready to translate
World Health Organization essential medicines | Acetylcysteine | [
"Chemistry"
] | 4,663 | [
"Organic compounds",
"Thiols"
] |
360,726 | https://en.wikipedia.org/wiki/Planisphere | In astronomy, a planisphere () is a star chart analog computing instrument in the form of two adjustable disks that rotate on a common pivot. It can be adjusted to display the visible stars for any time and date. It is an instrument to assist in learning how to recognize stars and constellations. The astrolabe, an instrument that has its origins in Hellenistic astronomy, is a predecessor of the modern planisphere.
The term planisphere contrasts with armillary sphere, where the celestial sphere is represented by a three-dimensional framework of rings.
Description
A planisphere consists of a circular star chart attached at its center to an opaque circular overlay that has a clear window or hole so that only a portion of the sky map will be visible in the window or hole area at any given time. The chart and overlay are mounted so that they are free to rotate about a common axis. The star chart contains the brightest stars, constellations and (possibly) deep-sky objects visible from a particular latitude on Earth. The night sky that one sees from the Earth depends on whether the observer is in the northern or southern hemispheres and the latitude. A planisphere window is designed for a particular latitude and will be accurate enough for a certain band either side of that. Planisphere makers will usually offer them in a number of versions for different latitudes. Planispheres only show the stars visible from the observer's latitude; stars below the horizon are not included.
A complete twenty-four-hour time cycle is marked on the rim of the overlay. A full twelve months of calendar dates are marked on the rim of the starchart. The window is marked to show the direction of the eastern and western horizons. The disk and overlay are adjusted so that the observer's local time of day on the overlay corresponds to that day's date on the star chart disc. The portion of the star chart visible in the window then represents (with a distortion because it is a flat surface representing a spherical volume) the distribution of stars in the sky at that moment for the planisphere's designed location. Users hold the planisphere above their head with the eastern and western horizons correctly aligned to match the chart to actual star positions.
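Setting the time scale against the date scale effectively selects the local sidereal time, the right ascension currently crossing the observer's meridian. The Python sketch below uses a common low-precision approximation of Greenwich sidereal time (the constants come from the usual J2000-based formula; fraction-of-a-degree accuracy is more than enough for a planisphere, and the sample time and longitude are arbitrary examples):

```python
from datetime import datetime, timezone

def local_sidereal_degrees(when_utc, east_longitude_deg):
    """Approximate local sidereal time in degrees, i.e. the right ascension on the meridian."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)        # J2000.0 epoch
    days = (when_utc - j2000).total_seconds() / 86400.0          # days since J2000.0
    gmst = 280.46061837 + 360.98564736629 * days                 # Greenwich mean sidereal time
    return (gmst + east_longitude_deg) % 360.0

t = datetime(2024, 1, 15, 22, 0, tzinfo=timezone.utc)
print(round(local_sidereal_degrees(t, 0.0), 1))   # right ascension (degrees) on the Greenwich meridian
```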
History
The word planisphere (Latin planisphaerium) was originally used in the second century by Claudius Ptolemy to describe the representation of a spherical Earth by a map drawn in the plane.
This usage continued into the Renaissance: for example Gerardus Mercator described his 1569 world map as a planisphere.
In this article the word describes the representation of the star-filled celestial sphere on a flat disc.
The first star chart to have the name "planisphere" was made in 1624 by Jacob Bartsch.
Bartsch was the son-in-law of Johannes Kepler, discoverer of Kepler's laws of planetary motion.
The star chart
Since the planisphere shows the celestial sphere in a printed flat, there is always considerable distortion. Planispheres, like all charts, are made using a certain projection method. For planispheres there are two major methods in use, leaving the choice with the designer. One such method is the polar azimuthal equidistant projection. Using this projection the sky is charted centered on one of the celestial poles (polar), while circles of equal declination (for instance 60°, 30°, 0° (the celestial equator), −30°, and −60°) lie equidistant from each other and from the poles (equidistant). The shapes of the constellations are proportionally correct in a straight line from the centre outwards, but at right angles to this direction (parallel to the declination circles) there is considerable distortion. That distortion will be worse as the distance to the pole gets greater. If we study the famous constellation of Orion in this projection and compare this to the real Orion, we can clearly see this distortion. One notable planisphere using azimuthal equidistant projection addresses this issue by printing a northern view on one side and the southern view on the other, thus reducing the distance charted from the center outward.
The stereographic projection solves this problem while introducing another. Using this projection the distances between the declination circles are enlarged in such a way that the shapes of the constellations remain correct. Naturally in this projection the constellations on the edge become too large in comparison to constellations near the celestial pole: Orion will be twice as high as it should be. (This is the same effect that makes Greenland so huge in Mercator maps.) Another disadvantage is that, with more space for constellations near the edge of the planisphere, the space for the constellations around the celestial pole in question will be less than they deserve. For observers at moderate latitudes, who can see the sky near the celestial pole of their hemisphere better than that nearer the horizon, this may be a good reason to prefer a planisphere made with the polar azimuthal equidistant projection method.
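The difference between the two projections can be seen directly in how declination maps to distance from the chart centre. The Python sketch below (scaled so that the celestial equator lies at radius 1 in both cases; purely illustrative) shows the evenly spaced circles of the equidistant projection and the outward stretching of the stereographic one:

```python
import math

def chart_radius(declination_deg, projection):
    """Distance from the chart centre (north celestial pole) to a given declination circle."""
    colatitude = 90.0 - declination_deg                    # angular distance from the pole
    if projection == "equidistant":
        return colatitude / 90.0                           # declination circles evenly spaced
    if projection == "stereographic":
        return math.tan(math.radians(colatitude) / 2.0)    # shapes preserved, edge stretched
    raise ValueError(projection)

for dec in (60, 30, 0, -30):
    print(dec,
          round(chart_radius(dec, "equidistant"), 2),
          round(chart_radius(dec, "stereographic"), 2))
```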
The upper disc
The upper disc contains a "horizon", that defines the visible part of the sky at any given moment, which is naturally half of the total starry sky. That horizon line is most of the time also distorted, for the same reason the constellations are distorted.
The horizon line on a stereographic projection is a perfect circle.
The horizon line on other projections is a kind of "collapsed" oval.
The horizon is designed for a particular latitude and thus determines the area for which a planisphere is meant. Some more expensive planispheres have several upper discs that can be exchanged, or have an upper disc with more horizon-lines, for different latitudes.
When a planisphere is used in a latitude zone other than the zone for which it was designed, the user will either see stars that are not in the planisphere, or the planisphere will show stars that are not visible in that latitude zone's sky. To study the starry sky thoroughly it may be necessary to buy a planisphere particularly for the area in question.
However, most of the time the part of the sky near the horizon will not show many stars, due to hills, woods, buildings or just because of the thickness of the atmosphere we look through. The lower 5° above the horizon in particular hardly shows any stars (let alone objects) except under the very best conditions. Therefore, a planisphere can fairly accurately be used from +5° to −5° of the design latitude. For example, a planisphere for 40° north can be used between 35° and 45° north.
Coordinates
Accurate planispheres represent the celestial coordinates: right ascension and declination. The changing positions of planets, asteroids or comets in terms of these coordinates can be looked up in annual astronomical guides, and these enable planisphere users to find them in the sky.
Some planispheres use a separate pointer for the declination, using the same pivot point as the upper disc. Some planispheres have a declination feature printed on the upper disc, along the line connecting north and south on the horizon. Right ascension is represented on the edge, where the dates with which to set the planisphere are also found.
See also
Celestial globe - the representation of the starry sky on an apparent celestial sphere.
List of astronomical instruments
Armillary sphere - a framework of brass rings, which represent the principal circles of the heavens.
Volvelle
References
External links
Bartsch, Jacob. Usus Astronomicus Planisphaerii Stellati, 1624. (Scans by Felice Stoppa.) The first cartographic use of the term planisphere.
Uncle Al's Sky Wheel – northern hemisphere planisphere.
Southern Star Wheel – southern hemisphere planisphere.
Toshimi Taki's planisphere – double-sided planisphere mainly for equatorial areas.
Astronomy in Your Hands - create your planisphere customized to any latitude/longitude in the globe.
Star atlases
Navigational equipment
Astronomy education
Analog computers
Astronomical instruments | Planisphere | [
"Astronomy"
] | 1,714 | [
"Astronomy education",
"Astronomical instruments"
] |
360,759 | https://en.wikipedia.org/wiki/One%20Piece | One Piece (stylized in all caps) is a Japanese manga series written and illustrated by Eiichiro Oda. It has been serialized in Shueisha's manga magazine Weekly Shōnen Jump since July 1997, with its chapters compiled in 110 volumes . The series follows the adventures of Monkey D. Luffy and his crew, the Straw Hat Pirates, as he explores the Grand Line in search of the mythical treasure known as the "One Piece" to become the next King of the Pirates.
The manga spawned a media franchise, having been adapted into a festival film by Production I.G, and an anime series by Toei Animation, which began broadcasting in 1999. Additionally, Toei has developed fourteen animated feature films, one original video animation, and thirteen television specials. Several companies have developed various types of merchandising and media, such as a trading card game and numerous video games. The manga series was licensed for an English language release in North America and the United Kingdom by Viz Media and in Australia by Madman Entertainment. The anime series was licensed by 4Kids Entertainment for an English-language release in North America in 2004 before the license was dropped and subsequently acquired by Funimation in 2007. Netflix released a live action TV series adaptation in 2023.
One Piece has received praise for its storytelling, world-building, art, characterization, and humour. It has received many awards and is ranked by critics, reviewers, and readers as one of the best manga of all time. By August 2022, it had over 516.6 million copies in circulation in 61 countries and regions worldwide, making it the best-selling manga series in history, and the best-selling comic series printed in a book volume. Several volumes of the manga have broken publishing records, including the highest initial print run of any book in Japan. In 2015 and 2022, One Piece set the Guinness World Record for "the most copies published for the same comic book series by a single author". It was the best-selling manga for eleven consecutive years from 2008 to 2018 and is the only manga whose volumes maintained initial print runs above 3 million copies continuously for more than 10 years, as well as the only one to have sold more than 1 million copies of every one of its more than 100 published volumes. One Piece is the only manga whose volumes have ranked first in Oricon's weekly comic chart every year since the chart began in 2008.
Synopsis
Setting
The world of One Piece is populated by humans and other races such as dwarves (more akin to faeries in size), giants, merfolk, fish-men, long-limbed tribes, long-necked people known as the Snakeneck Tribe, and animal people (known as "Minks"). The world is governed by an intercontinental organization known as the World Government, consisting of dozens of member countries. The Navy is the sea military branch of the World Government that protects the known seas from pirates and other criminals. There is also Cipher Pol which is a group of agencies within the World Government that are their secret police. While pirates are major opponents of the Government, the ones who challenge their rule are the Revolutionary Army who seek to overthrow them. The central tension of the series pits the World Government and their forces against pirates. The series regularly emphasizes moral ambiguity over the label "pirate", which includes cruel villains, but also any individuals who do not submit to the World Government's authoritarian—and often morally ambiguous—rule. The One Piece world also has supernormal characteristics like Devil Fruits, which are mysterious fruits that grant whoever eats them transformative powers at the cost of becoming weakened in bodies of water, resulting in them losing the ability to swim. Another supernatural power is Haki, which grants its users enhanced willpower, observation, and fighting abilities, and it is one of the only effective methods of inflicting bodily harm on certain Devil Fruit users.
The world itself consists of two vast oceans divided by a massive mountain range called the Red Line. Within the oceans is a second global phenomenon known as the Grand Line, which is a sea that runs perpendicular to the Red Line and is bounded by the Calm Belt, strips of calm ocean infested with huge ship-eating monsters known as Sea Kings. These geographical barriers divide the world into four seas: North Blue, East Blue, West Blue, and South Blue. Passage between the four seas, and the Grand Line, is therefore difficult. Unique and mystical features enable transport between the seas, such as the use of Sea Prism Stone employed by government ships to mask their presence as they traverse the Calm Belt, or the Reverse Mountain where water from the four seas flows uphill before merging into a rapidly flowing and dangerous canal that enters the Grand Line. The Grand Line itself is split into two separate halves with the Red Line between being Paradise and the New World.
Premise
The series focuses on Monkey D. Luffy—a young man made of rubber after unintentionally eating the Gum-Gum Fruit—who sets off on a journey from the East Blue Sea to find the deceased King of the Pirates Gol D. Roger's ultimate treasure known as the "One Piece", and take over his prior title. Luffy sets sail as captain of the Straw Hat Pirates, and is joined by Roronoa Zoro, a swordsman and former bounty hunter; Nami, a money-obsessed thief and navigator; Usopp, a sniper and compulsive liar; and Sanji, an amorous but chivalrous cook. They acquire a ship, the Going Merry—later replaced by the Thousand Sunny—and engage in confrontations with notorious pirates. As Luffy and his crew set out on their adventures, others join the crew later in the series, including Tony Tony Chopper, an anthropomorphized reindeer doctor; Nico Robin, an archaeologist and former Baroque Works assassin; Franky, a cyborg shipwright; Brook, a skeleton musician and swordsman; and Jimbei, a whale shark-type fish-man and former member of the Seven Warlords of the Sea who becomes their helmsman. Together, they encounter other pirates, bounty hunters, criminal organizations, revolutionaries, secret agents, scientists, soldiers of the morally ambiguous World Government, and various other friends and foes, as they sail the seas in pursuit of their dreams.
Production
Concept and creation
Eiichiro Oda's interest in pirates began in his childhood, watching the animated series Vicky the Viking, which inspired him to want to draw a manga series about pirates. The reading of pirate biographies influenced Oda to incorporate the characteristics of real-life pirates into many of the characters in One Piece; for example, the character Marshall D. Teach is based on and named after the historical pirate Edward "Blackbeard" Teach. Apart from the history of piracy, Oda's biggest influence is Akira Toriyama and his series Dragon Ball, which is one of his favorite manga.
While working as an assistant to Nobuhiro Watsuki, Oda began writing One Piece in 1996. It started as two one-shot stories entitled Romance Dawn—which would later be used as the title for One Piece's first chapter and volume. They both featured the character of Luffy and included elements that would appear later in the main series. The first of these short stories was published in August 1996 in Shueisha's Akamaru Jump, and reprinted in 2002 in the One Piece Red guidebook. The second was published in the 41st issue of Weekly Shōnen Jump in September 1996, and reprinted in 1998 in Oda's short story collection, Wanted! In an interview with TBS, Takanori Asada, the original editor of One Piece, revealed that the manga was rejected by Weekly Shōnen Jump three times before they agreed to publish the series. Kazuhiko Torishima, then the magazine's editor-in-chief, explained that they debated for two hours on whether or not to serialize One Piece. Although acknowledging that it had potential, he was one of those against the work because it was "incomplete". But Torishima ultimately approved serialization because Asada was so "annoyingly earnest" that another editor suggested both Oda and Asada would be crushed if it was rejected at that time.
Development
Oda's primary inspiration for the concept of Devil Fruits was Doraemon; the Fruits' abilities and uses reflect Oda's daily life and his personal fantasies, similar to that of Doraemon's gadgets, such as the Gum-Gum Fruit being inspired by Oda's laziness. When designing the outward appearance of Devil Fruits Oda thinks of something that would fulfill a human desire; he added that he does not see why he would draw a Devil Fruit unless the fruit's appearance would entice one to eat it. The names of many special attacks, as well as other concepts in the manga, consist of a form of punning in which phrases written in kanji are paired with an idiosyncratic reading. The names of some characters' techniques are often mixed with other languages, and the names of several of Zoro's sword techniques are designed as jokes; they look fearsome when read by sight but sound like kinds of food when read aloud. For example, Zoro's signature move is Onigiri, which is written as demon cut but is pronounced the same as rice ball in Japanese. Eisaku Inoue, the animation director, has said that the creators did not use these kanji readings in the anime since they "might have cut down the laughs by about half". Nevertheless, Konosuke Uda, the director, said that he believes that the creators "made the anime pretty close to the manga".
Oda was "sensitive" about how his work would be translated. In many instances, the English version of the One Piece manga uses one onomatopoeia for multiple onomatopoeiae used in the Japanese version. For instance, "saaa" (the sound of light rain, close to a mist) and "zaaa" (the sound of pouring rain) are both translated as "fshhhhhhh". Unlike other manga artists, Oda draws everything that moves himself to create a consistent look while leaving his staff to draw the backgrounds based on sketches he has drawn. This workload forces him to keep tight production rates, starting from five in the morning until two in the morning the next day, with short breaks only for meals. Oda's work program includes the first three days of the week dedicated to the writing of the storyboard and the remaining time for the definitive inking of the boards and the possible colouring. When a reader asked who Nami was in love with, Oda replied that there would hardly be any love affairs within Luffy's crew. The author also explained he deliberately avoids including them in One Piece since the series is a shōnen manga and the boys who read it are not interested in love stories.
Conclusion
Oda revealed that he originally planned One Piece to last five years and that he had already planned the ending. However, he found it would take longer than expected, as he realized he liked the story too much to end it in that period of time. In 2016, nineteen years after the start of serialization, the author said that the manga had reached 65% of the story he intends to tell. In July 2018, on the occasion of the twenty-first anniversary of One Piece, Oda said that the manga had reached 80% of the plot. In a television special aired in Japan in January 2019, Oda said that One Piece is on its way to the conclusion, but that it would exceed the 100th volume, also commenting that he would be willing to change the ending if fans were able to predict it. When asked if the titular treasure is "family bonds", Oda replied: "No, I hate that kind of thing", mentioning the ending of The Wizard of Oz and saying that he cannot stand stories in which the reward of the adventure is the adventure itself, opting for a story where travel is important, but the goal is even more so. In August 2019, Oda said that, according to his predictions, the manga would end in five years. However, Oda stated that the ending would be what he had decided in the beginning; he is committed to seeing it through. In August 2020, Shueisha announced in the year's 35th issue of Weekly Shōnen Jump that One Piece was "headed toward the upcoming final saga." On January 4, 2021, One Piece reached its thousandth chapter. In June 2022, Oda announced that the manga would enter a one-month break to prepare for its 25th anniversary and its final saga, set to begin with the release of chapter 1054.
Media
Manga
Written and illustrated by Eiichiro Oda, One Piece has been serialized by Shueisha in the manga anthology Weekly Shōnen Jump since July 22, 1997. Shueisha has collected its chapters into individual volumes. The first volume was released on December 24, 1997. By November 1, 2024, a total of 110 volumes have been released.
The first English translation of One Piece was released by Viz Media in November 2002; it published the chapters in the manga anthology Shonen Jump and later collected them in volumes beginning June 30, 2003. In 2009, Viz announced the release of five volumes per month during the first half of 2010 to catch up with the serialization in Japan. Following the discontinuation of the print Shonen Jump, Viz began releasing One Piece chapterwise in its digital successor Weekly Shonen Jump on January 30, 2012. Following the digital Weekly Shonen Jump's cancellation in December 2018, One Piece began being published simultaneously by Viz Media through its Shonen Jump service and by Shueisha through Manga Plus in January 2019.
In the United Kingdom, the volumes were published by Gollancz Manga, starting in March 2006, until Viz Media took it over after the fourteenth volume. In Australia and New Zealand, the English volumes have been distributed by Madman Entertainment since November 10, 2008.
Spin-offs and crossovers
Oda teamed up with Akira Toriyama to create a single crossover of One Piece and Toriyama's Dragon Ball. Entitled Cross Epoch, the one-shot was published in the December 25, 2006, issue of Weekly Shōnen Jump and the April 2011 issue of the English Shonen Jump. Oda collaborated with Mitsutoshi Shimabukuro, author of Toriko, for a crossover one-shot of their series titled , published in Weekly Shōnen Jump on April 4, 2011. The spin-off series , written by Ei Andō in a super deformed art style, began serialization in Saikyō Jump on December 5, 2011. Its final chapter was published on Shōnen Jump+ on February 2, 2021.
Anime
Festival films and original video animation
One Piece: Defeat Him! The Pirate Ganzack! was produced by Production I.G for the 1998 Jump Super Anime Tour and was directed by Gorō Taniguchi. Luffy, Nami, and Zoro are attacked by a sea monster that destroys their boat and separates them. Luffy is found on an island beach, where he saves a little girl, Medaka, from two pirates. All the villagers, including Medaka's father, have been abducted by Ganzack and his crew and forced into labour. After hearing that Ganzack also stole all the food, Luffy and Zoro rush out to retrieve it. As they fight the pirates, one of them kidnaps Medaka. A fight starts between Luffy and Ganzack, ending with Luffy's capture. Meanwhile, Zoro is forced to give up after a threat is made to kill all the villagers. They rise against Ganzack, and while the islanders and pirates fight, Nami unlocks the three captives. Ganzack defeats the rebellion and reveals his armoured battleship. The Straw Hat Pirates are forced to fight Ganzack once more to prevent him from destroying the island.
A second film, One Piece: Romance Dawn Story, was produced by Toei Animation in July 2008 for the Jump Super Anime Tour. It is 34 minutes in length and based on the first version of Romance Dawn. It includes the Straw Hat Pirates up to Brook and their second ship, the Thousand Sunny. In search of food for his crew, Luffy arrives at a port after defeating a pirate named Crescent Moon Gally on the way. There he meets a girl named Silk, who was abandoned by attacking pirates as a baby and raised by the mayor. Her upbringing causes her to value the town as her "treasure". The villagers mistake Luffy for Gally and capture him just as the real Gally returns. Gally throws Luffy in the water and plans to destroy the town, but Silk saves him and Luffy pursues Gally. His crew arrives to help him, and with their help, he recovers the treasure for the town, acquires food, and destroys Gally's ship. The film was later released on a triple-feature DVD with Dragon Ball: Yo! Son Goku and His Friends Return!! and Tegami Bachi: Light and Blue Night, available only through a mail-in offer to Japanese residents.
The One Piece Film Strong World: Episode 0 original video animation adapts the manga's special "Chapter 0", which shows how things were before and after the death of Roger. It received a limited release of three thousand DVDs as a collaboration with the House Foods brand.
1999 TV series
An anime television series adaptation produced by Toei Animation premiered on Fuji Television on October 20, 1999; the series reached its 1,000th episode in November 2021.
Theatrical films
Fourteen animated theatrical films produced by Toei Animation based on the One Piece series have been released. The films are typically released in March to coincide with the spring vacation of Japanese schools. The films feature self-contained, completely original plots, or alternate retellings of story arcs with animation of a higher quality than what the weekly anime allows. The first three films were typically double features paired up with other anime films and were thus usually an hour or less in length. The films themselves offer contradictions in both chronology and design that make them incompatible with a single continuity. Funimation has licensed the eighth, tenth, and twelfth films for release in North America, and these films have received in-house dubs by the company.
Upcoming original net animation
In December 2023 at the Jump Festa '24 event, it was announced that Wit Studio would be producing an original net animation (ONA) series remake for Netflix, starting from the East Blue story arc, to commemorate the 25th anniversary of the original anime series. The remake will be titled The One Piece. It will be directed by Masashi Koizuka, with Hideaki Abe serving as assistant director, and Kyoji Asano and Takatoshi Honda as character designers and chief animation directors. Yasuhiro Kajino will be in charge of the image board and creature design, and Eri Taguchi will be in charge of the prop design. Taku Kishimoto will be in charge of the series scripts, and Ken Imaizumi and Shuhei Fukuda will serve as action animators. Tomonori Kuroda will be the art director, and Ryōma Kawamura will be the animation producer.
Live-action series
On July 21, 2017, Weekly Shōnen Jump editor-in-chief Hiroyuki Nakano announced that Tomorrow Studios (a partnership between Marty Adelstein and ITV Studios) and Shueisha would commence production of an American live-action television adaptation of Eiichiro Oda's One Piece manga series as part of the series' 20th anniversary celebrations. Eiichiro Oda served as executive producer for the series alongside Tomorrow Studios CEO Adelstein and Becky Clements. The series would reportedly begin with the East Blue arc.
In January 2020, Oda revealed that Netflix ordered a first season consisting of ten episodes. On May 19, 2020, producer Marty Adelstein revealed during an interview with SyFy Wire that the series was originally set to begin filming in Cape Town sometime around August, but had since been delayed to around September due to COVID-19. During the same interview, he also revealed that all ten scripts had been written for the series and that casting was set to begin sometime in June. However, executive producer Matt Owens stated in September 2020 that casting had not yet commenced.
In March 2021, production started up again with showrunner Steven Maeda revealing that the series codename is Project Roger. In November 2021, it was announced that the casting for the series includes Iñaki Godoy as Monkey D. Luffy, Mackenyu as Roronoa Zoro, Emily Rudd as Nami, Jacob Romero Gibson as Usopp and Taz Skylar as Sanji. In March 2022, Netflix added Morgan Davies as Koby, Ilia Isorelýs Paulino as Alvida, Aidan Scott as Helmeppo, Jeff Ward as Buggy, McKinley Belcher III as Arlong, Vincent Regan as Garp and Peter Gadiot as Shanks to the cast in recurring roles.
The series was positively received by both fans and critics, and on September 15, 2023, Oda revealed that the show has been renewed for a second season.
Video games
The One Piece franchise has been adapted into multiple video games published by subsidiaries of Bandai and later as part of Bandai Namco Entertainment. The games have been released on a variety of video game consoles, handheld consoles, and mobile devices. They include role-playing games and fighting games, such as the titles of the Grand Battle! meta-series. The series debuted on July 19, 2000, with From TV Animation – One Piece: Become the Pirate King!. Over forty games have been produced based on the franchise. Additionally, One Piece characters and settings have appeared in various Shonen Jump crossover games, such as Battle Stadium D.O.N, Jump Super Stars, Jump Ultimate Stars, J-Stars Victory VS and Jump Force.
Music
Music soundtracks have been released that are based on songs that premiered in the series. Kohei Tanaka and Shiro Hamaguchi composed the score for One Piece. Various theme songs and character songs were released on a total of 51 singles. Eight compilation albums and seventeen soundtrack CDs have been released featuring songs and themes that were introduced in the series. On August 11, 2019, it was announced that the musical group Sakuramen is collaborating with Kohei Tanaka to compose music for the anime's "Wano Country" story arc.
Light novels
A series of light novels was published based on the first festival film, certain episodes of the anime television series, and all but the first feature film. They feature artwork by Oda and are written by Tatsuya Hamasaki. The first of these novels, One Piece: Defeat The Pirate Ganzak! was released on June 3, 1999. One Piece: Logue Town Chapter followed on July 17, 2000, as an adaptation of the anime television series' Logue Town story arc. The first feature film to be adapted was Clockwork Island Adventure on March 19, 2001. The second, and so far last, light novel adaptation of an anime television series arc, One Piece: Thousand-year Dragon Legend, was published on December 25, 2001. The adaptation of Chopper's Kingdom on the Island of Strange Animals was released on March 22, 2002, and that of Dead End Adventure on March 10, 2003. Curse of the Sacred Sword followed on March 22, 2004, and Baron Omatsuri and the Secret Island on March 14, 2005. The light novel of The Giant Mechanical Soldier of Karakuri Castle was released on March 6, 2006, and that of The Desert Princess and the Pirates: Adventures in Alabasta on March 7, 2007. A novel adaptation of Episodes of Chopper Plus: Bloom in the Winter, Miracle Cherry Blossom was released on February 25, 2008.
Art and guidebooks
Five art books and five guidebooks for the One Piece series have been released. The first art book, One Piece: Color Walk 1, released June 2001, was also released in English by Viz Media on November 8, 2005. A second art book, One Piece: Color Walk 2, was released on November 4, 2003; and One Piece: Color Walk 3 – Lion the third art book, was released January 5, 2006. The fourth art book, subtitled Eagle, was released on March 4, 2010, and One Piece: Shark, the fifth art book, was released on December 3, 2010.
The first guidebook One Piece: Red – Grand Characters was released on March 2, 2002. The second, One Piece: Blue – Grand Data File, followed on August 2, 2002. The third guidebook, One Piece: Yellow – Grand Elements, was released on April 4, 2007, and the fourth, One Piece: Green – Secret Pieces, followed on November 4, 2010. An anime guidebook, One Piece: Rainbow!, was released on May 1, 2007, and covers the first eight years of the TV anime.
Other media
Other One Piece media include a trading card game by Bandai called One Piece CCG and a drama CD centering on the character of Nefertari Vivi released by Avex Trax on December 26, 2002. A Hello Kitty-inspired Chopper was used for several pieces of merchandise as a collaboration between One Piece and Hello Kitty. A play inspired by One Piece, Super Kabuki II: One Piece, ran at Tokyo's Shinbashi Enbujō throughout October and November 2015.
An event called "One Piece Premier Show" debuted at Universal Studios Japan in 2007. The event has been held at the same location every year since 2010. (except in 2020, when the event was canceled due to the COVID-19 pandemic). By 2018, the event has attracted over 1 million visitors. The Baratie restaurant, modeled after the restaurant of the same name in the manga, opened in June 2013 at the Fuji Television headquarters. An indoor theme park located inside the Tokyo Tower called the Tokyo One Piece Tower, which includes some attractions, shops and restaurants, opened on March 13, 2015.
One Piece is the first-ever manga series to hold a "Dome Tour", in which events were held from March 25–27, 2011, at the Kyocera Dome in Osaka, and from April 27 – May 1 of the same year at the Tokyo Dome. In 2014, the first One Piece exhibition in South Korea was held at the War Memorial of Korea, and the second exhibition in Hongik Daehango Art Center. In 2015, a One Piece exhibition was held at the Hong Kong 3D Museum.
One Piece on Ice: Episode of Alabasta premiered on August 11, 2023, in Yokohama, starring two-time reigning world champion Shoma Uno in the lead role of Monkey D. Luffy and junior world champion Marin Honda as Princess Vivi. Other cast members included Four Continents champion Nobunari Oda, Kazuki Tomono, Keiji Tanaka, Koshiro Shimada, and Rika Hongo.
Reception
Sales
One Piece is the best-selling manga series in history; in 2012, Oricon, a Japanese company that began its own annual manga sales ranking chart in 2008, reported that the series was the first to sell 100 million copies (the company does not report on sales figures before April 2008). The series had over 300 million copies in circulation by November 2013; it had over 440 million copies in circulation worldwide by May 2018; 460 million copies by December 2019; 470 million copies by April 2020; and 480 million copies in circulation in forty-three countries worldwide by February 2021. It reached 490 million copies in print worldwide by July 2021. By August 2022, the manga had reached 516.566 million copies in circulation worldwide. By 2004, the brand's merchandise had made more than $1 billion in retail sales in Japan.
One Piece was the best-selling manga series for eleven consecutive years from 2008 until 2018. In 2019, the manga did not top the chart for the first time in twelve years, ranking second in the annual manga sales ranking with over 10.1 million copies sold, although it remained as the best-selling manga by volume in its twelfth consecutive year. It was the third best-selling manga series in 2020, with over 7.7 million copies sold, while volumes 95–97 were the 23rd–25th best-selling manga volumes of 2020, behind the first twenty-two volumes of Demon Slayer: Kimetsu no Yaiba. In 2021, it was the sixth best selling manga with over 7 million copies sold, while volumes 98, 99, and 100 were the sixth, eighth, and ninth best-selling manga volumes, respectively. It was the fourth best-selling manga series in 2022, with over 10.3 million copies sold; volumes 101–104 were among the 10 best-selling manga volumes of the year. It was the fifth best-selling manga series in the first half of 2023 (period between November 2022 and May 2023), with over 3.5 million copies sold, while volume 105 was the best-selling manga volume from the same period; volume 104 placed nineteenth. Volumes 105–107 were among the best-selling manga volumes of 2023. Volume 108 was Shueisha's highest first print run manga volume of 2023–2024 (period between April 2023 and March 2024), with 3.2 million copies printed.
Individual volumes of One Piece have broken publishing and sales records in Japan. In 2009, the 56th volume had a print run of 2.85 million, the highest initial print run of any manga up to that point. The 57th volume had a print run of 3 million in 2010, a record that was broken several times by subsequent volumes. The 60th volume had a first print run of 3.4 million and was the first book to sell over two million copies in its opening week on Oricon book rankings, and later became the first book to sell over three million copies in Oricon's history. In 2012, the 67th volume had an initial print run of 4.05 million, holding the record for the highest first-print run of any volume. One Piece is the only manga whose volumes have had initial prints above 3 million copies continuously for more than ten years. In May 2023, it was reported that each of the 105 volumes published by then had sold over 1 million copies. Additionally, One Piece is the only work whose volumes have ranked first every year since Oricon's weekly comic chart began in 2008.
One Piece has also sold well in North America, charting on Publishers Weekly's list of best-selling comics for April/May 2007 and numerous times on The New York Times Manga Best Seller list. On ICv2's list of Top 25 Manga Properties Fall 2008 for North America, which is compiled from interviews with retailers and distributors, Nielsen BookScan's Top 20 lists of graphic novels, and ICv2's own analysis of information provided by Diamond Comic Distributors, One Piece came in fifteenth place. It rose to second place on their Top 25 Manga Properties Q3 2010 list. By August 2022, the manga had sold 2.9 million copies in print in North America (including single volumes and omnibus editions).
In France, One Piece has been the best-selling manga since 2011, with over 31.80 million copies sold by August 2022. The manga is very popular in the country, where its sales alone represent 8.5% of the French manga market by 2021. The first volume had sold more than 1 million copies in France by July 2021. The 100th volume had one of the biggest initial prints ever for a manga in the French market, selling 131,270 copies in just three days, the best-selling manga volume in a week in the country. The manga sold 6,011,536 copies in 2021. This amount represents almost 20% of the total sales in the country; almost one in five volumes of the series was sold in the year.
In Italy, One Piece had 18 million copies in circulation by April 2021, which represents around 22.5% of the series' market outside Japan. In September 2021, the limited edition of the ninety-eighth volume ranked first in the weekly ranking of best-selling books, making it the first time a manga had reached that achievement.
In Germany, One Piece is the second best-selling manga behind Dragon Ball. The manga had sold 6.7 million copies in the country.
Critical response
Allen Divers of Anime News Network comments in 2003 that the art style One Piece employs "initially seems very cartoonish with much of the character designs showing more North American influence than that from its Japanese origins", adding that the "artwork and settings come across as timeless in their presentation". He also notes that the influence of Akira Toriyama (Dragon Ball) shines through in Oda's style of writing with its "huge epic battles punctuated by a lot of humor" and that, in One Piece, he "manages to share a rich tale without getting bogged down by overly complicated plots". Rebecca Silverman of the same site stated that one of the series' strengths is to "blend action, humor, and heavy fare together" and praised the art, but stated that the panels could get too crowded for easy reading. The website activeAnime describes the artwork in One Piece as "wonderfully quirky and full of expression". Mario Vuk from Splash Comics commented that Oda's "pleasantly bright and dynamic" art style suits the story's "funny and exciting" atmosphere. Isaiah Colbert of Kotaku called One Piece a "masterpiece", highlighting Oda's character writing, world-building and the balance between "fun and serious subject matter". Dale Bashir of IGN wrote that One Piece is more about the world-building, adventuring, and the meaning of freedom instead of the "usual shonen battling" from series like Dragon Ball and Naruto. Bashir concluded: "While not everyone would want to go so far for a franchise that isn't even finished yet, trust me when I say that it is definitely worth it."
EX Media lauds Oda's art for its "crispy" monochrome pictures, "great use of subtle shade changes" on color pages, "sometimes exquisite" use of angles, and for its consistency. Shaenon K. Garrity, who at some point edited the series for English Shonen Jump, said that, while doing so, her amazement over Oda's craft grew steadily. She states that "he has a natural, playful mastery of the often restrictive weekly-manga format," notes that "interesting things [are] going on deep in the narrative structure," and recommends "sticking through to the later volumes to see just how crazy and Peter Max-y the art gets". Mania Entertainment writer Jarred Pine commented: "One Piece is a fun adventure story, with an ensemble cast that is continuing to develop, with great action and character drama." He praised Oda's artwork as "imaginative and creative" and commented that "Oda's imagination just oozes all of the panels ". He also noted that "Oda's panel work [...] features a lot of interesting perspectives and direction, especially during the explosive action sequences which are always a blast".
In March 2021, Mobile Suit Gundam's creator, Yoshiyuki Tomino, said in an interview that One Piece is the "only manga to trust". He praised the manga, commenting: "Still, we are working in the same studio and I saw storyboards near the photocopier. Unlike mine, those storyboards are good. But, you know, among the popular manga there is manga with very beautiful art and manga with bad art, but interesting nonetheless. And I don't trust manga with very beautiful art unless it is One Piece."
After the release of the hundredth volume, Weekly Shonen Jump's editor-in-chief, Hiroyuki Nakano, explained how One Piece changed the history of manga and the way of making it. Nakano said that Weekly Shonen Jump is "a game of weekly popularity", and that before One Piece, he aimed for something "interesting this week without thinking about the next"; however, the series reached overwhelming popularity due to its style that involves a story concept and detailed hints, adding that the series had a huge impact on other series. Nakano lauded Oda for his "overwhelming passion, talent and power" and his "unwavering will" to deliver a story to boys and girls, adding that he goes far beyond the reader's expectations, with the belief in "don't fool the reader" and "there is something interesting ahead of it".
Awards and accolades
One Piece was nominated for the 23rd Kodansha Manga Award in the category in 1999. It was a finalist for the Tezuka Osamu Cultural Prize three times in a row from 2000 to 2002, with the highest number of fan nominations in the first two years. The manga was nominated for Favorite Manga Series in Nickelodeon Magazines 2009 Comics Awards. In 2012, the series won the 41st Japan Cartoonists Association Award Grand Prize, alongside Kimuchi Yokoyama's Neko Darake. In 2014, the series received the 18th Yomiuri Advertising Award's Golden Medal. It also won the 34th Newspaper Advertising Award in the Advertising category and the 67th Advertising Dentsu Award in Newspaper Advertising Planning category.
The forty-sixth volume of One Piece was the best manga of 2007, according to Oricon's Japanese Book of the Year Action Committee. The series was chosen as one of the best continuing manga for all ages/teens in 2011 by critics from About.com, Anime News Network, and ComicsAlliance. The series has ranked on the "Book of the Year" list from Media Factory's Da Vinci magazine, where professional book reviewers, bookstore employees, and Da Vinci readers participate; it ranked fifth in 2011; second in 2012; third in 2013; second in 2014, 2015 and 2016; third in 2017 and 2018; second in 2019; third in 2020 and 2021; second in 2022; third in 2023; and fourteenth in 2024. It ranked eighth in the 2023 edition of Takarajimasha's Kono Manga ga Sugoi! list of best manga for male readers.
The German translation of the manga won the Sondermann Award in the international manga category in 2005. The series received the award for the forty-fourth volume in 2008 and the forty-eighth volume in 2009. One Piece won the AnimeLands Anime & Manga 19th Grand Prix for the "Best Classic Shōnen" category in 2012.
In a poll conducted by Oricon in 2008 about "the most moving (touching) manga ever", One Piece ranked first in both male and female categories. In another 2008 poll by Oricon, Japanese teenagers voted it the most interesting manga. On Tencent's anime and manga web portal, One Piece ranked first in a poll of "must-read manga for the younger generation in China". In a poll conducted by eBookJapan in 2014 about "manga that children want to read" for "Children's Reading Day" by the Ministry of Education, Culture, Sports, Science and Technology, the series also ranked first.
On June 15, 2015, it was announced that Eiichiro Oda and One Piece had set the Guinness World Record for "The most copies published for the same comic book series by a single author" with 320,866,000 copies printed worldwide by December 2014; it updated the record on August 4, 2022, when it reached over 500 million copies in circulation worldwide in both print and digital copies (416,566,000 in Japan and 100 million copies in 60 countries and territories outside of Japan). The series ranked fourth on the first annual Tsutaya Comic Awards' All-Time Best Section in 2017. In 2021, TV Asahi announced the results of its "Manga General Election" poll in which 150,000 people voted for their "Most Favorite Manga", One Piece ranked first on the list.
In 2014, the "One Piece Premiere Summer" event received the "Best Overall Production" award from the International Association of Amusement Parks and Attractions.
Cultural impact
As part of an effort to help Kumamoto Prefecture recover from the 2016 earthquakes, Oda helped set up 10 statues of the Straw Hat Pirates around the prefecture. Luffy was the first statue to be unveiled in front of the Kumamoto Prefectural Government Office on November 30, 2018. Jinbe was the last statue, unveiled at Sumiyoshi Kaigan Park on July 23, 2022.
At the 2020 Tokyo Olympics, Greek athlete Miltiadis Tentoglou performed a "Gear Second" pose before winning a gold medal in the men's long jump competition. A gene in the fruit fly (Drosophila melanogaster) was named "Baramicin", partly taking inspiration from the One Piece character Buggy. The gene encodes a protein that is split up into multiple parts. A testate amoeba genus was named Alabasta, partly in reference to the One Piece Kingdom of Alabasta, also known as the Kingdom of Sand, a desert kingdom located on Sandy Island in the Paradise region.
Notes
References
Further reading
External links
of Weekly Shōnen Jump
of Viz Media
Adventure anime and manga
Bandai brands
Cyborgs in anime and manga
Dinosaurs in anime and manga
Fantasy anime and manga
Fiction about size change
Japanese mythology in anime and manga
Manga adapted into films
Mermaids in anime and manga
Anime and manga about pirates
Pirate comics
Science fiction anime and manga
Shueisha franchises
Shueisha manga
Shōnen manga
Viz Media manga
Viz Media novels
War in anime and manga
World record holders | One Piece | [
"Physics",
"Mathematics"
] | 8,601 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
360,781 | https://en.wikipedia.org/wiki/Fuel%20tax | A fuel tax (also known as a petrol, gasoline or gas tax, or as a fuel duty) is an excise tax imposed on the sale of fuel. In most countries the fuel tax is imposed on fuels which are intended for transportation. Fuel tax receipts are often dedicated or hypothecated to transportation projects, in which case the fuel tax can be considered a user fee. In other countries, the fuel tax is a source of general revenue. Sometimes, a fuel tax is used as an ecotax, to promote ecological sustainability. Fuel taxes are often considered by government agencies such as the Internal Revenue Service as regressive taxes.
Fuels used to power agricultural vehicles, as well as home heating oil which is similar to diesel, are taxed at a different, usually lower rate. These fuels may be dyed to prevent their use for transportation.
Aviation fuel is typically charged at a different rate to fuel for ground-based vehicles. Jet fuel and avgas can attract different rates. In many jurisdictions such as the United States and the European Union, commercial aviation fuel is tax free.
Other fuels such as gases, or solid fuels such as coal, may also be taxed.
In countries with a sales tax or a value added tax, these taxes may also be levied on top of fuel taxes. The rate can vary depending on the fuel, as well as the location.
Role in energy policy
Taxes on transportation fuels have been advocated as a way to reduce pollution and the possibility of global warming and to conserve energy. Placing higher taxes on fossil fuels makes petrol as expensive as alternative fuels such as natural gas, biodiesel or battery-electric power, at a cost to the consumer in the form of inflation, as transportation costs rise for goods moved across the country.
Proponents advocate that automobiles should pay for the roads they use and argue that the user tax should not be applied to mass transit projects.
The Intergovernmental Panel on Climate Change, the International Energy Agency, the International Monetary Fund, and the World Bank have called on governments to increase gasoline tax rates in order to combat the social and environmental costs of gasoline consumption. Fuel taxes can be implicit carbon pricing.
Tax rates
International pump prices for diesel and gasoline are tracked by several websites, including Bloomberg L.P. Price differences mostly reflect differences in tax policy.
A Nature study has shown that while gasoline taxes have increased in more countries than they have decreased in during the period 2003–2015, the global mean gasoline tax has decreased due to greater consumption in the low-tax countries.
Asia
China
Chinese gasoline taxes have increased the most among the top twenty CO2-emitting countries over the period 2003–2015.
In China, fuel tax has been a very contentious issue. Efforts by the State Council to institute a fuel tax in order to finance the National Trunk Highway System have run into strong opposition from the National People's Congress, largely out of concern for its impact on farmers. This has been one of the uncommon instances in which the legislature has asserted its authority.
Hong Kong
The following is a list of fuel tax rates, per litre, for different fuels in Hong Kong:
Aviation fuel: HK$6.51
Light diesel oil: HK$2.89
Leaded petrol: HK$6.82
Unleaded petrol: HK$6.06
Ultra low sulphur diesel: HK$2.89
Euro V diesel: HK$0
Singapore
The following is a list of fuel tax rates for different fuels in Singapore:
98 Octane and above petrol: S$0.79 per litre
92 to 95 Octane petrol: S$0.66 per litre
India
In India, the pricing of fuel varies by state, though central taxes still form part of the pump price of fuel. The central and state governments' taxes make up nearly half of petrol's pump price. The central government levies several taxes, which amount to about 10–20% of the final cost. The state taxes vary, but on average make up about 17–20% of the final cost. As a result, approximately 50–60% of the pump cost goes to the government in the form of different taxes.
For example, in Delhi, as of February 18, 2021, price of petrol is per litre. Out of this go to Central Govt of India in the form of excise and customs tax. is collected by state government in the form of sales tax and entry tax. Thus, a total of is collected due to various taxes (which accounts for around 58% of the total price).
Israel
In Israel, the tax on fuel is 1.35 USD per liter, which includes a direct fuel tax and VAT. This amounts to 78% of the total pump price.
Europe
Jet fuel tax is banned on commercial flights within the European Union under the 2003 Energy Taxation Directive. It can be levied on domestic flights or by agreement between Member States; however, no such agreements exist.
France
As of 2017, the excise tax on gasoline was €0.651 per liter (regional rates varied from €0.407 to €0.6682). With a VAT rate of 20%, the percentage of the total price of gasoline that came from taxes was 63.9%. The excise tax on diesel fuel was €0.531 per liter (€0.5307 to €0.5631). With the 20% VAT, 59.3% of the total cost of diesel fuel was taxes.
Petroleum products destined for utilisation by aircraft engaged in commercial flights outside of the customs territory of continental France are exempt from all customs duties and domestic taxes. Recently, a rise of 23% in the diesel fuel tax has caused serious protests in major cities of France, leaving disruption and damage behind them. Before the protests, the French government expected to increase both the petrol and diesel taxes until they both reached €0.78 per liter in 2022.
Germany
Fuel taxes in Germany are €0.4704 per litre for ultra-low sulphur Diesel and €0.6545 per litre for conventional unleaded petrol, plus Value Added Tax (19%) on the fuel itself and the Fuel Tax. That adds up to prices of for ultra-low sulphur Diesel and for unleaded petrol (December 2019).
Luxembourg
Since January 2023, petrol is taxed at a rate of €0.53799/litre and diesel at a rate of €0.42875/litre, with a VAT of 16% added to the total price. Since 2022, a "maximum fuel price" has been established by the government; it has been capped at €1.534/litre for EURO 95 petrol and €1.498/litre for diesel since 7 January 2025.
Netherlands
The sale of fuels in the Netherlands is levied with an excise tax. As of 2015, the petrol excise tax is EUR0.766 per litre and the diesel excise tax is EUR0.482 per litre, while the LPG excise tax is EUR0.185 per litre. The 2007 fuel tax was . On top of that, 21% VAT is charged over the entire fuel price, making Dutch fuel taxes among the highest in the world. In total, taxes account for 68.84% of the total price of petrol and 56.55% of the total price of diesel. A 1995 excise raise of 25 Dutch gulden cents (€0.11), known as the "Kok Quarter" after then Prime Minister Wim Kok (a raise of €0.08 per litre of gasoline and €0.03 per litre of diesel), is now specifically set aside by the second Balkenende cabinet for use in road creation and road and public transport maintenance.
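To illustrate how a per-litre excise combines with a VAT charged on the full price (the pump price below is a rough back-calculation for illustration only, not a figure given above): VAT at 21% added on top makes up 0.21/1.21 of the pump price P per litre, so if total taxes are 68.84% of P,

    0.766 + \tfrac{0.21}{1.21}\,P = 0.6884\,P \quad\Longrightarrow\quad P \approx 1.49\ \text{EUR per litre},

of which about €0.77 is excise and roughly €0.26 is VAT, leaving around €0.46 of untaxed product cost and margin per litre.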
Norway
Motor fuel is taxed with both a road use tax and a CO2-tax. The road use tax on petrol is NOK 4.62 per litre and the CO2-tax on petrol is NOK 0.88 per litre. The road use tax on auto diesel is NOK 3.62 per litre mineral oil and NOK 1.81 per litre bio diesel. The CO2-tax on mineral oil is NOK 0.59 per litre.
Poland
In Poland, half of the end-user price charged at a petrol station goes towards three distinct taxes:
akcyza (meaning excise)
opłata paliwowa (meaning fuel tax)
Value-added tax "VAT" at 23% (from summary of akcyza and opłata paliwowa, and the price of petrol)
Minimum rates of excise and fuel tax are prescribed by European Union law and therefore cannot be lower in any EU nation. In Poland, however, the rates are even higher than this EU minimum, a policy pursued by the former Minister of Finance.
Russia
Tax on mineral resource extraction (2008–2009):
Oil: varies from 1000 RUR/t to 13800 RUR/t; average MRET 3000 RUR/t (0.058 €/l = 0.284 $/gal).
Natural gas: 147 RUR/1000m3 (4 €/1000m3).
Petroleum gas: no
Excise tax on motor fuel 2008–2009:
RON >80: 3629 RUR/t. (0.071 €/l = 0.343 $/US gal)
RON <=80: 2657 RUR/t. (0.052 €/l = 0.251 $/US gal)
Other fuels (such as aviation gasoline, jet fuel, heavy oils, natural gas and autogas) carry no excise tax.
Value Added Tax — 18% on fuel and taxes.
The full tax rate is near 55% of motor fuel prices (Ministry of Industry and Energy figures, 2006).
Sweden
The fuel tax in Sweden comprises a carbon tax and an energy tax. The total tax (including value added tax) is, from July 1, 2018, per liter petrol and per liter diesel.
United Kingdom
From 23 March 2022 the UK duty rate for the road fuels unleaded petrol, diesel, biodiesel and bioethanol is .
Value Added Tax at 20% is also charged on the price of the fuel and on the duty. An additional vehicle excise duty, depending on a vehicle's theoretical CO2 production per kilometre, which is applied regardless of the amount of fuel actually consumed, is also levied.
Diesel for use by farmers and construction vehicles is coloured red (red diesel) and has a much reduced tax, currently .
Jet fuel used for international aviation attracts no duty, and no VAT.
North America
Canada
Fuel taxes in Canada can vary greatly between locales. On average, about one-third of the total price of gas at the pump is tax. Excise taxes on gasoline and diesel are collected by both federal and provincial governments, as well as by some select municipalities (Montreal, Vancouver, and Victoria); combined excise taxes vary from 16.2 ¢/L (73.6 ¢/imperial gal) in the Yukon to 30.5 ¢/L ($1.386/imperial gal) in Vancouver. As well, the federal government and some provincial governments (Newfoundland and Labrador, Nova Scotia, and Quebec) collect sales tax (GST and PST) on top of the retail price and the excise taxes.
United States
The first U.S. state to enact a gas tax was Oregon in 1919. The states of Colorado, North Dakota, and New Mexico followed shortly thereafter. By 1929, all existing 48 states had enacted some sort of gas tax. Today, fuel taxes in the United States vary by state. The United States federal excise tax on gasoline is and for diesel fuel. On average, as of July 2016, state and local taxes add 29.78 cents to gasoline and 29.81 cents to diesel for a total US average fuel tax of for gas and for diesel.
The state and local tax figures includes fixed-per-gallon taxes as well as variable-rate taxes such as those levied as a percentage of the sales price. For state-level fuel taxes, nineteen states and the District of Columbia levy variable-rate taxes of some kind. The other thirty one states do not tie the per-gallon tax rate to inflation, gas prices, or other factors, and the rate changes only by legislation. As of July 2016, twenty one states had gone ten years or more without an increase in their per-gallon gasoline tax rate.
Because the fuel tax is universally styled as a "road use" tax (exempting off-road farming, marine, etc. use), states impose a tax on commercial operators traveling through their state as if the fuel used was bought there, wherever the fuel is actually purchased. Most commercial truck drivers have an agent handle the required paperwork: what is reported is how much tax was collected in each state, how much should have been paid to each state, the net tax for each state, and the combined net tax for all states, to be paid by or refunded to the operator by their base jurisdiction where they file. The operator carries paperwork proving compliance. The member jurisdictions, the US states and the Canadian provinces, transmit the return information to each other and settle their net tax balances with each other either by a single transmittal through a clearinghouse set up by the International Fuel Tax Agreement (IFTA) and operated by Morgan Stanley, or by separate transfers with the other member jurisdictions.
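A minimal sketch of that apportionment arithmetic, with made-up jurisdictions, tax rates and trip figures (real IFTA returns use each member's published quarterly rates and forms): fuel is deemed consumed where the miles were driven, at the fleet's overall fuel economy, and the net for each jurisdiction is the tax due there minus the tax already paid at its pumps.

    #include <stdio.h>

    struct jurisdiction {
        const char *name;
        double miles_driven;    /* miles travelled in the jurisdiction */
        double gallons_bought;  /* fuel purchased (and taxed) there    */
        double tax_rate;        /* hypothetical per-gallon tax, USD    */
    };

    int main(void)
    {
        struct jurisdiction trip[] = {
            { "State A", 1200.0,  80.0, 0.30 },
            { "State B",  600.0, 200.0, 0.25 },
        };
        int n = (int)(sizeof trip / sizeof trip[0]);

        double total_miles = 0.0, total_gallons = 0.0;
        for (int i = 0; i < n; i++) {
            total_miles   += trip[i].miles_driven;
            total_gallons += trip[i].gallons_bought;
        }
        double fleet_mpg = total_miles / total_gallons;  /* overall fuel economy */

        double combined_net = 0.0;
        for (int i = 0; i < n; i++) {
            /* Fuel deemed consumed in this jurisdiction, regardless of
             * where it was actually bought. */
            double taxable_gallons = trip[i].miles_driven / fleet_mpg;
            double tax_due  = taxable_gallons * trip[i].tax_rate;
            double tax_paid = trip[i].gallons_bought * trip[i].tax_rate;
            double net      = tax_due - tax_paid;  /* >0 owed, <0 refunded */
            combined_net += net;
            printf("%-8s due %6.2f  paid %6.2f  net %+7.2f\n",
                   trip[i].name, tax_due, tax_paid, net);
        }
        printf("combined net settled via the base jurisdiction: %+.2f\n",
               combined_net);
        return 0;
    }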
Oceania
Australia
The fuel tax system in Australia is very similar to Canada's in featuring both a fixed and a variable tax, but it differs in its exemptions, including tax credits and certain excise-free fuel.
Since October 2018, the fuel tax in Australia is A$0.412 per litre for petrol and ultra-low sulphur diesel (conventional diesel being taxed at the same A$0.412 per litre), and the excise for LPG is A$0.282 per litre. Since 2000, the GST (goods and services tax) has also been charged on top of the fuel tax, and since 2015 the fuel tax has been indexed to inflation twice a year, based on the CPI (consumer price index).
New Zealand
Fuel taxes in New Zealand are considered an excise applied by the New Zealand Customs Service on shipments brought into the country. A breakdown of the fuel taxes is published by the Ministry of Economic Development. Excise as at 1 August 2012 totals 50.524 New Zealand cents per litre on petrol. In addition, the national compulsory Accident Compensation Corporation motor vehicle account receives a contribution of . The ethanol component of bio-blended petrol currently attracts no excise duty; this was to be reviewed in 2012. Diesel is not taxed at the pump, but road users with vehicles over 3.5 tonnes in Gross Laden Weight, and any vehicles not powered wholly by any combination of petrol, LPG or CNG, must pay the Road User Charge instead. The Goods and Services Tax (15%) is then applied to the combined total of the value of the commodity and the various taxes. On 25 July 2007, the Minister of Transport Annette King announced that from 1 July 2008 all fuel excise collected would be hypothecated to the National Land Transport Programme.
Africa
South Africa
South Africa imposes a fuel tax. In December 2020, the price per liter (unleaded 93 octane, inland) was composed of the Fuel Levy – R3,37, the Road Accident Fund levy – R1,93, associated costs – R3,12, and the Basic Fuel Price – R5,81, for a total of R14,23. (R = South African rand (ZAR), roughly R15 per US$ in December 2020.)
See also
Carbon fee and dividend
Carbon tax
Fuel taxes and rising oil prices – how the taxes of various countries could be used to mitigate the rise in oil prices since 2003
Excise tax
Vehicle miles traveled tax
International Fuel Tax Agreement
References
External links
International Fuel Prices 2009 with diesel and gasoline prices of 172 countries and information on fuel taxation for state financing
2012 NACS Retail Fuels Report
Actual taxes on Unleaded and diesel Fuels in Europe
Fuel Price in Gurgoan India
Environmental tax
Petroleum products
Energy economics
Transport economics
Vehicle taxes | Fuel tax | [
"Chemistry",
"Environmental_science"
] | 3,273 | [
"Petroleum",
"Environmental social science",
"Energy economics",
"Petroleum products"
] |
360,788 | https://en.wikipedia.org/wiki/Backdoor%20%28computing%29 | A backdoor is a typically covert method of bypassing normal authentication or encryption in a computer, product, embedded device (e.g. a home router), or its embodiment (e.g. part of a cryptosystem, algorithm, chipset, or even a "homunculus computer"—a tiny computer-within-a-computer such as that found in Intel's AMT technology). Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptosystems. From there it may be used to gain access to privileged information like passwords, corrupt or delete data on hard drives, or transfer information within autoschediastic networks.
In the United States, the 1994 Communications Assistance for Law Enforcement Act forces internet providers to provide backdoors for government authorities. In 2024, the U.S. government realized that China had been tapping communications in the U.S. using that infrastructure for months, or perhaps longer; China recorded phone calls from presidential campaign offices, including those of employees of the then vice president of the nation, as well as calls of the candidates themselves.
A backdoor may take the form of a hidden part of a program, a separate program (e.g. Back Orifice may subvert the system through a rootkit), code in the firmware of the hardware, or parts of an operating system such as Windows. Trojan horses can be used to create vulnerabilities in a device. A Trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor. Although some are secretly installed, other backdoors are deliberate and widely known. These kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords.
Many systems that store information within the cloud fail to implement adequate security measures. If many systems are connected within the cloud, hackers can gain access to all other platforms through the most vulnerable system. Default passwords (or other default credentials) can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version. In 1993, the United States government attempted to deploy an encryption system, the Clipper chip, with an explicit backdoor for law enforcement and national security access. The chip was unsuccessful.
Recent proposals to counter backdoors include creating a database of backdoors' triggers and then using neural networks to detect them.
Overview
The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference. They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and thus the term "backdoor" is now preferred, the older sense of trapdoor having gone out of use. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under DARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.
While initially targeting the computer vision domain, backdoor attacks have expanded to encompass various other domains, including text, audio, ML-based computer-aided design, and ML-based wireless signal classification. Additionally, vulnerabilities in backdoors have been demonstrated in deep generative models, reinforcement learning (e.g., AI GO), and deep graph models. These broad-ranging potential risks have prompted concerns from national security agencies regarding their potentially disastrous consequences.
A backdoor in a login system might take the form of a hard coded user and password combination which gives access to the system. An example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password-less account which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
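A minimal sketch of what such a hard-coded credential backdoor can look like in source form; the account name, helper function and credentials below are invented for illustration and are not drawn from any real system:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a real credential check against a user database. */
    static bool check_password(const char *user, const char *pass)
    {
        return strcmp(user, "alice") == 0 && strcmp(pass, "correct-horse") == 0;
    }

    static bool authenticate(const char *user, const char *pass)
    {
        /* The backdoor: a hard-coded account that bypasses the password
         * check entirely and is invisible to anyone reading only the
         * documentation or the user database. */
        if (strcmp(user, "joshua") == 0)
            return true;
        return check_password(user, pass);
    }

    int main(void)
    {
        printf("%d\n", authenticate("alice", "wrong"));  /* 0: rejected          */
        printf("%d\n", authenticate("joshua", ""));      /* 1: backdoor accepted */
        return 0;
    }

Nothing in the stored user database hints at the extra account; finding it requires inspecting the code or the compiled binary itself.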
Although the number of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely credited, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
Politics and attribution
There are a number of cloak and dagger considerations that come into play when apportioning responsibility.
Covert backdoors sometimes masquerade as inadvertent defects (bugs) for reasons of plausible deniability. In some cases, these might begin life as an actual bug (inadvertent error), which, once discovered are then deliberately left unfixed and undisclosed, whether by a rogue employee for personal advantage, or with executive awareness and oversight.
It is also possible for an entirely above-board corporation's technology base to be covertly and untraceably tainted by external agents (hackers), though this level of sophistication is thought to exist mainly at the level of nation-state actors. For example, if a photomask obtained from a photomask supplier differs in a few gates from its photomask specification, a chip manufacturer would be hard-pressed to detect this if the change is otherwise functionally silent; a covert rootkit running in the photomask etching equipment could enact this discrepancy without the knowledge of even the photomask manufacturer, and by such means, one backdoor potentially leads to another.
In general terms, the long dependency-chains in the modern, highly specialized technological economy and innumerable human-elements process control-points make it difficult to conclusively pinpoint responsibility at such time as a covert backdoor becomes unveiled.
Even direct admissions of responsibility must be scrutinized carefully if the confessing party is beholden to other powerful interests.
Examples
Worms
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit, placed secretly on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system. In this case, a two-line change appeared to check root access permissions of a caller to the sys_wait4 function, but because it used the assignment operator = instead of the equality check ==, it actually set the caller's user ID to 0, granting root permissions. This difference is easily overlooked, and could even be interpreted as an accidental typographical error, rather than an intentional attack.
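A simplified, user-space re-creation of the reported pattern is sketched below; this is not the actual kernel diff, and the constants and structures are reduced for illustration. Because the single = both assigns 0 to the caller's user ID and evaluates to 0, the error branch is never taken, yet the caller silently becomes root whenever the "magic" option combination is passed:

    #include <stdio.h>

    #define __WCLONE 0x80000000u
    #define __WALL   0x40000000u

    struct task { unsigned uid; };   /* stand-in for the kernel's task struct */

    static long wait4_like(struct task *current, unsigned options)
    {
        long retval = 0;
        /* Backdoored check: '=' instead of '=='. The correct, innocent
         * line would read (current->uid == 0). */
        if ((options == (__WCLONE | __WALL)) && (current->uid = 0))
            retval = -22;            /* -EINVAL, never actually reached */
        return retval;
    }

    int main(void)
    {
        struct task t = { .uid = 1000 };          /* unprivileged caller   */
        wait4_like(&t, __WCLONE | __WALL);        /* the trigger condition */
        printf("uid after call: %u\n", t.uid);    /* prints 0 (root)       */
        return 0;
    }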
In January 2014, a backdoor was discovered in certain Samsung Android products, such as the Galaxy devices. Samsung's proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that is in charge of handling communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, which allows the backdoor operator to perform, via the modem, remote I/O operations on the device's storage. As the modem runs Samsung's proprietary software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus access the file system on the device.
Object code backdoors
Backdoors that are harder to detect involve modifying object code rather than source code; object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or at some point during compilation, assembly, linking, or loading; in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source on a trusted system.
Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves—for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change.
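As a concrete example of the length-and-checksum comparison described above, the sketch below reports a file's size and a simple 32-bit FNV-1a checksum; a real integrity check would normally use a cryptographic hash such as SHA-256, and, as noted, it is only meaningful when the checking tool and the reference values come from a trusted system:

```c
#include <stdio.h>
#include <stdint.h>

/* Print a file's length and a 32-bit FNV-1a checksum, for comparison against
 * a known-good copy of the same binary. Illustrative only; not a substitute
 * for a cryptographic hash. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    uint32_t hash = 2166136261u;            /* FNV-1a offset basis */
    unsigned long length = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        hash ^= (uint8_t)c;
        hash *= 16777619u;                  /* FNV-1a prime */
        length++;
    }
    fclose(f);
    printf("%lu bytes, fnv1a=%08x\n", length, (unsigned)hash);
    return 0;
}
```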
As object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself—so that when it detects that it is compiling the program under attack it inserts the backdoor—or alternatively the assembler, linker, or loader. As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to a 1974 paper by Karger and Schell, and was popularized in Thompson's 1984 article, entitled "Reflections on Trusting Trust"; it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system,
such as the operating system, and can be inserted during the system booting process; these are also mentioned by Karger and Schell in 1974, and now exist in the form of boot sector viruses.
Asymmetric backdoors
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g. via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, NSA inserted a kleptographic backdoor into the Dual EC DRBG standard.
There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.
Compiler backdoors
A sophisticated form of black-box backdoor is a compiler backdoor, in which a compiler is subverted not only to insert a backdoor in some other program, such as a login program, but also to detect when it is compiling itself and then insert both the backdoor-insertion code (targeting the other program) and the code that performs this self-insertion, much as a retrovirus infects its host. This can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped.
This attack was originally presented in Karger & Schell (1974), which was a United States Air Force security analysis of Multics, where they described such an attack on a PL/I compiler, and call it a "compiler trap door". They also mention a variant where the system initialization code is modified to insert a backdoor during booting, as this is complex and poorly understood, and call it an "initialization trapdoor"; this is now known as a boot sector virus.
This attack was then actually implemented by Ken Thompson, and popularized in his Turing Award acceptance speech in 1983, "Reflections on Trusting Trust", which points out that trust is relative, and the only software one can truly trust is code where every step of the bootstrapping has been inspected. This backdoor mechanism is based on the fact that people only review source (human-written) code, and not compiled machine code (object code). A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.
Thompson's paper describes a modified version of the Unix C compiler that would put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and would also add this feature undetectably to future compiler versions upon their compilation as well. As the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof of concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead.
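A toy, self-contained C sketch of the pattern described here (emphatically not Thompson's actual code): a compiler front end recognizes, by marker strings invented for this example, whether it is handling the target program or its own source, and quietly rewrites the former; in a real attack the second branch would splice a copy of this very routine into the compiler's source so the trick survives recompilation:

```c
#include <stdio.h>
#include <string.h>

/* Toy front end: decide, from marker strings invented for this example,
 * whether the source being compiled is the login program or the compiler
 * itself, and tamper with it accordingly before the real compilation step. */
static char patched[4096];

static const char *maybe_subvert(const char *source)
{
    if (strstr(source, "check_password(")) {
        /* Target program: prepend a hard-coded credential check. */
        snprintf(patched, sizeof patched,
                 "#define BACKDOOR_OK(u, p) "
                 "(!strcmp(u, \"maint\") && !strcmp(p, \"letmein\"))\n%s",
                 source);
        return patched;
    }
    if (strstr(source, "maybe_subvert(")) {
        /* The compiler itself: a real attack would splice this routine back
         * into the output here (a quine-like step), so that even a clean
         * compiler source yields a compromised binary. Omitted in this toy. */
        return source;
    }
    return source;                 /* anything else passes through untouched */
}

int main(void)
{
    const char *login_src =
        "int check_password(const char *u, const char *p);\n";
    fputs(maybe_subvert(login_src), stdout);   /* shows the injected line */
    return 0;
}
```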
Karger and Schell gave an updated analysis of the original exploit in 2002, and, in 2009, Wheeler wrote a historical overview and survey of the literature. In 2023, Cox published an annotated version of Thompson's backdoor source code.
Occurrences
Thompson's version was, officially, never released into the wild. However, it is believed that a version was distributed to BBN and at least one use of the backdoor was recorded. There are scattered anecdotal reports of such backdoors in subsequent years.
In August 2009, an attack of this kind was discovered by Sophos Labs. The W32/Induc-A virus infected the compiler for Delphi, a Windows programming language. The virus introduced its own code into the compilation of new Delphi programs, allowing it to infect and propagate to many systems without the knowledge of the software programmer. The virus looks for a Delphi installation, modifies the SysConst.pas file (the source code of part of the standard library), and compiles it. After that, every program compiled by that Delphi installation will contain the virus. An attack that propagates by building its own Trojan horse can be especially hard to discover. It resulted in many software vendors releasing infected executables without realizing it, sometimes claiming false positives; after all, it was the compiler, not the executable, that had been tampered with. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.
In 2015, a malicious copy of Xcode, XcodeGhost, also performed a similar attack and infected iOS apps from a dozen software companies in China. Globally, 4,000 apps were found to be affected. It was not a true Thompson Trojan, as it did not infect development tools themselves, but it did prove that toolchain poisoning can cause substantial damage.
Countermeasures
Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system – typically one should rebuild a clean system and transfer data (but not executables) over. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing a disassembler from scratch.
A generic method to counter trusting trust attacks is called diverse double-compiling. The method requires a different compiler and the source code of the compiler-under-test. That source, compiled with both compilers, results in two different stage-1 compilers, which however should have the same behavior. Thus the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler-under-test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no trojan, using icc (v. 11.0) as the different compiler.
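A minimal sketch of the diverse double-compiling workflow, driven from C purely for illustration; the compiler names, file names, and the use of cmp are placeholders, and a real application of the method also has to control for nondeterminism in the build environment:

```c
#include <stdlib.h>

/* Diverse double-compiling, as a sequence of build-and-compare steps.
 * trusted-cc is an independent ("diverse") compiler we trust; untrusted-cc is
 * the compiler binary under test; compiler_under_test.c is its purported
 * source. All names are hypothetical placeholders. */
int main(void)
{
    /* Stage 1: build the purported source with both compilers. */
    system("trusted-cc   -o stage1_trusted   compiler_under_test.c");
    system("untrusted-cc -o stage1_untrusted compiler_under_test.c");

    /* Stage 2: build the same source again with each stage-1 result. */
    system("./stage1_trusted   -o stage2_a compiler_under_test.c");
    system("./stage1_untrusted -o stage2_b compiler_under_test.c");

    /* If source and untrusted binary correspond, the two stage-2 outputs
     * must be bit-for-bit identical. Nonzero exit here means they differ. */
    return system("cmp -s stage2_a stage2_b");
}
```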
In practice such verifications are not done by end users, except in extreme circumstances of intrusion detection and analysis, due to the rarity of such sophisticated attacks, and because programs are typically distributed in binary form. Removing backdoors (including compiler backdoors) is typically done by simply rebuilding a clean system. However, the sophisticated verifications are of interest to operating system vendors, to ensure that they are not distributing a compromised system, and in high-security settings, where such attacks are a realistic concern.
List of known backdoors
Back Orifice was created in 1998 by hackers from the Cult of the Dead Cow group as a remote administration tool. It allowed Windows computers to be remotely controlled over a network; its name parodied Microsoft's BackOffice.
The Dual EC DRBG cryptographically secure pseudorandom number generator was revealed in 2013 to possibly have a kleptographic backdoor deliberately inserted by NSA, who also had the private key to the backdoor.
Several backdoors in the unlicensed copies of WordPress plug-ins were discovered in March 2014. They were inserted as obfuscated JavaScript code and silently created, for example, an admin account in the website database. A similar scheme was later exposed in a Joomla plugin.
Borland Interbase versions 4.0 through 6.0 had a hard-coded backdoor, put there by the developers. The server code contains a compiled-in backdoor account (username: politically, password: correct), which could be accessed over a network connection; a user logging in with this backdoor account could take full control over all Interbase databases. The backdoor was detected in 2001 and a patch was released.
A Juniper Networks backdoor inserted in 2008 into versions of the ScreenOS firmware from 6.2.0r15 to 6.2.0r18 and from 6.3.0r12 to 6.3.0r20 gives any user administrative access when a special master password is used.
Several backdoors were discovered in C-DATA Optical Line Termination (OLT) devices. Researchers released the findings without notifying C-DATA because they believe the backdoors were intentionally placed by the vendor.
A backdoor in versions 5.6.0 and 5.6.1 of the popular Linux utility XZ Utils was discovered in March 2024 by software developer Andres Freund. The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution capabilities on the affected Linux systems. The issue has been assigned a CVSS score of 10.0, the highest possible score.
See also
Backdoor:Win32.Hupigon
Hardware backdoor
Titanium (malware)
Notes
References
External links
Finding and Removing Backdoors
Three Archaic Backdoor Trojan Programs That Still Serve Great Pranks
Backdoors removal — List of backdoors and their removal instructions.
FAQ Farm's Backdoors FAQ: wiki question and answer forum
List of backdoors and Removal
Types of malware
Spyware
Espionage techniques
Rootkits
Cryptography | Backdoor (computing) | [
"Mathematics",
"Engineering"
] | 4,454 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
360,812 | https://en.wikipedia.org/wiki/Orichalcum | Orichalcum or aurichalcum is a metal mentioned in several ancient writings, including the story of Atlantis in the Critias of Plato. Within the dialogue, Critias (460–403 BC) says that orichalcum had been considered second only to gold in value and had been found and mined in many parts of Atlantis in ancient times, but that by Critias's own time, orichalcum was known only by name.
Orichalcum may have been a noble metal such as platinum, as it was supposed to be mined, but has been identified as pure copper or certain alloys of bronze, and especially brass alloys in the case of antique Roman coins, the latter being of "similar appearance to modern brass" according to scientific research.
Overview
The name is derived from the Greek ὀρείχαλκος (from ὄρος, óros, mountain and χαλκός, chalkós, copper), literally meaning "mountain copper".
The Romans transliterated "orichalcum" as "aurichalcum", which was thought to mean literally "gold copper". It is known from the writings of Cicero that the metal which they called orichalcum resembled gold in color but had a much lower value. In Virgil's Aeneid, the breastplate of Turnus is described as "stiff with gold and white orichalc".
Orichalcum has been variously identified by ancient Greek authors as a gold–copper alloy, a form of pure copper, a copper ore or various copper-based compounds, copper–tin or copper–zinc alloys, or a metal or metallic alloy supposedly no longer known.
In later years "orichalcum" was used to describe the sulfide mineral chalcopyrite and also to describe brass. These usages are difficult to reconcile with the claims of Plato's Critias, who states that the metal was "only a name" by his time, while brass and chalcopyrite were very important in the time of Plato, as they still are today.
Joseph Needham notes that Bishop Richard Watson, an 18th-century professor of chemistry, wrote of an ancient idea that there were "two sorts of brass or orichalcum". Needham also suggests that the Greeks may not have known how orichalcum was made and that they might even have had an imitation of the original.
Ingots found
In 2015, 39 ingots were discovered in a sunken vessel on the coast of Gela in Sicily which have tentatively been dated at 2,100 years old. They were analyzed with X-ray fluorescence and found to be an alloy consisting of 75–80% copper, 15–20% zinc, and smaller percentages of nickel, lead, and iron. Another cache of 47 ingots was recovered in February 2016 and found to have similar composition as measured with ICP-OES and ICP-MS: around 65–80% copper, 15–25% zinc, 4–7% lead, 0.5–1% nickel, and trace amounts of silver, antimony, arsenic, bismuth, and other elements.
In ancient literature
Orichalcum is first mentioned in the 7th century BC by Hesiod, and in the Homeric hymn dedicated to Aphrodite, dated to the 630s BC.
According to the Critias of Plato, the inner wall surrounding the citadel of Atlantis with the Temple of Poseidon "flashed with the red light of orichalcum". The interior walls, pillars, and floors of the temple were completely covered in orichalcum, and the roof was variegated with gold, silver, and orichalcum. In the center of the temple stood a pillar of orichalcum, on which the laws of Poseidon and records of the first son princes of Poseidon were inscribed.
Pliny the Elder points out that orichalcum had lost currency due to the mines being exhausted. Pseudo-Aristotle in De mirabilibus auscultationibus (62) describes a type of copper that is "very shiny and white, not because there is tin mixed with it, but because some earth is combined and molten with it." This might be a reference to orichalcum obtained during the smelting of copper with the addition of "cadmia", a kind of earth formerly found on the shores of the Black Sea, which is thought to be zinc oxide.
Numismatics
In numismatics, the term "orichalcum" is used to refer exclusively to a type of brass alloy used for minting Roman as, sestertius, dupondius, and semis type of coins. It is considered more valuable than copper, of which the as coin was previously made.
See also
Ashtadhatu
Auricupride
Corinthian bronze
Electrum
Hepatizon
Panchaloha
Shakudō
Shibuichi
Thokcha
Tumbaga
References
External links
Atlantis
Coins of ancient Rome
Precious metal alloys
Mythological substances
Ancient Greek metalwork
Coinage metals and alloys
Objects in Greek mythology
Fictional metals
"Chemistry"
] | 1,048 | [
"Mythological substances",
"Precious metal alloys",
"Alloys",
"Coinage metals and alloys"
] |
360,835 | https://en.wikipedia.org/wiki/Coercivity | Coercivity, also called the magnetic coercivity, coercive field or coercive force, is a measure of the ability of a ferromagnetic material to withstand an external magnetic field without becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted .
An analogous property in electrical engineering and materials science, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized.
Ferromagnetic materials with high coercivity are called magnetically hard, and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft. The latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding.
Definitions
Coercivity in a ferromagnetic material is the intensity of the applied magnetic field (H field) required to demagnetize that material, after the magnetization of the sample has been driven to saturation by a strong field. This demagnetizing field is applied opposite to the original saturating field. There are however different definitions of coercivity, depending on what counts as 'demagnetized', thus the bare term "coercivity" may be ambiguous:
The normal coercivity, HCn, is the H field required to reduce the magnetic flux (average B field inside the material) to zero.
The intrinsic coercivity, HCi, is the H field required to reduce the magnetization (average M field inside the material) to zero.
The remanence coercivity, HCr, is the H field required to reduce the remanence to zero, meaning that when the H field is finally returned to zero, then both B and M also fall to zero (the material reaches the origin in the hysteresis curve).
The distinction between the normal and intrinsic coercivity is negligible in soft magnetic materials; however, it can be significant in hard magnetic materials. The strongest rare-earth magnets lose almost none of their magnetization at HCn.
Experimental determination
Typically the coercivity of a magnetic material is determined by measurement of the magnetic hysteresis loop, also called the magnetization curve, as illustrated in the figure above. The apparatus used to acquire the data is typically a vibrating-sample or alternating-gradient magnetometer. The applied field where the data line crosses zero is the coercivity. If an antiferromagnet is present in the sample, the coercivities measured in increasing and decreasing fields may be unequal as a result of the exchange bias effect.
The coercivity of a material depends on the time scale over which a magnetization curve is measured. The magnetization of a material measured at an applied reversed field which is nominally smaller than the coercivity may, over a long time scale, slowly relax to zero. Relaxation occurs when reversal of magnetization by domain wall motion is thermally activated and is dominated by magnetic viscosity. The increasing value of coercivity at high frequencies is a serious obstacle to the increase of data rates in high-bandwidth magnetic recording, compounded by the fact that increased storage density typically requires a higher coercivity in the media.
Theory
At the coercive field, the vector component of the magnetization of a ferromagnet measured along the applied field direction is zero. There are two primary modes of magnetization reversal: single-domain rotation and domain wall motion. When the magnetization of a material reverses by rotation, the magnetization component along the applied field is zero because the vector points in a direction orthogonal to the applied field. When the magnetization reverses by domain wall motion, the net magnetization is small in every vector direction because the moments of all the individual domains sum to zero. Magnetization curves dominated by rotation and magnetocrystalline anisotropy are found in relatively perfect magnetic materials used in fundamental research. Domain wall motion is a more important reversal mechanism in real engineering materials since defects like grain boundaries and impurities serve as nucleation sites for reversed-magnetization domains. The role of domain walls in determining coercivity is complicated since defects may pin domain walls in addition to nucleating them. The dynamics of domain walls in ferromagnets is similar to that of grain boundaries and plasticity in metallurgy since both domain walls and grain boundaries are planar defects.
Significance
As with any hysteretic process, the area inside the magnetization curve during one cycle represents the work that is performed on the material by the external field in reversing the magnetization, and is dissipated as heat. Common dissipative processes in magnetic materials include magnetostriction and domain wall motion. The coercivity is a measure of the degree of magnetic hysteresis and therefore characterizes the lossiness of soft magnetic materials for their common applications.
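In symbols, with notation chosen here rather than taken from a particular source, the work dissipated per unit volume in one full cycle is the enclosed loop area:

```latex
W = \oint H \, \mathrm{d}B
```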
The saturation remanence and coercivity are figures of merit for hard magnets, although maximum energy product is also commonly quoted. The 1980s saw the development of rare-earth magnets with high energy products but undesirably low Curie temperatures. Since the 1990s new exchange spring hard magnets with high coercivities have been developed.
See also
Magnetic susceptibility
Remanence
References
External links
Magnetization reversal applet (coherent rotation)
For a table of coercivities of various magnetic recording media, see "Degaussing Data Storage Tape Magnetic Media" (PDF), at fujifilmusa.com.
Physical quantities
Magnetic hysteresis | Coercivity | [
"Physics",
"Materials_science",
"Mathematics"
] | 1,149 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Physical properties",
"Hysteresis",
"Magnetic hysteresis"
] |
360,869 | https://en.wikipedia.org/wiki/Stewart%20platform | A Stewart platform is a type of parallel manipulator that has six prismatic actuators, commonly hydraulic jacks or electric linear actuators, attached in pairs to three positions on the platform's baseplate, crossing over to three mounting points on a top plate. All 12 connections are made via universal joints. Devices placed on the top plate can be moved in the six degrees of freedom in which it is possible for a freely-suspended body to move: three linear movements x, y, z (lateral, longitudinal, and vertical), and the three rotations (pitch, roll, and yaw).
Stewart platforms are known by various other names. In many applications, including in flight simulators, it is commonly referred to as a motion base. It is sometimes called a six-axis platform or 6-DoF platform because of its possible motions and, because the motions are produced by a combination of movements of multiple actuators, it may be referred to as a synergistic motion platform, due to the synergy (mutual interaction) between the way that the actuators are programmed. Because the device has six actuators, it is often called a hexapod (six legs) in common usage, a name which was originally trademarked by Geodetic Technology for Stewart platforms used in machine tools.
History
This specialised six-jack layout was first used by V E (Eric) Gough of the UK and was operational in 1954, the design later being publicised in a 1965 paper by D Stewart to the UK Institution of Mechanical Engineers. In 1962, prior to the publication of Stewart's paper, American engineer Klaus Cappel independently developed the same hexapod. Cappel patented his design and licensed it to the first flight simulator companies, and built the first commercial octahedral hexapod motion simulators.
Although the title Stewart platform is commonly used, some have posited that Gough–Stewart platform is a more appropriate name because the original Stewart platform had a slightly different design, while others argue that the contributions of all three engineers should be recognized.
Actuation
Linear actuation
In industrial applications, linear actuators (hydraulic or electric) are typically used for their simple, unique closed-form inverse kinematics solution and their good strength and acceleration.
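A minimal statement of that closed-form solution, in generic notation chosen for this sketch (b_i the base anchor points, p_i the platform anchor points in platform coordinates, R and t the rotation and translation of the top plate): the required length of leg i is simply the distance between its two anchor points.

```latex
\ell_i = \left\lVert \mathbf{t} + \mathbf{R}\,\mathbf{p}_i - \mathbf{b}_i \right\rVert , \qquad i = 1,\dots,6
```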
Rotary actuation
For prototyping and low budget applications, typically rotary servo motors are used. A unique closed-form solution for the inverse kinematics of rotary actuators also exists, as shown by Robert Eisele.
Applications
Stewart platforms have applications in flight simulators, machine tool technology, animatronics, crane technology, underwater research, simulation of earthquakes, air-to-sea rescue, mechanical bulls, satellite dish positioning, the Hexapod-Telescope, robotics, and orthopedic surgery.
Flight simulation
The Stewart platform design is extensively used in flight simulators, particularly in the full flight simulator which requires all 6 degrees of freedom. This application was developed by Redifon, whose simulators featuring it became available for the Boeing 707, Douglas DC-8, Sud Aviation Caravelle, Canadair CL-44, Boeing 727, Comet, Vickers Viscount, Vickers Vanguard, Convair CV 990, Lockheed C-130 Hercules, Vickers VC10, and Fokker F-27 by 1962.
In this role, the payload is a replica cockpit and a visual display system, normally of several channels, for showing the outside-world visual scene to the aircraft crew that are being trained.
Similar platforms are used in driving simulators, typically mounted on large X-Y tables to simulate short term acceleration. Long term acceleration can be simulated by tilting the platform, and an active research area is how to mix the two.
Robocrane
James S. Albus of the National Institute of Standards and Technology (NIST) developed the Robocrane, where the platform hangs from six cables instead of being supported by six jacks.
LIDS
The Low Impact Docking System developed by NASA uses a Stewart platform to manipulate space vehicles during the docking process.
CAREN
The Computer Assisted Rehabilitation Environment developed by Motek Medical uses a Stewart platform coupled with virtual reality to do advanced biomechanical and clinical research.
Taylor Spatial Frame
Dr. J. Charles Taylor used the Stewart platform to develop the Taylor Spatial Frame, an external fixator used in orthopedic surgery for the correction of bone deformities and treatment of complex fractures.
Mechanical testing
First application: Eric Gough was an automotive engineer and worked at Fort Dunlop, the Dunlop Tyres factory in Birmingham, England. He developed his "Universal Tyre-Testing Machine" (also called the "Universal Rig") in the 1950s and his platform was operational by 1954. The rig was able to mechanically test tyres under combined loads. Dr. Gough died in 1972 but his testing rig continued to be used up until the late 1980s when the factory was closed down and then demolished. His rig was saved and transported to the Science Museum, London storage facility at Wroughton near Swindon.
Recent applications: interest in mechanical testing machines based on the Gough–Stewart platform revived in the mid-1990s. They are often biomedical applications (for example spinal study) because of the complexity and large amplitude of the motions needed to reproduce human or animal behaviour. Such requirements are also encountered in the civil engineering field for earthquake simulation. Controlled by a full-field kinematic measurement algorithm, such machines can also be used to study complex phenomena on stiff specimens (for example the curved propagation of a crack through a concrete block) that need high load capacities and displacement accuracy.
Motion compensation
The Ampelmann system is a motion-compensated gangway using a Stewart platform. This allows access from a moving platform supply vessel to offshore constructions even in high wave conditions.
See also
Acceleration onset cueing
Actuator
Linear actuator
Parallel manipulator
Robot kinematics
References
Further reading
Bonev, I.A., "The True Origins of Parallel Robots", ParalleMIC online review
External links
Picture of the NIST/Ingersoll prototype octahedral hexapod
Hexapod Structures for Surgery
Hexapod for Astronomy
Mechanisms (engineering)
Parallel robots
1954 in robotics | Stewart platform | [
"Engineering"
] | 1,260 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
360,876 | https://en.wikipedia.org/wiki/Supremacism | Supremacism is the belief that a certain group of people are superior to, and should have supreme authority over, all others. The presumed superior people can be defined by age, gender, race, ethnicity, religion, sexual orientation, language, social class, ideology, nationality, culture, generation or belong to any other part of a particular population.
Sexual
Male
Some feminist theorists have argued that in patriarchy, a standard of male "supremacism" is enforced through a variety of cultural, political, religious, sexual, and interpersonal strategies. Since the 19th century there have been a number of feminist movements opposed to male supremacism, usually aimed at achieving equal legal rights and protections for women in all cultural, political and interpersonal relations.
Female
Racial
White
Centuries of European colonialism in the Americas, Asia, Africa and Oceania were justified by Eurocentric attitudes as well as sometimes by white supremacist attitudes.
During the 19th century, "The White Man's Burden", the phrase which refers to the thought that whites have the obligation to make the societies of the other peoples more 'civilized', was widely used to justify colonial policies as a noble enterprise. Historian Thomas Carlyle, best known for his historical account of the French Revolution, The French Revolution: A History, argued that western policies were justified on the grounds that they provided the greatest benefit to "inferior" native peoples. However, even at the time of its publication in 1849, Carlyle's main work on the subject, the Occasional Discourse on the Negro Question, was poorly received by his contemporaries.
According to William Nicholls, religious antisemitism can be distinguished from racial antisemitism which is based on racial or ethnic grounds. "The dividing line was the possibility of effective conversion ... a Jew ceased to be a Jew upon baptism." However, with racial antisemitism, "Now the assimilated Jew was still a Jew, even after baptism ... . From the Enlightenment onward, it is no longer possible to draw clear lines of distinction between religious and racial forms of hostility towards Jews... Once Jews have been emancipated and secular thinking makes its appearance, without leaving behind the old Christian hostility towards Jews, the new term antisemitism becomes almost unavoidable, even before explicitly racist doctrines appear."
One of the first typologies used to classify various human races was invented by Georges Vacher de Lapouge (1854–1936), a theoretician of eugenics, who published L'Aryen et son rôle social (1899 – "The Aryan and his social role") in 1899. In his book, he divides humanity into various hierarchical races, starting with the highest, the "Aryan white race, dolichocephalic", and ending with the lowest, the "brachycephalic", "mediocre and inert" race best represented by Southern European Catholic peasants. Between these, Vacher de Lapouge identified the "Homo europaeus" (Teutonic, Protestant, etc.), the "Homo alpinus" (Auvergnat, Turkish, etc.), and finally the "Homo mediterraneus" (Neapolitan, Andalus, etc.). Jews were dolichocephalic just like the Aryans, according to Lapouge, but he considered them dangerous for this exact reason; they were the only group, he thought, which was threatening to displace the Aryan aristocracy. Georges Vacher de Lapouge became one of the leading inspirations of Nazi antisemitism and Nazi racist ideology.
United States
White Americans who participated in the Atlantic slave trade justified their economic exploitation of African Americans by creating a scientific theory of white superiority and black inferiority. Thomas Jefferson, a believer in scientific racism and the enslaver of over 600 African Americans (regarded as property under the Articles of Confederation), wrote that blacks were "inferior to the whites in the endowments of body and mind."
A justification for the conquest of American Indian tribes emanated from their dehumanized perception as the "merciless Indian savages", as described in the United States Declaration of Independence.
Before the outbreak of the American Civil War, the Confederate States of America was founded with a constitution that contained clauses restricting the government's ability to limit or interfere with the institution of "negro" slavery. In the 1861 Cornerstone Speech, Confederate vice president Alexander Stephens declared that one of the Confederacy's foundational tenets was white supremacy over African American slaves. Following the war, a hate group known as the Ku Klux Klan was founded in the American South. Its purpose has been to maintain white, Protestant supremacy in the US after the Reconstruction period, which it did through violence and intimidation.
The Anti-Defamation League (ADL) and Southern Poverty Law Center condemn writings about "Jewish Supremacism" by Holocaust-denier, former Grand Wizard of the KKK, and conspiracy theorist David Duke as antisemitic – in particular, his book Jewish Supremacism: My Awakening to the Jewish Question. Kevin B. MacDonald, known for his theory of Judaism as a "group evolutionary strategy", has also been accused of being "antisemitic" and a "white supremacist" in his writings on the subject by the ADL and his own university psychology department.
Nazi Germany
From 1933 to 1945, Nazi Germany, under the rule of Adolf Hitler, promoted the belief in the existence of a superior, Aryan Herrenvolk, or master race. The state's propaganda advocated the belief that Germanic peoples, whom they called "Aryans", were a master race or a Herrenvolk whose members were superior to the Jews, Slavs, and Romani people, so-called "gypsies". Arthur de Gobineau, a French racial theorist and aristocrat, blamed the fall of the ancien régime in France on racial intermixing, which he believed had destroyed the purity of the Nordic race. Gobineau's theories, which attracted a large and strong following in Germany, emphasized the belief in the existence of an irreconcilable polarity between Aryan and Jewish cultures.
Russia
Black
Cornel West, an African-American philosopher, writes that black supremacist religious views arose in America as a part of black Muslim theology in response to white supremacy.
Hutu supremacism
Arab
In Africa, black Southern Sudanese allege that they are being subjected to a racist form of Arab supremacy, which they equate with the historic white supremacism of South Africa's apartheid. The alleged genocide and ethnic cleansing in the ongoing War in Darfur has been described as an example of Arab racism.
For example, in their analysis of the sources of the conflict, Julie Flint and Alex de Waal say that Colonel Gaddafi, the leader of Libya, sponsored "Arab supremacism" across the Sahara during the 1970s. Gaddafi supported the "Islamic Legion" and the Sudanese opposition "National Front, including the Muslim Brothers and the Ansar, the Umma Party's military wing." Gaddafi tried to use such forces to annex Chad from 1979 to 1981. Gaddafi supported the Sudanese government's war in the South during the early 1980s, and in return, he was allowed to use the Darfur region as a "back door to Chad". As a result, the first signs of an "Arab racist political platform" appeared in Darfur in the early 1980s.
India
In Asia, the people of ancient India considered all foreigners barbarians. The Muslim scholar Al-Biruni wrote that the Indians called foreigners impure. A few centuries later, Dubois observed that "Hindus look upon Europeans as barbarians totally ignorant of all principles of honour and good breeding... In the eyes of a Hindu, a Pariah (outcaste) and a European are on the same level." The Chinese likewise regarded Europeans as repulsive, ghost-like creatures, even as devils, and Chinese writers also referred to foreigners as barbarians.
China
Religious
Christianity
Academics Carol Lansing and Edward D. English argue that Christian supremacism was a motivation for the Crusades in the Holy Land, as well as a motivation for crusades against Muslims and pagans throughout Europe. The blood libel is a widespread European conspiracy theory which led to centuries of pogroms and massacres of European Jewish minorities because it alleged that Jews required the pure blood of a Christian child in order to make matzah for Passover. Thomas of Cantimpré writes of the blood curse which the Jews put upon themselves and all of their generations at the court of Pontius Pilate where Jesus was sentenced to death: "A very learned Jew, who in our day has been converted to the (Christian) faith, informs us that one enjoying the reputation of a prophet among them, toward the close of his life, made the following prediction: 'Be assured that relief from this secret ailment, to which you are exposed, can only be obtained through Christian blood ("solo sanguine Christiano")." The Atlantic slave trade has also been partially attributed to Christian supremacism. The Ku Klux Klan has been described as a white supremacist Christian organization, as are many other white supremacist groups, such as the Posse Comitatus and the Christian Identity and Positive Christianity movements.
Islam
Academics Khaled Abou El Fadl, Ian Lague, and Joshua Cone note that, while the Quran and other Islamic scriptures express tolerant beliefs, such as Al-Baqara 256 "there is no compulsion in religion", there have also been numerous instances of Muslim or Islamic supremacism. Examples of how supremacists have interpreted Islam include the history of slavery in the Muslim world, Caliphate, Ottoman Empire, the early-20th-century pan-Islamism promoted by Abdul Hamid II, the jizya and supremacy of Sharia law, such as rules of marriage in Muslim countries being imposed on non-Muslims.
While non-violent proselytism of Islam (dawah) is not in itself Islamic supremacism, forced conversion to Islam is, as is the death penalty for apostasy.
Numerous massacres and ethnic cleansing of Jews, Christians and other non-Muslims occurred in some Muslim-majority countries, including Morocco, Libya, and Algeria, where eventually Jews were forced to live in ghettos. Decrees ordering the destruction of synagogues were enacted during the Middle Ages in Egypt, Syria, Iraq, and Yemen. At certain times in Yemen, Morocco, and Baghdad, Jews were forced to convert to Islam or face the Islamic death penalty. While there were antisemitic incidents before the 20th century, antisemitism increased after the Arab–Israeli conflict. Following the 1948 Arab–Israeli War, the Palestinian exodus, the creation of the State of Israel, and Israeli victories during the wars of 1956 and 1967 were a severe humiliation to Israel's opponents, primarily Egypt, Syria, and Iraq. However, by the mid-1970s the vast majority of Jews had left Muslim-majority countries, moving primarily to Israel, France, and the United States. The reasons for the Jewish exodus are varied and disputed.
Judaism
Ilan Pappé, an expatriate Israeli historian, writes that the First Aliyah to Israel "established a society based on Jewish supremacy" within "settlement-cooperatives" that were Jewish owned and operated. Joseph Massad, a professor of Arab studies, holds that "Jewish supremacism" has always been a "dominating principle" in religious and secular Zionism.
Other
Social
Political
See also
Chauvinism
Colonialism
Rule according to higher law
Legislative supremacy
Judicial supremacy
Notes
Ethnic supremacy
Narcissism
Political theories
Prejudice and discrimination
Racism
Social concepts
Pejorative terms | Supremacism | [
"Biology"
] | 2,470 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
360,918 | https://en.wikipedia.org/wiki/TenDRA%20Compiler | The TenDRA Compiler is a C/C++ compiler for POSIX-compatible operating systems available under the terms of the BSD license.
It was originally developed by the Defence Evaluation and Research Agency (DERA) in the United Kingdom. In the beginning of 2002 TenDRA was actively developed again by Jeroen Ruigrok van der Werven and offered as a BSD-licensed open source project through the website tendra.org. In the third quarter of 2002 the one-man effort was expanded to a small team.
The TDF technology behind TenDRA has an academic history dating back to work on algebraic code validation in the 1970s.
In August 2003 TenDRA split into two projects, TenDRA.org and Ten15.org. Both projects seemed to have disappeared from the web around 2006–2007, but actually they are still active.
The goals of TenDRA.org are:
to continuously produce correct code,
to ensure code correctness through various means, and
to continuously improve the performance of the compiler and resulting code, unless it would jeopardize the points above.
The goals of Ten15.org added:
to be a friendly competitor to GCC in order to get a best-of-breed compiler.
Features of both compilers include good error reporting with respect to standards compliance and smaller generated code than gcc produces for the same programs. C++ support never became as mature as C support, and no release supported the STL. TenDRA uses the Architecture Neutral Distribution Format (ANDF), a specification created by the Open Group, as its intermediate language.
At one point, most of the Alpha OSF/1 kernel could be built with TenDRA C, and there was later a similar effort to port the FreeBSD kernel.
Documentation
TenDRA.org has a comprehensive set of documentation available online at http://www.tendra.org/docs
Manual pages for references to programs and file formats are available at http://www.tendra.org/man
See also
TenDRA Distribution Format
References
External links
The TenDRA Project
Page on GitHub
bitbucket copy of the TenDRA src repository
TenDRA in the FreeBSD ports collection
TenDRA in Debian
mirror of the original TenDRA web page from DERA
Compilers
C (programming language) compilers
C++ compilers
Free and open source compilers
History of computing in the United Kingdom
Science and technology in Hampshire
Software using the BSD license
Unix programming tools | TenDRA Compiler | [
"Technology"
] | 510 | [
"History of computing",
"History of computing in the United Kingdom"
] |
361,028 | https://en.wikipedia.org/wiki/Nitrification | Nitrification is the biological oxidation of ammonia to nitrate via the intermediary nitrite. Nitrification is an important step in the nitrogen cycle in soil. The process of complete nitrification may occur through separate organisms or entirely within one organism, as in comammox bacteria. The transformation of ammonia to nitrite is usually the rate limiting step of nitrification. Nitrification is an aerobic process performed by small groups of autotrophic bacteria and archaea.
Microbiology
Ammonia oxidation
The process of nitrification begins with the first stage of ammonia oxidation, where ammonia (NH3) or ammonium (NH4+) get converted into nitrite (NO2−). This first stage is sometimes known as nitritation. It is performed by two groups of organisms, ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA).
Ammonia-Oxidizing Bacteria
Ammonia-oxidizing bacteria (AOB) are typically Gram-negative bacteria belonging to the Betaproteobacteria and Gammaproteobacteria, including the commonly studied genera Nitrosomonas and Nitrosococcus. They are known for their ability to utilize ammonia as an energy source and are prevalent in a wide range of environments, such as soils, aquatic systems, and wastewater treatment plants.
AOB possess enzymes called ammonia monooxygenases (AMOs), which are responsible for catalyzing the conversion of ammonia to hydroxylamine (NH2OH), a crucial intermediate in the process of nitrification. This enzymatic activity is sensitive to environmental factors, such as pH, temperature, and oxygen availability.
AOB play a vital role in soil nitrification, making them key players in nutrient cycling. They contribute to the transformation of ammonia derived from organic matter decomposition or fertilizers into nitrite, which subsequently serves as a substrate for nitrite-oxidizing bacteria (NOB).
Ammonia-Oxidizing Archaea
Prior to the discovery of archaea capable of ammonia oxidation, ammonia-oxidizing bacteria (AOB) were considered the only organisms capable of ammonia oxidation. Since their discovery in 2005, two isolates of AOAs have been cultivated: Nitrosopumilus maritimus and Nitrososphaera viennensis. When comparing AOB and AOA, AOA dominate in both soils and marine environments, suggesting that Nitrososphaerota (formerly Thaumarchaeota) may be greater contributors to ammonia oxidation in these environments.
Crenarchaeol, which is generally thought to be produced exclusively by AOA (specifically Nitrososphaerota), has been proposed as a biomarker for AOA and ammonia oxidation. Crenarchaeol abundance has been found to track with seasonal blooms of AOA, suggesting that it may be appropriate to use crenarchaeol abundances as a proxy for AOA populations and thus ammonia oxidation more broadly. However the discovery of Nitrososphaerota that are not obligate ammonia-oxidizers complicates this conclusion, as does one study that suggests that crenarchaeol may be produced by Marine Group II Euryarchaeota.
Nitrite oxidation
The second step of nitrification is the oxidation of nitrite into nitrate. This process is sometimes known as nitratation. Nitrite oxidation is conducted by nitrite-oxidizing bacteria (NOB) from the taxa Nitrospirota, Nitrospinota, Pseudomonadota and Chloroflexota. NOB are typically present in soil, geothermal springs, freshwater and marine ecosystems.
Complete ammonia oxidation
Ammonia oxidation to nitrate in a single step within one organism was predicted in 2006 and discovered in 2015 in the species Nitrospira inopinata. A pure culture of the organism was obtained in 2017, representing a revolution in our understanding of the nitrification process.
History
The idea that the oxidation of ammonia to nitrate is in fact a biological process was first put forward by Louis Pasteur in 1862. Later, in 1875, Alexander Müller, while conducting a quality assessment of water from wells in Berlin, noted that ammonium was stable in sterilized solutions but nitrified in natural waters; Müller therefore proposed that nitrification is performed by microorganisms. In 1877, Jean-Jacques Schloesing and Achille Müntz, two French agricultural chemists working in Paris, proved that nitrification is indeed a microbially mediated process through experiments with liquid sewage and an artificial soil matrix (sterilized sand with powdered chalk). Their findings were soon confirmed (in 1878) by Robert Warington, who was investigating the nitrification ability of garden soil at the Rothamsted experimental station in Harpenden, England. Warington also made the first observation, in 1879, that nitrification is a two-step process, which was confirmed by John Munro in 1886, although at that time the two steps were believed to be distinct life phases or character traits of a single microorganism.
The first pure culture of a nitrifier (an ammonia oxidizer) was most probably isolated in 1890 by Percy Frankland and Grace Frankland, two English scientists working in Scotland. Before that, Warington, Sergei Winogradsky and the Franklands had only been able to enrich cultures of nitrifiers. The Franklands succeeded with a system of serial dilutions using very low inocula and cultivation times of several years. Sergei Winogradsky claimed a pure culture isolation in the same year (1890), but his culture was still a co-culture of ammonia- and nitrite-oxidizing bacteria; Winogradsky succeeded just one year later, in 1891.
In fact, during the serial dilutions, ammonia oxidizers and nitrite oxidizers were unknowingly separated, resulting in a pure culture with ammonia-oxidizing ability only. The Franklands thus observed that these pure cultures lost the ability to perform both steps; the loss of nitrite-oxidizing ability had already been observed by Warington. Cultivation of a pure nitrite oxidizer came later, during the 20th century, although it is not possible to be certain which cultures were free of contaminants, as all theoretically pure strains share the same trait (nitrite consumption, nitrate production).
Ecology
Both steps produce energy that is coupled to ATP synthesis. Nitrifying organisms are chemoautotrophs and use carbon dioxide as their carbon source for growth. Some AOB possess the enzyme urease, which catalyzes the conversion of the urea molecule to two ammonia molecules and one carbon dioxide molecule. Nitrosomonas europaea, as well as populations of soil-dwelling AOB, have been shown to assimilate the carbon dioxide released by the reaction to make biomass via the Calvin cycle, and to harvest energy by oxidizing ammonia (the other product of urease) to nitrite. This feature may explain the enhanced growth of AOB in the presence of urea in acidic environments.
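Written in the same notation as the reactions in the chemistry section below, the urease reaction just described is:
CO(NH2)2 + H2O -> 2NH3 + CO2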
In most environments, organisms are present that will complete both steps of the process, yielding nitrate as the final product. However, it is possible to design systems in which nitrite is formed (the Sharon process).
Nitrification is important in agricultural systems, where fertilizer is often applied as ammonia. Conversion of this ammonia to nitrate increases nitrogen leaching because nitrate is more water-soluble than ammonia.
Nitrification also plays an important role in the removal of nitrogen from municipal wastewater. The conventional removal is nitrification, followed by denitrification. The cost of this process resides mainly in aeration (bringing oxygen in the reactor) and the addition of an external carbon source (e.g., methanol) for the denitrification.
Nitrification can also occur in drinking water. In distribution systems where chloramines are used as the secondary disinfectant, the presence of free ammonia can act as a substrate for ammonia-oxidizing microorganisms. The associated reactions can lead to the depletion of the disinfectant residual in the system. The addition of chlorite ion to chloramine-treated water has been shown to control nitrification.
Together with ammonification, nitrification forms a mineralization process that refers to the complete decomposition of organic material, with the release of available nitrogen compounds. This replenishes the nitrogen cycle.
Nitrification in the marine environment
In the marine environment, nitrogen is often the limiting nutrient, so the nitrogen cycle in the ocean is of particular interest. The nitrification step of the cycle is of particular interest in the ocean because it creates nitrate, the primary form of nitrogen responsible for "new" production. Furthermore, as the ocean becomes enriched in anthropogenic CO2, the resulting decrease in pH could lead to decreasing rates of nitrification. Nitrification could potentially become a "bottleneck" in the nitrogen cycle.
Nitrification, as stated above, is formally a two-step process; in the first step ammonia is oxidized to nitrite, and in the second step nitrite is oxidized to nitrate. Diverse microbes are responsible for each step in the marine environment. Several groups of ammonia-oxidizing bacteria (AOB) are known in the marine environment, including Nitrosomonas, Nitrosospira, and Nitrosococcus. All contain the functional gene ammonia monooxygenase (AMO) which, as its name implies, is responsible for the oxidation of ammonia. Subsequent metagenomic studies and cultivation approaches have revealed that some Thermoproteota (formerly Crenarchaeota) possess AMO. Thermoproteota are abundant in the ocean and some species have a 200 times greater affinity for ammonia than AOB, contrasting with the previous belief that AOB are primarily responsible for nitrification in the ocean. Furthermore, though nitrification is classically thought to be vertically separated from primary production because the oxidation of ammonia by bacteria is inhibited by light, nitrification by AOA does not appear to be light inhibited, meaning that nitrification is occurring throughout the water column, challenging the classical definitions of "new" and "recycled" production.
In the second step, nitrite is oxidized to nitrate. In the oceans, this step is not as well understood as the first, but the bacteria Nitrospina and Nitrobacter are known to carry out this step in the ocean.
Chemistry and enzymology
Nitrification is a process of nitrogen compound oxidation (effectively, loss of electrons from the nitrogen atom to the oxygen atoms), and is catalyzed step-wise by a series of enzymes.
2NH4+ + 3O2 -> 2NO2- + 4H+ + 2H2O (Nitrosomonas, Comammox)
2NO2- + O2 -> 2NO3- (Nitrobacter, Nitrospira, Comammox)
OR
NH3 + O2 -> NO2- + 3H+ + 2e-
NO2- + H2O -> NO3- + 2H+ + 2e-
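Summing the two steps of the first formulation gives the overall stoichiometry of complete nitrification (a straightforward combination of the two reactions above):
NH4+ + 2O2 -> NO3- + 2H+ + H2O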
In Nitrosomonas europaea, the first step of oxidation (ammonia to hydroxylamine) is carried out by the enzyme ammonia monooxygenase (AMO).
NH3 + O2 + 2H+ + 2e- -> NH2OH + H2O
The second step (hydroxylamine to nitrite) is catalyzed by two enzymes. Hydroxylamine oxidoreductase (HAO) converts hydroxylamine to nitric oxide.
NH2OH -> NO + 3H+ + 3e-
Another currently unknown enzyme converts nitric oxide to nitrite.
The third step (nitrite to nitrate) is completed in a distinct organism.
NO2- + acceptor <=> NO3- + reduced acceptor
Factors affecting nitrification rates
Soil conditions
Due to its inherent microbial nature, nitrification in soils is greatly susceptible to soil conditions. In general, soil nitrification will proceed at optimal rates if the conditions for the microbial communities foster healthy microbial growth and activity. Soil conditions that have an effect on nitrification rates include:
Substrate availability (presence of NH4+)
Aeration (availability of O2)
Soil moisture content (availability of H2O)
pH (near neutral)
Temperature
Inhibitors of nitrification
Nitrification inhibitors are chemical compounds that slow the nitrification of ammonia, ammonium-containing, or urea-containing fertilizers, which are applied to soil as fertilizers. These inhibitors can help reduce losses of nitrogen in soil that would otherwise be used by crops. Nitrification inhibitors are used widely, being added to approximately 50% of the fall-applied anhydrous ammonia in states in the U.S., like Illinois. They are usually effective in increasing recovery of nitrogen fertilizer in row crops, but the level of effectiveness depends on external conditions and their benefits are most likely to be seen at less than optimal nitrogen rates.
The environmental concerns of nitrification also contribute to interest in the use of nitrification inhibitors: the primary product, nitrate, leaches into groundwater, producing toxicity in both humans and some species of wildlife and contributing to the eutrophication of standing water. Some inhibitors of nitrification also inhibit the production of methane, a greenhouse gas.
Nitrification is inhibited primarily by targeting and suppressing or destroying the bacteria that oxidize ammonia compounds. A multitude of compounds inhibit nitrification; they can be grouped by how they act: compounds that bind the active site of ammonia monooxygenase (AMO), mechanism-based inhibitors, and N-heterocyclic compounds. The mode of action of the last group is not yet well understood, but it is prominent. Many nitrification inhibitors, such as dicyandiamide, ammonium thiosulfate, and nitrapyrin, have been confirmed to act as substrates of AMO.
The conversion of ammonia to hydroxylamine is the first step in nitrification, where AH2 represents a range of potential electron donors.
NH3 + O2 + AH2 -> NH2OH + A + H2O
This reaction is catalyzed by AMO. Inhibitors of this reaction bind to the active site on AMO and prevent or delay the process. The oxidation of ammonia by AMO is regarded as important because other oxidations catalyzed by the enzyme require the co-oxidation of NH3 to supply reducing equivalents. These are usually supplied by the enzyme hydroxylamine oxidoreductase (HAO), which catalyzes the reaction:
NH2OH + H2O -> NO2- + 5H+ + 4e-
The mechanism of inhibition is complicated by this requirement. Kinetic analysis of the inhibition of NH3 oxidation shows that substrates of AMO exhibit kinetics ranging from competitive to noncompetitive. The binding and oxidation can occur at two sites on AMO: for competitive substrates, binding and oxidation occur at the NH3 site, while for noncompetitive substrates they occur at another site.
Mechanism-based inhibitors can be defined as compounds that interrupt the normal reaction catalyzed by an enzyme. Here the enzyme is inactivated by covalent modification by the reaction product, which ultimately inhibits nitrification. Through this process, AMO is deactivated and one or more proteins are covalently bound to the final product. This is found to be most prominent in a broad range of sulfur and acetylenic compounds.
Sulfur-containing compounds, including ammonium thiosulfate (a popular inhibitor) are found to operate by producing volatile compounds with strong inhibitory effects such as carbon disulfide and thiourea.
In particular, thiophosphoryl triamide has been a notable addition, serving the dual purpose of inhibiting both urease activity and nitrification. In a study of the inhibitory effects on oxidation by the bacterium Nitrosomonas europaea, the use of thioethers resulted in the oxidation of these compounds to sulfoxides, with the S atom as the primary site of oxidation by AMO. This behaviour is most consistent with competitive inhibition.
N-heterocyclic compounds are also highly effective nitrification inhibitors and are often classified by their ring structure. The mode of action of these compounds is not well understood: while nitrapyrin, a widely used inhibitor and substrate of AMO, is a weak mechanism-based inhibitor of the enzyme, this mechanism alone does not correlate directly with the compound's ability to inhibit nitrification. It is suggested that nitrapyrin acts against the monooxygenase enzyme within the bacteria, preventing growth and CH4/NH4+ oxidation. Compounds containing two or three adjacent ring N atoms (pyridazine, pyrazole, indazole) tend to have a significantly higher inhibitory effect than compounds containing non-adjacent N atoms or a single ring N atom (pyridine, pyrrole). This suggests that the presence of adjacent ring N atoms is directly correlated with the inhibitory effect of this class of compounds.
Methane oxidation inhibition
Some enzymatic nitrification inhibitors, such as nitrapyrin, can also inhibit the oxidation of methane in methanotrophic bacteria. AMO shows kinetic turnover rates similar to those of methane monooxygenase (MMO) found in methanotrophs, indicating that MMO is a catalyst similar to AMO for the purpose of methane oxidation. Furthermore, methanotrophic bacteria share many similarities with ammonia oxidizers such as Nitrosomonas. The inhibitor profile of the particulate form of MMO (pMMO) resembles that of AMO, suggesting that pMMO in methanotrophs and AMO in autotrophic nitrifiers have similar properties.
Environmental concerns
Nitrification inhibitors are also of interest from an environmental standpoint because of the production of nitrates and nitrous oxide in the nitrification process. Nitrous oxide (N2O), although its atmospheric concentration is much lower than that of CO2, has a global warming potential about 300 times that of carbon dioxide and contributes around 6% of planetary warming due to greenhouse gases. This compound is also notable for catalyzing the breakup of ozone in the stratosphere. Nitrate, a compound toxic to wildlife and livestock and a product of nitrification, is also of concern.
Soil, consisting of polyanionic clays and silicates, generally has a net anionic charge. Consequently, ammonium (NH4+) binds tightly to the soil, but nitrate ions (NO3−) do not. Because nitrate is more mobile, it leaches into groundwater supplies through agricultural runoff. Nitrates in groundwater can affect surface water concentrations through direct groundwater-surface water interactions (e.g., gaining stream reaches, springs) or from when it is extracted for surface use. For example, much of the drinking water in the United States comes from groundwater, but most wastewater treatment plants discharge to surface water.
Among wildlife, amphibians (tadpoles) and freshwater fish eggs are most sensitive to elevated nitrate levels and experience growth and developmental damage at levels commonly found in U.S. freshwater bodies (below 20 mg/L). In contrast, freshwater invertebrates are more tolerant (around 90 mg/L and above), and adult freshwater fish can tolerate very high levels (800 mg/L or more). Nitrate levels also contribute to eutrophication, a process in which large algal blooms reduce oxygen levels in bodies of water and lead to the death of oxygen-consuming creatures due to anoxia. Nitrification is also thought to contribute to the formation of photochemical smog, ground-level ozone, acid rain, changes in species diversity, and other undesirable processes. In addition, nitrification inhibitors have also been shown to suppress the oxidation of methane (CH4), a potent greenhouse gas, to CO2. Both nitrapyrin and acetylene are shown to be potent suppressors of both processes, although the modes of action distinguishing them are unclear.
See also
f-ratio
Haber process
Nitrifying bacteria
Nitrogen fixation
Simultaneous nitrification-denitrification
Comammox
References
External links
Nitrification at the heart of filtration at fishdoc.co.uk
Nitrification at University of Aberdeen · King's College
Nitrification Basics for Aerated Lagoon Operators at lagoonsonline.com
Biochemical reactions
Nitrogen cycle
Soil biology | Nitrification | [
"Chemistry",
"Biology"
] | 4,309 | [
"Biochemical reactions",
"Nitrogen cycle",
"Soil biology",
"Biochemistry",
"Metabolism"
] |
361,038 | https://en.wikipedia.org/wiki/Chemical%20polarity | In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end.
Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry.
Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds. Polarity underlies a number of physical properties including surface tension, solubility, and melting and boiling points.
Polarity of bonds
Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity. Atoms with high electronegativities, such as fluorine, oxygen, and nitrogen, exert a greater pull on electrons than atoms with lower electronegativities, such as alkali metals and alkaline earth metals. In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity.
Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole: a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge, they are called partial charges, denoted as δ+ (delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Edith Hilda (Usherwood) Ingold in 1926. The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges.
These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces.
Classification
Bonds can fall anywhere between two extremes: completely nonpolar or completely polar. A completely nonpolar bond occurs when the electronegativities are identical and therefore possess a difference of zero. A completely polar bond is more correctly called an ionic bond, and occurs when the difference between electronegativities is large enough that one atom actually takes an electron from the other. The terms "polar" and "nonpolar" are usually applied to covalent bonds, that is, bonds where the polarity is not complete. To determine the polarity of a covalent bond using numerical means, the difference between the electronegativity of the atoms is used.
Bond polarity is typically divided into three groups that are loosely based on the difference in electronegativity between the two bonded atoms. According to the Pauling scale:
Nonpolar bonds generally occur when the difference in electronegativity between the two atoms is less than 0.5
Polar bonds generally occur when the difference in electronegativity between the two atoms is roughly between 0.5 and 2.0
Ionic bonds generally occur when the difference in electronegativity between the two atoms is greater than 2.0
Pauling based this classification scheme on the partial ionic character of a bond, which is an approximate function of the difference in electronegativity between the two bonded atoms. He estimated that a difference of 1.7 corresponds to 50% ionic character, so that a greater difference corresponds to a bond which is predominantly ionic.
As a quantum-mechanical description, Pauling proposed that the wave function for a polar molecule AB is a linear combination of wave functions for covalent and ionic molecules: ψ = aψ(A:B) + bψ(A+B−). The amount of covalent and ionic character depends on the values of the squared coefficients a2 and b2.
Bond dipole moments
The bond dipole moment uses the idea of electric dipole moment to measure the polarity of a chemical bond within a molecule. It occurs whenever there is a separation of positive and negative charges.
The bond dipole μ is given by:
μ = δ · d.
The bond dipole is modeled as δ+ — δ– with a distance d between the partial charges δ+ and δ–. It is a vector, parallel to the bond axis, pointing from minus to plus, as is conventional for electric dipole moment vectors.
Chemists often draw the vector pointing from plus to minus. This vector can be physically interpreted as the movement undergone by electrons when the two atoms are placed a distance d apart and allowed to interact: the electrons will move from their free-state positions to become localised more around the more electronegative atom.
The SI unit for electric dipole moment is the coulomb–meter. This is too large to be practical on the molecular scale.
Bond dipole moments are commonly measured in debyes, represented by the symbol D, and are obtained by measuring the charge in units of 10−10 statcoulomb and the distance d in Angstroms. Since 10−10 statcoulomb is about 0.208 units of elementary charge, 1.0 debye results from an electron and a proton separated by 0.208 Å. A useful conversion factor is 1 D = 3.335 64 × 10−30 C·m.
For diatomic molecules there is only one (single or multiple) bond, so the bond dipole moment is the molecular dipole moment, with typical values in the range of 0 to 11 D. At one extreme, a symmetrical molecule such as bromine, Br2, has zero dipole moment, while near the other extreme, gas-phase potassium bromide, KBr, which is highly ionic, has a dipole moment of 10.41 D.
For polyatomic molecules, there is more than one bond. The total molecular dipole moment may be approximated as the vector sum of the individual bond dipole moments. Often bond dipoles are obtained by the reverse process: a known total dipole of a molecule can be decomposed into bond dipoles. This is done to transfer bond dipole moments to molecules that have the same bonds, but for which the total dipole moment is not yet known. The vector sum of the transferred bond dipoles gives an estimate for the total (unknown) dipole of the molecule.
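As a small numerical illustration of this vector addition, the sketch below sums the two O−H bond dipoles of water. The bond-dipole magnitude (about 1.5 D) and the H−O−H angle (104.5°) are illustrative textbook values assumed for this example, not figures taken from the article:

```python
import numpy as np

# Vector sum of the two O-H bond dipoles in water (bent geometry).
# Assumed illustrative values: bond dipole ~1.5 D, H-O-H angle ~104.5 degrees.
bond_dipole = 1.5          # debye, magnitude of one O-H bond dipole
angle = np.radians(104.5)  # H-O-H bond angle

# Place the two bond-dipole vectors symmetrically about the molecular axis.
d1 = bond_dipole * np.array([np.cos(angle / 2),  np.sin(angle / 2)])
d2 = bond_dipole * np.array([np.cos(angle / 2), -np.sin(angle / 2)])

total = d1 + d2
print(np.linalg.norm(total))  # ~1.84 D
```

The resultant of roughly 1.84 D is close to the measured gas-phase dipole moment of water quoted later in this article.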
Polarity of molecules
A molecule is composed of one or more chemical bonds between molecular orbitals of different atoms. A molecule may be polar either as a result of polar bonds due to differences in electronegativity as described above, or as a result of an asymmetric arrangement of nonpolar covalent bonds and non-bonding pairs of electrons known as a full molecular orbital.
While the molecules can be described as "polar covalent", "nonpolar covalent", or "ionic", this is often a relative term, with one molecule simply being more polar or more nonpolar than another. However, the following properties are typical of such molecules.
Boiling point
When comparing a polar and nonpolar molecule with similar molar masses, the polar molecule in general has a higher boiling point, because the dipole–dipole interaction between polar molecules results in stronger intermolecular attractions. One common form of polar interaction is the hydrogen bond, which is also known as the H-bond. For example, water forms H-bonds and has a molar mass M = 18 and a boiling point of +100 °C, compared to nonpolar methane with M = 16 and a boiling point of –161 °C.
Solubility
Due to the polar nature of the water molecule itself, other polar molecules are generally able to dissolve in water. Most nonpolar molecules are water-insoluble (hydrophobic) at room temperature. Many nonpolar organic solvents, such as turpentine, are able to dissolve nonpolar substances.
Surface tension
Polar compounds tend to have higher surface tension than nonpolar compounds.
Capillary action
Polar liquids have a tendency to rise against gravity in a small diameter tube.
Viscosity
Polar liquids have a tendency to be more viscous than nonpolar liquids. For example, nonpolar hexane is much less viscous than polar water. However, molecule size is a much stronger factor on viscosity than polarity, where compounds with larger molecules are more viscous than compounds with smaller molecules. Thus, water (small polar molecules) is less viscous than hexadecane (large nonpolar molecules).
Examples
Polar molecules
A polar molecule has a net dipole as a result of the opposing charges (i.e. having partial positive and partial negative charges) from polar bonds arranged asymmetrically. Water (H2O) is an example of a polar molecule since it has a slight positive charge on one side and a slight negative charge on the other. The dipoles do not cancel out, resulting in a net dipole. The dipole moment of water depends on its state. In the gas phase the dipole moment is ≈ 1.86 debye (D), whereas liquid water (≈ 2.95 D) and ice (≈ 3.09 D) are higher due to differing hydrogen-bonded environments. Other examples include sugars (like sucrose), which have many polar oxygen–hydrogen (−OH) groups and are overall highly polar.
If the bond dipole moments of the molecule do not cancel, the molecule is polar. For example, the water molecule (H2O) contains two polar O−H bonds in a bent (nonlinear) geometry. The bond dipole moments do not cancel, so that the molecule forms a molecular dipole with its negative pole at the oxygen and its positive pole midway between the two hydrogen atoms. In the figure each bond joins the central O atom with a negative charge (red) to an H atom with a positive charge (blue).
The hydrogen fluoride, HF, molecule is polar by virtue of polar covalent bonds; in the covalent bond, electrons are displaced toward the more electronegative fluorine atom.
Ammonia, NH3, is a molecule whose three N−H bonds have only a slight polarity (toward the more electronegative nitrogen atom). The molecule has two lone electrons in an orbital that points towards the fourth apex of an approximately regular tetrahedron, as predicted by the VSEPR theory. This orbital is not participating in covalent bonding; it is electron-rich, which results in a powerful dipole across the whole ammonia molecule.
In ozone (O3) molecules, the two O−O bonds are nonpolar (there is no electronegativity difference between atoms of the same element). However, the distribution of the other electrons is uneven: since the central atom has to share electrons with two other atoms, but each of the outer atoms has to share electrons with only one other atom, the central atom is more deprived of electrons than the others (the central atom has a formal charge of +1, while the outer atoms each have a formal charge of −1/2). Since the molecule has a bent geometry, the result is a dipole across the whole ozone molecule.
Nonpolar molecules
A molecule may be nonpolar either when there is an equal sharing of electrons between the two atoms of a diatomic molecule or because of the symmetrical arrangement of polar bonds in a more complex molecule. For example, boron trifluoride (BF3) has a trigonal planar arrangement of three polar bonds at 120°. This results in no overall dipole in the molecule.
Carbon dioxide (CO2) has two polar C=O bonds, but the geometry of CO2 is linear so that the two bond dipole moments cancel and there is no net molecular dipole moment; the molecule is nonpolar.
Examples of household nonpolar compounds include fats, oil, and petrol/gasoline.
In the methane molecule (CH4) the four C−H bonds are arranged tetrahedrally around the carbon atom. Each bond has polarity (though not very strong). The bonds are arranged symmetrically so there is no overall dipole in the molecule. The diatomic oxygen molecule (O2) does not have polarity in the covalent bond because of equal electronegativity, hence there is no polarity in the molecule.
Amphiphilic molecules
Large molecules that have one end with polar groups attached and another end with nonpolar groups are described as amphiphiles or amphiphilic molecules. They are good surfactants and can aid in the formation of stable emulsions, or blends, of water and fats. Surfactants reduce the interfacial tension between oil and water by adsorbing at the liquid–liquid interface.
Predicting molecule polarity
Determining the point group is a useful way to predict the polarity of a molecule. In general, a molecule will not possess a dipole moment if the individual bond dipole moments of the molecule cancel each other out. This is because dipole moments are Euclidean vector quantities with magnitude and direction, and two equal vectors that oppose each other will cancel out.
Any molecule with a centre of inversion ("i") or a horizontal mirror plane ("σh") will not possess dipole moments.
Likewise, a molecule with more than one Cn axis of rotation will not possess a dipole moment because dipole moments cannot lie in more than one dimension. As a consequence of that constraint, all molecules with dihedral symmetry (Dn) will not have a dipole moment because, by definition, D point groups have two or multiple Cn axes.
Since the C1, Cs, C∞v, Cn and Cnv point groups have no centre of inversion, no horizontal mirror plane and no more than one Cn axis, molecules in one of those point groups will have a dipole moment.
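These symmetry criteria reduce to a simple rule of thumb. The following sketch is our own simplification for illustration (the function and its inputs are hypothetical; assigning a real molecule's point group still requires a full symmetry analysis):

```python
# Rule-of-thumb polarity check from the symmetry criteria described above.
def can_have_dipole(has_inversion_centre: bool,
                    has_horizontal_mirror: bool,
                    n_rotation_axes: int) -> bool:
    """A molecule can carry a net dipole only if it lacks an inversion centre,
    lacks a horizontal mirror plane, and has at most one Cn rotation axis."""
    if has_inversion_centre or has_horizontal_mirror:
        return False
    if n_rotation_axes > 1:
        return False
    return True

print(can_have_dipole(False, False, 1))  # e.g. C2v water          -> True
print(can_have_dipole(True, True, 3))    # e.g. a D2h-type molecule -> False
```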
Electrical deflection of water
Contrary to popular misconception, the electrical deflection of a stream of water from a charged object is not based on polarity. The deflection occurs because of electrically charged droplets in the stream, which the charged object induces. A stream of water can also be deflected in a uniform electrical field, which cannot exert force on polar molecules. Additionally, after a stream of water is grounded, it can no longer be deflected. Weak deflection is even possible for nonpolar liquids.
See also
Chemical properties
Colloid
Detergent
Electronegativities of the elements (data page)
Polar point group
References
External links
Chemical Bonding
Polarity of Bonds and Molecules (archived)
Molecule Polarity
Physical chemistry
Chemical properties
Dimensionless numbers of chemistry | Chemical polarity | [
"Physics",
"Chemistry"
] | 2,972 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Dimensionless numbers of chemistry",
"nan"
] |
361,123 | https://en.wikipedia.org/wiki/Intermodal%20freight%20transport | Intermodal freight transport involves the transportation of freight in an intermodal container or vehicle, using multiple modes of transportation (e.g., rail, ship, aircraft, and truck), without any handling of the freight itself when changing modes. The method reduces cargo handling, and so improves security, reduces damage and loss, and allows freight to be transported faster. Reduced costs over road trucking is the key benefit for inter-continental use. This may be offset by reduced timings for road transport over shorter distances.
Origins
Intermodal transportation has its origin in 18th century England and predates the railways. Some of the earliest containers were those used for shipping coal on the Bridgewater Canal in England in the 1780s. Coal containers (called "loose boxes" or "tubs") were soon deployed on the early canals and railways and were used for road/rail transfers (road at the time meaning horse-drawn vehicles).
Wooden coal containers were first used on the railways in the 1830s on the Liverpool and Manchester Railway. In 1841, Isambard Kingdom Brunel introduced iron containers to move coal from the vale of Neath to Swansea Docks. By the outbreak of the First World War the Great Eastern Railway was using wooden containers to trans-ship passenger luggage between trains and sailings via the port of Harwich.
The early 1900s saw the first adoption of covered containers, primarily for the movement of furniture and intermodal freight between road and rail. A lack of standards limited the value of this service and this in turn drove standardisation. In the U.S. such containers, known as "lift vans", were in use from as early as 1911.
Intermodal container
Early containers
In the United Kingdom, containers were first standardised by the Railway Clearing House (RCH) in the 1920s, allowing both railway-owned and privately-owned vehicles to be carried on standard container flats. By modern standards these containers were small, being long, normally wooden and with a curved roof and insufficient strength for stacking. From 1928 the London, Midland & Scottish Railway offered "door to door" intermodal road-rail services using these containers. This standard failed to become popular outside the United Kingdom.
Pallets made their first major appearance during World War II, when the United States military assembled freight on pallets, allowing fast transfer between warehouses, trucks, trains, ships, and aircraft. Because no freight handling was required, fewer personnel were needed and loading times were decreased.
Truck trailers were first carried by railway before World War II, an arrangement often called "piggyback", by the small Class I railroad, the Chicago Great Western in 1936. The Canadian Pacific Railway was a pioneer in piggyback transport, becoming the first major North American railway to introduce the service in 1952. In the United Kingdom, the big four railway companies offered services using standard RCH containers that could be craned on and off the back of trucks. Moving companies such as Pickfords offered private services in the same way.
Containerization
In 1933 in Europe, under the auspices of the International Chamber of Commerce, The Bureau International des Containers et du Transport Intermodal (BIC; English: International Bureau for Containers and Intermodal Transport) was established. In June 1933, the BIC decided about obligatory parameters for container use in international traffic. Containers handled by means of lifting gear, such as cranes, overhead conveyors, etc. for traveling elevators (group I containers), constructed after July 1, 1933. Obligatory Regulations:
Clause 1 — Containers are, as regards form, either of the closed or the open type, and, as regards capacity, either of the heavy or the light type.
Clause 2 — The loading capacity of containers must be such that their total weight (load, plus tare) is: for containers of the heavy type; for containers of the light type; a tolerance of 5 percent excess on the total weight is allowable under the same conditions as for wagon loads.
In April 1935, BIC established a second standard for European containers:
In the 1950s, a new standardized steel Intermodal container based on specifications from the United States Department of Defense began to revolutionize freight transportation. The International Organization for Standardization (ISO) then issued standards based upon the U.S. Department of Defense standards between 1968 and 1970.
The White Pass & Yukon Route railway acquired the world's first container ship, the Clifford J. Rogers, built in 1955, and introduced containers to its railway in 1956. In the United Kingdom the modernisation plan, and in turn the Beeching Report, strongly pushed containerization. British Railways launched the Freightliner service carrying high pre-ISO containers. The older wooden containers and the pre-ISO containers were rapidly replaced by ISO standard containers, and later by containers and larger.
In the U.S., starting in the 1960s, the use of containers increased steadily. Rail intermodal traffic tripled between 1980 and 2002, according to the Association of American Railroads (AAR), from 3.1 million trailers and containers to 9.3 million. Large investments were made in intermodal freight projects. An example was the US$740 million Port of Oakland intermodal rail facility begun in the late 1980s.
Since 1984, a mechanism for intermodal shipping known as double-stack rail transport has become increasingly common. Rising to the rate of nearly 70% of the United States' intermodal shipments, it transports more than one million containers per year. The double-stack rail cars design significantly reduces damage in transit and provides greater cargo security by cradling the lower containers so their doors cannot be opened. A succession of large, new, domestic container sizes was introduced to increase shipping productivity. In Europe, the more restricted loading gauge has limited the adoption of double-stack cars. However, in 2007 the Betuweroute, a railway from Rotterdam to the German industrial heartland, was completed, which may accommodate double-stacked containers in the future. Other countries, like New Zealand, have numerous low tunnels and bridges that limit expansion for economic reasons.
Since electrification generally predated double-stacking, the overhead wiring was too low to accommodate it. However, India is building some freight-only corridors with the overhead wiring at above rail, which is high enough.
Containers and container handling
Containers, also known as intermodal containers or ISO containers because the dimensions have been defined by ISO, are the main type of equipment used in intermodal transport, particularly when one of the modes of transportation is by ship. Containers are wide by or high. Since introduction, there have been moves to adopt other heights, such as . The most common lengths are , , , , although other lengths exist. The three common sizes are:
one TEU – ×
two TEU – ×
highcube × .
In countries where the railway loading gauge is sufficient, truck trailers are often carried by rail. Variations exist, including open-topped versions covered by a fabric curtain, which are used to transport larger loads. A container called a tanktainer, with a tank inside a standard container frame, carries liquids. Refrigerated containers (reefers) are used for perishables. Swap body units have the same bottom corners as intermodal containers but are not strong enough to be stacked. They have folding legs under their frame and can be moved between trucks without using a crane.
Handling equipment can be designed with intermodality in mind, assisting with transferring containers between rail, road and sea. These can include:
container gantry crane for transferring containers from seagoing vessels onto either trucks or rail wagons. A spreader beam moves in several directions allowing accurate positioning of the cargo. A container crane is mounted on rails moving parallel to the ship's side, with a large boom spanning the distance between the ship's cargo hold and the quay.
Straddle carriers, and the larger rubber tyred gantry crane are able to straddle container stacks as well as rail and road vehicles, allowing for quick transfer of containers.
Grappler lift, which is very similar to a straddle carrier except it grips the bottom of a container rather than the top.
Reach stackers are fitted with lifting arms as well as spreader beams for lifting containers to truck or rail and can stack containers on top of each other.
Sidelifters are a road-going truck or semi-trailer with cranes fitted at each end to hoist and transport containers in small yards or over longer distances.
Forklift trucks in larger sizes are often used to load containers to/from truck and rail.
Flatbed trucks with special chain assemblies such as QuickLoadz can pull containers onto or off of the bed using the corner castings.
Load securing in intermodal containers
According to the European Commission Transportation Department, "it has been estimated that up to 25% of accidents involving trucks can be attributable to inadequate cargo securing". Cargo that is improperly secured can cause severe accidents and lead to the loss of cargo, lives, vehicles, ships and airplanes, not to mention the environmental hazards it can cause.
There are many different ways and materials available to stabilize and secure cargo in containers used in the various modes of transportation. Conventional load-securing methods and materials such as steel banding and wood blocking and bracing have been around for decades and are still widely used. In the last few years, several relatively new and lesser-known load-securing methods have become available through innovation and technological advancement, including polyester strapping and lashing, synthetic webbings and dunnage bags, also known as air bags.
Transportation modes
Container ships
Container ships are used to transport containers by sea. These vessels are custom-built to hold containers. Some vessels can hold thousands of containers. Their capacity is often measured in TEU or FEU. These initials stand for "twenty-foot equivalent unit" and "forty-foot equivalent unit", respectively. For example, a vessel that can hold 1,000 40-foot containers or 2,000 20-foot containers can be said to have a capacity of 2,000 TEU. After the year 2006, the largest container ships in regular operation are capable of carrying in excess of .
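A capacity stated in TEU is just bookkeeping: one 20-foot box counts as one unit and one 40-foot box (an FEU) counts as two. A tiny sketch, with a function name of our own choosing, makes the conversion explicit:

```python
# Capacity in twenty-foot equivalent units (TEU): 20 ft = 1 TEU, 40 ft = 2 TEU.
def capacity_teu(n_20ft: int = 0, n_40ft: int = 0) -> int:
    return n_20ft + 2 * n_40ft

print(capacity_teu(n_40ft=1000))  # 2000 TEU
print(capacity_teu(n_20ft=2000))  # 2000 TEU
```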
On board ships they are typically stacked up to seven units high.
A key consideration in the size of container ships is that larger ships exceed the capacity of important sea routes such as the Panama and Suez canals. The largest size of container ship able to traverse the Panama canal is referred to as Panamax, which is presently around . A third set of locks is planned as part of the Panama Canal expansion project to accommodate container ships up to in future, comparable to the present Suezmax.
Very large container ships also require specialized deep water terminals and handling facilities. The container fleet available, route constraints, and terminal capacity play a large role in shaping global container shipment logistics.
Railways and intermodal terminals
Increasingly, containers are shipped by rail in container well cars. These cars resemble flatcars but have a container-sized depression, or well, in the middle of the car between the bogies or trucks. Some container cars are built as an articulated "unit" of three or five permanently coupled cars, each having a single bogie rather than the two bogies normally found on freight cars.
Containers can be loaded on flatcars or in container well cars. In North America, Australia and Saudi Arabia, where vertical clearances are generally liberal, this depression is sufficient for two containers to be loaded in a "double-stack" arrangement. In Europe, height restrictions imposed by smaller structure gauges, and frequent overhead electrification, prevent double-stacking. Containers are therefore hauled one-high, either on standard flatcars or other railroad cars – but they must be carried in well wagons on lines built early in the Industrial Revolution, such as in the United Kingdom, where loading gauges are relatively small.
narrow-gauge railways have smaller wagons that do not readily carry ISO containers, nor do the long and wide wagons of the gauge Kalka-Shimla Railway. Wider narrow gauge railways of e.g. and gauge can take ISO containers, provided that the loading gauge allows it.
It is also common in North America and Australia to transport semi-trailers on railway flatcars or spine cars, an arrangement called "piggyback" or TOFC (trailer on flatcar) to distinguish it from container on flatcar (COFC). Some flatcars are designed with collapsible trailer hitches so they can be used for trailer or container service. Such designs allow trailers to be rolled on from one end, though lifting trailers on and off flatcars by specialized loaders is more common. TOFC terminals typically have large areas for storing trailers pending loading or pickup.
Thievery has become a problem in North America. Sophisticated thieves learn how to interpret the codes on the outside of containers to ascertain which ones have easily disposable cargo. They break into isolated containers on long trains, or even board slowly moving trains to toss the items to accomplices on the ground.
Trucks
Trucking is frequently used to connect the "linehaul" ocean and rail segments of a global intermodal freight movement. This specialized trucking that runs between ocean ports, rail terminals, and inland shipping docks, is often called drayage, and is typically provided by dedicated drayage companies or by the railroads.
As an example, since many rail lines in the United States terminate in or around Chicago, Illinois, the area serves as a common relay point for containerized freight moving across the country. Many of the motor carriers call this type of drayage "crosstown loads", which originate at one railroad and terminate at another. For example, a container destined for the east coast from the west will arrive in Chicago either via the Union Pacific or BNSF Railway and have to be relayed to one of the eastern railroads, either CSX or Norfolk Southern.
Barges
Barges utilising ro-ro and container-stacking techniques transport freight on large inland waterways such as the Rhine/Danube in Europe and the Mississippi River in the U.S.
Land bridges
The term landbridge or land bridge is commonly used in the intermodal freight transport sector. When a containerized ocean freight shipment travels across a large body of land for a significant distance, that portion of the trip is referred to as the "land bridge" and the mode of transport used is rail transport. There are three applications for the term.
Land bridge – An intermodal container shipped by ocean vessel crosses an entire body of land/country/continent before being reloaded on a cargo ship. For example, a container shipment from China to Germany is loaded onto a ship in China, unloads at a Los Angeles port, travels via rail transport to a New York/New Jersey port, and loads on a ship for Hamburg. Also see Eurasian Land Bridge.
Mini land bridge – An intermodal container shipped by ocean vessel from country A to country B passes across a large portion of land in either country A or B. For example, a container shipment from China to New York is loaded onto a ship in China, unloads at a Los Angeles port and travels via rail transport to New York, the final destination.
Micro land bridge – An intermodal container shipped by ocean vessel from country A to country B passes across a large portion of land to reach an interior inland destination. For example, a container shipment from China to Denver, Colorado, is loaded onto a ship in China, unloads at a Los Angeles port and travels via rail transport to Denver, the final destination.
The term reverse land bridge refers to a micro land bridge from an east coast port (as opposed to a west coast port in the previous examples) to an inland destination.
Planes and aircraft
Generally, modern, larger aircraft carry cargo in containers; sometimes even checked luggage is first placed into containers and then loaded onto the plane. Because aircraft require the lowest possible weight, a carefully controlled balance point, and have limited space, specially designed containers made from lightweight material are often used. Due to price and size, these are rarely seen on roads or in ports. However, large transport aircraft make it possible to load even standard containers, or to use standard-sized containers made of much lighter materials such as titanium or aluminium.
Biggest shipping liner companies by TEU capacity
Gallery
See also
Combined transport
Co-modality (by the European Commission)
Container numbering
Containerization
CargoBeamer
Customs Convention on Containers
Dunnage bag
Double-stack car
Dry port
Haulage
Inland port
Intermodal container
Intermodal flatcars
Konkan Railway Corporation
Less-than-truckload (LTL) shipping
Load securing
Merchant ship
Modalohr
Piggy-back
Roadrailer
Rolling highway
Shipping
Sidelifter
Swap body
Tanktainer
Transloading
Top intermodal container companies list
Well car
References
Bibliography
European Intermodal Association (2005). Intermodal Transport in Europe. EIA, Brussels.
Sidney, Samuel (1846). Gauge Evidence: The History and Prospects of the Railway System. Edmonds, London, UK. No ISBN.
External links
IANA: The Intermodal Association of North America
World Transportation Organization The world transportation organization (The Non-Profit Advisory Organization)
Freight transport
Intermodal transport | Intermodal freight transport | [
"Physics"
] | 3,520 | [
"Physical systems",
"Transport",
"Intermodal transport"
] |
361,157 | https://en.wikipedia.org/wiki/Bio-inspired%20computing | Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
History
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as a Turing machine. Turing first described the abstract construct using a biological specimen: he imagined a mathematician with three important attributes. He always has a pencil with an eraser, an unlimited number of papers, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas he was able to describe an abstraction of the modern digital computer. However, Turing mentioned that anything that can perform these functions can be considered such a machine, and he even said that electricity should not be required to describe digital computation and machine thinking in general.
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms. They first mathematically described how a system of simplistic neurons is able to produce simple logical operations such as logical conjunction, disjunction and negation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970 research on neural networks slowed down, and many consider the 1969 book by Marvin Minsky and Seymour Papert the main cause. Their book showed that such neural network models could only model systems based on Boolean functions that are true only after a certain threshold value. Such functions are also known as threshold functions. The book also showed that a large number of systems cannot be represented in this way, meaning that a large number of systems cannot be modeled by such neural networks. Another book, by David Rumelhart and James McClelland in 1986, brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.
Ant Colonies
Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent calculations even though the individuals comprising the system might not be intelligent. More specifically, he gave the example of an ant colony that can carry out intelligent tasks together although each individual ant cannot, something called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm, is able to output the number of clusters and produce highly competitive final clusters comparable to those of traditional algorithms. Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony. This was a very important result, since it suggested that group-selection evolutionary algorithms coupled with algorithms similar to the "ant colony" algorithm can potentially be used to develop more powerful algorithms.
Areas of research
Some areas of study in biologically inspired computing, and their biological counterparts:
Population Based Bio-Inspired Algorithms
Bio-inspired algorithms that operate on a population of possible solutions, in the context of evolutionary algorithms or swarm intelligence, are grouped together as Population Based Bio-Inspired Algorithms (PBBIA). They include evolutionary algorithms, particle swarm optimization, ant colony optimization algorithms and artificial bee colony algorithms.
Virtual Insect Example
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in an unknown terrain for finding food equipped with six simple rules:
turn right for target-and-obstacle left;
turn left for target-and-obstacle right;
turn left for target-left-obstacle-right;
turn right for target-right-obstacle-left;
turn left for target-left without obstacle;
turn right for target-right without obstacle.
The virtual insect controlled by the trained spiking neural network can find food after training in any unknown terrain. After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.
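For illustration, the six rules listed above can be written down literally as a lookup from the perceived sides of the target and obstacle to a turn direction. This sketch is our own simplification and merely stands in for the trained spiking neural network described here:

```python
# A literal encoding of the six navigation rules listed above.
RULES = {
    ("left",  "left"):  "right",  # target-and-obstacle left    -> turn right
    ("right", "right"): "left",   # target-and-obstacle right   -> turn left
    ("left",  "right"): "left",   # target left, obstacle right -> turn left
    ("right", "left"):  "right",  # target right, obstacle left -> turn right
    ("left",  None):    "left",   # target left, no obstacle    -> turn left
    ("right", None):    "right",  # target right, no obstacle   -> turn right
}

def turn(target_side: str, obstacle_side=None) -> str:
    """Return the turn direction prescribed by the six rules."""
    return RULES[(target_side, obstacle_side)]

print(turn("left", "left"))  # right
print(turn("right"))         # right
```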
Natural evolution is a good analogy to this method–the rules of evolution (selection, recombination/reproduction, mutation and more recently transposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used in genetic algorithms.
Brain-inspired computing
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable the machine to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to achieve or exceed the level of human intelligence.
Research
Artificial intelligence researchers are now aware of the benefits of learning from the brain's information-processing mechanisms, and the progress of brain science and neuroscience provides the necessary basis for doing so. Brain and neuroscience researchers are also trying to apply the understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology, and in turn brain and neuroscience research will inspire the next generation of information technology.
The influence of brain science on Brain-inspired computing
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, support researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and are trying to reveal the structure of bio-intelligence from different aspects and functional basis. From the microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopic network connection model, to the links in the macroscopic brain interval and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building a future brain-inspired computing model.
Brain-inspired chip
Broadly speaking, brain-inspired chip refers to a chip designed with reference to the structure of human brain neurons and the cognitive mode of human brain. Obviously, the "neuromorphic chip" is a brain-inspired chip that focuses on the design of the chip structure with reference to the human brain neuron model and its tissue structure, which represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. For example, EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.
TrueNorth is a brain-inspired chip that IBM has been developing for nearly 10 years. The US DARPA program has been funding IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released a second-generation brain-inspired chip called "TrueNorth." Compared with the first-generation brain-inspired chips, the performance of the TrueNorth chip increased dramatically: the number of neurons increased from 256 to 1 million, and the number of programmable synapses increased from 262,144 to 256 million, with a total power consumption of 70 mW and a power density of 20 mW per square centimeter. At the same time, each TrueNorth core occupies only 1/15 the volume of a first-generation core. At present, IBM has developed a prototype of a neuron computer that uses 16 TrueNorth chips with real-time video processing capabilities. The chip's extremely high specifications caused a great stir in the academic world at the time of its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French institute Inria collaborated to develop the "Cambrian" chip, the world's first chip to support a deep neural network processor architecture. The work won best-paper awards at leading international conferences in the field of computer architecture, ASPLOS and MICRO, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of research on brain-inspired chips.
Unclear mechanisms of brain cognition
The human brain is a product of evolution. Although its structure and information-processing mechanisms are constantly optimized, compromises in the evolutionary process are inevitable. The cranial nervous system is a multi-scale structure, and there are still several important open problems in the mechanism of information processing at each scale, such as the fine connection structure at the scale of neurons and the mechanisms of feedback at the scale of the whole brain. Therefore, even a comprehensive simulation covering only 1/1000 of the neurons and synapses of the human brain is still very difficult at the current level of scientific research.
Recent advances in brain simulation linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.
Unclear brain-inspired computational models and algorithms
In future research on cognitive brain computing models, it is necessary to model the brain information-processing system based on the results of multi-scale analyses of brain neural system data, to construct a brain-inspired multi-scale neural network computing model, and to simulate, at multiple scales, the brain's multi-modal intelligent behavioural abilities such as perception, self-learning, memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data that is manually labeled on a large scale. Training models require a lot of computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Constrained computational architecture and capabilities
Most of the existing brain-inspired chips are still based on research within the von Neumann architecture, and most chips are still manufactured from traditional semiconductor materials. Neuromorphic chips currently borrow only the most basic unit of brain information processing; more fundamental mechanisms, such as the fusion of storage and computation, the pulse-discharge mechanism, and the connection mechanisms between neurons, as well as the interactions between information-processing units at different scales, have not yet been integrated into the study of brain-inspired computing architectures. An important international trend now is to develop neural computing components such as memristors, memory containers, and sensory sensors based on new materials such as nanomaterials, thus supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
See also
Applications of artificial intelligence
Behavior based robotics
Bioinformatics
Bionics
Cognitive architecture
Cognitive modeling
Cognitive science
Connectionism
Digital morphogenesis
Digital organism
Fuzzy logic
Gene expression programming
Genetic algorithm
Genetic programming
Gerald Edelman
Janine Benyus
Learning classifier system
Mark A. O'Neill
Mathematical biology
Mathematical model
Natural computation
Neuroevolution
Olaf Sporns
Organic computing
Unconventional computing
Lists
List of emerging technologies
Outline of artificial intelligence
References
Further reading
(the following are presented in ascending order of complexity and depth, with those new to the field suggested to start from the top)
"Nature-Inspired Algorithms"
"Biologically Inspired Computing"
"Digital Biology", Peter J. Bentley.
"First International Symposium on Biologically Inspired Computing"
Emergence: The Connected Lives of Ants, Brains, Cities and Software, Steven Johnson.
Dr. Dobb's Journal, Apr-1991. (Issue theme: Biocomputing)
Turtles, Termites and Traffic Jams, Mitchel Resnick.
Understanding Nonlinear Dynamics, Daniel Kaplan and Leon Glass.
Swarms and Swarm Intelligence by Michael G. Hinchey, Roy Sterritt, and Chris Rouff,
Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications, L. N. de Castro, Chapman & Hall/CRC, June 2006.
"The Computational Beauty of Nature", Gary William Flake. MIT Press. 1998, hardcover ed.; 2000, paperback ed. An in-depth discussion of many of the topics and underlying themes of bio-inspired computing.
Kevin M. Passino, Biomimicry for Optimization, Control, and Automation, Springer-Verlag, London, UK, 2005.
Recent Developments in Biologically Inspired Computing, L. N. de Castro and F. J. Von Zuben, Idea Group Publishing, 2004.
Nancy Forbes, Imitation of Life: How Biology is Inspiring Computing, MIT Press, Cambridge, MA 2004.
M. Blowers and A. Sisti, Evolutionary and Bio-inspired Computation: Theory and Applications, SPIE Press, 2007.
X. S. Yang, Z. H. Cui, R. B. Xiao, A. H. Gandomi, M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, Elsevier, 2013.
"Biologically Inspired Computing Lecture Notes", Luis M. Rocha
The portable UNIX programming system (PUPS) and CANTOR: a computational envorionment for dynamical representation and analysis of complex neurobiological data, Mark A. O'Neill, and Claus-C Hilgetag, Phil Trans R Soc Lond B 356 (2001), 1259–1276
"Going Back to our Roots: Second Generation Biocomputing", J. Timmis, M. Amos, W. Banzhaf, and A. Tyrrell, Journal of Unconventional Computing 2 (2007) 349–378.
C-M. Pintea, 2014, Advances in Bio-inspired Computing for Combinatorial Optimization Problem, Springer
"PSA: A novel optimization algorithm based on survival rules of porcellio scaber", Y. Zhang and S. Li
External links
Nature Inspired Computing and Engineering (NICE) Group, University of Surrey, UK
ALife Project in Sussex
Biologically Inspired Computation for Chemical Sensing Neurochem Project
AND Corporation
Centre of Excellence for Research in Computational Intelligence and Applications Birmingham, UK
BiSNET: Biologically-inspired architecture for Sensor NETworks
BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization
Biologically inspired neural networks
NCRA UCD, Dublin Ireland
The PUPS/P3 Organic Computing Environment for Linux
SymbioticSphere: A Biologically-inspired Architecture for Scalable, Adaptive and Survivable Network Systems
The runner-root algorithm
Bio-inspired Wireless Networking Team (BioNet)
Biologically Inspired Intelligence
Theoretical computer science
Natural computation
Bioinspiration | Bio-inspired computing | [
"Mathematics",
"Engineering",
"Biology"
] | 3,123 | [
"Theoretical computer science",
"Applied mathematics",
"Biological engineering",
"Bioinspiration"
] |
361,184 | https://en.wikipedia.org/wiki/Generalized%20hypergeometric%20function | In mathematics, a generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. The series, if convergent, defines a generalized hypergeometric function, which may then be defined over a wider domain of the argument by analytic continuation. The generalized hypergeometric series is sometimes just called the hypergeometric series, though this term also sometimes just refers to the Gaussian hypergeometric series. Generalized hypergeometric functions include the (Gaussian) hypergeometric function and the confluent hypergeometric function as special cases, which in turn have many particular special functions as special cases, such as elementary functions, Bessel functions, and the classical orthogonal polynomials.
Notation
A hypergeometric series is formally defined as a power series
in which the ratio of successive coefficients is a rational function of n. That is,
where A(n) and B(n) are polynomials in n.
For example, in the case of the series for the exponential function,
we have:
So this satisfies the definition with A(n) = 1 and B(n) = n + 1.
It is customary to factor out the leading term, so β0 is assumed to be 1. The polynomials can be factored into linear factors of the form (aj + n) and (bk + n) respectively, where the aj and bk are complex numbers.
For historical reasons, it is assumed that (1 + n) is a factor of B. If this is not already the case then both A and B can be multiplied by this factor; the factor cancels so the terms are unchanged and there is no loss of generality.
The ratio between consecutive coefficients now has the form
,
where c and d are the leading coefficients of A and B. The series then has the form
,
or, by scaling z by the appropriate factor and rearranging,
.
This has the form of an exponential generating function. This series is usually denoted by
or
Using the rising factorial or Pochhammer symbol
this can be written
(Note that this use of the Pochhammer symbol is not standard; however it is the standard usage in this context.)
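Written out with the rising-factorial (Pochhammer) notation just introduced, the standard series definition (reconstructed here, as the displayed formula is absent from this copy) is:
\[
{}_pF_q(a_1,\ldots,a_p;\,b_1,\ldots,b_q;\,z)
=\sum_{n=0}^{\infty}\frac{(a_1)_n\cdots(a_p)_n}{(b_1)_n\cdots(b_q)_n}\,\frac{z^{n}}{n!},
\qquad (a)_n=a(a+1)\cdots(a+n-1),\quad (a)_0=1 .
\]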
Terminology
When all the terms of the series are defined and it has a non-zero radius of convergence, then the series defines an analytic function. Such a function, and its analytic continuations, is called the hypergeometric function.
The case when the radius of convergence is 0 yields many interesting series in mathematics, for example the incomplete gamma function has the asymptotic expansion
which could be written z^(a−1)e^(−z) 2F0(1−a,1;;−z^(−1)). However, the use of the term hypergeometric series is usually restricted to the case where the series defines an actual analytic function.
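For reference, the asymptotic expansion being described is, in standard notation (as z → ∞):
\[
\Gamma(a,z)\sim z^{a-1}e^{-z}\left(1+\frac{a-1}{z}+\frac{(a-1)(a-2)}{z^{2}}+\cdots\right)
= z^{a-1}e^{-z}\;{}_2F_0\!\left(1-a,\,1;\,;-z^{-1}\right).
\]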
The ordinary hypergeometric series should not be confused with the basic hypergeometric series, which, despite its name, is a rather more complicated and recondite series. The "basic" series is the q-analog of the ordinary hypergeometric series. There are several such generalizations of the ordinary hypergeometric series, including the ones coming from zonal spherical functions on Riemannian symmetric spaces.
The series without the factor of n! in the denominator (summed over all integers n, including negative) is called the bilateral hypergeometric series.
Convergence conditions
There are certain values of the aj and bk for which the numerator or the denominator of the coefficients is 0.
If any aj is a non-positive integer (0, −1, −2, etc.) then the series only has a finite number of terms and is, in fact, a polynomial of degree −aj.
If any bk is a non-positive integer (excepting the previous case with bk < aj) then the denominators become 0 and the series is undefined.
Excluding these cases, the ratio test can be applied to determine the radius of convergence.
If p < q + 1 then the ratio of coefficients tends to zero. This implies that the series converges for any finite value of z and thus defines an entire function of z. An example is the power series for the exponential function.
If p = q + 1 then the ratio of coefficients tends to one. This implies that the series converges for |z| < 1 and diverges for |z| > 1. Whether it converges for |z| = 1 is more difficult to determine. Analytic continuation can be employed for larger values of z.
If p > q + 1 then the ratio of coefficients grows without bound. This implies that, besides z = 0, the series diverges. This is then a divergent or asymptotic series, or it can be interpreted as a symbolic shorthand for a differential equation that the sum satisfies formally.
The question of convergence for p=q+1 when z is on the unit circle is more difficult. It can be shown that the series converges absolutely at z = 1 if
.
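The missing condition is presumably the standard one: for p = q + 1 the series converges absolutely at z = 1 when
\[
\operatorname{Re}\!\left(\sum_{k=1}^{q}b_k-\sum_{j=1}^{q+1}a_j\right)>0 .
\]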
Further, if p = q + 1 and z is real, then the following convergence result holds:
.
Basic properties
It is immediate from the definition that the order of the parameters aj, or the order of the parameters bk can be changed without changing the value of the function. Also, if any of the parameters aj is equal to any of the parameters bk, then the matching parameters can be "cancelled out", with certain exceptions when the parameters are non-positive integers. For example,
.
This cancelling is a special case of a reduction formula that may be applied whenever a parameter on the top row differs from one on the bottom row by a non-negative integer.
Euler's integral transform
The following basic identity is very useful as it relates the higher-order hypergeometric functions in terms of integrals over the lower order ones
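The identity itself is not shown here; in its usual form (assuming Re(d) > Re(c) > 0) it reads:
\[
{}_{A+1}F_{B+1}\!\left[\begin{matrix}a_1,\ldots,a_A,\;c\\ b_1,\ldots,b_B,\;d\end{matrix};z\right]
=\frac{\Gamma(d)}{\Gamma(c)\,\Gamma(d-c)}
\int_{0}^{1}t^{\,c-1}(1-t)^{\,d-c-1}\;
{}_{A}F_{B}\!\left[\begin{matrix}a_1,\ldots,a_A\\ b_1,\ldots,b_B\end{matrix};zt\right]dt .
\]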
Differentiation
The generalized hypergeometric function satisfies
and
Additionally,
Combining these gives a differential equation satisfied by w = pFq:
.
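In terms of the operator θ = z d/dz, the differentiation relations and the resulting equation are usually written as follows (reconstructed in standard notation):
\[
(\theta+a_j)\,{}_pF_q = a_j\,{}_pF_q\bigl(a_j\to a_j+1\bigr),
\qquad
(\theta+b_k-1)\,{}_pF_q = (b_k-1)\,{}_pF_q\bigl(b_k\to b_k-1\bigr),
\]
and combining these gives the generalized hypergeometric equation satisfied by w = pFq:
\[
\Bigl[\theta(\theta+b_1-1)\cdots(\theta+b_q-1)-z\,(\theta+a_1)\cdots(\theta+a_p)\Bigr]w=0 .
\]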
Contiguous function and related identities
Take the following operator:
From the differentiation formulas given above, the linear space spanned by
contains each of
Since the space has dimension 2, any three of these p+q+2 functions are linearly dependent:
These dependencies can be written out to generate a large number of identities involving .
For example, in the simplest non-trivial case,
,
,
,
So
.
This, and other important examples,
,
,
,
,
,
can be used to generate continued fraction expressions known as Gauss's continued fraction.
Similarly, by applying the differentiation formulas twice, there are such functions contained in
which has dimension three so any four are linearly dependent. This generates more identities and the process can be continued. The identities thus generated can be combined with each other to produce new ones in a different way.
A function obtained by adding ±1 to exactly one of the parameters aj, bk in
is called contiguous to
Using the technique outlined above, an identity relating 0F1 and its two contiguous functions can be given, six identities relating 1F1 and any two of its four contiguous functions, and fifteen identities relating 2F1 and any two of its six contiguous functions have been found. (The first one was derived in the previous paragraph. The last fifteen were given by Gauss in his 1812 paper.)
Identities
A number of other hypergeometric function identities were discovered in the nineteenth and twentieth centuries. A 20th century contribution to the methodology of proving these identities is the Egorychev method.
Saalschütz's theorem
Saalschütz's theorem is
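In its usual statement (for a terminating, balanced series with non-negative integer n), the theorem reads:
\[
{}_3F_2\!\left(\begin{matrix}a,\;b,\;-n\\ c,\;1+a+b-c-n\end{matrix};1\right)
=\frac{(c-a)_n\,(c-b)_n}{(c)_n\,(c-a-b)_n}.
\]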
For extension of this theorem, see a research paper by Rakha & Rathie.
Dixon's identity
Dixon's identity, first proved by , gives the sum of a well-poised 3F2 at 1:
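In its usual form the identity reads:
\[
{}_3F_2\!\left(\begin{matrix}a,\;b,\;c\\ 1+a-b,\;1+a-c\end{matrix};1\right)
=\frac{\Gamma\!\left(1+\tfrac{a}{2}\right)\Gamma(1+a-b)\,\Gamma(1+a-c)\,\Gamma\!\left(1+\tfrac{a}{2}-b-c\right)}
{\Gamma(1+a)\,\Gamma\!\left(1+\tfrac{a}{2}-b\right)\Gamma\!\left(1+\tfrac{a}{2}-c\right)\Gamma(1+a-b-c)} .
\]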
For generalization of Dixon's identity, see a paper by Lavoie, et al.
Dougall's formula
Dougall's formula gives the sum of a very well-poised series that is terminating and 2-balanced.
Terminating means that m is a non-negative integer and 2-balanced means that
Many of the other formulas for special values of hypergeometric functions can be derived from this as special or limiting cases.
Generalization of Kummer's transformations and identities for 2F2
Identity 1.
where
;
Identity 2.
which links Bessel functions to 2F2; this reduces to Kummer's second formula for b = 2a:
Identity 3.
.
Identity 4.
which is a finite sum if b-d is a non-negative integer.
Kummer's relation
Kummer's relation is
Clausen's formula
Clausen's formula
was used by de Branges to prove the Bieberbach conjecture.
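In its standard form the formula expresses the square of a particular 2F1 as a 3F2:
\[
\left[{}_2F_1\!\left(a,\,b;\;a+b+\tfrac12;\;z\right)\right]^{2}
={}_3F_2\!\left(2a,\;2b,\;a+b;\;2a+2b,\;a+b+\tfrac12;\;z\right).
\]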
Special cases
Many of the special functions in mathematics are special cases of the confluent hypergeometric function or the hypergeometric function; see the corresponding articles for examples.
The series 0F0
As noted earlier, 0F0(;;z) = e^z. The differential equation for this function is dw/dz = w, which has solutions w = ke^z where k is a constant.
The series 0F1
The functions of the form are called confluent hypergeometric limit functions and are closely related to Bessel functions.
The relationship is:
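Reconstructed in standard notation (α and x being the usual Bessel order and argument):
\[
J_\alpha(x)=\frac{(x/2)^{\alpha}}{\Gamma(\alpha+1)}\;
{}_0F_1\!\left(;\,\alpha+1;\,-\frac{x^{2}}{4}\right).
\]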
The differential equation for this function is
or
When a is not a positive integer, the substitution
gives a linearly independent solution
so the general solution is
where k, l are constants. (If a is a positive integer, the independent solution is given by the appropriate Bessel function of the second kind.)
A special case is:
The series 1F0
An important case is:
The differential equation for this function is
or
which has solutions
where k is a constant.
1F0(1;;z) = 1 + z + z^2 + ... = 1/(1 − z) is the geometric series with ratio z and coefficient 1.
is also useful.
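The important case referred to above is presumably the binomial series; in standard notation,
\[
{}_1F_0(a;\,;z)=\sum_{n=0}^{\infty}(a)_n\,\frac{z^{n}}{n!}=(1-z)^{-a},\qquad |z|<1,
\]
of which the geometric series above is the special case a = 1.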
The series 1F1
The functions of the form are called confluent hypergeometric functions of the first kind, also written . The incomplete gamma function is a special case.
The differential equation for this function is
or
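The two standard forms of Kummer's equation, with w = 1F1(a; b; z) and θ = z d/dz, are:
\[
\bigl[\theta(\theta+b-1)-z(\theta+a)\bigr]w=0
\qquad\text{or}\qquad
z\,\frac{d^{2}w}{dz^{2}}+(b-z)\,\frac{dw}{dz}-a\,w=0 .
\]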
When b is not a positive integer, the substitution
gives a linearly independent solution
so the general solution is
where k, l are constants.
When a is a non-positive integer, −n, 1F1(−n; b; z) is a polynomial. Up to constant factors, these are the Laguerre polynomials. This implies Hermite polynomials can be expressed in terms of 1F1 as well.
The series 1F2
Relations to other functions are known for certain parameter combinations only.
The function is the antiderivative of the cardinal sine. With modified values of and , one obtains the antiderivative of .
The Lommel function is .
The series 2F0
The confluent hypergeometric function of the second kind can be written as:
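In its usual (asymptotic, large z) form the expression is:
\[
U(a,b,z)=z^{-a}\;{}_2F_0\!\left(a,\;1+a-b;\,;\,-\frac{1}{z}\right).
\]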
The series 2F1
Historically, the most important are the functions of the form . These are sometimes called Gauss's hypergeometric functions, classical standard hypergeometric or often simply hypergeometric functions. The term Generalized hypergeometric function is used for the functions pFq if there is risk of confusion. This function was first studied in detail by Carl Friedrich Gauss, who explored the conditions for its convergence.
The differential equation for this function is
or
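The two standard forms of this equation, with w = 2F1(a, b; c; z) and θ = z d/dz, are:
\[
\bigl[\theta(\theta+c-1)-z(\theta+a)(\theta+b)\bigr]w=0
\qquad\text{or}\qquad
z(1-z)\,\frac{d^{2}w}{dz^{2}}+\bigl[c-(a+b+1)z\bigr]\frac{dw}{dz}-ab\,w=0 .
\]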
It is known as the hypergeometric differential equation. When c is not a positive integer, the substitution
gives a linearly independent solution
so the general solution for |z| < 1 is
where k, l are constants. Different solutions can be derived for other values of z. In fact there are 24 solutions, known as the Kummer solutions, derivable using various identities, valid in different regions of the complex plane.
When a is a non-positive integer, −n,
2F1(−n, b; c; z) is a polynomial. Up to constant factors and scaling, these are the Jacobi polynomials. Several other classes of orthogonal polynomials, up to constant factors, are special cases of Jacobi polynomials, so these can be expressed using 2F1 as well. This includes Legendre polynomials and Chebyshev polynomials.
A wide range of integrals of elementary functions can be expressed using the hypergeometric function, e.g.:
The series 3F0
The Mott polynomials can be written as:
The series 3F2
The function
is the dilogarithm
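A standard identification (one form the missing expression can take) is:
\[
\operatorname{Li}_2(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{2}}
= z\;{}_3F_2\!\left(1,\,1,\,1;\;2,\,2;\;z\right).
\]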
The function
is a Hahn polynomial.
The series 4F3
The function
is a Wilson polynomial.
All roots of a quintic equation can be expressed in terms of radicals and the Bring radical, which is the real solution to . The Bring radical can be written as:
The series q+1Fq
The functions
for and are the Polylogarithm.
For each integer n ≥ 2, the roots of the polynomial x^n − x + t can be expressed as a sum of at most n − 1 hypergeometric functions of type n+1Fn, which can always be reduced by eliminating at least one pair of a and b parameters.
Generalizations
The generalized hypergeometric function is linked to the Meijer G-function and the MacRobert E-function. Hypergeometric series were generalised to several variables, for example by Paul Emile Appell and Joseph Kampé de Fériet; but a comparable general theory took long to emerge. Many identities were found, some quite remarkable. A generalization, the q-series analogues, called the basic hypergeometric series, were given by Eduard Heine in the late nineteenth century. Here, the ratios considered of successive terms, instead of a rational function of n, are a rational function of qn. Another generalization, the elliptic hypergeometric series, are those series where the ratio of terms is an elliptic function (a doubly periodic meromorphic function) of n.
During the twentieth century this was a fruitful area of combinatorial mathematics, with numerous connections to other fields. There are a number of new definitions of general hypergeometric functions, by Aomoto, Israel Gelfand and others; and applications for example to the combinatorics of arranging a number of hyperplanes in complex N-space (see arrangement of hyperplanes).
Special hypergeometric functions occur as zonal spherical functions on Riemannian symmetric spaces and semi-simple Lie groups. Their importance and role can be understood through the following example: the hypergeometric series 2F1 has the Legendre polynomials as a special case, and when considered in the form of spherical harmonics, these polynomials reflect, in a certain sense, the symmetry properties of the two-sphere or, equivalently, the rotations given by the Lie group SO(3). In tensor product decompositions of concrete representations of this group Clebsch–Gordan coefficients are met, which can be written as 3F2 hypergeometric series.
Bilateral hypergeometric series are a generalization of hypergeometric functions where one sums over all integers, not just the positive ones.
Fox–Wright functions are a generalization of generalized hypergeometric functions where the Pochhammer symbols in the series expression are generalised to gamma functions of linear expressions in the index n.
See also
Appell series
Humbert series
Kampé de Fériet function
Lauricella hypergeometric series
Notes
References
(the first edition has )
(a reprint of this paper can be found in Carl Friedrich Gauss, Werke, p. 125)
(part 1 treats hypergeometric functions on Lie groups)
(there is a 2008 paperback with )
External links
The book "A = B", this book is freely downloadable from the internet.
MathWorld
Factorial and binomial topics
Ordinary differential equations
Mathematical series | Generalized hypergeometric function | [
"Mathematics"
] | 3,144 | [
"Sequences and series",
"Factorial and binomial topics",
"Series (mathematics)",
"Mathematical structures",
"Calculus",
"Combinatorics"
] |
361,245 | https://en.wikipedia.org/wiki/Torque%20wrench | A torque wrench is a tool used to apply a specific torque to a fastener such as a nut, bolt, or lag screw. It is usually in the form of a socket wrench with an indicating scale, or an internal mechanism which will indicate (as by 'clicking', a specific movement of the tool handle in relation to the tool head) when a specified (adjustable) torque value has been reached during application.
A torque wrench is used where the tightness of screws and bolts is a crucial parameter of assembly or adjustment. It allows the operator to set the torque applied to the fastener to meet the specification for a particular application. This permits proper tension and loading of all parts.
Torque screwdrivers and torque wrenches have similar purposes and may have similar mechanisms.
History
The first patent for a torque wrench was filed by John H. Sharp of Chicago in 1931. This wrench was referred to as a torque measuring wrench and would be classified today as an indicating torque wrench.
In 1935, Conrad Bahr and George Pfefferle patented an adjustable ratcheting torque wrench. The tool featured audible feedback and restriction of back-ratcheting movement when the desired torque was reached. Bahr, who worked for the New York City Water Department, was frustrated at the inconsistent tightness of flange bolts he found while attending to his work. He claimed to have invented the first torque limiting tool in 1918 to alleviate these problems. Bahr's partner, Pfefferle, was an engineer for S.R. Dresser Manufacturing Co and held several patents.
Types
Beam
The most basic form of torque wrench consists of two beams. The first is a lever used to apply the torque to the fastener being tightened and serves also as the handle of the tool. When force is applied to the handle it will deflect predictably and proportionally with said force in accordance with Hooke's law. The second beam is only attached at one end to the wrench head and free on its other, this serves as the indicator beam. Both of these beams run parallel to each other when the tool is at rest, with the indicator beam usually on top. The indicator beam's free end is free to travel over a calibrated scale attached to the lever or handle, marked in units of torque. When the wrench is used to apply torque, the lever bends and the indicating beam stays straight. Thus, the end of the indicating beam points to the magnitude of the torque that is currently being applied. This type of wrench is simple, inherently accurate, and inexpensive.
The beam type torque wrench was developed between the late 1920s and early 1930s by Walter Percy Chrysler for the Chrysler Corporation and a company known as Micromatic Hone. Paul Allen Sturtevant, a sales representative for the Cedar Rapids Engineering Company at that time, was licensed by Chrysler to manufacture his invention. Sturtevant patented the torque wrench in 1938 and became the first individual to sell torque wrenches.
A more sophisticated variation of the beam type torque wrench has a dial gauge indicator on its body that can be configured to give a visual or electrical indication when a preset torque is reached.
Deflecting beam
The dual-signal deflecting beam torque wrench was patented by the Australian Warren and Brown company in 1948. It employs the principle of applying torque to a deflecting beam rather than a coil spring. This is claimed to help prolong the accuracy of the wrench throughout its working life, with a greater safety margin on maximum loading and provides more consistent and accurate readings throughout the range of each wrench. The operator can both hear the signal click and see (and feel) a physical indicator when the desired torque is reached.
The wrench functions in the same general way as an ordinary beam torque wrench. There are two beams both connected to the head end but only one through which torque is applied. The load carrying beam is straight and runs from head to handle, it deflects when torque is applied. The other beam (indicating beam) runs directly above the deflecting beam for about half of the length then bends away to the side at an angle from the deflecting beam. The indicating beam retains its orientation and shape during operation. Because of this, there is relative displacement between the two beams. The deflecting beam torque wrench differs from the ordinary beam torque wrench in how it utilizes this relative displacement. Attached to the deflecting beam is a scale and onto that is fitted a wedge which can be slid along the length of the scale parallel to the flexing beam. This wedge is used to set the desired torque. Directly facing this wedge is the side of the angled indicating beam. From this side protrudes a pin, which acts as a trigger for another pin, the latter pin is spring loaded, and fires out of the end of the indicating beam once the trigger pin contacts the adjustable wedge. This firing makes a loud click and gives a visual and tactile indication that the desired torque has been met. The indicator pin can be reset by simply pressing it back into the indicating beam.
Slipper
A slipper type torque wrench consists of a roller and cam (or similar) mechanism. The cam is attached to the driving head, the roller pushes against the cam locking it in place with a specific force which is provided by a spring (which is in many cases adjustable). If a torque which is able to defeat the holding force of the roller and spring is applied, the wrench will slip and no more torque will be applied to the bolt. A slipper torque wrench will not overtighten the fastener by continuing to apply torque beyond a predetermined limit.
Click
A more sophisticated method of presetting torque is with a calibrated clutch mechanism. One common form uses a ball detent and spring, with the spring preloaded by an adjustable screw thread, calibrated in torque units. The ball detent transmits force until the preset torque is reached, at which point the force exerted by the spring is overcome and the ball "clicks" out of its socket. This design yields greater precision as well as giving tactile and audible feedback. The wrench will not start slipping once the desired torque is reached; it will only click and give slightly at the head, and the operator can continue to apply torque to the wrench without any further action or warning from the wrench.
A number of variations of this design exist for different applications and different torque ranges. A modification of this design is used in some drills to prevent gouging the heads of screws while tightening them. The drill will start slipping once the desired torque is reached.
"No-hub" wrench
These are specialized torque wrenches used by plumbers to tighten the clamping bands on hubless soil pipe couplings. They are usually T-handled wrenches with a one-way combination ratchet and clutch. They are preset to a fixed torque designed to secure the coupling adequately but insufficient to damage it.
Electronic torque wrenches
With electronic (indicating) torque wrenches, measurement is by means of a strain gauge attached to the torsion rod. The signal generated by the transducer is converted to the required unit of torque (e.g. N·m or lbf·ft) and shown on the digital display. A number of different joints (measurement details or limit values) can be stored. These programmed limit values are then permanently displayed during the tightening process by means of LEDs or the display. At the same time, this generation of torque wrenches can store all the measurements made in an internal readings memory. The contents of this memory can then be easily transferred to a PC via the interface (RS232) or printed. A popular application of this kind of torque wrench is for in-process documentation or quality assurance purposes. Typical accuracy is ±0.5% to 4%.
Interchangeable head torque wrenches
Interchangeable head torque wrenches are designed to connect several different types of wrench heads, thereby reducing the number of torque wrenches needed. These wrenches are ideal for applications that require multiple fastening tools. They typically have a standard mounting interface that allows for quick changeover from one wrench head to another while ensuring that the torque applied remains accurate. Common interface sizes include 9×12mm and 12×14mm, and interchangeable heads include open-end, ring-end, adjustable, ratchet, etc.
Programmable electronic torque / angle wrenches
Torque measurement is conducted in the same way as with an electronic torque wrench but the tightening angle from the snug point or threshold is also measured. The angle is measured by an angle sensor or electronic gyroscope. The angle measurement process enables joints which have already been tightened to be recognized. The inbuilt readings memory enables measurements to be statistically evaluated. Tightening curves can be analyzed using the software via the integrated tightening-curve system (force/path graph). This type of torque wrench can also be used to determine breakaway torque, prevail torque and the final torque of a tightening job. Thanks to a special measuring process, it is also possible to display the yield point (yield controlled tightening). This design of torque wrench is highly popular with automotive manufacturers for documenting tightening processes requiring both torque and angle control because, in these cases, a defined angle has to be applied to the fastener on top of the prescribed torque (e.g. a specified torque + 90°: here the specified torque marks the snug point/threshold and +90° indicates the additional angle that has to be applied after the threshold).
In 1995, Saltus-Werk Max Forst GmbH applied for an international patent for the first electronic torque wrench with angle measurement which did not require a reference arm.
Mechatronic torque wrenches
Torque measurement is achieved in the same way as with a click-type torque wrench but, at the same time, the torque is measured as a digital reading (click and final torque) as with an electronic torque wrench. This is, therefore, a combination of electronic and mechanical measurements. All the measurements are transferred and documented via wireless data transmission. Users will know they have achieved the desired torque setting when the wrench "beeps".
Torque wrench standardization
ISO
The International Organization for Standardization maintains standard ISO 6789. This standard covers the construction and calibration of hand-operated torque tools. It defines two types of torque tool encompassing twelve classes; these are given in the table below. Also given is the percentage allowable deviation from the desired torque.
The ISO standard also states that even when overloaded by 25% of the maximum rating, the tool should remain reliably usable after being re-calibrated. Re-calibration for tools used within their specified limits should occur after 5000 cycles of torquing or 12 months, whichever is soonest. In cases where the tool is in use in an organization which has its own quality control procedures, then the calibration schedule can be arranged according to company standards.
Tools should be marked with their torque range and the unit of torque as well as the direction of operation for unidirectional tools and the maker's mark. If a calibration certificate is provided, the tool must be marked with a serial number that matches the certificate or a calibration laboratory should give the tool a reference number corresponding with the tool's calibration certificate.
ASME
The American Society of Mechanical Engineers maintains standard ASME B107.300. This standard has the same type designation as the ISO standard with the addition of the type 3, (limiting) torque tool. This type will release the drive once the desired torque is met so that no more torque can be applied. This standard, however, uses different class designations within each type as well as additional style and design variants within each class. The standard also separates manual and electronic tools into different sections and designations. The ASME and ISO standards cannot be considered compatible. The table below gives some of the types and tolerances specified by the standard for manual torque tools.
Tools should be marked with the model number of the tool, the unit of torque and the maker's mark. For unidirectional tools, the word "TORQUES" or "TORQUE" and the direction of operation must also be marked.
Using torque wrenches
Precision
Click type torque wrenches are precise when properly calibrated; however, the more complex mechanism can lose calibration sooner than the beam type, which has little to go wrong mechanically (although the thin indicator rod can be accidentally bent out of true). Beam type torque wrenches cannot be used in situations where the scale cannot be read directly, and such situations are common in automotive applications. The scale on a beam type wrench is also prone to parallax error, as a result of the large distance between indicator arm and scale (on some older designs). There is greater scope for user error with the beam type as well: the torque has to be read at every use, and the operator must take care to apply loads only at the floating handle's pivot point. Dual-beam or "flat" beam versions reduce the tendency for the pointer to rub, as do low-friction pointers.
Extensions
The use of cheater bars that extend from the handle end can damage the wrench, so only manufacturer specified equipment should be used.
Using socket extensions requires no adjustment of the torque setting.
Using a crow's foot or similar extension requires the wrench setting to be corrected, using the first of the formulas given below;
using a combination of handle and crow's foot extensions requires the second formula below,
where:
T_W is the wrench indicated torque (setting torque),
T_A is the desired torque,
L is the length of the torque wrench, from the handle to the center of the head,
E is the length of the crow's foot extension, from the center of the torque wrench head to the center line of the bolt,
H is the length of the handle extension, from the extension end to the torque wrench handle.
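A sketch of the missing correction formulas, using the labels T_W, T_A, L, E and H assigned above (these labels, and the assumption that the click mechanism's effective pivot lies at the center of the wrench head, are reconstructions rather than text from the original):
\[
T_W = T_A\,\frac{L}{L+E}\quad\text{(crow's foot only)},
\qquad
T_W = T_A\,\frac{L+H}{L+E+H}\quad\text{(crow's foot plus handle extension)} .
\]
Note that with no crow's foot (E = 0) the second formula reduces to T_W = T_A, so under this assumption a handle extension alone does not change the required setting.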
These equations apply only if the extension is collinear with the torque wrench. In other cases, the effective in-line distance from the torque wrench's head to the center line of the bolt should be used, that is, the component of the extension measured along the wrench's axis. If the extension is set at 90° to the wrench then no adjustment is required. These methods are not recommended except in extreme circumstances.
Storage
For click (or other micrometer) types, when not in use, the force acting on the spring should be removed by setting the scale to its minimum rated value in order to prevent permanent set in the spring.
Never set a micrometer style torque wrench to zero as the internal mechanism requires a small amount of tension in order to prevent components shifting and reduction of accuracy.
Calibration
As with any precision tool, torque wrenches should be periodically re-calibrated. As previously stated, according to ISO standards calibration should happen every 5000 operations or every year, whichever comes first. Torque wrenches can fall as much as 10% out of calibration within their first year of use.
Calibration, when performed by a specialist service which follows ISO standards, follows a specific process and constraints. The operation requires specialist torque wrench calibration equipment with an accuracy of ±1% or better. The temperature of the area where calibration is being performed should be between 18 °C and 28 °C with no more than a 1 °C fluctuation and the relative humidity should not exceed 90%.
Before any calibration work can be done, the tool should be preloaded and torqued without measure according to its type. The tool is then connected to the tester and force is applied to the handle (at no more than 10° from perpendicular) for values of 20%, 60% and 100% of the maximum torque and repeated according to their class. The force should be applied slowly and without jerky or irregular motion. The table below gives more specifics regarding the pattern of testing for each class of torque wrench.
While professional calibration is recommended, it may be beyond some users' means, and it is possible to check a torque wrench's calibration in the home shop or garage. The process generally involves attaching a known mass to a lever arm of known length and setting the torque wrench to the torque needed to just lift that mass, as in the example below. The error within the tool can then be calculated, and the tool adjusted, or the error compensated for in subsequent work.
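As a hypothetical worked example (the figures are illustrative, not from the original): hanging a mass m from a horizontal lever arm of length r fitted to the square drive applies a torque
\[
T = m\,g\,r,\qquad\text{e.g.}\quad m = 10\ \text{kg},\; r = 0.5\ \text{m}
\;\Rightarrow\; T \approx 10\times 9.81\times 0.5 \approx 49\ \text{N·m},
\]
so a click-type wrench set to about 49 N·m should click just as the weight lifts; the difference between the set value and the value at which it actually clicks indicates the calibration error.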
See also
Dental torque wrench
Foot-pound force
Battery torque wrench
Hydraulic torque wrench
Impact wrench
Micrometer
Newton metre
Torque converter
Torque limiter
Torque screwdriver
Torque tester
Torsion (mechanics)
Wrench
Notes
References
External links
American inventions
Wrench
Wrenches | Torque wrench | [
"Physics"
] | 3,443 | [
"Wikipedia categories named after physical quantities",
"Force",
"Physical quantities",
"Torque"
] |