| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
24,233,307 | https://en.wikipedia.org/wiki/C18H27NO2 | {{DISPLAYTITLE:C18H27NO2}}
The molecular formula C18H27NO2 may refer to:
Alifedrine, a partial beta-adrenergic agonist
Caramiphen, an anticholinergic drug
Dyclonine, an over-the-counter local anesthetic
S33005, a serotonin–norepinephrine reuptake inhibitor
WS-12 | C18H27NO2 | Chemistry | 93 |
19,722 | https://en.wikipedia.org/wiki/Metallurgy | Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys.
Metallurgy encompasses both the science and the technology of metals, including the production of metals and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist.
The science of metallurgy is further subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms.
Historically, metallurgy has predominately focused on the production of metals. Metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as black metallurgy) and non-ferrous metallurgy, also known as colored metallurgy.
Ferrous metallurgy involves processes and alloys based on iron, while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95% of world metal production.
Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering.
Etymology and pronunciation
Metallurgy derives from the Ancient Greek μεταλλουργός, metallourgós, "worker in metal", from μέταλλον, métallon, "mine, metal", and ἔργον, érgon, "work". The word was originally an alchemist's term for the extraction of metals from minerals, the ending -urgy signifying a process, especially manufacturing: it was discussed in this sense in the 1797 Encyclopædia Britannica.
In the late 19th century, metallurgy's definition was extended to the more general scientific study of metals, alloys, and related processes. In English, pronunciation with stress on the second syllable is the more common one in the United Kingdom, while pronunciation with stress on the first syllable is the more common one in the United States and is the first-listed variant in various American dictionaries, including Merriam-Webster Collegiate and American Heritage.
History
The earliest metal employed by humans appears to be gold, which can be found "native". Small amounts of natural gold, dating to the late Paleolithic period, 40,000 BC, have been found in Spanish caves. Silver, copper, tin and meteoric iron can also be found in native form, allowing a limited amount of metalworking in early cultures. Early cold metallurgy, using native copper not melted from mineral, has been documented at sites in Anatolia and at the site of Tell Maghzaliyah in Iraq, dating from the 7th/6th millennia BC.
The earliest archaeological support of smelting (hot metallurgy) in Eurasia is found in the Balkans and Carpathian Mountains, as evidenced by findings of objects made by metal casting and smelting dated to around 6200–5000 BC, with the invention of copper metallurgy. Certain metals, such as tin, lead, and copper can be recovered from their ores by simply heating the rocks in a fire or blast furnace in a process known as smelting. The first evidence of copper smelting, dating from the 6th millennium BC, has been found at archaeological sites in Majdanpek, Jarmovac and Pločnik, in present-day Serbia. The site of Pločnik has produced a smelted copper axe dating from 5,500 BC, belonging to the Vinča culture. The Balkans and adjacent Carpathian region were the location of major Chalcolithic cultures including Vinča, Varna, Karanovo, Gumelnița and Hamangia, which are often grouped together under the name of 'Old Europe'. With the Carpatho-Balkan region described as the 'earliest metallurgical province in Eurasia', its scale and technical quality of metal production in the 6th–5th millennia BC totally overshadowed that of any other contemporary production centre.
The earliest documented use of lead (possibly native or smelted) in the Near East dates from the 6th millennium BC and comes from the late Neolithic settlements of Yarim Tepe and Arpachiyah in Iraq. The artifacts suggest that lead smelting may have predated copper smelting. Metallurgy of lead has also been found in the Balkans during the same period.
Copper smelting is documented at sites in Anatolia and at the site of Tal-i Iblis in southeastern Iran from .
Copper smelting is first documented in the Delta region of northern Egypt in , associated with the Maadi culture. This represents the earliest evidence for smelting in Africa.
The Varna Necropolis, Bulgaria, is a burial site located in the western industrial zone of Varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. The oldest gold treasure in the world, dating from 4,600 BC to 4,200 BC, was discovered at the site. The gold piece dating from 4,500 BC, found in 2019 in Durankulak, near Varna, is another important example. Other signs of early metals are found from the third millennium BC in Palmela, Portugal, Los Millares, Spain, and Stonehenge, United Kingdom. The precise beginnings, however, have not been clearly ascertained, and new discoveries continue to be made.
In approximately 1900 BC, ancient iron smelting sites existed in Tamil Nadu.
In the Near East, about 3,500 BC, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. This represented a major technological shift known as the Bronze Age.
The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. The process appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines.
Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. This includes the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia in present-day Turkey, Ancient Nok, Carthage, the Celts, Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others.
A 16th century book by Georg Agricola, De re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. Agricola has been described as the "father of metallurgy".
Extraction
Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (metal oxide/sulphide) and tailings (waste).
After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products.
Mining may not be necessary, if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal.
Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
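To make the idea of feed, concentrate and tailings streams concrete, the short sketch below applies the standard two-product mass-balance formula to estimate metal recovery from assay grades alone. The ore grades used are hypothetical, and the snippet is an illustration rather than part of the practice described above.

```python
def two_product_recovery(feed_grade, concentrate_grade, tailings_grade):
    """Standard two-product formula: the fraction of the valuable metal in the
    feed that reports to the concentrate, computed from assays alone."""
    f, c, t = feed_grade, concentrate_grade, tailings_grade
    # Mass yield of concentrate per unit mass of feed (metal mass balance).
    yield_fraction = (f - t) / (c - t)
    # Recovery = metal in concentrate / metal in feed.
    return yield_fraction * c / f

# Hypothetical copper ore: 1.2% Cu feed, 28% Cu concentrate, 0.15% Cu tailings.
print(f"Copper recovery to concentrate: {two_product_recovery(1.2, 28.0, 0.15):.1%}")  # ≈ 88%
```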
Metal and its alloys
Much effort has been placed on understanding the iron–carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications, where neither weight nor corrosion is a major concern. Cast irons, including ductile iron, are also part of the iron–carbon system. Iron–manganese–chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling.
Other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. These metals are most often used as alloys with the noted exception of silicon, which is not a metal. Other forms include:
Stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used where resistance to corrosion is important.
Aluminium alloys and magnesium alloys are commonly used when a lightweight, strong part is required, such as in automotive and aerospace applications.
Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications.
Nickel-based superalloys like Inconel are used in high-temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers.
For extremely high temperatures, single crystal alloys are used to minimize creep. In modern electronics, high purity single crystal silicon is essential for metal-oxide-semiconductor (MOS) transistors and integrated circuits.
Production
In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves production of alloys, shaping, heat treatment and surface treatment of product. The task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered.
Determining the hardness of the metal using the Rockwell, Vickers, and Brinell hardness scales is a commonly used practice that helps better understand the metal's elasticity and plasticity for different applications and production processes. In a saltwater environment, most ferrous metals and some non-ferrous alloys corrode quickly. Metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep.
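As a simple illustration of how one of the hardness numbers mentioned above is obtained, the sketch below evaluates the Vickers hardness from an indentation test using the standard geometric formula HV ≈ 1.8544·F/d²; the load and diagonal values are hypothetical.

```python
import math

def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Vickers hardness: HV = 2 * F * sin(136 deg / 2) / d**2 (≈ 1.8544 * F / d**2),
    with the load F in kilograms-force and the mean indentation diagonal d in mm."""
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / mean_diagonal_mm ** 2

# Hypothetical test: 30 kgf load leaving an indentation with a 0.41 mm mean diagonal.
print(f"HV ≈ {vickers_hardness(30, 0.41):.0f}")  # roughly HV 330
```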
Metalworking processes
Casting – molten metal is poured into a shaped mold. Variants of casting include sand casting, investment casting (also called the lost wax process), die casting, centrifugal casting (both vertical and horizontal), and continuous casting. Each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion.
Forging – a red-hot billet is hammered into shape.
Rolling – a billet is passed through successively narrower rollers to create a sheet.
Extrusion – a hot and malleable metal is forced under pressure through a die, which shapes it before it cools.
Machining – lathes, milling machines and drills cut the cold metal to shape.
Sintering – a powdered metal is heated in a non-oxidizing environment after being compressed into a die.
Fabrication – sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape.
Laser cladding – metallic powder is blown through a movable laser beam (e.g. mounted on a NC 5-axis machine). The resulting melted metal reaches a substrate to form a melt pool. By moving the laser head, it is possible to stack the tracks and build up a three-dimensional piece.
3D printing – sintering or melting metal powder in a three-dimensional space to build up an object of nearly any shape.
Cold-working processes, in which the product's shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. Work hardening creates microscopic defects in the metal, which resist further changes of shape.
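Work hardening is often described empirically by the Hollomon power-law relation between true stress and true plastic strain, σ = K·εⁿ. The sketch below simply evaluates that relation for a hypothetical annealed copper, with K and n chosen as illustrative handbook-style values rather than measured data.

```python
def hollomon_flow_stress(true_strain, strength_coefficient_mpa, hardening_exponent):
    """Hollomon relation: true flow stress sigma = K * epsilon**n (in MPa)."""
    return strength_coefficient_mpa * true_strain ** hardening_exponent

# Illustrative values for annealed copper: K ≈ 315 MPa, n ≈ 0.54.
for strain in (0.05, 0.10, 0.20, 0.40):
    stress = hollomon_flow_stress(strain, 315, 0.54)
    print(f"true strain {strain:.2f} -> flow stress ≈ {stress:.0f} MPa")
```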
Heat treatment
Metals can be heat-treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering:
Annealing softens the metal by heating it and then allowing it to cool very slowly. This relieves stresses in the metal and makes the grain structure large and soft-edged, so that when the metal is hit or stressed it dents or bends rather than breaking; annealed metal is also easier to sand, grind, or cut.
Quenching is the process of cooling metal very quickly after heating, thus "freezing" the metal's structure in the very hard martensite form, which makes the metal harder.
Tempering relieves stresses in the metal that were caused by the hardening process; tempering makes the metal less hard while making it better able to sustain impacts without breaking.
Often, mechanical and thermal treatments are combined in what are known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high-alloy special steels, superalloys and titanium alloys.
Plating
Electroplating is a chemical surface-treatment technique. It involves bonding a thin layer of another metal, such as gold, silver, chromium or zinc, to the surface of the product. The workpiece is placed in an electrolyte solution containing the coating material (gold, silver, zinc). Two electrodes of different materials are required: one made of the coating material and one that receives the coating. When the electrodes are electrically charged, the coating material is deposited onto the workpiece. Electroplating is used to reduce corrosion as well as to improve the product's aesthetic appearance, and to make inexpensive metals look like more expensive ones (gold, silver).
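The amount of metal deposited during electroplating follows Faraday's law of electrolysis: the deposited mass is proportional to the electric charge passed through the cell. A minimal sketch with hypothetical plating parameters:

```python
FARADAY = 96485.0  # C/mol, Faraday constant

def plated_mass_grams(current_amps, time_seconds, molar_mass_g_mol, electrons_per_ion,
                      current_efficiency=1.0):
    """Faraday's law: m = (Q / F) * (M / z), where Q = I * t is the charge passed,
    M is the molar mass and z the number of electrons per metal ion reduced."""
    charge = current_amps * time_seconds * current_efficiency
    return (charge / FARADAY) * (molar_mass_g_mol / electrons_per_ion)

# Hypothetical example: nickel plating (Ni2+, M ≈ 58.69 g/mol) at 2 A for 30 minutes.
print(f"Ni deposited ≈ {plated_mass_grams(2.0, 30 * 60, 58.69, 2):.2f} g")  # ≈ 1.1 g
```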
Shot peening
Shot peening is a cold working process used to finish metal parts. In the process of shot peening, small round shot is blasted against the surface of the part to be finished. This process is used to prolong the product life of the part, prevent stress corrosion failures, and also prevent fatigue. The shot leaves small dimples on the surface like a peen hammer does, which cause compression stress under the dimple. As the shot media strikes the material over and over, it forms many overlapping dimples throughout the piece being treated. The compression stress in the surface of the material strengthens the part and makes it more resistant to fatigue failure, stress failures, corrosion failure, and cracking.
Thermal spraying
Thermal spraying techniques are another popular finishing option, and often have better high temperature properties than electroplated coatings. Thermal spraying, also known as a spray welding process, is an industrial coating process that consists of a heat source (flame or other) and a coating material that can be in a powder or wire form, which is melted then sprayed on the surface of the material being treated at a high velocity. The spray treating process is known by many different names such as HVOF (High Velocity Oxygen Fuel), plasma spray, flame spray, arc spray and metalizing.
Electroless deposition
Electroless deposition (ED) or electroless plating is defined as the autocatalytic process through which metals and metal alloys are deposited onto nonconductive surfaces. These nonconductive surfaces include plastics, ceramics, and glass, which can then become decorative, anti-corrosive, and conductive depending on their final function. Electroless deposition is a chemical process that creates metal coatings on various materials by autocatalytic chemical reduction of metal cations in a liquid bath.
Characterization
Metallurgists study the microscopic and macroscopic structure of metals using metallography, a technique invented by Henry Clifton Sorby.
In metallography, an alloy of interest is ground flat and polished to a mirror finish. The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history.
Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amount of phases present as well as the degree of strain to which a sample has been subjected.
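Diffraction-based crystallography rests on Bragg's law, nλ = 2d·sin θ, which relates the diffraction angle to the spacing of crystal lattice planes. A minimal sketch, assuming the common Cu Kα X-ray wavelength and a hypothetical diffraction peak:

```python
import math

CU_K_ALPHA_NM = 0.15406  # nm, Cu K-alpha wavelength commonly used in X-ray diffraction

def d_spacing_nm(two_theta_degrees, wavelength_nm=CU_K_ALPHA_NM, order=1):
    """Bragg's law: n * wavelength = 2 * d * sin(theta); returns the plane spacing d."""
    theta = math.radians(two_theta_degrees / 2)
    return order * wavelength_nm / (2 * math.sin(theta))

# Hypothetical peak at 2-theta = 43.3 degrees (near where the copper (111) reflection appears).
print(f"d ≈ {d_spacing_nm(43.3):.4f} nm")  # ≈ 0.209 nm
```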
Advanced characterization techniques frequently used in this field include Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), Electron Backscattered Diffraction (EBSD) and Atom-Probe Tomography (APT).
See also
Adrien Chenot
Archaeometallurgy
Blacksmith
CALPHAD
Carbonyl metallurgy
Cupellation
Experimental archaeometallurgy
Forging
Goldbeating
Gold phosphine complex
Metallurgical failure analysis
Metalworking
Mineral industry
Pyrometallurgy
Welding
References
Metals | Metallurgy | Chemistry,Materials_science,Engineering | 3,801 |
1,975,653 | https://en.wikipedia.org/wiki/Young%20Eagles | The Young Eagles is a program created by the US Experimental Aircraft Association designed to give children between the ages of 8 and 17 an opportunity to experience flight in a general aviation airplane while educating them about aviation. The program is offered free of charge with costs covered by the volunteers. It was launched in 1992 and, by July 2024, had flown more than 2.3 million children in 90 countries, making it the most successful program of its kind in history. The presenting sponsors for it are Phillips 66 and Sporty's Pilot Shop.
Program history
Project Schoolflight, co-founded by EAA founder Paul Poberezny in 1955, served as the inspirational predecessor program to the Young Eagles, ending in 1978. In 1991, a survey of long-time EAA members was conducted to help determine the organization's future priorities. Nearly 92 percent said EAA's primary objective should be to involve more young people in aviation. The survey also showed that a flight experience inspired respondents toward aviation. On May 13, 1992, following several months of coordination by EAA's then-President Tom Poberezny and members of the EAA Board of Directors, management, staff and volunteers, the Young Eagles Program was unveiled at a Washington, D.C. news conference.
The mission of the EAA Young Eagles Program is to provide a meaningful flight experience – free of charge – in a general aviation aircraft for young people (primarily between the ages of 8 and 17). Flights are provided by EAA members worldwide.
The initial goal of the program was to fly one million children prior to the 100th anniversary of flight celebration (Dec. 17, 2003). That goal was achieved on November 13, 2003. An ongoing annual goal of introducing 100,000 young people to the Young Eagles experience was established.
In March 2011, EAA reported the results of a study on the program that showed that program participants were 5.4 times more likely to become a pilot than those who had never participated and that 9% of those new pilots were female, an increase of 50% compared to the general population of pilots, which was 6% female. The study also indicated that the older a child is when taking their flight, the more likely it is that the child will become a pilot, with two out of every 100 participants who are 17 years old going on to complete a pilot certificate.
The program is administered by the Young Eagles Office at EAA headquarters in Oshkosh, Wisconsin.
Since 1994, "International Young Eagles Day," a day set aside to encourage all EAA members and Chapters to participate, has been held annually on the second Saturday of June.
At AirVenture Oshkosh 2012, EAA unveiled a new program called "Eagle Flights," which offers rides for adults.
International Young Eagles
In Canada the Canadian Owners and Pilots Association participated in the Young Eagles program between 1992 and 2008. COPA members had flown more than 81,000 Young Eagles. COPA participation was ended on May 31, 2008, due to insurance concerns.
Pilot participation
More than 43,000 pilots have participated in the program, donating their time and paying the full cost of providing the flights for the children in their own or rented aircraft. While some pilots have only flown a few Young Eagles there are many pilots who have flown more than three thousand children.
In September 2023, EAA volunteer Fred Stadler became the first Young Eagles pilot to fly 10,000 children as part of the program. He started giving Young Eagles flights in 2000.
Program Chairmen
At the program's inception EAA decided to continuously recruit a well-known person and pilot to act as Chairman and raise the profile of the program. The program's founding chairman was Academy Award-winning film actor Cliff Robertson, who served in that capacity from 1992 to 1994. Robertson was succeeded in 1994 by retired USAF General and test pilot Chuck Yeager, the first person to fly faster than the speed of sound. Yeager stepped down as chairman in 2004 and, in March 2004, franchise film actor Harrison Ford became Chairman of the Young Eagles program. Ford has flown more than 300 Young Eagles, including the 2-millionth Young Eagle, in several types of aircraft, and finished his five-year term in 2009. In September 2009, Captain Chesley Sullenberger and First Officer Jeffrey Skiles, who became famous in the US Airways Flight 1549 Hudson River ditching on 15 January 2009, were named as the program's new co-chairmen.
In July 2013, aerobatic world champion pilot Sean D. Tucker replaced Sullenberger and Skiles as chairman. In July 2018, NFL tight end Jimmy Graham joined Tucker as the organization's co-chairman.
Scholarships and sponsors
Rolls-Royce scholarship
In 2010, Rolls-Royce contributed six flight scholarships for basic flight training, and one for advanced training toward a private pilot certificate.
The Next Step
In May 2009, EAA joined with Sporty's Pilot Shop of Batavia, Ohio, to provide the Next Step to the Young Eagles flight experience. Sporty's has made its online Complete Flight Training Course available to any interested Young Eagle following their flight. Sporty's also provides pilot logbooks to allow Young Eagles to record their flight and any subsequent aviation experiences.
Gathering of Eagles
The Gathering of Eagles is an annual fundraiser auction event to support the Young Eagles program. The organization hosts the event each year in the EAA AirVenture Museum during its EAA AirVenture Airshow. Among the items auctioned was an SR-71-themed "Blackbird" Ford Mustang donated by Ford Motor Company, Jack Roush, and EAA member Carroll Shelby.
One-of-a-Kind Auctioned Cars
2006: Shelby GT350H Ford Mustang
2007: Unknown
2008: F-22 Raptor Ford Mustang "AV8R"
2009: P-51 "Dearborn Doll" Ford Mustang "AV-X10"
2010: SR-71 "Blackbird" Ford Mustang
2011: United States Navy Blue Angels Ford Mustang
2012: Tuskegee Airmen "Red Tails" Ford Mustang
2013: United States Air Force Thunderbirds Ford Mustang
2014: F-35 Lightning II Ford Mustang
2015: Apollo Ford Mustang
2016: Bob Hoover P-51 "Old Yeller" GT350 Mustang
2017: 2017 Ford F-150 Raptor F-22 Raptor
2018: 2018 Ford Eagle Squadron Mustang GT
2019: 2019 Ford "Old Crow" Mustang GT
2021: WASP inspired 2021 Ford Mustang Mach-E
2022: 2022 Ford Bronco
See also
Project Schoolflight (predecessor program)
Notes
References
Young Eagles Fact Sheet - accessed 19 August 2006
COPA Young Eagles website - accessed 19 August 2006
External links
Young Eagles
EAA
Aviation in Canada
Aviation in the United States
Experimental Aircraft Association | Young Eagles | Engineering | 1,366 |
44,031 | https://en.wikipedia.org/wiki/Perfect%20matching | In graph theory, a perfect matching in a graph is a matching that covers every vertex of the graph. More formally, given a graph , a perfect matching in is a subset of edge set , such that every vertex in the vertex set is adjacent to exactly one edge in .
A perfect matching is also called a 1-factor; see Graph factorization for an explanation of this term. In some literature, the term complete matching is used.
Every perfect matching is a maximum-cardinality matching, but the opposite is not true. For example, consider the following graphs:
In graph (b) there is a perfect matching (of size 3) since all 6 vertices are matched; in graphs (a) and (c) there is a maximum-cardinality matching (of size 2) which is not perfect, since some vertices are unmatched.
A perfect matching is also a minimum-size edge cover. If there is a perfect matching, then both the matching number and the edge cover number equal |V| / 2.
A perfect matching can only occur when the graph has an even number of vertices. A near-perfect matching is one in which exactly one vertex is unmatched. This can only occur when the graph has an odd number of vertices, and such a matching must be maximum. In the above figure, part (c) shows a near-perfect matching. If, for every vertex in a graph, there is a near-perfect matching that omits only that vertex, the graph is also called factor-critical.
Characterizations
Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching.
The Tutte theorem provides a characterization for arbitrary graphs.
A perfect matching is a spanning 1-regular subgraph, a.k.a. a 1-factor. In general, a spanning k-regular subgraph is a k-factor.
A spectral characterization for a graph to have a perfect matching is given by Hassani Monfared and Mallik as follows: Let G be a graph on an even number n of vertices and let λ1, λ2, ..., λn/2 be distinct nonzero purely imaginary numbers. Then G has a perfect matching if and only if there is a real skew-symmetric matrix A with graph G and eigenvalues ±λ1, ±λ2, ..., ±λn/2. Note that the (simple) graph of a real symmetric or skew-symmetric matrix A of order n has n vertices and edges given by the nonzero off-diagonal entries of A.
Computation
Deciding whether a graph admits a perfect matching can be done in polynomial time, using any algorithm for finding a maximum cardinality matching.
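As an illustration (not part of the article itself), the sketch below uses the networkx library, assuming it is installed, to decide whether small graphs have a perfect matching by first computing a maximum-cardinality matching.

```python
import networkx as nx

# A cycle on an even number of vertices has a perfect matching; an odd cycle cannot.
for n in (6, 5):
    G = nx.cycle_graph(n)
    # Any maximum-cardinality matching algorithm suffices for the decision problem.
    matching = nx.max_weight_matching(G, maxcardinality=True)
    verdict = "perfect" if nx.is_perfect_matching(G, matching) else "not perfect"
    print(f"cycle on {n} vertices: {verdict}")
```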
However, counting the number of perfect matchings, even in bipartite graphs, is #P-complete. This is because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix.
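To make the connection concrete, here is a small illustrative sketch (not from the article) that counts the perfect matchings of a bipartite graph by evaluating the permanent of its biadjacency matrix with Ryser's inclusion–exclusion formula; the exponential running time is consistent with the #P-completeness mentioned above.

```python
from itertools import combinations

def permanent(matrix):
    """Permanent of an n x n 0-1 matrix via Ryser's formula, O(2**n * n**2) time."""
    n = len(matrix)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in matrix:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Biadjacency matrix of the complete bipartite graph K_{3,3}: it has 3! = 6 perfect matchings.
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 6
```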
A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm.
The number of perfect matchings in a complete graph Kn (with n even) is given by the double factorial: (n − 1)!!.
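A minimal illustrative sketch that evaluates the double factorial and cross-checks it against a brute-force count of perfect matchings of the complete graph for small even n:

```python
def double_factorial(m):
    """m!! = m * (m - 2) * (m - 4) * ... down to 1 or 2."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

def count_perfect_matchings_complete(n):
    """Brute-force count for the complete graph K_n (n must be even)."""
    def count(vertices):
        if not vertices:
            return 1
        rest = vertices[1:]
        # Pair the first remaining vertex with each possible partner, then recurse.
        return sum(count(rest[:i] + rest[i + 1:]) for i in range(len(rest)))
    return count(tuple(range(n)))

for n in (2, 4, 6, 8):
    assert count_perfect_matchings_complete(n) == double_factorial(n - 1)
    print(f"K_{n}: {double_factorial(n - 1)} perfect matching(s)")
```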
Connection to Graph Coloring
An edge-colored graph can induce a number of (not necessarily proper) vertex colorings equal to the number of perfect matchings, as every vertex is covered exactly once in each matching. This property has been investigated in quantum physics and computational complexity theory.
Perfect matching polytope
The perfect matching polytope of a graph is a polytope in R^|E| in which each corner is an incidence vector of a perfect matching.
See also
Envy-free matching
Maximum-cardinality matching
Perfect matching in high-degree hypergraphs
Hall-type theorems for hypergraphs
The unique perfect matching problem
References
Matching (graph theory) | Perfect matching | Mathematics | 770 |
37,153 | https://en.wikipedia.org/wiki/Supercomputer | A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 1018 FLOPS, so called exascale supercomputers. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (1011) to tens of teraFLOPS (1013). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.
Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 1990s, with China becoming increasingly active in the field. Lawrence Livermore National Laboratory's El Capitan is the world's fastest supercomputer. The US has five of the top 10; Japan, Finland, Switzerland, Italy and Spain have one each. In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.
History
In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also, among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which then in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas Supervisor swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly and the overheating problem was solved by introducing refrigeration to the supercomputer design. Thus, the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market when one hundred computers were sold at $8 million each.
Cray left CDC in 1972 to form his own company, Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units (CPUs), liquid cooling and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.
Massively parallel designs
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept, the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.
In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics. Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors used GaAs, a material normally reserved for microwave applications due to its toxicity. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics units to the mix.
In 1998, David Bader developed the first Linux supercomputer using commodity parts. While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously. Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Though Linux-based clusters using consumer-grade parts, such as Beowulf, existed prior to the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.
Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available. In another approach, many processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.
As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them. However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application to it. However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.
High-performance computers have an expected life cycle of about three years before requiring an upgrade. The Gyoukou supercomputer is unique in that it uses both a massively parallel design and liquid immersion cooling.
Special purpose supercomputers
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess, Gravity Pipe for astrophysics, MDGRAPE-3 for protein structure prediction and molecular dynamics, and Deep Crack for breaking the DES cipher.
Energy usage and heat management
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity. The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
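A quick sanity check of that operating-cost arithmetic, using only the figures quoted above (illustrative, not an additional data point):

```python
power_mw = 4.0         # approximate power draw quoted above for Tianhe-1A
price_per_kwh = 0.10   # US dollars per kilowatt-hour, as in the example above
hours_per_year = 8760

cost_per_hour = power_mw * 1000 * price_per_kwh
cost_per_year = cost_per_hour * hours_per_year
print(f"${cost_per_hour:,.0f} per hour, about ${cost_per_year / 1e6:.1f} million per year")
# -> $400 per hour, about $3.5 million per year
```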
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.
The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure. However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.
In the Blue Gene system, IBM deliberately used low power processors to deal with heat density. The IBM Power 775, released in 2011, has closely packed elements that require water cooling. The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, Roadrunner by IBM operated at 376 MFLOPS/W. In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W and in June 2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W) with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat, the ability of the cooling systems to remove waste heat is a limiting factor. Many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine; designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited: the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.
Software and system management
Operating systems
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.
Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a full Linux distribution on server and I/O nodes.
While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.
Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux distribution, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.
Software tools and message passing
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software such as Beowulf.
In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL.
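As a minimal illustration of the message-passing style these tools support, the sketch below uses mpi4py (Python bindings for MPI, assuming it and an MPI runtime are installed) to combine partial results computed on separate ranks; it would typically be launched with a command such as `mpiexec -n 4 python script.py`, where the script name is hypothetical.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process within the communicator
size = comm.Get_size()   # total number of MPI processes

# Each rank computes a partial sum over its own slice of the work.
partial = sum(i * i for i in range(rank, 1_000_000, size))

# Combine the partial sums on rank 0; the other ranks receive None.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares of integers below 1,000,000 = {total}")
```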
Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.
Distributed supercomputing
Opportunistic approaches
Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.
The fastest grid computing system is the volunteer computing project Folding@home (F@h), which has reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.
The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of volunteer computing projects, and has recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network.
The Great Internet Mersenne Prime Search (GIMPS), a distributed Mersenne prime search, has achieved about 0.313 PFLOPS through over 1.3 million computers. The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since 1997.
Quasi-opportunistic approaches
Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked geographically disperse computers performs computing tasks that demand huge processing power. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.
High-performance computing clouds
Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud, such as software as a service, platform as a service, and infrastructure as a service. HPC users may benefit from the cloud from different angles, such as scalability, resources being on-demand, fast, and inexpensive. On the other hand, moving HPC applications to the cloud brings a set of challenges too. Good examples of such challenges are virtualization overhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.
In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Rescale, Sabalcore, and Gomput started to offer HPC cloud computing. The Penguin On Demand (POD) cloud is a bare-metal compute model to execute code, but each user is given a virtualized login node. POD computing nodes are connected via non-virtualized 10 Gbit/s Ethernet or QDR InfiniBand networks. User connectivity to the POD data center ranges from 50 Mbit/s to 1 Gbit/s. Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues that virtualization of compute nodes is not suitable for HPC. Penguin Computing has also criticized that HPC clouds may have allocated computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications.
Performance measurement
Capability versus capacity
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems. Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.
Performance metrics
In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). Petascale supercomputers can process one quadrillion (10^15) (1000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS). However, the performance of a supercomputer can be severely impacted by fluctuations brought on by elements like system load, network traffic, and concurrent processes, as mentioned by Brehm and Bruhwiler (2015).
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry. The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list. The LINPACK benchmark typically performs LU decomposition of a large matrix. The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.
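A small illustration (not a real TOP500 measurement) of how an Rmax-style figure is obtained: time a dense LU factorization, count roughly 2/3·n³ floating-point operations, and divide. The sketch assumes NumPy and SciPy are available.

```python
import time
import numpy as np
from scipy.linalg import lu_factor

n = 2000
A = np.random.rand(n, n)

start = time.perf_counter()
lu_factor(A)                  # LU decomposition, the core operation of the LINPACK benchmark
elapsed = time.perf_counter() - start

flops = (2 / 3) * n ** 3      # approximate floating-point operation count for dense LU
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS on this machine for n = {n}")
```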
The TOP500 list
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
This is a list of the computers which appeared at the top of the TOP500 list since June 1993, and the "Peak speed" is given as the "Rmax" rating. In 2018, Lenovo became the world's largest provider for the TOP500 supercomputers with 117 units produced.
Applications
The stages of supercomputer application are summarized in the following table:
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.
Modern weather forecasting relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.
The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.
In early 2020, COVID-19 was front and center in the world. Supercomputers used different simulations to find compounds that could potentially stop the spread. These computers ran for tens of hours, using many CPUs running in parallel to model the different processes.
Development and trends
In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOP (10^18 or one quintillion FLOPS) supercomputer. Erik P. DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21 or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately. Such systems might be built around 2030.
Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; particularly, integro-differential equations describing physical transport processes, the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.
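The following toy sketch (an illustrative assumption, not a real physics code) shows the structure such Monte Carlo workloads share: every particle history runs the same small algorithm over independently generated random numbers, which is why they divide naturally across many processors or, as suggested above, across identical layers. The slab geometry and collision probabilities are arbitrary.

```python
# Toy 1-D Monte Carlo transport sketch: estimate the fraction of particles
# that cross a slab of scattering/absorbing material. Each history is
# independent, so the loop parallelizes trivially.
import random

def transmit_probability(n_particles=50_000, slab_thickness=2.0,
                         sigma_total=1.0, absorb_fraction=0.3):
    transmitted = 0
    for _ in range(n_particles):
        x, direction = 0.0, 1.0                              # start at left face, moving right
        while True:
            x += direction * random.expovariate(sigma_total)  # free flight to next collision
            if x >= slab_thickness:                          # escaped through the far side
                transmitted += 1
                break
            if x < 0.0:                                      # backscattered out of the slab
                break
            if random.random() < absorb_fraction:            # collision outcome: absorption
                break
            direction = random.choice([-1.0, 1.0])           # otherwise scatter isotropically (1-D)
    return transmitted / n_particles

print(transmit_probability())
```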
The cost of operating high performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top 10 supercomputer required in the range of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts. A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing. At the time, a megawatt-year of energy consumption cost about 1 million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible. CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.
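A back-of-envelope sketch of the power argument above; the efficiency figure (FLOPS per watt) is an assumed 2011-era value chosen so that the result roughly reproduces the ~500 MW estimate quoted in the text, and the cost per megawatt-year is the figure cited above.

```python
# Back-of-envelope sketch: power and annual energy cost for an exascale
# machine at an assumed 2011-era efficiency.
target_flops = 1e18                  # 1 exaFLOPS
flops_per_watt = 2e9                 # assumed Green 500-era efficiency
cost_per_megawatt_year = 1_000_000   # US dollars, as cited above

power_megawatts = target_flops / flops_per_watt / 1e6
annual_energy_cost = power_megawatts * cost_per_megawatt_year
print(f"Power: {power_megawatts:.0f} MW")                  # about 500 MW
print(f"Annual energy cost: ${annual_energy_cost:,.0f}")
```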
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched the Partnership for Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across the European Union in porting, scaling and optimizing supercomputing applications. Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros. In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.
In fiction
Examples of supercomputers in fiction include HAL 9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict, Vulcan's Hammer, Colossus, WOPR, AM, and Deep Thought. A supercomputer from Thinking Machines was mentioned as the supercomputer used to sequence the DNA extracted from preserved parasites in the Jurassic Park series.
See also
ACM/IEEE Supercomputing Conference
ACM SIGHPC
High-performance computing
High-performance technical computing
Jungle computing
Metacomputing
Nvidia Tesla Personal Supercomputer
Parallel computing
Supercomputing in China
Supercomputing in Europe
Supercomputing in India
Supercomputing in Japan
SLURM
Testing high-performance computing applications
Ultra Network Technologies
Quantum computing
References
External links
McDonnell, Marshall T. (2013). "Supercomputer Design: An Initial Effort to Capture the Environmental, Economic, and Societal Impacts". Chemical and Biomolecular Engineering Publications and Other Works.
American inventions
Cluster computing
Concurrent computing
Distributed computing architecture
Parallel computing | Supercomputer | Technology | 6,426 |
33,238,388 | https://en.wikipedia.org/wiki/Tensiometer%20%28soil%20science%29 | A tensiometer in soil science is a measuring instrument used to determine the matric water potential (soil moisture tension) in the vadose zone. This device typically consists of a glass or plastic tube with a porous ceramic cup and is filled with water. The top of the tube has either a built-in vacuum gauge or a rubber cap used with a portable puncture tensiometer instrument, which uses a hypodermic needle to measure the pressure inside the tensiometer. The tensiometer is buried in the soil, and a hand pump is used to pull a partial vacuum. As water is pulled out of the soil by plants and evaporation, the vacuum inside the tube increases. When the soil is wetted, flow can also occur in the reverse direction: as water is added to the soil, water enters the tube through the porous cup and the vacuum decreases. When the water pressure in the tensiometer is in equilibrium with the water pressure in the soil, the tensiometer gauge reading represents the matric potential of the soil. | Tensiometer (soil science) | Physics,Environmental_science | 432 |
Such tensiometers are used in irrigation scheduling to help farmers and other irrigation managers to determine when to water. In conjunction with a water retention curve, tensiometers can be used to determine how much to water. With practice, a tensiometer can be a useful tool for these purposes. Soil tensiometers can also be used in the scientific study of soils and plants.
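A hypothetical sketch of how a reading might be interpreted for irrigation scheduling follows. It assumes the gauge sits above the ceramic cup, so the hanging water column in the tube adds roughly 0.0981 kPa of suction per centimetre of height difference, which is subtracted out; the 40 kPa trigger is an arbitrary example value, not a recommendation for any particular crop or soil.

```python
# Sketch of converting a tensiometer gauge reading into a corrected soil
# suction and an irrigation decision. All numeric values are assumptions.
KPA_PER_CM_WATER = 0.0981   # hydrostatic head of 1 cm of water, in kPa

def soil_matric_suction(gauge_kpa, gauge_height_above_cup_cm):
    """Suction (kPa) at the cup, correcting the gauge reading for the water column."""
    return gauge_kpa - KPA_PER_CM_WATER * gauge_height_above_cup_cm

def needs_irrigation(gauge_kpa, gauge_height_above_cup_cm, trigger_kpa=40.0):
    """True when the corrected suction has reached the example trigger value."""
    return soil_matric_suction(gauge_kpa, gauge_height_above_cup_cm) >= trigger_kpa

print(soil_matric_suction(gauge_kpa=45.0, gauge_height_above_cup_cm=30.0))  # ~42.1 kPa
print(needs_irrigation(gauge_kpa=45.0, gauge_height_above_cup_cm=30.0))     # True
```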
References
Rawls, W.J., Ahuja, L.R., Brakensiek, D.L., and Shirmohammadi, A. 1993. Infiltration and soil water movement, in Maidment, D.R., Ed., Handbook of hydrology, New York, NY, USA, McGraw-Hill, p. 5.1–5.51.
External links
The Experimental Hydrology Wiki Soil matric potential - tensiometer (T4)
The Experimental Hydrology Wiki Soil matric potential - tensiometer (T5)
Irrigation
Soil physics
Earth observation in-situ sensors | Tensiometer (soil science) | Physics,Environmental_science | 432 |
18,390,089 | https://en.wikipedia.org/wiki/Russula%20adusta | Russula adusta, commonly known as the blackening brittlegill or blackening russula, is a species of gilled mushroom. It is a member of the Russula subgenus Compactae. The cap is brown to gray and somewhat shiny, with a mild taste and, reportedly, an odor of empty wine barrels. It has a propensity to turn black from cutting or bruising and has white spores. Similar species include Russula albonigra and R. densifolia.
Russula adusta is found in woodlands of Europe and North America, growing under conifers.
Taxonomy
Russula adusta was first described by the French mycologist Pierre Bulliard in 1785 as Agaricus nigricans, before gaining its current binomial name from the Swedish mycologist Elias Magnus Fries.
Description
This is a large member of the genus Russula, and it has a cap that is dirty white when young, but swiftly turns brown, and then black on aging. It measures in diameter. There is usually a large depression in the centre of mature caps, which are three quarter peeling. The stem is white, firm, and straight, measuring long and wide; it too blackens with age. The gills are off-white initially, very widely spaced, and are adnate. These turn red; then grey, and finally black, when bruised. The flesh, which has a fruity smell, when cut turns pale Indian red, and then grey, and black within 20 minutes. The spore print is white, and the warty oval spores measure 7–8 x 6–7 μm.
Old specimens are sometimes parasitised by fungi of the genus Asterophora or Nyctalis, in particular the species N. parasitica and N. asterophora (the pick-a-back toadstool).
Similar species
Species that also bruise red then black include Russula acrifolia and R. dissimulans.
Russula albonigra has closer gills and is far less common. It bruises directly to black, lacking the red intermediary phase.
Distribution and habitat
Russula adusta appears in late summer and autumn in both deciduous and coniferous woodland, under conifer trees, across Britain, northern Europe, and North America. In North America, it appears in the Pacific Northwest and northern California from October to February.
Toxicity
The species contains toxins which could cause gastrointestinal upset.
References
External links
Rogers Mushrooms – Russula adusta via the Wayback Machine
Savuhapero, svedkremla via the Wayback Machine
adusta
Fungi of Europe
Fungi of North America
Fungus species | Russula adusta | Biology | 547 |
8,952,788 | https://en.wikipedia.org/wiki/Mycovirus | Mycoviruses (Ancient Greek: μύκης ("fungus") + Latin virus), also known as mycophages, are viruses that infect fungi. The majority of mycoviruses have double-stranded RNA (dsRNA) genomes and isometric particles, but approximately 30% have positive-sense, single-stranded RNA (+ssRNA) genomes.
True mycoviruses demonstrate an ability to be transmitted to infect other healthy fungi. Many double-stranded RNA elements that have been described in fungi do not fit this description, and in these cases they are referred to as virus-like particles or VLPs. Preliminary results indicate that most mycoviruses co-diverge with their hosts, i.e. their phylogeny is largely congruent with that of their primary hosts. However, many virus families containing mycoviruses have only sparsely been sampled. Mycovirology is the study of mycoviruses. It is a special subdivision of virology and seeks to understand and describe the taxonomy, host range, origin and evolution, transmission and movement of mycoviruses and their impact on host phenotype.
History
The first record of an economic impact of mycoviruses on fungi was recorded in cultivated mushrooms (Agaricus bisporus) in the late 1940s and was called the La France disease. Hollings found more than three different types of viruses in the abnormal sporophores. This report essentially marks the beginning of mycovirology.
The La France Disease is also known as X disease, watery stripe, dieback and brown disease. Symptoms include:
Reduced yield
Slow and aberrant mycelial growth
Waterlogging of tissue
Malformation
Premature maturation
Increased post-harvest deterioration (reduced shelf life)
Mushrooms have shown no resistance to the virus, and so control has been limited to hygienic practises to stop the spread of the virus.
Perhaps the best known mycovirus is Cryphonectria parasitica hypovirus 1 (CHV1). CHV1 is exceptional within mycoviral research for its success as a biocontrol agent against the fungus C. parasitica, the causative agent of chestnut blight, in Europe, but also because it is a model organism for studying hypovirulence in fungi. However, this system is only being used in Europe routinely because of the relatively small number of vegetative compatibility groups (VCGs) on the continent. By contrast, in North America the distribution of the hypovirulent phenotype is often prevented because an incompatibility reaction prevents fungal hyphae from fusing and exchanging their cytoplasmic content. In the United States, at least 35 VCGs were found. A similar situation seems to be present in China and Japan, where 71 VCGs have been identified so far.
Taxonomy
The majority of mycoviruses have double-stranded RNA (dsRNA) genomes and isometric particles, but approximately 30% have positive-sense, single-stranded RNA (+ssRNA) genomes. However, negative single-stranded RNA viruses and single-stranded DNA viruses have also been described. The updated 9th ICTV report on virus taxonomy lists over 90 mycovirus species covering 10 viral families, of which 20% were not assigned to a genus or, in some cases, not even to a family.
Isometric forms predominate mycoviral morphologies in comparison to rigid rods, flexuous rods, club-shaped particles, enveloped bacilliform particles, and Herpesvirus-like viruses. The lack of genomic data often hampers a conclusive assignment to already established groups of viruses or makes it impossible to erect new families and genera. The latter is true for many unencapsidated dsRNA viruses, which are assumed to be viral, but missing sequence data has prevented their classification so far. So far, viruses of the families Partitiviridae, Totiviridae, and Narnaviridae are dominating the "mycovirus sphere".
Host range and incidence
Mycoviruses are common in fungi (Herrero et al., 2009) and are found in all four phyla of the true fungi: Chytridiomycota, Zygomycota, Ascomycota and Basidiomycota. Fungi are frequently infected with two or more unrelated viruses and also with defective dsRNA and/or satellite dsRNA. There are also viruses that simply use fungi as vectors and are distinct from mycoviruses because they cannot reproduce in the fungal cytoplasm.
It is generally assumed that the natural host range of mycoviruses is confined to closely related vegetative compatibility groups (VCGs) which allow for cytoplasmic fusion, but some mycoviruses can replicate in taxonomically different fungal hosts. Good examples are mitoviruses found in the two fungal species S. homoeocarpa and Ophiostoma novo-ulmi. Nuss et al. (2005) described that it is possible to extend the natural host range of C. parasitica hypovirus 1 (CHV1) to several fungal species that are closely related to C. parasitica using in vitro virus transfection techniques. CHV1 can also propagate in the genera Endothia and Valsa, which belong to the two distinct families Cryphonectriaceae and Diaporthaceae, respectively. Furthermore, some human pathogenic fungi are also found to be naturally infected with mycoviruses, including AfuPmV-1 of Aspergillus fumigatus and TmPV1 of Talaromyces marneffei (formerly Penicillium marneffei).
In one study, forty patients with acute lymphoblastic leukemia were found to have antibodies to a mycovirus-containing Aspergillus flavus. In another research report, exposure of mononuclear cells from patients with acute lymphoblastic leukemia in full remission to the same mycovirus-containing Aspergillus flavus resulted in the re-development of the genetic and cell surface phenotypes characteristic of acute lymphoblastic leukemia.
Origin and evolution
Viruses consisting of dsRNA as well as ssRNA are assumed to be very ancient and presumably originated from the "RNA world" as both types of RNA viruses infect bacteria as well as eukaryotes. Although the origin of viruses is still not well understood, recently presented data suggest that viruses may have invaded the emerging "supergroups" of eukaryotes from an ancestral pool during a very early stage of life on earth. According to Koonin, RNA viruses colonized eukaryotes first and subsequently co-evolved with their hosts. This concept fits well with the proposed "ancient co-evolution hypothesis", which also assumes a long co-evolution of viruses and fungi. The "ancient co-evolution hypothesis" could explain why mycoviruses are so diverse.
It has also been suggested that it is very likely that plant viruses containing a movement protein evolved from mycoviruses by introducing an extracellular phase into their life cycle rather than eliminating it. Furthermore, the recent discovery of an ssDNA mycovirus has tempted some researchers to suggest that RNA and DNA viruses might have common evolutionary mechanisms. However, there are many cases where mycoviruses are grouped together with plant viruses. For example, CHV1 showed phylogenetic relatedness to the ssRNA genus Potyvirus, and some ssRNA viruses, which were assumed to confer hypovirulence or debilitation, were often found to be more closely related to plant viruses than to other mycoviruses. Therefore, another theory arose that these viruses moved from a plant host to a plant pathogenic fungal host or vice versa. This "plant virus hypothesis" may not explain how mycoviruses developed originally, but it could help to understand how they evolved further.
Transmission
A significant difference between the genomes of mycoviruses to other viruses is the absence of genes for ‘cell-to-cell movement’ proteins. It is therefore assumed that mycoviruses only move intercellularly during cell division (e.g. sporogenesis) or via hyphal fusion. Mycoviruses might simply not need an external route of infection as they have many means of transmission and spread due to their fungal host's life style:
Plasmogamy and cytoplasmic exchange over extended periods of time
Production of vast amounts of asexual spores
Overwintering via sclerotia
More or less effective transmission into sexual spores
However, there are potential barriers to mycovirus spread due to vegetative incompatibility and variable transmission to sexual spores. Transmission to sexually produced spores can range from 0% to 100% depending on the virus-host combination. Transmission between species of the same genus sharing the same habitat has also been reported, including Cryphonectria (C. parasitica and C. sp), Sclerotinia (Sclerotinia sclerotiorum and S. minor), and Ophiostoma (O. ulmi and O. novo-ulmi). Cross-genus transmission has also been reported between Fusarium poae and black Aspergillus isolates. However, it is not known how fungi overcome the genetic barrier; whether there is some form of recognition process during physical contact or some other means of exchange, such as vectors. Research using Aspergillus species indicated that transmission efficiencies might depend on the host's viral infection status (infected with no, different, or same virus), and that mycoviruses might play a role in the regulation of secondary mycoviral infection. Whether this is also true for other fungi is not yet known. In contrast to acquiring mycoviruses spontaneously, the loss of mycoviruses seems very infrequent and suggests that either viruses actively moved into spores and new hyphal tips, or the fungus might facilitate the mycoviral transport in some other way.
Movement of mycoviruses within fungi
Although it is not known yet whether viral transport is an active or passive process, it is generally assumed that fungal viruses move forward by plasma streaming. Theoretically they could drift with the cytoplasm as it extends into new hyphae, or attach to the web of microtubules, which would drag them through the internal cytoplasmic space. That might explain how they pass through septa and bypass Woronin bodies. However, some researchers have found them located next to septum walls, which could imply that they ‘got stuck’ and were not able to move actively forward themselves. Others have suggested that the transmission of viral mitochondrial dsRNA may play an important role in the movement of mitoviruses found in Botrytis cinerea.
Impact on host phenotype
Phenotypic effects of mycoviral infections can vary from advantageous to deleterious, but most of them are asymptomatic or cryptic. The connection between phenotype and mycovirus presence is not always straightforward. Several reasons may account for this. First, the lack of appropriate infectivity assays often hindered the researcher from reaching a coherent conclusion. Secondly, mixed infection or unknown numbers of infecting viruses make it very difficult to associate a particular phenotypic change with the investigated virus.
Although most mycoviruses often do not seem to disturb their host's fitness, this does not necessarily mean they are living unrecognized by their hosts. A neutral co-existence might just be the result of a long co-evolutionary process. Accordingly, symptoms may only appear when certain conditions of the virus–fungus system change and get out of balance. This could be external (environmental) as well as internal (cytoplasmic). It is not known yet why some mycovirus–fungus combinations are typically detrimental while others are asymptomatic or even beneficial. Nevertheless, harmful effects of mycoviruses are economically interesting, especially if the fungal host is a phytopathogen and the mycovirus could be exploited as a biocontrol agent. The best example is represented by the case of CHV1 and C. parasitica. Other examples of deleterious effects of mycoviruses are the ‘La France’ disease of A. bisporus and the mushroom diseases caused by Oyster mushroom spherical virus and Oyster mushroom isometric virus.
In summary, the main negative effects of mycoviruses are:
Decreased growth rate
Lack of sporulation
Change of virulence
Reduced germination of spores
Hypovirulent phenotypes do not appear to correlate with specific genome features and it seems there is not one particular metabolic pathway causing hypovirulence but several. In addition to negative effects, beneficial interactions do also occur. Well-described examples are the killer phenotypes in yeasts and Ustilago. Killer isolates secrete proteins that are toxic to sensitive cells of the same or closely related species while the producing cells themselves are immune. Most of these toxins degrade the cell membrane. There are potentially interesting applications of killer isolates in medicine, the food industry, and agriculture. A three-part system involving a mycovirus of an endophytic fungus (Curvularia protuberata) of the grass Dichanthelium lanuginosum has been described, which provides a thermal tolerance to the plant, enabling it to inhabit adverse environmental niches. In medically important fungi, an uncharacterized A78 virus of A. fumigatus causes a mild hypervirulence effect when tested in Galleria mellonella (greater wax moth). Furthermore, TmPV1, a dsRNA partitivirus of Talaromyces marneffei (formerly Penicillium marneffei), was found to cause a hypervirulent phenotype in T. marneffei when tested in a mouse model. These findings suggest that mycoviruses may play important roles in the pathogenesis of human pathogenic fungi.
Classification
Most fungal viruses are double-stranded RNA viruses, but about 30% are positive-strand RNA viruses.
However, negative single-stranded RNA viruses and single-stranded DNA viruses have also been described. The ninth edition of the report of the International Committee on Taxonomy of Viruses lists more than 90 fungal viruses across 10 families, of which about 20% have not been assigned to a genus or family (incertae sedis) due to insufficient sequence data. The shape of most fungal viruses is isometric.
References
Tebbi CK, Badiga A, Sahakian E, Arora AI, Nair S, Powers JJ, Achille AN, Jaglal MV, Patel S, Migone F. Plasma of Acute Lymphoblastic Leukemia Patients React to the Culture of a Mycovirus Containing Aspergillus flavus. J Pediatr Hematol Oncol. 2020 Jul;42(5):350-358. doi: 10.1097/MPH.0000000000001845. PMID 32576782.
Tebbi CK, Badiga A, Sahakian E, Powers JJ, Achille AN, Patel S, Migone F. Exposure to a mycovirus containing Aspergillus Flavus reproduces acute lymphoblastic leukemia cell surface and genetic markers in cells from patients in remission and not controls. Cancer Treat Res Commun. 2021;26:100279. doi: 10.1016/j.ctarc.2020.100279. Epub 2020 Dec 11. PMID 33348275.
Further reading
External links
Mycology | Mycovirus | Biology | 3,260 |
977,799 | https://en.wikipedia.org/wiki/HCG%2087 | HCG 87 is a compact group of galaxies listed in the Hickson Compact Group Catalogue. This group is about 400 million light-years away in the constellation Capricornus.
The group distinguishes itself as one of the most compact groups of galaxies, hosting two active galactic nuclei and a starburst among its three members, all of which show signs of interaction. This interaction, which astronomers have called visually and scientifically intriguing, is being examined to understand the influence of active nuclei on star formation histories.
Members
External links
Astronomy Picture of the Day
Galaxy Group HCG 87 – 2003 July 31
HCG 87: A Small Group of Galaxies – 2010 July 6
Close-ups of HCG 87
Galactic Clusters
Studies of Hickson Compact Groups
References
87
Galaxy clusters
Capricornus | HCG 87 | Astronomy | 157 |
50,810,237 | https://en.wikipedia.org/wiki/Jack%27d | Jack'd is a location-based chat and dating app catering to gay and bisexual men. It is available for Android, iPhone, and Windows phones. Jack'd was previously owned by Online Buddies, owner of Manhunt. In 2019, Perry Street Software, the parent company of Scruff, bought Jack’d for an undisclosed sum.
Controversies
On June 13, 2016, the Los Angeles Times reported that Omar Mateen was a Jack'd user for at least a year prior to the Orlando nightclub shooting in which he killed 49 people and wounded 53 others. Jack'd was not able to substantiate those claims.
On February 5, 2019, technology news outlet The Register reported a security flaw in the app in which users' private photos could be publicly viewed by anybody aware of the flaw. On February 7, 2019, Jack'd fixed the bug. On June 28, 2019, the Office of the Attorney General of New York announced that Online Buddies, Inc. will pay the state $240,000 to settle the privacy complaint and that the company would implement a "comprehensive security program" to prevent similar incidents in the future. In a statement, New York State Attorney General Letitia James said, “[Jack'd] put users’ sensitive information and private photos at risk of exposure and [Online Buddies] didn't do anything about it for a full year just so they could continue to make a profit.”
See also
Homosocialization
JOYclub
Timeline of online dating services
Tinder
References
2010 software
Android (operating system) software
Geosocial networking
Internet properties established in 2010
iOS software
LGBTQ social networking services
Mobile social software
Online dating applications
Online dating services
LGBTQ online dating services
Social networking services
Technology companies established in 2010 | Jack'd | Technology | 355 |
29,760,827 | https://en.wikipedia.org/wiki/Gebhart%20factor | The Gebhart factors are used in radiative heat transfer; they describe the fraction of the radiation emitted by a given surface that is absorbed by another surface, and thus serve as radiation exchange factors between a number of surfaces. The Gebhart factors calculation method is supported in several radiation heat transfer tools, such as TMG and TRNSYS.
The method was introduced by Benjamin Gebhart in 1957. Although the view factors must be calculated beforehand, the method requires less computational power than ray tracing with the Monte Carlo method (MCM). An alternative is the radiosity approach, which Hottel and others built upon.
Equations
The Gebhart factor can be given as:
B_ij = (energy absorbed at surface A_j that originated as emission from surface A_i) / (total radiation emitted from surface A_i).
The Gebhart factor approach assumes that the surfaces are gray and that they emit and are illuminated diffusely and uniformly.
This can be rewritten as:
B_ij = Q_{i→j} / (ε_i A_i σ T_i^4)
where
B_ij is the Gebhart factor
Q_{i→j} is the heat transfer from surface i to j
ε_i is the emissivity of the surface
A_i is the surface area
T_i is the temperature
σ is the Stefan–Boltzmann constant
The denominator can also be recognized from the Stefan–Boltzmann law.
The factor can then be used to calculate the net energy transferred from one surface to all others, for an opaque surface given as:
Q_i(net) = A_i ε_i σ T_i^4 − Σ_j A_j ε_j σ T_j^4 B_ji
where
Q_i(net) is the net heat transfer for surface i
Looking at the geometric relation, it can be seen that:
A_i ε_i B_ij = A_j ε_j B_ji
This can be used to write the net energy transfer from one surface to another, here for 1 to 2:
Q_{1→2}(net) = A_1 ε_1 B_12 σ (T_1^4 − T_2^4)
Realizing that this can be used to find the heat transferred (Q), which was used in the definition, and using the view factors as an auxiliary equation, it can be shown that the Gebhart factors are:
B_ij = F_ij ε_j + Σ_k F_ik (1 − ε_k) B_kj
where
F_ij is the view factor for surface i to j
And also, from the definition we see that the sum of the Gebhart factors must be equal to 1.
Several approaches exist to describe this as a system of linear equations that can be solved by Gaussian elimination or similar methods. For simpler cases it can also be formulated as a single expression.
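A minimal numerical sketch of the linear-system formulation follows, assuming the relation B_ij = F_ij ε_j + Σ_k F_ik (1 − ε_k) B_kj given above for gray, diffuse surfaces; the geometry (two concentric spheres), emissivities, and temperatures are illustrative assumptions.

```python
# Solve for the Gebhart factor matrix B from view factors F and emissivities,
# then compute net heat transfer per surface. Example data are assumptions.
import numpy as np

def gebhart_factors(F, eps):
    """Solve (I - F(I - E)) B = F E for the Gebhart factor matrix B."""
    E = np.diag(eps)
    I = np.eye(len(eps))
    return np.linalg.solve(I - F @ (I - E), F @ E)

A = np.array([1.0, 2.0])                 # surface areas, m^2
F = np.array([[0.0, 1.0],                # view factors; inner sphere sees only the outer
              [0.5, 0.5]])               # rows sum to 1
eps = np.array([0.8, 0.6])               # emissivities
T = np.array([500.0, 300.0])             # temperatures, K
sigma = 5.670e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4

B = gebhart_factors(F, eps)
print(B, B.sum(axis=1))                  # each row sums to 1, as required

# Net heat transfer for each surface: own emission minus absorbed radiation.
emission = eps * A * sigma * T ** 4
Q_net = emission - B.T @ emission        # Q_i = e_i - sum_j e_j * B_ji
print(Q_net)                             # the two values sum to zero
```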
See also
Radiosity
Thermal radiation
Black body
References
Heat transfer | Gebhart factor | Physics,Chemistry | 427 |
60,421,497 | https://en.wikipedia.org/wiki/Decolonization%20%28medicine%29 | Decolonization, also bacterial decolonization, is a medical intervention that attempts to rid a patient of an antimicrobial resistant pathogen, such as methicillin-resistant Staphylococcus aureus (MRSA) or antifungal-resistant Candida.
By pre-emptively treating patients who have become colonized with an antimicrobial resistant organism, the likelihood of the patient going on to develop life-threatening healthcare-associated infections is reduced. Common sites of bacterial colonization include the nasal passage, groin, oral cavity and skin.
History
In cooperation with the Centers for Disease Control and Prevention (CDC), the Chicago Antimicrobial Resistance and Infection Prevention Epicenter (C-PIE), Harvard/Irvine Bi-Coastal Epicenter, and Washington University and Barnes Jewish County (BJC) Center for Prevention of Healthcare-Associated Infections conducted a study to test different strategies to prevent and decrease the rate of healthcare-associated infections (HAIs). REDUCE MRSA, which stands for Randomized Evaluation of Decolonization vs. Universal Clearance to Eliminate methicillin-resistant Staphylococcus aureus (MRSA), was completed in September 2011. This study determined decolonization with chlorhexidine and mupirocin of all patients without screening was the most effective method of reducing the presence of MRSA and the overall number of bloodstream infections.
Medical uses
Decolonization is used to reduce rates of infections caused by MRSA. Staphylococcus aureus (S. aureus) is a common cause of hospital related infections, including bloodstream infections and infections of the heart and bone. Additionally, increasing cases of methicillin-susceptible S. aureus (MSSA) and MRSA pose a new challenge as these strains are difficult or impossible to treat with standard antibiotic regimens. Because of the prevalence of S. aureus within the general population and significant number of severe infections caused by this bacteria, decolonization protocols have been implemented in many hospital networks to decrease MRSA infections. By using disinfectants over an extended period of time, decolonization decreases or minimizes patient bacterial load.
Technique
There are several decolonization regimens currently used for MRSA decolonization. Targeted decolonization involves screening patients for MRSA then isolating and implementing decolonization protocols only for patients who test positive for MRSA. On the other hand, universal decolonization involves no screening and decolonization for all patients in a given hospital setting or department.
Products used for decolonization typically involve chlorhexidine rinses for bathing or showering, a mouthwash to clean the oral cavity, and a nasal spray containing mupirocin. It is important to include a mouthwash and nasal spray as individuals commonly carry MRSA in the nose, mouth, and throat. Chlorhexidine is a disinfectant that is used to disinfect skin prior to surgery, for surgical instrument sterilization, and in hand disinfectants in healthcare settings. In the mouthwash form, it is commonly used for gingivitis. Mupirocin is a topical antibiotic commonly used for superficial skin infections and has been approved by the FDA for nasal decolonization. Though these are the most commonly used products, there are a number of alternative antibiotics and antiseptics, like povidone-iodine, that are used in decolonization.
Typically, patients use chlorhexidine shampoo or body wash daily and mupirocin nasal spray twice daily. The duration of product use for optimal effect is still being studied, but the most widely studied regimen recommends use of the products as mentioned previously for five days twice a month over a six-month period. There is limited data supporting decolonization or recommendations on the duration of decolonization in outpatient settings.
Risks and complications
Decolonization is a relatively safe medical intervention. Local skin irritation is the most common side effect.
See also
Antibiotic
Antifungal
Antiviral drug
References
Bacteria and humans
Antimicrobial resistance | Decolonization (medicine) | Biology | 840 |
77,772,029 | https://en.wikipedia.org/wiki/HD%2032667 | HD 32667 is a hierarchical triple star system located about away in the southern constellation of Lepus. The brightest of the three components, and the only one visible, is a hot white subgiant star. With an apparent magnitude of 5.582, it is faintly visible to the naked eye in dark skies. In Chinese astronomy, the star was given the name Jiǔ yóu zēng qī, meaning it was the seventh star added to the asterism Jiǔ yóu ("Imperial Military Flag") in the Net mansion, when the star chart was compiled between 1744 and 1752.
The star is listed in the Catalogue of Ap, HgMn and Am Stars as an A2-type Am star designated Renson 8370, although astronomer Dorrit Hoffleit suggested the contrary, classing it as an A3 weak-line star.
Stellar companions
HD 32667 Ab
Radial velocity variations were reported as early as 1930, indicating the existence of an unresolved companion (HD 32667 Ab) orbiting close to the primary star. The 1991 edition of the Bright Star Catalogue lists HD 32667 (HR 1645) as a spectroscopic binary. However, this secondary star would remain hardly studied, with existing measurements being of "very bad" quality. In 2019, rough constraints were made on the nature of the secondary, namely that it does not weigh more than 1.44 , has either a highly eccentric (e~0.8) 46-day orbit or a 4-day orbit with an indeterminate eccentricity, and has a substantial magnitude difference with the brighter primary. Further research is needed to determine its precise characteristics.
HD 32667 B
A distant red dwarf companion revolving around the inner binary (Aa/Ab) was discovered in 2019 from data collected by the Gemini Planet Imager. The discovery paper described it as a 110.3 Jupiter-mass (0.1053 solar mass) ultra-cool dwarf with a spectral type of M8, located at a separation of 0.533" from the inner binary. A 2023 study presented a semi-major axis of , a substantially higher mass of 0.21 solar masses, and a spectral type of M4V.
References
A-type subgiants
M-type main-sequence stars
Triple star systems
Lepus (constellation)
Leporis, 10
032667
CD-24 02795
J05035326-2423174
023554
1645 | HD 32667 | Astronomy | 502 |
2,272,644 | https://en.wikipedia.org/wiki/Daina%20Taimi%C5%86a | Daina Taimiņa (born August 19, 1954) is a Latvian mathematician, retired adjunct associate professor of mathematics at Cornell University, known for developing a way of modeling hyperbolic geometry with crocheted objects.
Education and career
Taimiņa received all of her formal education in Riga, Latvia, where in 1977 she graduated summa cum laude from the University of Latvia and completed her graduate work in Theoretical Computer Science (with thesis advisor Prof. Rūsiņš Mārtiņš Freivalds) in 1990. As one of the restrictions of the Soviet system at that time, a doctoral thesis was not allowed to be defended in Latvia, so she defended hers in Minsk, receiving the title of Candidate of Sciences. This explains the fact that Taimiņa's doctorate was formally issued by the Institute of Mathematics of the National Academy of Sciences of Belarus. After Latvia regained independence in 1991, Taimiņa received her higher doctoral degree (doktor nauk) in mathematics from the University of Latvia, where she taught for 20 years.
Daina Taimiņa joined the Cornell Math Department in December 1996.
Combining her interests in mathematics and crocheting, she is one of 24 mathematicians and artists who make up the Mathemalchemy Team.
Hyperbolic crochet
While attending a geometry workshop at Cornell University about teaching geometry for university professors in 1997, Taimiņa was presented with a fragile paper model of a hyperbolic plane, made by the professor in charge of the workshop, David Henderson (designed by geometer William Thurston). It was made "out of thin, circular strips of paper taped together". She decided to make more durable models, and did so by crocheting them. On the first night after seeing the paper model at the workshop, she began experimenting with algorithms for a crocheting pattern, after visualising hyperbolic planes as exponential growth.
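A small sketch of the stitch-count arithmetic behind such a pattern follows; the rule of working one extra stitch after every n ordinary stitches is a common way to crochet a surface of roughly constant negative curvature and is an assumption here, not Taimiņa's published pattern.

```python
# Stitch counts per row when one extra stitch is added after every
# `increase_every` ordinary stitches, so each row grows by roughly a
# constant factor (increase_every + 1) / increase_every.
def hyperbolic_rows(start_stitches=20, increase_every=5, rows=10):
    counts = [start_stitches]
    for _ in range(rows - 1):
        prev = counts[-1]
        counts.append(prev + prev // increase_every)   # one increase per full group
    return counts

print(hyperbolic_rows())   # [20, 24, 28, 33, 39, 46, 55, 66, 79, 94]
```

The roughly geometric growth of the row lengths is what gives the finished object its characteristic ruffled, negatively curved shape.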
The following fall, Taimiņa was scheduled to teach a geometry class at Cornell. She was determined to find what she thought was the best possible way to teach her class. So while she, together with her family, spent the preceding summer at a tree farm in Pennsylvania, she also spent her days by the pool watching her two daughters learning how to swim whilst simultaneously making a classroom set of models of the hyperbolic plane. This was the first ever made from yarn and crocheting.
The models made a significant difference to her students, according to themselves. They said they "liked the tactile way of exploring hyperbolic geometry" and that it helped them acquire experiences that helped them move on in said geometry. This was what Taimina herself had been missing when first learning about hyperbolic planes and is also what has made her models so effective, as these models have later become the preferred way of explaining hyperbolic space within geometry.
In a TEDxRiga talk, Taimiņa tells the story of how the need for a visual, intuitive way of understanding hyperbolic planes spurred her toward inventing crocheted geometry models. In the talk she also gives a basic introduction to hyperbolic geometry using her models, as well as recounting some of the negative responses she initially received from those who viewed crocheting as unfitting for mathematics.
In the foreword to Taimiņa's book Crocheting Adventures with Hyperbolic Planes, mathematician William Thurston, the designer of the paper model of hyperbolic planes, called Taimiņa's models "deceptively interesting". He attributed much of his view of them to how they make possible a tactile, non-symbolic, cognitively holistic way of understanding non-Euclidean geometry, a highly abstract and complex part of mathematics.
Taimiņa has led several workshops at Cornell University for college geometry instructors together with professor David Henderson (of the aforementioned 1997 workshop and who later became her husband).
Crocheted mathematical models later appeared in three geometry textbooks they wrote together, of which the most popular is Experiencing Geometry: Euclidean and non-Euclidean with History. In 2020, Taimiņa published the 4th edition of this book as the open-source Experiencing Geometry.
An article about Taimiņa's innovation in New Scientist was spotted by the Institute For Figuring, a small non-profit organisation based in Los Angeles, and she was invited to speak about hyperbolic space and its connections with nature to a general audience which included artists and movie producers. Taimiņa's initial lecture and subsequent public presentations sparked great interest in this new tactile way of exploring concepts of hyperbolic geometry, making this advanced topic accessible to wide audiences. Originally creating purely mathematical models, Taimiņa soon became popular as a fiber artist and public presenter for general audiences of ages five and up. In June 2005, her work was first shown as art in an exhibition "Not The Knitting You Know" at Eleven Eleven Sculpture Space, an art gallery in Washington, D.C. Since then she has participated regularly in various shows in galleries in the US, UK, Latvia, Italy, Belgium, Ireland, and Germany. Her artwork is in the collections of several private collectors, colleges and universities, and has been included in the American Mathematical Model Collection of the Smithsonian Museum, Cooper–Hewitt, National Design Museum, and Institut Henri Poincaré.
Her work and its far-flung influence have received wide interest in the media. It has been written about in 'Knit Theory' in Discover magazine and in The Times, which explained how a hyperbolic plane can be crocheted by increasing the number of stitches in each row.
Margaret Wertheim interviewed Daina Taimiņa and David Henderson for Cabinet Magazine
Later, based on Taimiņa's work, the Institute For Figuring published a brochure "A Field Guide to Hyperbolic Space". In 2005 the IFF decided to incorporate Taimiņa's ideas and approach of explaining hyperbolic space in their mission of popularizing mathematics, and curated an exhibition at Machine Project gallery, which was the subject of a piece in the Los Angeles Times.
Taimiņa's way of exploring hyperbolic space via crochet and connections with nature, combatting math phobia, was adapted by Margaret Wertheim in her talks and became highly successful in the IFF-curated Hyperbolic Crochet Coral Reef project.
Books
Taimiņa's book "Crocheting Adventures with Hyperbolic Planes" (A K Peters, Ltd., 2009, ) won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year.
It also won the 2012 Euler Book Prize of the Mathematical Association of America.
Taimiņa also contributed to David W. Henderson's book Differential Geometry: A Geometric Introduction (Prentice Hall, 1998) and, with Henderson, wrote Experiencing Geometry: Euclidean and Non-Euclidean with History (Prentice Hall, 2005).
See also
Mathematics and fiber arts
Notes
References
David W. Henderson, Daina Taimina Experiencing Geometry: Euclidean and non-Euclidean with History, Pearson Prentice Hall, 2005 Experiencing Geometry
Further reading
.
External links
Personal web page at Cornell University
.
.
.
.
TEDxRiga talk Crocheting Hyperbolic Planes: Daina Taimiņa at TEDxRiga
1954 births
Topologists
Hyperbolic geometers
Latvian emigrants to the United States
Women mathematicians
Cornell University faculty
20th-century Latvian mathematicians
Living people
Latvian women writers
Mathematical artists
Mathematics popularizers
Riga State Gymnasium No.1 alumni
University of Latvia alumni
Latvian women scientists
21st-century Latvian mathematicians | Daina Taimiņa | Mathematics | 1,470 |
508,012 | https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen%20theorem | In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity. Under stronger assumptions, the Berry–Esseen theorem, or Berry–Esseen inequality, gives a more quantitative result, because it also specifies the rate at which this convergence takes place by giving a bound on the maximal error of approximation between the normal distribution and the true distribution of the scaled sample mean. The approximation is measured by the Kolmogorov–Smirnov distance. In the case of independent samples, the convergence rate is n^(-1/2), where n is the sample size, and the constant is estimated in terms of the third absolute normalized moment.
Statement of the theorem
Statements of the theorem vary, as it was independently discovered by two mathematicians, Andrew C. Berry (in 1941) and Carl-Gustav Esseen (1942), who then, along with other authors, refined it repeatedly over subsequent decades.
Identically distributed summands
One version, sacrificing generality somewhat for the sake of clarity, is the following:
There exists a positive constant C such that if X1, X2, ..., are i.i.d. random variables with E(X1) = 0, E(X1²) = σ² > 0, and E(|X1|³) = ρ < ∞, and if we define
Yn = (X1 + X2 + ⋯ + Xn)/n
the sample mean, with Fn the cumulative distribution function of
Yn√n/σ,
and Φ the cumulative distribution function of the standard normal distribution, then for all x and n,
|Fn(x) − Φ(x)| ≤ Cρ/(σ³√n).
That is: given a sequence of independent and identically distributed random variables, each having mean zero and positive variance, if additionally the third absolute moment is finite, then the cumulative distribution functions of the standardized sample mean and the standard normal distribution differ (vertically, on a graph) by no more than the specified amount. Note that the approximation error for all n (and hence the limiting rate of convergence for indefinite n sufficiently large) is bounded by the order of n^(-1/2).
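A Monte Carlo sketch of the bound for centered exponential summands follows; the distribution, the sample size n, and the replicate count are illustrative assumptions, and the empirical Kolmogorov distance only approximates sup_x |Fn(x) − Φ(x)| up to simulation error.

```python
# Empirically compare the Kolmogorov distance between the standardized sample
# mean of n centered Exp(1) variables and the standard normal with the
# Berry-Esseen bound 0.4748 * rho / (sigma**3 * sqrt(n)).
import math
import numpy as np

def kolmogorov_distance_to_normal(samples):
    """Largest gap between the empirical CDF of `samples` and Phi."""
    x = np.sort(samples)
    phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in x]))
    m = len(x)
    ecdf_hi = np.arange(1, m + 1) / m
    ecdf_lo = np.arange(0, m) / m
    return max(np.max(np.abs(ecdf_hi - phi)), np.max(np.abs(ecdf_lo - phi)))

rng = np.random.default_rng(0)
n, replicates = 25, 100_000
sigma = 1.0                        # Var(Exp(1) - 1) = 1
rho = 12.0 / math.e - 2.0          # E|X|^3 for X = Exp(1) - 1, about 2.415
means = (rng.exponential(1.0, size=(replicates, n)) - 1.0).mean(axis=1)
standardized = means * math.sqrt(n) / sigma

print("observed distance:  ", kolmogorov_distance_to_normal(standardized))
print("Berry-Esseen bound: ", 0.4748 * rho / (sigma ** 3 * math.sqrt(n)))
```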
Calculated upper bounds on the constant C have decreased markedly over the years, from the original value of 7.59 by Esseen in 1942. The estimate C < 0.4748 follows from the inequality
sup_x |Fn(x) − Φ(x)| ≤ 0.33554 (ρ + 0.415σ³)/(σ³√n),
since σ³ ≤ ρ and 0.33554 · 1.415 < 0.4748. However, if ρ ≥ 1.286σ³, then the estimate
sup_x |Fn(x) − Φ(x)| ≤ 0.3328 (ρ + 0.429σ³)/(σ³√n)
is even tighter.
Esseen proved that the constant also satisfies the lower bound
C ≥ (√10 + 3)/(6√(2π)) ≈ 0.40973.
Non-identically distributed summands
Let X1, X2, ..., be independent random variables with E(Xi) = 0, E(Xi²) = σi² > 0, and E(|Xi|³) = ρi < ∞. Also, let
Sn = (X1 + X2 + ⋯ + Xn) / √(σ1² + σ2² + ⋯ + σn²)
be the normalized n-th partial sum. Denote Fn the cdf of Sn, and Φ the cdf of the standard normal distribution. For the sake of convenience denote by σ and ρ the vectors (σ1, …, σn) and (ρ1, …, ρn).
In 1941, Andrew C. Berry proved that for all n there exists an absolute constant C1 such that
sup_x |Fn(x) − Φ(x)| ≤ C1 ψ1,
where
ψ1 = ψ1(σ, ρ) = max_{1≤i≤n} (ρi/σi²) / (σ1² + ⋯ + σn²)^(1/2).
Independently, in 1942, Carl-Gustav Esseen proved that for all n there exists an absolute constant C0 such that
sup_x |Fn(x) − Φ(x)| ≤ C0 ψ0,
where
ψ0 = ψ0(σ, ρ) = (ρ1 + ⋯ + ρn) / (σ1² + ⋯ + σn²)^(3/2).
It is easy to make sure that ψ0 ≤ ψ1. Due to this circumstance inequality (3) is conventionally called the Berry–Esseen inequality, and the quantity ψ0 is called the Lyapunov fraction of the third order. Moreover, in the case where the summands X1, ..., Xn have identical distributions
ψ0 = ψ1 = ρ/(σ³√n),
and thus the bounds stated by inequalities (1), (2) and (3) coincide apart from the constant.
Regarding C0, obviously, the lower bound established above remains valid:
C0 ≥ (√10 + 3)/(6√(2π)) ≈ 0.40973.
The lower bound is exactly reached only for certain Bernoulli distributions (see the references for their explicit expressions).
The upper bounds for C0 were subsequently lowered from Esseen's original estimate 7.59 to 0.5600.
Sum of a random number of random variables
Berry–Esseen theorems exist for the sum of a random number of random variables. The following is Theorem 1 from Korolev (1989), substituting in the constants from Remark 3. It is only a portion of the results that they established:
Let be independent, identically distributed random variables with , , . Let be a non-negative integer-valued random variable, independent from . Let , and define
Then
Multidimensional version
As with the multidimensional central limit theorem, there is a multidimensional version of the Berry–Esseen theorem.
Let X1, …, Xn be independent Rd-valued random vectors each having mean zero. Write Sn = X1 + ⋯ + Xn and assume Σ = Cov(Sn) is invertible. Let Z ~ N(0, Σ) be a d-dimensional Gaussian with the same mean and covariance matrix as Sn. Then for all convex sets U ⊆ Rd,
|P(Sn ∈ U) − P(Z ∈ U)| ≤ C d^(1/4) γ,
where C is a universal constant and γ = E‖Σ^(−1/2)X1‖³ + ⋯ + E‖Σ^(−1/2)Xn‖³ (the third power of the L2 norm).
The dependency on d^(1/4) is conjectured to be optimal, but might not be.
See also
Chernoff's inequality
Edgeworth series
List of inequalities
List of mathematical theorems
Concentration inequality
Notes
References
Bibliography
Durrett, Richard (1991). Probability: Theory and Examples. Pacific Grove, CA: Wadsworth & Brooks/Cole. .
Feller, William (1972). An Introduction to Probability Theory and Its Applications, Volume II (2nd ed.). New York: John Wiley & Sons. .
Manoukian, Edward B. (1986). Modern Concepts and Theorems of Mathematical Statistics. New York: Springer-Verlag. .
Serfling, Robert J. (1980). Approximation Theorems of Mathematical Statistics. New York: John Wiley & Sons. .
External links
Gut, Allan & Holst Lars. Carl-Gustav Esseen, retrieved Mar. 15, 2004.
Probabilistic inequalities
Theorems in statistics
Central limit theorem | Berry–Esseen theorem | Mathematics | 1,211 |
28,372,485 | https://en.wikipedia.org/wiki/Verbal%20self-defense | Verbal self-defense or verbal aikido is the art of using one's words to prevent, de-escalate, or end an attempted verbal or physical assault.
It is a way of using words to maintain mental and emotional safety. This kind of "conflict management" involves using posture and body language, tone of voice, and choice of words as a means for calming a potentially volatile situation before it can manifest into physical violence. This often involves techniques such as taking a time-out, deflecting the conversation to less argumentative topics, and/or redirecting the conversation to other individuals in the group who are less passionately involved.
Overview
Verbal self-defense experts have widely varying definitions of what it is and how it is applied. These include everything from a person simply saying no to someone else or repeatedly refusing a request to telling someone who has violated a personal boundary what he/she ought to know. It could even entail a more complicated scenario in which a person is called on to refuse to engage verbally with someone manipulative, to set limits, and to end the conversation.
In any definition it is always agreed that verbal self-defense is necessary as a means of enforcing personal boundaries and limits. Part of learning these skills includes learning how to identify communication triggers which cause a person to experience negative feelings and, in some cases, what those triggers represent with regards to what personal values the other person are violating.
The abusive types of communication that verbal self-defense is designed to acknowledge and deal with also vary greatly. This includes indirect forms of abuse such as backhanded comments, and backstabbing or two-faced behaviors. As well, verbal self-defense is meant to address more commonly recognized forms of abuse such as yelling, belittling, and name calling. Going beyond verbal attacks, abusive behaviors also recognized in the field of verbal self-defense are aggressive posturing (taking a threatening posture or making a threatening gesture), physically interfering with personal belongings, and inappropriately intruding on one's personal space.
Key components
Most experts who write and publish articles and books on the subject of verbal self-defense identify several key elements to strong verbal self-defense skills.
Being able to identify people, situations, and/or behaviors that induce hurtful feelings such as fear, inadequacy, and shame is important in order to know when a person needs to apply verbal tactics of defense.
Controlling how a person responds to conflict, both mentally and emotionally, is key to applying verbal defense skills efficiently and appropriately.
Having a general knowledge of what to say in advance offers a significant advantage for anyone using verbal self-defense. Some authors have even gone so far as to provide actual statements for people to use as a way to deal with verbally aggressive communicators.
Controversy
Authors and professional instructors offering seminars and workshops have differing views with regard to whether or not verbal self-defense is a form of "persuasion" and if "consequences" for the attacker should be considered a key component.
Persuasion vs. self-defense
In the field of verbal self-defense, the one element not entirely agreed upon is the concept that verbal defense tactics include the art of persuasion. Several authors clearly proclaim that verbal self-defense is designed as a means for persuading others; however, more recent books on the subject have denounced this commonly accepted fact.
The newer definition of verbal self-defense divides persuasion into a category of its own and states that verbal defense tactics should be more in line with the concept of physical self-defense. This idea, taken from ideologies of martial arts, puts forth the belief that verbal self-defense should only be used with respect to maintaining one's mental and emotional well-being. The position regarding self-protection in a verbal conflict and the further intention to protect the verbal assailant is posited in verbal aikido, which aims at proposing a balanced or collaborative result wherein the attacker may save face.
Consequences
The requirement for having a means to enforce "consequences" on people as a pre-requisite for effective verbal self-defense still remains questionable. Almost every author on the subject includes ways of handling non-physical aggression without having any repercussions for the attacker in the event the conflict is not solved amicably.
With specific regard to verbal self-defense in schools the concept of having consequences for bullying is considered by some to be key, where others are less focused on punishment and choose, instead, to put more emphasis on dealing with the aggressor in more positive ways. Although this topic has only recently begun being addressed by experts in this field it remains to be seen to what degree the importance of consequences will have in handling interpersonal conflicts using verbal self-defense.
Common approaches
Leading authors in the area of verbal self-defense and defensive communication styles offer several different techniques for defusing potentially volatile and/or abusive situations of conflict.
Avoidance
Being aware of situations that will likely lead to verbal conflict or abuse and making an effort to avoid them.
Withdrawing
Once engaged in an argument, situation of conflict, or when being verbally attacked, making an excuse and exiting the area.
Deflecting
Changing topic or focus on the interaction as a means of avoiding any disagreement or negative reaction on the part of the aggressor.
Compromise
Openly offering ideas and seeking ways to placate the attacker and/or their reasons for the abusive communication.
Verbal aikido
A means of communication that is based on the aikido philosophy and martial way, created and propagated by Morihei Ueshiba during the 20th century. It is a style of conflict management and resolution that involves treating the "attacker" as a partner rather than an adversary. The techniques practiced by aikidoka aim at restoring a balanced interpersonal dynamic and/or reaching a positive emotional result in an exchange.
In a common teaching of this communication style, developed by Luke Archer, the approach is simplified into three steps:
Receiving the attack with an "inner smile" (a serene inner confidence)
Accompanying the attacker with verbal Irimi until destabilization
Proposing Ai-ki (an energy balance)
Through the methods and exercises taught in verbal aikido training, the practitioner works on developing a sense of self-control, an assertive style of communication, and the practice of deliberate intention.
Applications
Developed and taught widely in police academy settings, it finds significant relevance in today's everyday workplace as well as in schools, public processing centers, help desks, and even mall security work. While it may seem counterintuitive, teaching adolescents this valuable skill has been shown to reduce frustration in their daily lives, as they feel more in control of conflict while learning how to follow a path to safety and resolution.
Influential contributors
Several people are considered to be significant contributors to the field of verbal self-defense. These include people who were early pioneers to advocate for the importance of verbal defense skills, developers of new techniques for verbally defensive tactics, and internationally recognized and known trainers.
Suzette Haden Elgin (1936–2015), the author of The Gentle Art of Verbal Self-Defense, was one of the earliest writers to use the term. She states that verbal self-defense defends against the eight most common types of verbal violence, and can redirect and defuse potential verbal confrontations.
George Thompson (1941–2011), author of Verbal Judo, advanced the field of verbal self-defense by breaking down how to apply the techniques for de-escalation and defusing used by professionally trained police officers. He was one of the leading experts in verbal self-defense tactics and trained law-enforcement agencies around the world.
Daniel Scott, author of Verbal Self Defense for The Workplace, more recently combined self-defense concepts with the language patterns of neuro linguistic programming in order to develop a new form of verbal self-defense. His new six-step model for verbal self-defense includes all the main components necessary for people to defend themselves against bullies and aggressive people in the work place and elsewhere.
See also
Assertiveness
Conflict resolution
Negotiations
Persuasion
Self-defense
Verbal abuse
References
Further reading
Elgin, Suzette Haden (1985). The Gentle Art of Verbal Self-Defense. Dorset House Publishing Co Inc.
Glass, Lillian (1999). The Complete Idiot's Guide to Verbal Self Defense. Alpha.
Scott, Daniel (2009). Verbal Self Defense for The Workplace. Book Shaker.
Thompson, George (2004). Verbal Judo, Harper Paperbacks.
Sauvé, Gaëtan (2013). "Désamorcer les attaques verbales" Les éditions succès-do, https://www.autodefenseverbale.com/livre
Conflict (process) | Verbal self-defense | Biology | 1,782 |
38,879,895 | https://en.wikipedia.org/wiki/Operating%20System%20Concepts | Operating System Concepts by Abraham Silberschatz and James Peterson is a classic textbook on operating systems. It is often called the "dinosaur book", as the first edition of the book had on the cover a number of dinosaurs labeled with various old operating systems. The bigger dinosaurs were labeled with the older big OSs. The ape-like creature was labeled UNIX. The idea was that like dinosaurs, operating systems evolve.
The book has been published in updated editions since 1983. The third edition added the author Peter Galvin, and the sixth edition added the author Greg Gagne. The textbook has since reached its ninth edition.
References
1982 books
Operating systems
Computer science books
Addison-Wesley books | Operating System Concepts | Technology | 139 |
3,881,608 | https://en.wikipedia.org/wiki/Gavins%20Point%20Dam | Gavins Point Dam is a embankment rolled-earth and chalk-fill dam which spans the Missouri River and impounds Lewis and Clark Lake. The dam joins Cedar County, Nebraska with Yankton County, South Dakota a distance of 811.1 river miles (1,305 km) upstream of St. Louis, Missouri, where the river joins the Mississippi River. The dam and hydroelectric power plant were constructed as the Gavins Point Project from 1952 to 1957 by the United States Army Corps of Engineers as part of the Pick-Sloan Plan. The dam is located approximately 4 miles (6.4 km) west or upstream of Yankton, South Dakota.
History and background
Gavins Point Dam was constructed as a part of the Pick–Sloan Missouri Basin Program, authorized by the Flood Control Act of 1944 by Congress. The dam is named after Gavins Point, a bluff along the northern bank of the Missouri River named for an early settler, now within the western end of Lewis & Clark Recreation Area, which was to be the original location of construction of the dam. The location was moved and construction began further downstream along Calumet Bluff because this location offered a shorter span distance and less fill material needed for dam construction, although the project kept the original name. The dam operations work in conjunction with the other Pick-Sloan Program Dams to assist with conservation, control, and use of water resources in the Missouri River Basin. The intended beneficial uses of these water resources include flood control, aids to navigation, irrigation, supplemental water supply, power generation, municipal and industrial water supplies, stream-pollution abatement, sediment control, preservation and enhancement of fish and wildlife, and creation of recreation opportunities. Gavins Point is the most downstream dam on the Missouri River, being 811.1 river miles upstream of St. Louis where the river meets the Mississippi River. The next dam upstream is Fort Randall Dam.
2011 Missouri River Flood
During the 2011 Missouri River flood, the dam released a record water flow of 160,200 cfs, topping the previous record of 70,000 cfs set in 1997. Debris carried by the floodwaters damaged the dam, and a significant portion of rocks was dislodged from its upstream side. The U.S. Army Corps of Engineers soon began repairs to the dam and its spillway gates. Pressure sensors were also installed in the concrete portion of the dam.
Hydroelectric power generation
The dam has a hydroelectric power plant with three generators, each having a nameplate capacity of 44,099 kW, for a total of 132.297 MW. The hydroelectric power plant provides enough electricity to supply 68,000 homes. The power generated is sold through the Western Area Power Administration.
Reservoir
See main article: Lewis and Clark Lake
Gavins Point Dam creates Lewis and Clark Lake, a popular regional tourist destination for water-based recreational opportunities including boating and fishing, along with camping, hiking, and hunting opportunities managed by the State of South Dakota, the State of Nebraska, and the U.S. Army Corps of Engineers. The lake is significantly impacted by sedimentation and siltation, which diminish its overall water surface area, water storage capacity, and recreational opportunities. Sediment carried by the Missouri River and Niobrara River is slowed and trapped within the reservoir because the dam impounds and thus stops the natural river flow. Studies show approximately 5.1 million tons of sediment are deposited in the lake each year, which contributes to the growing delta area in the western portion of the lake. Approximately 60% of the sediment comes from the Nebraska Sandhills via the Niobrara River. As of 2016, approximately 30% of the lake's overall surface area had been lost to sediment deposits, and some projections indicate that by 2045 approximately 50% of the lake will be lost. Presently, there is no plan or solution to remove the sediment or slow the progression of siltation within the lake.
See also
Lewis and Clark Lake
Pick–Sloan Plan
List of crossings of the Missouri River
List of dams in the Missouri River watershed
Water Resources Development Act
References
External links
U.S. Army Corps of Engineers, Omaha District - Gavins Point Project (Official Site)
Missouri River Water Management Division - U.S. Army Corps of Engineers
Missouri River Basin Daily Bulletin - USACE
Buildings and structures in Cedar County, Nebraska
Dams in Nebraska
Dams in South Dakota
Hydroelectric power plants in Nebraska
Hydroelectric power plants in South Dakota
Dams on the Missouri River
Buildings and structures in Yankton County, South Dakota
United States Army Corps of Engineers dams
Dams completed in 1957
Energy infrastructure completed in 1957
Earth-filled dams
United States Army Corps of Engineers
Energy infrastructure in Nebraska | Gavins Point Dam | Engineering | 947 |
20,182,015 | https://en.wikipedia.org/wiki/Operation%20LAC | Operation LAC (Large Area Coverage) was a United States Army Chemical Corps operation which dispersed microscopic zinc cadmium sulfide (ZnCdS) particles over much of the United States and Canada in order to test dispersal patterns and the geographic range of chemical or biological weapons.
Earlier tests
There were several tests that occurred prior to the first spraying affiliated with Operation LAC that proved the concept of large-area coverage. Canadian files relating to participation in the tests cite in particular three previous series of tests leading up to those conducted in Operation LAC.
September 1950 – Six simulated attacks were conducted upon the San Francisco Bay Area. It was concluded that it was feasible to attack a seaport city with biological aerosol agents from a ship offshore.
March–April 1952 – Five trials were conducted off the coast of South Carolina and Georgia under Operation Dew. It was concluded that long-range aerosol clouds could obtain hundreds of miles of travel and large-area coverage when disseminated from ground level under certain meteorological conditions.
1957 – North Sea, East coast of Britain. It was shown that large-area coverage with particles was feasible under most meteorological conditions.
In addition, the army admitted to spraying in Minnesota locations from 1953 into the mid-1960s.
In St. Louis in the mid 1950s, and again a decade later, the army sprayed zinc cadmium sulfide via motorized blowers atop Pruitt-Igoe, at schools, from the backs of station wagons, and via planes.
Operation
Operation LAC was undertaken in 1957 and 1958 by the U.S. Army Chemical Corps. The operation involved spraying large areas with zinc cadmium sulfide. The U.S. Air Force loaned the Army a C-119, "Flying Boxcar", and it was used to disperse the materials by the ton in the atmosphere over the United States. The first test occurred on December 2, 1957, along a path from South Dakota to International Falls, Minnesota.
The tests were designed to determine the dispersion and geographic range of biological or chemical agents. Stations on the ground tracked the fluorescent zinc cadmium sulfide particles. During the first test and subsequently, much of the material dispersed ended up being carried by winds into Canada. However, as was the case in the first test, particles were detected up to 1,200 miles away from their drop point. A typical flight line covering 400 miles would release 5,000 pounds of zinc cadmium sulfide and in fiscal year 1958 around 100 hours were spent in flight for LAC. That flight time included four runs of various lengths, one of which was 1,400 miles.
Specific tests
The December 2, 1957, test was incomplete due to a mass of cold air coming south from Canada. It carried the particles from their drop point and then took a turn northeast, taking most of the particles into Canada with it. Military operators considered the test a partial success because some of the particles were detected 1,200 miles away, at a station in New York state. A February 1958 test at Dugway Proving Ground ended similarly. Another Canadian air mass swept through and carried the particles into the Gulf of Mexico. Two other tests, one along a path from Toledo, Ohio, to Abilene, Texas, and another from Detroit, to Springfield, Illinois, to Goodland, Kansas, showed that agents dispersed through this aerial method could achieve widespread coverage when particles were detected on both sides of the flight paths.
Scope
According to Leonard A. Cole, an Army Chemical Corps document titled "Summary of Major Events and Problems" described the scope of Operation LAC. Cole stated that the document outlined that the tests were the largest ever undertaken by the Chemical Corps and that the test area stretched from the Rocky Mountains to the Atlantic Ocean, and from Canada to the Gulf of Mexico. Other sources describe the scope of LAC varyingly; examples include, "Midwestern United States", and "the states east of the Rockies". Specific locations are mentioned as well. Some of those include: a path from South Dakota to Minneapolis, Minnesota, Dugway Proving Ground, Corpus Christi, Texas, north-central Texas, and the San Francisco Bay area.
Risks and issues
Bacillus globigii was used to simulate biological warfare agents (such as anthrax), because it was then considered a contaminant with little health consequence to humans; however, BG is now considered a human pathogen.
Anecdotal evidence exists of ZnCdS causing adverse health effects as a result of LAC. However, a U.S. government study, done by the U.S. National Research Council, stated, in part, "After an exhaustive, independent review requested by Congress, we have found no evidence that exposure to zinc cadmium sulfide at these levels could cause people to become sick." Still, the use of ZnCdS remains controversial and one critic accused the Army of "literally using the country as an experimental laboratory".
According to the National Library of Medicine's TOXNET database, the EPA reported that cadmium sulfide was classified as a probable human carcinogen.
See also
Human experimentation in the United States
Operation Dew
Project 112
References
Further reading
Subcommittee on Zinc Cadmium Sulfide, U.S. National Research Council, Toxicologic Assessment of the Army's Zinc Cadmium Sulfide Dispersion, (Google Books), National Academies Press, 1997, ().
LAC
United States biological weapons program
Chemical warfare
Human subject research in the United States | Operation LAC | Chemistry | 1,108 |
240,749 | https://en.wikipedia.org/wiki/MEDLINE | MEDLINE (Medical Literature Analysis and Retrieval System Online, or MEDLARS Online) is a bibliographic database of life sciences and biomedical information. It includes bibliographic information for articles from academic journals covering medicine, nursing, pharmacy, dentistry, veterinary medicine, and health care. MEDLINE also covers much of the literature in biology and biochemistry, as well as fields such as molecular evolution.
Compiled by the United States National Library of Medicine (NLM), MEDLINE is freely available on the Internet and searchable via PubMed and NLM's National Center for Biotechnology Information's Entrez system.
History
MEDLARS (Medical Literature Analysis and Retrieval System) is a computerised biomedical bibliographic retrieval system. It was launched by the National Library of Medicine in 1964 and was the first large-scale, computer-based, retrospective search service available to the general public.
Initial development of MEDLARS
Since 1879, the National Library of Medicine has published Index Medicus, a monthly guide to medical articles in thousands of journals. The huge volume of bibliographic citations was manually compiled. In 1957 the staff of the NLM started to plan the mechanization of the Index Medicus, prompted by a desire for a better way to manipulate all this information, not only for Index Medicus but also to produce subsidiary products. By 1960 a detailed specification was prepared, and by the spring of 1961, requests for proposals were sent out to 72 companies to develop the system. As a result, a contract was awarded to the General Electric Company. A Minneapolis-Honeywell 800 computer, which was to run MEDLARS, was delivered to the NLM in March 1963, and Frank Bradway Rogers (Director of the NLM, 1949 to 1963) said at the time, "...If all goes well, the January 1964 issue of Index Medicus will be ready to emerge from the system at the end of this year. It may be that this will mark the beginning of a new era in medical bibliography."
MEDLARS cost $3 million to develop, and at the time of its completion in 1964, no other publicly available, fully operational electronic storage and retrieval system of its magnitude existed. The original computer configuration operated from 1964 until its replacement by MEDLARS II in January 1975.
MEDLARS Online
In late 1971, an online version called MEDLINE ("MEDLARS Online") became available as a way to do online searching of MEDLARS from remote medical libraries. This early system covered 239 journals and boasted that it could support as many as 25 simultaneous online users (remotely logged in from distant medical libraries) at one time. However, this system remained primarily in the hands of libraries, with researchers able to submit pre-programmed search tasks to librarians and obtain results on printouts, but rarely able to interact with the NLM computer output in real-time. This situation continued through the beginning of the 1990s and the rise of the World Wide Web.
In 1996, soon after most home computers began automatically bundling efficient web browsers, a free public version of MEDLINE was deployed. This system, called PubMed, was offered to the general online user in June 1997, when MEDLINE searches via the Web were demonstrated.
Database
In December 2024, the database contained more than 38 million records from over 5,200 selected publications covering biomedicine and health from 1781 to the present. Originally, the database covered articles starting from 1965, but this has been enhanced, and records as far back as 1781 are now available within the main index. The database is freely accessible on the Internet via the PubMed interface, and new citations are added Tuesday through Saturday. For citations added during 1995-2003, about 48% are for cited articles published in the U.S., about 88% are published in English (overall about 84%), and about 76% have English abstracts written by authors of the articles.
Data quality
Being an aggregated source, the PubMed database suffers from multi-source problems such as inconsistent representations from the upstream data providers.
Retrieval
MEDLINE uses Medical Subject Headings (MeSH) for information retrieval. Engines designed to search MEDLINE (such as Entrez and PubMed) generally use a Boolean expression combining MeSH terms, words in the abstract and title of the article, author names, date of publication, etc. Entrez and PubMed can also find articles similar to a given one based on a mathematical scoring system that takes into account the similarity of word content of the abstracts and titles of two articles.
MEDLINE added a "publication type" term for "randomized controlled trial" in 1991 and a MESH subset "systematic review" in 2001.
Importance
MEDLINE functions as an important resource for biomedical researchers and journal clubs from all over the world. Along with the Cochrane Library and a number of other databases, MEDLINE facilitates evidence-based medicine. Most systematic review articles published presently build on extensive searches of MEDLINE to identify articles that might be useful in the review. MEDLINE influences researchers in their choice of journals in which to publish.
Inclusion of journals
More than 5,200 biomedical journals are indexed in MEDLINE. New journals are not included automatically or immediately. Several criteria for selection are applied. Selection is based on the recommendations of a panel, the Literature Selection Technical Review Committee, based on the scientific scope and quality of a journal. The Journals Database (one of the Entrez databases) contains information, such as its name abbreviation and publisher, about all journals included in Entrez, including PubMed. Journals that no longer meet the criteria are removed. Being indexed in MEDLINE gives a non-predatory identity to a journal.
Usage
PubMed usage has been on the rise since 2008. In 2011, PubMed/MEDLINE was searched 1.8 billion times, up from 1.6 billion searches in the previous year. In 2023, the database was searched 3.66 billion times.
A service such as MEDLINE strives to balance usability with power and comprehensiveness. In keeping with the fact that MEDLINE's primary user community is professionals (medical scientists, health care providers), searching MEDLINE effectively is a learned skill; untrained users are sometimes frustrated with the large numbers of articles returned by simple searches. Counterintuitively, a search that returns thousands of articles is not guaranteed to be comprehensive. Unlike using a typical Internet search engine, searching MEDLINE through PubMed requires a small investment of time. Using the MeSH database to define the subject of interest is one of the most useful ways to improve the quality of a search. Using MeSH terms in conjunction with limits (such as publication date or publication type), qualifiers (such as adverse effects or prevention and control), and text-word searching is another. Finding one article on the subject and clicking on the "Related Articles" link to get a collection of similarly classified articles can expand a search that otherwise yields few results.
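As an illustration of such a structured search, the following minimal Python sketch queries MEDLINE through NCBI's E-utilities with the third-party Biopython package; the MeSH term, the publication-type limit, and the e-mail address are placeholder choices for the example rather than values prescribed by MEDLINE itself.
    # Sketch of a MEDLINE/PubMed search combining a MeSH term with a
    # publication-type limit, via Biopython's Entrez module (assumed installed).
    from Bio import Entrez

    Entrez.email = "your.name@example.org"   # NCBI asks callers to identify themselves
    query = '"myocardial infarction"[MeSH Terms] AND randomized controlled trial[Publication Type]'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"])    # total number of matching citations
    print(record["IdList"])   # PubMed IDs of the first 20 hits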
For lay users who are trying to learn about health and medicine topics, the NIH offers MedlinePlus; thus, although such users are still free to search and read the medical literature themselves (via PubMed), they also have some help with curating it into something comprehensible and practically applicable for patients and family members.
See also
Altbib
LILACS
HubMed an alternative interface to the PubMed medical literature database.
Journalology
eTBLAST – a natural language text similarity engine for MEDLINE and other text databases.
Medscape
Twease – an open-source biomedical search engine
References
External links
Biological databases
United States National Library of Medicine
Bibliographic databases and indexes
Medical databases
Online databases
Year of establishment missing
Public domain databases | MEDLINE | Biology | 1,557 |
36,821,788 | https://en.wikipedia.org/wiki/Mycena%20atkinsonii | Mycena atkinsonii is a species of agaric fungus in the family Mycenaceae. The species was first described scientifically by New York State botanist Homer Doliver House in 1920.
References
External links
atkinsonii
Fungi described in 1920
Fungi of North America
Fungus species | Mycena atkinsonii | Biology | 57 |
19,386,345 | https://en.wikipedia.org/wiki/Gyrokinetics | Gyrokinetics is a theoretical framework to study plasma behavior on perpendicular spatial scales comparable to the gyroradius and frequencies much lower than the particle cyclotron frequencies.
These particular scales have been experimentally shown to be appropriate for modeling plasma turbulence. The trajectory of charged particles in a magnetic field is a helix that winds around the field line. This trajectory can be decomposed into a relatively slow motion of the guiding center along the field line and a fast circular motion, called gyromotion. For most plasma behavior, this gyromotion is irrelevant. Averaging over this gyromotion reduces the equations to six dimensions (3 spatial, 2 velocity, and time) rather than the seven (3 spatial, 3 velocity, and time). Because of this simplification, gyrokinetics governs the evolution of charged rings with a guiding center position, instead of gyrating charged particles.
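As a purely illustrative numerical sketch (the uniform field, unit charge and mass, time step, and initial velocity are arbitrary choices; the guiding-center relation R = r + (v × b̂)/Ω assumed here is the standard one for a uniform field), the following Python fragment integrates the Lorentz force and shows that the particle gyrates rapidly while its guiding center barely moves:
    import numpy as np

    q, m, B = 1.0, 1.0, 1.0
    Omega = q * B / m                       # cyclotron (gyro) frequency
    b_hat = np.array([0.0, 0.0, 1.0])       # direction of the magnetic field

    def lorentz(state):
        v = state[3:]
        return np.concatenate([v, (q / m) * np.cross(v, B * b_hat)])

    state = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.2])   # position and velocity at t = 0
    dt, steps = 1.0e-3, 20000
    centers = []
    for _ in range(steps):                  # classical RK4 time stepping
        k1 = lorentz(state)
        k2 = lorentz(state + 0.5 * dt * k1)
        k3 = lorentz(state + 0.5 * dt * k2)
        k4 = lorentz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        r, v = state[:3], state[3:]
        centers.append(r + np.cross(v, b_hat) / Omega)   # guiding center estimate

    centers = np.array(centers)
    print(centers[:, :2].std(axis=0))       # ~0: the perpendicular guiding center stays put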
Derivation of the gyrokinetic equation
Fundamentally, the gyrokinetic model assumes the plasma is strongly magnetized (the gyroradius ρ is much smaller than the background scale length L, ρ/L ≪ 1), the perpendicular spatial scales are comparable to the gyroradius (k⊥ρ ~ 1), and the behavior of interest has low frequencies (ω ≪ Ω, the cyclotron frequency). We must also expand the distribution function, f = F + δf, and assume the perturbation is small compared to the background (δf ≪ F). The starting point is the Fokker–Planck equation and Maxwell's equations. The first step is to change spatial variables from the particle position r to the guiding center position R. Then, we change velocity coordinates from the velocity v to the parallel velocity v∥, the magnetic moment μ = mv⊥²/(2B), and the gyrophase angle θ. Here parallel and perpendicular are relative to b̂, the direction of the magnetic field, and m is the mass of the particle. Now, we can average over the gyrophase angle at constant guiding center position, denoted by ⟨...⟩, yielding the gyrokinetic equation.
The electrostatic gyrokinetic equation, in the absence of large plasma flow, is given by
Here the first term represents the change in the perturbed distribution function, , with time. The second term represents particle streaming along the magnetic field line. The third term contains the effects of cross-field particle drifts, including the curvature drift, the grad-B drift, and the lowest order E-cross-B drift. The fourth term represents the nonlinear effect of the perturbed drift interacting with the distribution function perturbation. The fifth term uses a collision operator to include the effects of collisions between particles. The sixth term represents the Maxwell–Boltzmann response to the perturbed electric potential. The last term includes temperature and density gradients of the background distribution function, which drive the perturbation. These gradients are only significant in the direction across flux surfaces, parameterized by , the magnetic flux.
The gyrokinetic equation, together with gyro-averaged Maxwell's equations, give the distribution function and the perturbed electric and magnetic fields. In the electrostatic case we only require Gauss's law (which takes the form of the quasineutrality condition), given by
Usually solutions are found numerically with the help of supercomputers, but in simplified situations analytic solutions are possible.
See also
GYRO - a computational plasma physics code
Gyrokinetic ElectroMagnetic - a gyrokinetic plasma turbulence simulation
Notes
References
J.B. Taylor and R.J. Hastie, Stability of general plasma equilibria - I formal theory. Plasma Phys. 10:479, 1968.
P.J. Catto, Linearized gyro-kinetics. Plasma Physics, 20(7):719, 1978.
R.G. LittleJohn, Journal of Plasma Physics Vol 29 pp. 111, 1983.
J.R. Cary and R.G.Littlejohn, Annals of Physics Vol 151, 1983.
T.S. Hahm, Physics of Fluids Vol 31 pp. 2670, 1988.
A.J. Brizard and T.S. Hahm, Foundations of Nonlinear Gyrokinetic Theory, Rev. Modern Physics 79, PPPL-4153, 2006.
X. Garbet and M. Lesur, Gyrokinetics, hal-03974985, 2023.
External links
GS2: A numerical continuum code for the study of turbulence in fusion plasmas.
AstroGK: A code based on GS2 (above) for studying turbulence in astrophysical plasmas.
GENE: A semi-global continuum turbulence simulation code, for fusion plasmas.
GEM: A particle in cell turbulence code, for fusion plasmas.
GKW: A semi-global continuum gyrokinetic code, for turbulence in fusion plasmas.
GYRO: A semi-global continuum turbulence code, for fusion plasmas.
GYSELA: A semi-lagrangian code, for turbulence in fusion plasmas.
ELMFIRE: Particle in cell monte-carlo code, for fusion plasmas.
GT5D: A global continuum code, for turbulence in fusion plasmas.
ORB5 Global particle in cell code, for electromagnetic turbulence in fusion plasmas.
(d)FEFI: Homepage for the author of continuum gyrokinetic codes, for turbulence in fusion plasmas.
GKV: A local continuum gyrokinetic code, for turbulence in fusion plasmas.
GTC: A global gyrokinetic particle in cell simulation for fusion plasmas in toroidal and cylindrical geometries.
Kinetics (physics)
Plasma theory and modeling
Theoretical physics | Gyrokinetics | Physics | 1,149 |
25,391,738 | https://en.wikipedia.org/wiki/Apicomplexan%20life%20cycle | Apicomplexans, a group of intracellular parasites, have life cycle stages that allow them to survive the wide variety of environments they are exposed to during their complex life cycle. Each stage in the life cycle of an apicomplexan organism is typified by a cellular variety with a distinct morphology and biochemistry.
Not all apicomplexa develop all the following cellular varieties and division methods. This presentation is intended as an outline of a hypothetical generalised apicomplexan organism.
Methods of asexual replication
Apicomplexans (sporozoans) replicate by several forms of multiple fission (also known as schizogony). These include gametogony, sporogony and merogony, although the latter is sometimes referred to as schizogony, despite its general meaning.
Merogony is an asexual reproductive process of the Apicomplexa. After infecting a host cell, a trophozoite (see glossary below) increases in size while repeatedly replicating its nucleus and other organelles. During this process, the organism is known as a meront or schizont. Cytokinesis next subdivides the multinucleated schizont into numerous identical daughter cells called merozoites (see glossary below), which are released into the blood when the host cell ruptures. Organisms whose life cycles rely on this process include Theileria, Babesia, Plasmodium, and Toxoplasma gondii.
Sporogony is a type of sexual and asexual reproduction. It involves karyogamy, the formation of a zygote, which is followed by meiosis and multiple fission. This results in the production of sporozoites.
Other forms of replication include endodyogeny and endopolygeny.
Endodyogeny is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation.
Endopolygeny is the division into several organisms at once by internal budding.
Glossary of cell types
Infectious stages
A sporozoite (ancient Greek sporos, seed + zōon, animal) is the cell form that infects new hosts. In Plasmodium, for instance, the sporozoites are cells that develop in the mosquito's salivary glands, leave the mosquito during a blood meal, and enter liver cells (hepatocytes), where they multiply. Cells infected with sporozoites eventually burst, releasing merozoites into the bloodstream. Sporozoites are motile and they move by gliding.
A merozoite (G. meros, part [of a series] + zōon, animal) is the result of merogony that takes place within a host cell. During this stage, the parasite infects the host's cells and then replicates its own nucleus and induces cell segmentation in a form of asexual reproduction. In coccidiosis, merozoites form the first phase of the internal life cycle of coccidian. In the case of Plasmodium, merozoites infect red blood cells and then rapidly reproduce asexually. The red blood cell host is destroyed by this process, which releases many new merozoites that go on to find new blood-borne hosts. Merozoites are motile. Before schizogony, the merozoite is also known as the schizozoite.
A gametocyte (G. gametēs, partner + kytos, cell) is a name given to a parasite's gamete-forming cells. A male gametocyte divides to give many flagellated microgametes, whereas the female gametocyte differentiates to a macrogamete.
An ookinete (G. ōon, egg + kinētos, motile) is a fertilised zygote capable of moving spontaneously. It penetrates epithelial cells lining the midgut of mosquitoes to form a thick-walled structure known as an oocyst under the mosquito's outer gut lining. Ookinetes are motile and they move by gliding.
A trophozoite (G. trophē, nourishment + zōon, animal) is the activated, intracellular feeding stage in the apicomplexan life cycle. After gorging itself on its host, the trophozoite undergoes schizogony and develops into a schizont, later releasing merozoites.
A hypnozoite (G. hypnos, sleep + zōon, animal) is a quiescent parasite stage that is best known for its "... probable association with latency and relapse in human malarial infections caused by Plasmodium ovale and P. vivax". Hypnozoites are directly sporozoite-derived.
A bradyzoite (G. bradys, slow + zōon, animal) is a sessile, slow-growing form of zoonotic microorganisms such as Toxoplasma gondii, among others responsible for parasitic infections. In chronic (latent) toxoplasmosis, bradyzoites microscopically present as clusters enclosed by an irregular crescent-shaped wall (cysts) in infected muscle and brain tissues. Also known as a bradyzoic merozoite.
A tachyzoite (G. tachys, fast + zōon, animal), contrasting with a bradyzoite, is a form typified by rapid growth and replication. Tachyzoites are the motile forms of those coccidians which form tissue pseudocysts, such as Toxoplasma and Sarcocystis. Typically infecting cellular vacuoles, tachyzoites divide by endodyogeny and endopolygeny. Also known as a tachyzoic merozoite (same journal reference as for "bradyzoic merozoite", above).
An oocyst (G. ōon, egg + kystis, bladder) is a hardy, thick-walled spore, able to survive for lengthy periods outside a host. The zygote develops within the spore, which acts to protect it during transfer to new hosts. Organisms that create oocysts include Eimeria, Isospora, Cryptosporidium, and Toxoplasma.
Genome size
The dynamics of gene loss was studied in 41 apicomplexan genomes. Loss of genes employed in amino acid metabolism and steroid biosynthesis could be explained by metabolic redundancy with the host. Also, DNA repair genes tend to be lost by apicomplexans with reduced proteome size, probably reflecting a reduced need for DNA repair of genomes with smaller information content. Reduced DNA repair may help explain the elevated mutation rates in pathogens with reduced genome size.
See also
Trematode life cycle stages
References
Apicomplexa
Reproduction | Apicomplexan life cycle | Biology | 1,406 |
170,089 | https://en.wikipedia.org/wiki/Numerical%20integration | In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral.
The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature; others take "quadrature" to include higher-dimensional integration.
The basic problem in numerical integration is to compute an approximate solution to a definite integral
to a given degree of accuracy. If f is a smooth function integrated over a small number of dimensions, and the domain of integration is bounded, there are many methods for approximating the integral to the desired precision.
Numerical integration has roots in the geometrical problem of finding a square with the same area as a given plane figure (quadrature or squaring), as in the quadrature of the circle.
The term is also sometimes used to describe the numerical solution of differential equations.
Motivation and need
There are several reasons for carrying out numerical integration, as opposed to analytical integration by finding the antiderivative:
The integrand may be known only at certain points, such as obtained by sampling. Some embedded systems and other computer applications may need numerical integration for this reason.
A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative that is an elementary function. An example of such an integrand is , the antiderivative of which (the error function, times a constant) cannot be written in elementary form.
It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to compute the antiderivative. That may be the case if the antiderivative is given as an infinite series or product, or if its evaluation requires a special function that is not available.
History
The term "numerical integration" first appears in 1915 in the publication A Course in Interpolation and Numeric Integration for the Mathematical Laboratory by David Gibb.
"Quadrature" is a historical mathematical term that means calculating area. Quadrature problems have served as one of the main sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of constructing geometrically a square having the same area (squaring). That is why the process was named "quadrature". For example, a quadrature of the circle, Lune of Hippocrates, The Quadrature of the Parabola. This construction must be performed only by means of compass and straightedge.
The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic.
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side (the Geometric mean of a and b). For this purpose it is possible to use the following fact: if we draw the circle with the sum of a and b as the diameter, then the height BH (from a point of their connection to crossing with a circle) equals their geometric mean. The similar geometrical construction solves a problem of a quadrature for a parallelogram and a triangle.
Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge had been proved in the 19th century to be impossible. Nevertheless, for some figures (for example the Lune of Hippocrates) a quadrature can be performed. The quadratures of a sphere surface and a parabola segment done by Archimedes became the highest achievement of the antique analysis.
The area of the surface of a sphere is equal to quadruple the area of a great circle of this sphere.
The area of a segment of the parabola cut from it by a straight line is 4/3 the area of the triangle inscribed in this segment.
For the proof of the results Archimedes used the Method of exhaustion of Eudoxus.
In medieval Europe the quadrature meant calculation of area by any method. More often the Method of indivisibles was used; it was less rigorous, but more simple and powerful. With its help Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647), and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms.
John Wallis algebrised this method: he wrote in his Arithmetica Infinitorum (1656) series that we now call the definite integral, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of some Solids of revolution.
The quadrature of the hyperbola by Saint-Vincent and de Sarasa provided a new function, the natural logarithm, of critical importance.
With the invention of integral calculus came a universal method for area calculation. In response, the term "quadrature" has become traditional, and instead the modern phrase "computation of a univariate definite integral" is more common.
Methods for one-dimensional integrals
A quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration.
Numerical integration methods can generally be described as combining evaluations of the integrand to get an approximation to the integral. The integrand is evaluated at a finite set of points called integration points and a weighted sum of these values is used to approximate the integral. The integration points and weights depend on the specific method used and the accuracy required from the approximation.
An important part of the analysis of any numerical integration method is to study the behavior of the approximation error as a function of the number of integrand evaluations. A method that yields a small error for a small number of evaluations is usually considered superior. Reducing the number of evaluations of the integrand reduces the number of arithmetic operations involved, and therefore reduces the total error. Also, each evaluation takes time, and the integrand may be arbitrarily complicated.
Quadrature rules based on step functions
A "brute force" kind of numerical integration can be done, if the integrand is reasonably well-behaved (i.e. piecewise continuous and of bounded variation), by evaluating the integrand with very small increments.
This simplest method approximates the function by a step function (a piecewise constant function, or a segmented polynomial of degree zero) that passes through the point ((a + b)/2, f((a + b)/2)), the midpoint of the interval. This is called the midpoint rule or rectangle rule.
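For illustration, a minimal Python version of the single-interval rectangle (midpoint) rule, with an arbitrarily chosen example integrand, might read:
    def rectangle_rule(f, a, b):
        # Approximate the integral of f over [a, b] by one midpoint evaluation.
        return (b - a) * f((a + b) / 2.0)

    # Example: integrate x**2 on [0, 1]; the exact value is 1/3, the rule gives 0.25.
    print(rectangle_rule(lambda x: x * x, 0.0, 1.0))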
Quadrature rules based on interpolating functions
A large class of quadrature rules can be derived by constructing interpolating functions that are easy to integrate. Typically these interpolating functions are polynomials. In practice, since polynomials of very high degree tend to oscillate wildly, only polynomials of low degree are used, typically linear and quadratic.
The interpolating function may be a straight line (an affine function, i.e. a polynomial of degree 1)
passing through the points (a, f(a)) and (b, f(b)).
This is called the trapezoidal rule.
For either one of these rules, we can make a more accurate approximation by breaking up the interval into some number of subintervals, computing an approximation for each subinterval, then adding up all the results. This is called a composite rule, extended rule, or iterated rule. For example, the composite trapezoidal rule can be stated as
where the subintervals have the form [a + kh, a + (k + 1)h], with h = (b − a)/n and k = 0, 1, ..., n − 1. Here we used subintervals of the same length h, but one could also use intervals of varying length h_k.
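A minimal Python sketch of the composite trapezoidal rule with n equal subintervals, using an arbitrarily chosen integrand, might read:
    import math

    def composite_trapezoid(f, a, b, n):
        # Weighted sum of endpoint and interior values, as in the formula above.
        h = (b - a) / n
        interior = sum(f(a + k * h) for k in range(1, n))
        return h * (0.5 * (f(a) + f(b)) + interior)

    approx = composite_trapezoid(math.exp, 0.0, 1.0, 64)
    print(approx, abs(approx - (math.e - 1.0)))   # the exact integral is e - 1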
Interpolation with polynomials evaluated at equally spaced points in [a, b] yields the Newton–Cotes formulas, of which the rectangle rule and the trapezoidal rule are examples. Simpson's rule, which is based on a polynomial of order 2, is also a Newton–Cotes formula.
Quadrature rules with equally spaced points have the very convenient property of nesting. The corresponding rule with each interval subdivided includes all the current points, so those integrand values can be re-used.
If we allow the intervals between interpolation points to vary, we find another group of quadrature formulas, such as the Gaussian quadrature formulas. A Gaussian quadrature rule is typically more accurate than a Newton–Cotes rule that uses the same number of function evaluations, if the integrand is smooth (i.e., if it is sufficiently differentiable). Other quadrature methods with varying intervals include Clenshaw–Curtis quadrature (also called Fejér quadrature) methods, which do nest.
Gaussian quadrature rules do not nest, but the related Gauss–Kronrod quadrature formulas do.
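For illustration, the following Python sketch evaluates an integral on [-1, 1] with a 5-point Gauss–Legendre rule; the nodes and weights come from NumPy's leggauss routine and the integrand is an arbitrary example:
    import numpy as np

    nodes, weights = np.polynomial.legendre.leggauss(5)   # 5-point Gauss-Legendre rule

    approx = np.sum(weights * np.exp(nodes))              # integrate exp over [-1, 1]
    exact = np.e - np.exp(-1.0)
    print(approx, abs(approx - exact))                    # error is far below 1e-6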
Adaptive algorithms
Extrapolation methods
The accuracy of a quadrature rule of the Newton–Cotes type is generally a function of the number of evaluation points. The result is usually more accurate as the number of evaluation points increases, or, equivalently, as the width of the step size between the points decreases. It is natural to ask what the result would be if the step size were allowed to approach zero. This can be answered by extrapolating the result from two or more nonzero step sizes, using series acceleration methods such as Richardson extrapolation. The extrapolation function may be a polynomial or rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4) and are implemented in many of the routines in the QUADPACK library.
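A minimal Python sketch of one Richardson-extrapolation step applied to the composite trapezoidal rule (the first step of Romberg's method), with an arbitrarily chosen integrand, might read:
    import math

    def composite_trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

    f, a, b = math.sin, 0.0, math.pi             # exact integral is 2
    coarse = composite_trapezoid(f, a, b, 8)
    fine = composite_trapezoid(f, a, b, 16)
    extrapolated = (4.0 * fine - coarse) / 3.0   # cancels the leading O(h^2) error term

    print(abs(coarse - 2.0), abs(fine - 2.0), abs(extrapolated - 2.0))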
Conservative (a priori) error estimation
Let f have a bounded first derivative over [a, b]. The mean value theorem, applied to f between a and a point x in [a, b], gives
(x − a) f′(v_x) = f(x) − f(a)
for some v_x between a and x, depending on x.
If we integrate in x from a to b on both sides and take the absolute values, we obtain
|∫_a^b f(x) dx − (b − a) f(a)| = |∫_a^b (x − a) f′(v_x) dx|.
We can further approximate the integral on the right-hand side by bringing the absolute value into the integrand, and replacing the term in f′ by an upper bound:
|∫_a^b f(x) dx − (b − a) f(a)| ≤ ((b − a)² / 2) · sup_{a ≤ x ≤ b} |f′(x)|,
where the supremum was used to approximate.
Hence, if we approximate the integral ∫_a^b f(x) dx by the one-point quadrature rule (b − a) f(a), our error is no greater than the right-hand side of this bound. We can convert this into an error analysis for the left-endpoint Riemann sum with n equal subintervals, giving an upper bound of
((b − a)² / (2n)) · sup_{a ≤ x ≤ b} |f′(x)|
for the error term of that particular approximation. (Note that this is precisely the error we calculated for the earlier example.) Using more derivatives, and by tweaking the quadrature, we can do a similar error analysis using a Taylor series (using a partial sum with remainder term) for f. This error analysis gives a strict upper bound on the error, if the derivatives of f are available.
This integration method can be combined with interval arithmetic to produce computer proofs and verified calculations.
Integrals over infinite intervals
Several methods exist for approximate integration over unbounded intervals. The standard technique involves specially derived quadrature rules, such as Gauss–Hermite quadrature for integrals on the whole real line and Gauss–Laguerre quadrature for integrals on the positive reals. Monte Carlo methods can also be used, or a change of variables to a finite interval; e.g., for the whole line one could use the substitution x = t/(1 − t²), which maps the finite interval (−1, 1) onto the whole real line, and for semi-infinite intervals one could use x = a + t/(1 − t), which maps (0, 1) onto (a, ∞), as possible transformations.
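For illustration, the following Python sketch applies Gauss–Hermite quadrature, with nodes and weights from NumPy, to an integral over the whole real line whose exact value is known; the integrand is an arbitrary example:
    import numpy as np

    nodes, weights = np.polynomial.hermite.hermgauss(20)

    # Approximates the integral of exp(-x**2) * cos(x) over the real line,
    # whose exact value is sqrt(pi) * exp(-1/4).
    approx = np.sum(weights * np.cos(nodes))
    exact = np.sqrt(np.pi) * np.exp(-0.25)
    print(approx, abs(approx - exact))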
Multidimensional integrals
The quadrature rules discussed so far are all designed to compute one-dimensional integrals. To compute integrals in multiple dimensions, one approach is to phrase the multiple integral as repeated one-dimensional integrals by applying Fubini's theorem (the tensor product rule). This approach requires the function evaluations to grow exponentially as the number of dimensions increases. Three methods are known to overcome this so-called curse of dimensionality.
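For illustration, a minimal Python sketch of a two-dimensional tensor-product rule, built from a one-dimensional Gauss–Legendre rule and applied to an arbitrarily chosen separable integrand, might read:
    import numpy as np

    x, w = np.polynomial.legendre.leggauss(8)   # 1-D nodes and weights
    X, Y = np.meshgrid(x, x)                    # all pairs of nodes
    W = np.outer(w, w)                          # corresponding product weights

    approx = np.sum(W * np.exp(X + Y))          # integrate exp(x + y) over [-1, 1]^2
    exact = (np.e - np.exp(-1.0)) ** 2          # separable, so the exact value factorizes
    print(approx, abs(approx - exact))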
A great many additional techniques for forming multidimensional cubature integration rules for a variety of weighting functions are given in the monograph by Stroud.
Integration on the sphere has been reviewed by Hesse et al. (2015).
Monte Carlo
Monte Carlo methods and quasi-Monte Carlo methods are easy to apply to multi-dimensional integrals. They may yield greater accuracy for the same number of function evaluations than repeated integrations using one-dimensional methods.
A large class of useful Monte Carlo methods are the so-called Markov chain Monte Carlo algorithms, which include the Metropolis–Hastings algorithm and Gibbs sampling.
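A minimal Python sketch of plain Monte Carlo integration over the unit hypercube, with an arbitrarily chosen integrand, dimension, and sample size, might read:
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_samples = 6, 200_000

    samples = rng.random((n_samples, d))                  # uniform points in [0, 1]^d
    estimate = np.prod(np.cos(samples), axis=1).mean()    # average of the integrand
    exact = np.sin(1.0) ** d                              # product of 1-D integrals of cos
    print(estimate, abs(estimate - exact))                # error shrinks like 1/sqrt(N)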
Sparse grids
Sparse grids were originally developed by Smolyak for the quadrature of high-dimensional functions. The method is always based on a one-dimensional quadrature rule, but performs a more sophisticated combination of univariate results. However, whereas the tensor product rule guarantees that the weights of all of the cubature points will be positive if the weights of the quadrature points were positive, Smolyak's rule does not guarantee that the weights will all be positive.
Bayesian quadrature
Bayesian quadrature is a statistical approach to the numerical problem of computing integrals and falls under the field of probabilistic numerics. It can provide a full handling of the uncertainty over the solution of the integral expressed as a Gaussian process posterior variance.
Connection with differential equations
The problem of evaluating the definite integral
F(x) = ∫_a^x f(u) du
can be reduced to an initial value problem for an ordinary differential equation by applying the first part of the fundamental theorem of calculus. By differentiating both sides of the above with respect to the argument x, it is seen that the function F satisfies
F′(x) = f(x),  F(a) = 0.
Numerical methods for ordinary differential equations, such as Runge–Kutta methods, can be applied to the restated problem and thus be used to evaluate the integral. For instance, the standard fourth-order Runge–Kutta method applied to the differential equation yields Simpson's rule from above.
The differential equation has a special form: the right-hand side contains only the independent variable (here ) and not the dependent variable (here ). This simplifies the theory and algorithms considerably. The problem of evaluating integrals is thus best studied in its own right.
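For illustration, the following Python sketch confirms that one classical fourth-order Runge–Kutta step for this special form reproduces Simpson's rule; the test integrand is an arbitrary cubic, for which both are exact:
    def rk4_step(f, a, b):
        # One RK4 step for F'(x) = f(x), F(a) = 0, over the whole interval [a, b].
        h = b - a
        k1 = f(a)
        k2 = f(a + h / 2.0)
        k3 = f(a + h / 2.0)      # identical to k2 because f does not depend on F
        k4 = f(b)
        return (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    def simpson(f, a, b):
        return ((b - a) / 6.0) * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

    cubic = lambda x: x ** 3
    print(rk4_step(cubic, 0.0, 2.0), simpson(cubic, 0.0, 2.0))   # both print 4.0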
Conversely, the term "quadrature" may also be used for the solution of differential equations: "solving by quadrature" or "reduction to quadrature" means expressing its solution in terms of integrals.
See also
Truncation error (numerical integration)
Clenshaw–Curtis quadrature
Gauss-Kronrod quadrature
Riemann Sum or Riemann Integral
Trapezoidal rule
Romberg's method
Tanh-sinh quadrature
Nonelementary Integral
References
Philip J. Davis and Philip Rabinowitz, Methods of Numerical Integration.
George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler, Computer Methods for Mathematical Computations. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 5.)
Josef Stoer and Roland Bulirsch, Introduction to Numerical Analysis. New York: Springer-Verlag, 1980. (See Chapter 3.)
Boyer, C. B., A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach, New York: Wiley, 1989 (1991 pbk ed. ).
Eves, Howard, An Introduction to the History of Mathematics, Saunders, 1990, ,
S.L.Sobolev and V.L.Vaskevich: The Theory of Cubature Formulas, Kluwer Academic, ISBN 0-7923-4631-9 (1997).
External links
Integration: Background, Simulations, etc. at Holistic Numerical Methods Institute
Lobatto Quadrature from Wolfram Mathworld
Lobatto quadrature formula from Encyclopedia of Mathematics
Implementations of many quadrature and cubature formulae within the free Tracker Component Library.
SageMath Online Integrator
Numerical analysis | Numerical integration | Mathematics | 3,314 |
31,583,410 | https://en.wikipedia.org/wiki/Elitzur%27s%20theorem | In quantum field theory and statistical field theory, Elitzur's theorem states that in gauge theories, the only operators that can have non-vanishing expectation values are ones that are invariant under local gauge transformations. An important implication is that gauge symmetry cannot be spontaneously broken. The theorem was first proved in 1975 by Shmuel Elitzur in lattice field theory, although the same result is expected to hold in the continuum limit. The theorem shows that the naive interpretation of the Higgs mechanism as the spontaneous symmetry breaking of a gauge symmetry is incorrect, although the phenomenon can be reformulated entirely in terms of gauge invariant quantities in what is known as the Fröhlich–Morchio–Strocchi mechanism.
Theory
A field theory admits different types of symmetries, with the two most common ones being global and local symmetries. Global symmetries are fields transformations acting the same way everywhere while local symmetries act on fields in a position dependent way. The latter correspond to redundancies in the description of the system. This is a consequence of Noether's second theorem which states that each local symmetry degree of freedom corresponds to a relation among the Euler–Lagrange equations, making the system underdetermined. Underdeterminacy requires gauge fixing of the non-propagating degrees of freedom so that the equations of motion admit a unique solution.
Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetry. In that case there will exist a local operator that is non-invariant under the symmetry giving it a nonzero vacuum expectation value. Such non-invariant local operators always have vanishing vacuum expectation values for finite size systems, prohibiting spontaneous symmetry breaking. This occurs because over large timescales, finite systems always transition between all possible ground states, averaging away the expectation value of the operator.
While spontaneous symmetry breaking can occur for global symmetries, Elitzur's theorem states that the same is not the case for gauge symmetries; all vacuum expectation values of gauge non-invariant operators are vanishing, even in systems of infinite size. On the lattice this follows from the fact that integrating gauge non-invariant observables over a group measure always yields zero for compact gauge groups. Positivity of the measure and gauge invariance are sufficient to prove the theorem. This is also an explanation for why gauge symmetries are mere redundancies in lattice field theories, where the equations of motion need not define a well-posed problem as they do not need to be solved. Instead, Elitzur's theorem shows that any observable that is not invariant under the symmetry has a vanishing expectation value, making it unobservable and therefore redundant.
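As a toy numerical illustration of this group-averaging argument (a sketch only: four U(1) links around a single square plaquette with an arbitrary fixed configuration, not a full lattice simulation), the following Python fragment averages observables over random local gauge rotations:
    import numpy as np

    rng = np.random.default_rng(1)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=4)    # fixed link angles around the square

    # Random gauge angles at the four sites; each link transforms as
    # theta_ij -> theta_ij + alpha_i - alpha_j around the loop.
    alpha = rng.uniform(0.0, 2.0 * np.pi, size=(100_000, 4))
    rotated = theta + alpha - np.roll(alpha, -1, axis=1)

    single_link = np.exp(1j * rotated[:, 0]).mean()       # gauge non-invariant observable
    plaquette = np.exp(1j * rotated.sum(axis=1)).mean()   # product around the closed loop

    print(abs(single_link))                     # -> 0 up to Monte Carlo noise
    print(plaquette, np.exp(1j * theta.sum()))  # invariant: unchanged by the averaging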
Showing that a system admits spontaneous symmetry breaking requires introducing a weak external source field that breaks the symmetry and gives rise to a preferred ground state. The system is then taken to the thermodynamic limit after which the external source field is switched off. If the vacuum expectation value of symmetry non-invariant operators is nonzero in this limit then there is spontaneous symmetry breaking. Physically it means that the system never leaves the original ground state into which it was placed by the external field. For global symmetries this occurs because the energy barrier between the various ground states is proportional to the volume, so in the thermodynamic limit this diverges, locking the system into the ground state. Local symmetries get around this construction because the energy barrier between two ground states depends only on local features so transitions to different gauge related ground states can occur locally and does not require the field to change everywhere at the same time as it does for global symmetries.
Limitations and implications
There are a number of limitations to the theorem. In particular, spontaneous symmetry breaking of a gauge symmetry is allowed in a system with infinite spatial dimensions or a symmetry with an infinite number of variables, since in these cases there are infinite energy barriers between gauge related configurations. The theorem also does not apply to residual gauge degrees of freedom nor large gauge transformations, which can in principle be spontaneously broken. Furthermore, all current proofs rely on a lattice field theory formulation so they may be invalid in a genuine continuum field theory. It is therefore in principle plausible that there may exist exotic continuum theories for which gauge symmetries can be spontaneously broken, although such a scenario remains unlikely due to the absence of any known examples.
Landau's classification of phases uses expectation values of local operators to determine the phase of the system. However, Elitzur's theorem shows that this approach is inadmissible in certain systems such as Yang–Mills theories for which no local operator can act as an order operator for confinement. Instead, getting around the theorem requires constructing nonlocal gauge invariant operators, whose expectation values need not be zero. The most common ones are Wilson loops and their thermal equivalents, Polyakov loops. Another nonlocal operator that acts as an order operator is the 't Hooft loop.
Since gauge symmetries cannot be spontaneously broken, this calls into question the validity of the Higgs mechanism. In the usual presentation, the Higgs field has a potential that appears to give the Higgs field a non-vanishing vacuum expectation value. However, this is merely a consequence of imposing a gauge fixing, usually the unitary gauge. Any value of the vacuum expectation value can be acquired by an appropriate gauge fixing choice. Calculating the expectation value in a gauge invariant way always gives zero, in agreement with Elitzur's theorem. The Higgs mechanism can however be reformulated entirely in a gauge invariant way in what is known as the Fröhlich–Morchio–Strocchi mechanism which does not involve spontaneous symmetry breaking of any symmetry. For non-abelian gauge groups that have a subgroup, this mechanism agrees with the Higgs mechanism, but for other gauge groups there can appear discrepancies between the two approaches.
Elitzur's theorem can also be generalized to a larger notion of local symmetries where, in a D-dimensional space, there can be symmetries that act uniformly on d-dimensional hyperplanes. In this view, global symmetries act on D-dimensional hyperplanes while local symmetries act on 0-dimensional ones. The generalized Elitzur's theorem then provides bounds on the vacuum expectation values of operators that are non-invariant under such d-dimensional symmetries. This theorem has numerous applications in condensed matter systems where such symmetries appear.
See also
Mermin–Wagner theorem
References
External links
Notes on lattice gauge theory by A. Muramatsu
Gauge theories
Lattice field theory
Symmetry
Theorems in quantum mechanics
Statistical mechanics theorems | Elitzur's theorem | Physics,Mathematics | 1,366 |
7,661,567 | https://en.wikipedia.org/wiki/European%20Forum%20for%20Good%20Clinical%20Practice | The European Forum for Good Clinical Practices (EFGCP) is a European think tank which works on the ethical, regulatory, and scientific framework of clinical research in Europe. The EFGCP is committed to the development of the standards for the protection of human subjects and data quality in clinical trials, both in Europe and abroad.
See also
European Clinical Research Infrastructures Network (ECRIN)
European Medicines Agency (EMEA, EU)
European and Developing Countries Clinical Trials Partnership (EDCTP)
EUDRANET
EudraVigilance
Good Clinical Practice (GCP)
Harmonization in clinical trials
International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH)
Inverse benefit law
Quality assurance
Standing operating procedure
External links
European Forum for Good Clinical Practices
‘The Procedure for the Ethical Review of Protocols for Clinical Research Projects in the European Union'
European clinical research
Clinical trials
European medical and health organizations
Pharmaceutical industry
National agencies for drug regulation
International organisations based in Belgium | European Forum for Good Clinical Practice | Chemistry,Biology | 196 |
53,969,515 | https://en.wikipedia.org/wiki/Eupenicillium%20shearii | Eupenicillium shearii is a fungus in the genus Penicillium. The type strain was first isolated in 1931 by Dr. Otto R. Reinking from a soil sample taken in Honduras. It has also been isolated from soil collected in the Democratic Republic of the Congo and near Abidjan. It was named and described in 1967.
E. shearii is of interest to medicinal chemists due to its production of kaitocephalin, a substance that may protect the brain and nervous system. Therefore, kaitocephalin is an attractive scaffold for drug development. Drugs based on this compound may be used to treat neurological conditions including Alzheimer’s, amyotrophic lateral sclerosis (ALS), and stroke.
See also
List of Penicillium species
References
Penicillium
Fungi described in 1931
Fungus species | Eupenicillium shearii | Biology | 175 |
53,936,277 | https://en.wikipedia.org/wiki/NGC%205026 | NGC 5026 is a barred spiral galaxy or lenticular galaxy in the constellation of Centaurus. It was discovered on 5 June 1834 by John Herschel. It was described as "pretty bright, pretty large, round, gradually brighter middle" by John Louis Emil Dreyer, the compiler of the New General Catalogue.
References
Notes
Barred spiral galaxies
Centaurus
5026
046023
Discoveries by John Herschel | NGC 5026 | Astronomy | 82 |
36,370,190 | https://en.wikipedia.org/wiki/Konstanze%20Kr%C3%BCger | Konstanze Krüger-Farrouj (née Konstanze Deubner, born 22 January 1968) is a German zoologist and behaviour researcher. She is Professor of Horse Management at Nürtingen-Geislingen University of Applied Science, and her special field of research is the social system of horses.
Scientific career
Krüger studied veterinary medicine at the Ludwig Maximilian University of Munich (LMU). After completing her studies in 1996, she accepted a position as scientific assistant at the Institute for Animal Anatomy and Histology at the LMU in Munich.
From April 1999 to February 2006, she ran the Einthal Equestrian Park in Obertraubling, together with her husband. From June 2004, she held a research position at Biology 1 in the department of Zoology at the University of Regensburg, researching social learning and social cognition in horses. In October 2008 she organised the 1st International Equine Science Meeting. On 1 March 2012, she became Germany's first Professor of Horse Management, taking charge of the department at the Nürtingen-Geislingen University of Applied Science. In March 2012, she organised the 2nd International Equine Science Meeting at the University of Regensburg, sponsored by the DFG. In October 2012, she was promoted to the position of Professor of Zoology at the University of Regensburg.
Research focus
Social cognition in horses
Social ecology of horses
Innovative behaviour in horses
Methods
Long-term field studies
Social network analysis
Hierarchy calculations
Behaviour tests
Practical application of research in equestrian sport and horse management
Behaviour of horses in the "round pen technique"
The behaviour of horses in the Join-Up-Method is a learned response specific to a particular location, and not a natural "language" as claimed by Monty Roberts in his books. The research explains how and why this training can be generalised to other people and places and therefore be an important tool in the training of horses.
Horse sense: social status of horses (Equus caballus) affects their likelihood of copying other horses
This research supports the opinion of many equestrian experts that it is wise to use an experienced horse to demonstrate new exercises to horses in training. It also reveals that only horses from the same social group are suitable demonstrators.
Visual laterality in the domestic horse (Equus caballus) interacting with humans
According to the situation, horses sometimes prefer to observe things with their left eye, and sometimes with the right. For everyday training purposes, it is best to allow a horse to observe a potentially dangerous object with its left eye until it calms down.
Third-party interventions keep social partners from exchanging affiliative interactions with others
This study demonstrates that it is important when keeping horses in groups to ensure that the groups are composed of horses of mixed ages.
Scientific importance of the research
For the last 30 years, horses have been described as being incapable of demonstrating social learning (Baer et al. 1983; Baker and Crawford 1986; Clarke et al. 1996; Lindberg et al. 1999) because the social complexity of horses was underestimated and the experimental designs were therefore not suitably constructed. In several studies this was taken into account, and for the first time horses did show social learning. This has important implications for other species in which social learning has similarly not been shown. The experimental design for these species should now be reconsidered and new experiments conducted in which the social aspects of the species are taken into account.
Publications
Key publications in peer-reviewed journals
Krueger, K., Flauger, B., Farmer, K., & Hemelrijk, C. (2014). Movement initiation in groups of feral horses. Behav. Process., 103, 91–101.
Krueger K, Farmer K, Heinze J (2013) The effects of age, rank and neophobia on social learning in horses. Anim. Cogn 17, 645-655
Schneider, G.; Krueger, K. (2012) Third-party interventions keep social partners from exchanging affiliative interactions with others Anim. Behav. 83 377–387.
Krueger, K; Farmer, K. (2011) Laterality in the Horse [Lateralität beim Pferd ] mup 4 160–167.
Krueger, K.; Flauger, B.; Farmer, K.; Maros, K. (2011) Horses (Equus caballus) use human local enhancement cues and adjust to human attention Anim. Cogn. 14 187–201.
Farmer, K.; Krueger, K.; Byrne, R. (2010) Visual laterality in the domestic horse (Equus caballus) interacting with humans Anim. Cogn. 13 229–238.
Krueger, K.;Heinze, J. (2008) Horse sense: social status of horses (Equus caballus) affects their likelihood of copying other horses' behavior Anim. Cogn. 11 431–439.
Krueger, K.; Flauger, B. (2008) Social feeding decisions in horses (Equus caballus) Behav. Process. 78 76–83.
Krueger, K.; Flauger, B. (2007) Social learning in horses from a novel perspective Behav. Process. 76 37–39.
Krueger, K. (2007) Behaviour of horses in the "round pen technique" Appl. Anim. Behav. Sci. 104 162–170.
Books
Trainingslehre Für Dressurpferde
Das Pferd im Blickpunkt der Wissenschaft
Articles in specialist publications
Krueger K. 2008–2009. Journal Bayerns Pferde Zucht und Sport,
Der Linksdrall: Sensorische Einseitigkeit bei Pferden 2008, 10, pp. 66–68
Pferdeverhalten: So integriere ich mein Pferd in die Herde. 2008, 12, pp. 64–69
Das Sozialsystem des Pferdes: Das Know-how für den täglichen Umgang. 2009, 1, pp. 72–76
Das Sozialsystem des Pferdes, Teil II: Führungspersönlichkeiten. 2009, 2, pp. 76–80
Visuelle Fähigkeiten der Pferde. 2009
Die Erkennung von Artgenossen und Menschen. 2009
Das Gedächtnis der Pferde, 2009, 7
Die Unarten des Pferdes, 2009, 9, pp. 84–89
Charakterpferde, 2009, 12
Lectures as invited speaker at conferences
Opening Speaker 43. Internationale Tagung Angewandte Ethologie
Die sensorische Lateralität als Indikator für emotionale und kognitive Reaktionen auf Umweltreize beim Tier (Übersichtreferat)
References
External links
Homepage Konstanze Krüger
Homepage Konstanze Krüger at Nürtingen-Geislingen University of Applied Science
Research Project "Innovative Behaviour in Horses"
20th-century German zoologists
Ethologists
1968 births
Living people
21st-century German zoologists
Scientists from Cologne | Konstanze Krüger | Biology | 1,505 |
1,172,401 | https://en.wikipedia.org/wiki/List%20of%20ZX%20Spectrum%20clones | The following is a list of clones of Sinclair Research's ZX Spectrum home computer. This list includes both official clones (from Timex Corporation) and many unofficial clones, most of which were produced in Eastern Bloc countries. The list does not include computers which require additional hardware or software to become ZX-compatible.
Many software emulators can fully or partially emulate some clones as well.
Official
The only official clones of the Spectrum were made by Timex. There were three models developed, only two of which were released:
Timex Sinclair 2068
The Timex Sinclair 2068 or T/S 2068 (also known as TC 2068 or UK 2086) was a significantly more sophisticated machine than the original Spectrum. The most notable changes were the addition of a cartridge port, an AY-3-8912 sound chip, and an improved ULA giving access to better graphics modes. The T/S 2068 was produced for consumers in the United States, while very similar machines were marketed in Portugal and Poland as the Timex Computer 2068 (TC 2068) and Unipolbrit Komputer 2086 (UK 2086) respectively. A small number of TC 2068s were also sold in Poland.
Timex Computer 2048
The Timex Computer 2048 or TC 2048 was a similar machine to the Spectrum 48K, but with the improved ULA from the TC 2068 (allowing access to the improved graphics modes), Kempston joystick port, and composite video output. Marketed only in Portugal and Poland.
Timex Sinclair 2048
The Timex Sinclair 2048 or T/S 2048 was a never-released variant of the T/S 2068 with 16 KB of RAM.
Inves Spectrum +
A clone of the ZX Spectrum+ developed by Investrónica in Spain in 1986, the Inves Spectrum + was based on the work developed by the company on the ZX Spectrum 128. Released just after Amstrad bought Sinclair Research Ltd, it looked much like a regular ZX Spectrum+, but all the internal components were redesigned. As the ROM was also modified, it has compatibility problems with some games – notably Bombjack, Commando, and Top Gun. A Kempston joystick port was fitted on the rear of the machine.
Because Investrónica was the distributor of Sinclair's products in Spain, and Amstrad already had its own exclusive distributor there (Indescomp, later bought by Amstrad itself), Amstrad sued Investrónica in 1987 to stop sales of the computer. The court agreed with Amstrad, but the decision was not issued until 1991, by which time the computer had been discontinued as the Spanish 8-bit computer market gave way to 16-bit machines.
Decibels dB Spectrum+
The Decibels dB Spectrum+ was an official clone of the ZX Spectrum+ for the Indian market, introduced in 1988 by Decibels Electronics Limited, selling over 50,000 units and achieving an 80% market share.
Unofficial
British
Harlequin
A British clone of the 48K ZX Spectrum, the Harlequin was designed and developed by Chris Smith to aid the reverse engineering of the ZX Spectrum's custom ULA chip and to document that research. Completed in 2008, it is the first 100% timing-compatible clone. Until 2012/13 the Harlequin existed only as a breadboard prototype, but José Leandro Martínez, Ingo Truppel, and others have since produced a limited number of PCB versions as exact board replacements for an actual ZX Spectrum.
Czech & Czechoslovak
Bobo64
The Bobo64 was an advanced Czech computer compatible with the ZX Spectrum, developed by Václav Daněček between 1986 and 1987. It has many enhancements over the original ZX Spectrum, including 256 × 256 graphics with attributes per 8 × 1 pixels, and 512 × 256 graphics. Unlike other Czechoslovak home-made ZX Spectrum clones, the Bobo64 gained some popularity, and was built by dozens of enthusiasts.
Didaktik series
The Didaktik was a series of home computers produced by Didaktik in Skalica, in the former Czechoslovakia.
The first model compatible with the ZX Spectrum was the Didaktik Gama, based on the U880 or Zilog Z80 processors and the original ULA chip. It was produced in three variants between 1987 and 1989. The Gama has a built-in 8255 chip (used for the Kempston joystick, and also as a printer port) and 80 KB RAM, adding an alternative memory bank from the address 32768 to 65535.
The Gama was followed by the cheaper Didaktik M (first variant released in 1990; the second in 1991). The model M had a modernised case, Sinclair and Kempston Joystick ports, and a keyboard with cursors and reset key. The design, however, was of lower quality than the Gama. Its screen aspect ratio and display timing are different from the original ZX Spectrum because the M uses a different ULA chip, compatible with the Belarusian clone Baltik. It ran at 4 MHz. The final model was the Didaktik Kompakt (1991) which integrated all previous M hardware with a 3.5″ floppy disk drive.
Unlike previous versions, the Didaktik 192K was an amateur project, partly combining the hardware of the Didaktik Gama and the ZX Spectrum 128K.
Krišpín
The Krišpín was a Czechoslovak clone of the ZX Spectrum, developed in 1984 by František Kubiš, a student at the EF SVŠT (Electrotechnical Faculty of the Slovak Technical University) in Bratislava. The ULA was designed using discrete 74xx ICs, which allowed the screen portion of RAM to be synchronised perfectly, without CPU blocking.
MISTRUM
Another Czechoslovakian clone of the 48K ZX Spectrum, the MISTRUM, was supplied in kit form. The ROM was modified to include letters with Czech diacritic marks. An article on how to build a Mistrum was published in the Czechoslovak amateur radio magazine Amatérské Radio 1/89.
Nucleon
The Nucleon was a Czech clone of the Pentagon 512K, made by CSS Electronics.
Sparrow 48K
The Sparrow 48K is the first modern clone of the ZX Spectrum designed to replace the original motherboard in standard and Spectrum+ case. Production commenced in 2013. In addition to the use of the original ULA chip, this clone was heavily modernised, replacing part of the larger glue logic with one CPLD chip, the entire main memory with one SRAM chip, and all 8 video memory chips with a second SRAM. The TV modulator has been dropped in favour of a video signal, and the PSU was changed and improved. The Sparrow also offers a larger ROM, which can be increased by 16 KB via a switch or a jumper. The successor is the Sparrow SX, with software ROM switching and RTC.
East German
HCX
The HCX was a Spectrum clone developed at the Technical University of Magdeburg in 1988.
RR-Spectrum
The RR-Spectrum was a privately built East German clone of the ZX Spectrum.
Spectral
Spectral was another East German clone of the ZX Spectrum. It came with a built-in joystick interface, and either 48 or 128 KB RAM. It was sold in kit form by Hübner Elektronik.
Hungarian
HT 3080C
The HT 3080C was a Hungarian ZX Spectrum clone made by Híradástechnikai Szövetkezet (Telecommunication Technology Cooperative), and released in 1986. It was the third computer from the company. The two first computers (HT 1080Z and HT 2080Z) were clones of the TRS-80, and were unsuccessful because of the poor graphics features and high price. They were both school computers.
In 1986, Hungarian school computers were required to meet two criteria: produce high resolution graphics, and support letters with Hungarian diacritic marks. The HT 3080C was produced to satisfy both these criteria, and was also designed to be compatible with the previous HT machines, with the option of switching between TRS-80 and ZX Spectrum mode. It had a graphics resolution of 256 x 192 (the ZX Spectrum standard) and an AY-chip for sound (for compatibility with previous HT machines).
It featured a 32 KB ROM, 64 KB RAM, and (uniquely) a Commodore serial port which enabled the connection of peripherals for the C64 (e.g. the 1541 floppy disk drive).
Polish
Elwro 800 Junior
The Elwro 800 Junior was a Polish clone of the ZX Spectrum produced by ELWRO for use in schools. It ran a special version of CP/M called CP/J. The computer had a full size keyboard, and even a paper/document holder. The reason for the latter is that the machine shares the same case as the Elwirka electronic keyboard, which had provisions for holding sheet music. Peripherals were attached to the computer using a mix of DIN and D-subminiature connectors.
ELWRO had developed a local area network protocol called JUNET (JUnior NETwork) for use with the machines which operated on a basis not unlike MIDI, in which one DIN cable was used to receive data, and another to send it. In this manner, the teacher was able to monitor what all the students in the class were doing on their computers.
The updated Elwro 804 Junior PC had an internal 3.5″ disk drive.
Portuguese
IODO
The IODO (Issue One Dot One) was created in Portugal by Consultório da Paula (now PSiTech) on 09/01/2019. It is a clone of the original 16 KB ZX Spectrum issue one, and is on display at the LOAD ZX museum in Cantanhede, Portugal.
Romanian
CoBra
The CoBra (COmputer BRAsov) was a ZX Spectrum clone built in Braşov, Romania in 1988. ROM contained the OPUS and CP/M operating systems.
CIP series
The CIP machines are Romanian ZX Spectrum clones made by Intreprinderea Electronica. CIP stands for Calculator pentru Instruire Personala ("Computer for Personal Education"). The ROM is an original Sinclair ROM, modified to display 'BASIC S' in place of the standard Sinclair copyright message. Only one set of 8 × 1-bit 64 KB RAM modules is present.
The initial version, the CIP-02, had a low-quality 2 KB EEPROM with a propensity for fast data loss, and BASIC had to be loaded from tape. The CIP-03 was a version with an EEPROM designed to work with the 3 tape data densities at speeds up to 3 times higher than the original, and its 2K ROM was also capable of loading and saving at those speeds, using the whole 64K as storage. The top data density was often hit and miss; very good magnetic tape had to be used, and a special monophonic cassette recorder could be bought separately for best results. Produced from 1988 to 1993, it was a common clone in Romania, with about 15,000 units made. The CIP-04 was a ZX Spectrum +3 clone with a built-in floppy disk drive and 256 KB RAM.
Felix HC series
Felix HC are a series of ZX Spectrum clones manufactured in Romania from 1985 to 1994 by ICE Felix. The HC designation stands for Home Computer, and for the first four models in the series, the number indicates the year of first manufacture. Models in the series were: HC 85, HC 88, HC 90, HC 91, HC91+ (HC128), HC 2000 and HC386.
The earliest version (HC 85) closely resembled the Spectrum, with a built-in BASIC interpreter, Z80A processor, 48 KB RAM, tape, and TV interfaces. It was used in schools/universities, and as a personal computer.
An optional Interface 1 expansion was available for the HC 85, HC 90, and HC 91. It was functionally similar to the ZX Interface 1, but instead of Microdrives it supported single-density or double-density floppy disks.
The HC 90 had a redesigned circuit board supporting fewer, larger memory chips; it was functionally equivalent to the HC 85.
The HC 91 had a modified keyboard with 50 keys instead of 40. It had 64 KB RAM, and extra circuitry which provided CP/M support if the Interface 1 expansion was also present.
The HC 2000 (manufactured from 1992 to 1994) had a built-in 3.5-inch 720 KB floppy disk drive, and 64 KB RAM. It could be used both as a Spectrum clone with added disk functionality (only 48 KB RAM available) or in CP/M mode, giving access to the full 64 KB memory. Essentially, it consolidated the HC 91, Interface 1, and floppy disk drive into a single case.
The last model to be made in the Z80 line was the HC91+. It was a ZX Spectrum 128K clone in a HC91 case and keyboard, and had some compatibility problems. For the first time, the AY-8910 sound chip was offered as an add-on service, and was soldered on the board by factory technicians. Demoscene demos had problems running multi-colour effects, and displaying sound VU meter-like effects, through lack of data in the AY chip.
JET
JET was a Romanian clone from 1989 produced by Electromagnetica. JET is an acronym for Jocuri Electronice pe Televizor (Electronic Games on Television).
Timisoara series
The Timisoara series were Romanian ZX Spectrum clones developed at the University of Timișoara. The name of the first model, the TIM-S, is a portmanteau of Timişoara and Spectrum; it had power (ALIM), parallel, and serial connectors, as well as ports for connecting a cassette recorder and a television set. Later models (microTIM, microTIM+ and TIM-S+) were equipped with a joystick port, and came with 128 KB RAM and an AY-3-8912 sound chip. Production continued into the early 1990s.
Sages
The Sages V1 was a ZX Spectrum clone with audio and joystick connectors placed on the front of the case, and a keyboard similar to that of the ICE Felix HC-85K.
Pandora
Pandora was a ZX Spectrum clone sold by a private engineer from Buzău. It had a larger EPROM that allowed switching between classic Spectrum mode and a customised mode (characters using a bold typeface, a Pandora message displayed on startup, etc.).
South American
Czerweny CZ
The Czerweny CZ 2000, Czerweny CZ Spectrum and Czerweny CZ Spectrum Plus were Argentinian ZX Spectrum clones which were produced from 1985 until an electrical fire destroyed the factory in Paraná in June 1986.
Microdigital TK90X
The TK90X was the first Brazilian ZX Spectrum clone. It was launched in 1985 by Microdigital Eletronica, a company located in São Paulo, Brazil, which had previously manufactured ZX81 clones (TK82, TK82C, TK83, and TK85) and a ZX80 clone (TK80). The ROM was hacked to include a UDG editor, and accented characters. In spite of this, incompatibility issues with ZX Spectrum software are very rare. The keyboard membrane is more durable than that found on the original ZX Spectrum 48K. The TK90X also features a Sinclair-compatible joystick port.
Microdigital TK95
The TK95 microcomputer was the successor to the TK90X. Launched in November 1986, its improvements were largely cosmetic, as it uses exactly the same PCB as the TK90X, but had its ROM capacity increased to 16 KB.
South Korean
Samsung SPC-650
The Personal Computer SPC-650 was a South Korean clone of the ZX Spectrum+ by Samsung, with a similar design to the original machine.
Soviet/Russian
ALF TV Game
A game console based on the ZX Spectrum 48, developed by the Brest Special Design Bureau "Zapad" and produced by the "Tsvetotron" plant. Each cartridge is a board with ROM chips and a page decoder (the cartridge is accessed through 16K pages).
AZX-Monstrum
A Spectrum-compatible computer based on the Zilog Z380 (a 32-bit version of the Z80, capable of running at 40 MHz). Development started in 1999 and was abandoned in 2001.
Anbelo/C
The Anbelo/C was produced both as a kit for assembly and as a finished computer by the Research Institute of Precision Technology (Zelenograd), the Angstrem plant, and the Anbelo MGP (Belozersky).
Arus
The Arus (ru: Арус) is a ZX Spectrum clone based on the Pentagon. Developed in the early 1990s, it was produced at the Iset plant in Kamensk-Uralsky. It supports the Russian language in the BASIC interpreter and the TR-DOS operating system.
ATM Turbo
The ATM Turbo (ru: АТМ-ТУРБО) was developed in Moscow in 1991 by two companies: MicroArt and ATM. It featured a 7 MHz Z80 processor, 1024 KB RAM, 128 KB ROM, AY-8910 sound chip (two were fitted in upgraded models), 8-bit DAC, 8-channel ADC, RS-232 and Centronics ports, Beta Disk Interface, IDE interface, AT/XT keyboard, text mode (80×25, 16 possible colours in an 8×8 pattern), and two additional resolutions of 320 x 200 and 640 x 200 pixels. A substantial part of the ATM design was transferred to the Baseconf core of the ZX-Evolution computer.
Baltica
Baltica (or Baltic, ru: Балтик) was a Soviet clone of the 48K ZX Spectrum. Its Z80 CPU ran at a higher frequency (4 MHz), which made it less compatible. It was first released in 1988 by a company named Sonet from Minsk, and different versions exist with hardware and operating systems expanded compared to the original ZX Spectrum.
Best III
The Best III was a ZX Spectrum clone made in St. Petersburg in 1993. The size of the system unit is 16.8 × 10 × 2 inches. Its CPU is a Russian Z80 clone.
Bi Am ZX-Spectrum 48/64 and 128
The Bi Am ZX-Spectrum 48/64 was a Russian clone of the ZX Spectrum produced between 1992 and 1994. The system unit is made of metal, and measures 10 × 8.4 × 2 inches. The Bi Am ZX-Spectrum 128 was a 128 KB version of the same computer.
Blic
Blic (ru: Блиц) or Blitz is a Soviet clone of the ZX Spectrum 48K, designed in 1990 and based on the earlier Leningrad clone. The ROM was modified to display “BLIC Home Computer” alongside three rectangles coloured blue, red, and green. The firmware contained a modified font covering both the Latin and Cyrillic alphabets. Keyboard layouts were switched between Cyrillic and Latin using POKE commands to address 23607. The layout of the Cyrillic keyboard is YaWERT (яверт) rather than the more familiar JCUKEN. The keys were made of rubber, and their size and placement were virtually identical to those on the original ZX Spectrum 48K.
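On a standard Spectrum, address 23607 is the high byte of the CHARS system variable, which tells the display routines where the 8-byte character bitmaps begin (minus 256); assuming the Blic reused this mechanism, POKEing different values into it would select which of the two fonts in its modified ROM is drawn. The following is a minimal sketch for a standard 48K Spectrum, not the Blic's actual values; the font address 64000 is purely illustrative:

 10 CLEAR 63999: REM reserve RAM above BASIC for a 768-byte replacement font at 64000
 20 REM ... copy or LOAD the alternative (e.g. Cyrillic) character bitmaps to 64000-64767 ...
 30 POKE 23606,0: POKE 23607,249: REM CHARS = 249*256 = 63744, so PRINT now uses the font at 64000
 40 POKE 23606,0: POKE 23607,60: REM point CHARS back at the standard ROM font at 15616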
Byte
Byte (ru: Байт) was a Soviet clone made in Brest by the Brest Electromechanical Plant. Introduced in 1989, it used various Z80 CPU clones such as the KR1858VM1 or T34VM1. Specifications are similar to the original Spectrum, with 48 KB of RAM. In 1992 an average of 1,705 computers were produced per month. Production ended in 1995.
BASIC and Breeze
These machines were produced at the Vladivostok plant Radiopribor, based on South Korean microchips. They were sold in a suitcase together with a cassette containing programs and games. The BASIC (ru: Бейсик) came with 48 KB of RAM, while the Breeze (ru: Бриз) was a 128 KB machine with a printer controller, disk drive, and a sound chip.
Composite
The Composite (ru: Композит) was a Russian clone of the ZX Spectrum introduced in 1993 by NTK (ru: НТК), with 48 KB RAM. It is a modified version of Leningrad 2, produced by the Composite co-op.
Dubna 48K
Dubna 48K (ru: Дубна 48К) was a 1991 Soviet clone of the ZX Spectrum home computer, named after the town of Dubna, where it was produced. It used a Zilog Z80 processor clone.
Duet
The Duet (ru: Дуэт) was a ZX Spectrum 48K clone produced at the Lianozovsky Electromechanical Plant (LEMZ, Moscow).
Ella Ra
The Ella Ra, also known as the Elara-Disk 128, was a Russian clone of the ZX Spectrum 128K made in 1991. It featured a 58-key keyboard, a floppy disk drive, and ports for both Kempston and Sinclair joysticks. Whilst it is possible to expand the system, incompatibilities may arise because some of the ports have been changed.
GrandRomMax
The GrandRomMax was a Russian clone of the ZX Spectrum made in Moscow in 1993. It is very similar to the Pentagon, but was designed to be more like the original ZX Spectrum. Several variations exist of the system, with only minor differences between them. One version has an improperly configured Beta Disk Interface, resulting in all information on the disk being destroyed when an attempt to write to it is made on a different machine.
The GrandRomMax is not easy to expand because some of its programmable logic (PLM) chips do not support the signals required for sending and receiving data to and from certain peripherals.
Grandboard 2+
Grandboard 2+ was a Russian clone of the ZX Spectrum, based on the GrandRomMax GRM2+ board. It was developed and manufactured in 1994 by the Independent Science-Manufacturing Laboratory of Computer Techniques in Fryazino.
CPU: Z-80 NEC (8-bit)
Clock frequency: 3.45 MHz
RAM: 128 KB
Text: 24 × 32, eight possible colours
Graphics: 256 × 192, eight possible colours
Sound processor: AY-8910 (YM2149F)
Dimensions: 350 × 280 × 35 mm (approximately 13.8 × 11.0 × 1.4 inches)
Gamma
The Gamma (ru: Гамма) was a ZX Spectrum 48K clone produced by OKB Processor, Voronezh, in the late 1980s. The ROM was changed from the original machine, with lowercase Latin characters replaced by Cyrillic and Sinclair BASIC messages translated into Russian.
Hobbit
Hobbit (ru: Хоббит) was a Soviet/Russian 8-bit home computer, based on the Sinclair Research ZX Spectrum hardware architecture. It also featured a CP/M mode, and Forth mode or LOGO mode, with the Forth or LOGO operating environment residing in an on-board ROM chip.
Impulse
The Impulse (ru: Импульс) series was built by the RIP plant in Krasnodar. The keyboard had Cyrillic characters and the ROM was modified. The Impulse-M model featured a built-in SECAM encoder for connecting the computer to a TV.
Iskra-1085
The Iskra-1085 (ru: Искра 1085) was a ZX Spectrum 48K clone with 64K of RAM. Developed in the second half of the 1980s, it was produced by Schetmash in Kursk. The computer had a built-in power supply.
Julduz
The Julduz (Юлдуз, meaning "star" in Azerbaijani) was a ZX Spectrum clone aimed at schools, with 64 KB of RAM.
Kay 1024
The Kay 1024 was a Russian clone of the ZX Spectrum, released by NEMO in 1998. It was intended to rival the popular Scorpion ZS 256, and had a slightly lower price despite carrying far more onboard RAM (1024 KB). It features controllers for a standard PC keyboard and an HDD, but not for FDDs; floppy support was available via an expansion card. The CPU has a turbo mode, enabling it to run at 10 MHz.
Krasnogorsk
The Krasnogorsk (ru: Красного́рск) was a Russian clone of the ZX Spectrum, named after the city in which it was built (Krasnogorsk). It was developed and manufactured in 1991, but not produced in the same quantities as the Leningrad 1.
Kvorum
The Kvorum (ru: КВОРУМ) were a series of Russian ZX Spectrum clones with three different RAM options: 48 KB (Kvorum 48); 64 KB (Kvorum 64); 128 KB (Kvorum 128). The Kvorum 128 featured built-in tests, a memory monitor, and the possibility of copying in ROM. It also had the option of running CP/M and TR-DOS (via Beta Disk).
The Kvorum 128+ had the same features as the Kvorum 128, but included a built-in 3.5″ drive.
Leningrad
Leningrad is a series of Soviet clones of the ZX Spectrum. The Leningrad 1 was released in 1988 as a clone of the 48K model, and became the cheapest of all the mass-made clones. The computer was designed to be as simple as possible, and more compact than the other clones available at the time. It was designed by Sergey Zonov, who later went on to create the Scorpion. The Leningrad 2 was released in 1991. The joystick port was changed to a Kempston-compatible one, and the keyboard was much improved. It sold in great numbers.
Master
The Master (ru: Мастер) was a Soviet clone of the ZX Spectrum made in 1990. It ran at 2.5 MHz with 48 KB RAM, and had ports for both Sinclair and Kempston joysticks.
Master K
Master K is a Russian clone of the ZX Spectrum made in Ivanovo in 1991. It featured 48 KB RAM, 16 KB ROM, and a Kempston joystick interface. The dimensions of the system unit are 14 × 8 × 2 inches, and its weight is approximately 1 kg.
Magic 05
Magic-05 or Магик-05 is a home computer, based on Soviet components. Various models were developed and produced by the UOMZ and Vector plants (Ekaterinburg).
Moskva
Moskva (ru: Москва, en: Moscow) was the name of two Soviet ZX Spectrum clones. Introduced in 1988, the Moskva 48K was the first mass-produced clone of the 48K Spectrum in the USSR. One year later, the Moskva 128K was launched, and was a faithful clone of the ZX Spectrum 128K, featuring a built-in printer interface, joystick and TV/RGB ports, but lacked a sound processor and disk drive.
Nafanja
Nafanja (ru: НАФАНЯ) was a Soviet ZX Spectrum clone from 1990, which was designed to be transported in a case. It was made for diplomats and children. It is compatible with Dubna 48K, and has a joystick port. At the time of launch, its price was 650 roubles.
Parus VI-201
The Parus VI-201 (ru: Парус ВИ-201) was a Russian ZX Spectrum clone from 1992, designed for use as a video game console; ВИ (VI) stood for видео игра (video game). It was equipped with a Zilog Z80 processor, an RF modulator, plus several DIN connectors for use with Kempston joysticks and an external cassette recorder.
Orel BK-08
The Orel BK-08 (ru: Орель БК-08) was a Ukrainian ZX Spectrum clone from 1991. It featured 64 KB of non-separate fast RAM, an NMI button, an extended keyboard, Cyrillic characters in the upper addresses of the ROM, two Sinclair joystick ports, and one Kempston joystick port available on both DIN connectors. The video signal is output via SRGB rather than an RF modulator. Memory access is uncontended (there is no conflict between the CPU and the display controller) and the display timing is the same as on the original ZX Spectrum.
Pentagon
The Pentagon (ru: Пентагон) home computer was a clone of the British-made Sinclair ZX Spectrum 128. It was manufactured by amateurs in the former Soviet Union, following freely distributable documentation. Its PCB was copied all over the ex-USSR in 1991-1996, which made it a widespread ZX Spectrum clone. The name "Pentagon" derives from the shape of the original PCB (Pentagon 48), with a diagonal cut in one of the corners.
Peters MC64 and MD-256S3
The Peters MC64 was a Russian ZX Spectrum clone from around 1993, made by Peters Plus, Ltd., who went on to make the Sprinter. Its dimensions are 14 × 7.2 × 2 inches. The Peters MD-256S3 is an enhanced version of the MC64.
Profi
The Profi or ZX-Profi is a Soviet ZX Spectrum clone developed in 1991 in Moscow by Kondor and Kramis. It features a 7 MHz Zilog Z80 CPU, up to 1024 KB RAM, 64 KB ROM, AY8910 sound chip, Beta 128 disk interface, IDE interface, and 512 x 240 multi-colour (i.e. two possible colours per 8 x 1 block) graphics mode for CP/M. Users liked to plug in two 8-bit DACs to play 4-channel modules of Scream Tracker. It also has both parallel and serial ports, and the possibility of attaching an IBM keyboard. Later models had a hard disk interface, and turbo mode.
Robik
ALU Robik was a Soviet and Ukrainian ZX Spectrum clone produced between 1989 and 1998 by NPO "Rotor" in Cherkasy (Ukraine).
Santaka 002
A ZX Spectrum+ clone produced in 1990 in Kaunas (as mentioned on the computer's startup screen), then in the Lithuanian SSR. Its keyboard features Cyrillic characters rather than Latin ones.
Scorpion ZS-256
The Scorpion ZS-256 (ru: Скорпион ЗС-256) was a very widespread ZX Spectrum clone produced in St. Petersburg by Sergey Zonov. It was fitted with a Zilog Z80 processor, whilst memory options ranged from 256 to 1024 KB. Various expansions were produced, including SMUC: an adapter for IDE and ISA slots, which allowed the use of IBM PC compatible hard drives and expansion cards. The Shadow Service Monitor (debugger) in the BASIC ROM was activated by pressing the Magic Button (NMI). There was also the option of fitting the machine with a ProfROM which included such software as a clock, hard disk utilities and the ZX-Word text editor.
Sever 48/002
Sever 48/002 (ru: Север 48/002) was a Soviet ZX Spectrum clone from 1990, whose name means 'North' (Север). It had 64 KB of RAM, and a 16 KB ROM. The dimensions of the system unit are 12 × 8 × 2 inches, and its weight is 1 kg.
Sintez and -Sintez-
The Sintez and -Sintez- are Soviet clones of the ZX Spectrum developed at the "Signal" factory (НПО «Сигнал») in the Moldavian SSR in 1989. The original Sintez resembled the Spectrum+ model, while the -Sintez- was an improved version with a more conventional mechanical keyboard, an additional serial port, and provision for an 8080-family chip (e.g. the 8255) to be added and used together with the UA 880. Whilst it is largely compatible with software for the ZX Spectrum 48K (and has two Interface 2 joystick ports), its hardware is configured differently from the machine it is based on, using a different memory chip set-up and lacking the slowdown when accessing certain areas of memory, with the result that certain applications and games may produce unexpected results, or crash altogether.
Spektr 48
Spektr 48 (ru: Спектр 48) was a Russian clone of the 48K ZX Spectrum, produced in 1991 by Oryol (Орёл). It used a membrane keyboard featuring both Latin and Cyrillic letters, and came with a monitor program in ROM.
Symbol
The Symbol (ru: Симбол) was a Russian clone of the ZX Spectrum, produced by JSC "Radiozavod" in Penza from 1990 to 1995.
Vega
The Vega-64 and Vega-128 were produced in Odesa by the VPO Prometheus from 1990 to 1991. They were used as school computers, and supported both Cyrillic and Latin character sets.
Vesta
Vesta (ru: Веста) was a series of machines produced by the Stavropol radio plant Signal. The Vesta IK-30 is a ZX Spectrum 48K clone with a 40-button keyboard, external power supply and a joystick. Vesta IK-30M and Vesta IK-31 are more modern models.
Vostok
The Vostok was a ZX Spectrum 48K clone, produced by the Izhevsk Radio Plant. It came with a Kempston joystick interface and a built-in tape recorder.
ZX Next
ZX Next is a Russian ZX Spectrum clone with two Z80 processors (one serving as a video processor). It features an RS-232 port, turbo mode, IBM keyboard, 10 Mbit/s local network interface, and a CGA graphics mode with 640×200 pixel resolution. Its RAM is expandable to 512 KB. The machine also goes by the names ZX-Forum 2 and ZX Frium2. Not to be confused with the Sinclair ZX Spectrum Next released in 2017.
ZXM series
This is a series of Russian ZX Spectrum clones designed by Mick Laboratory.
The ZXM-777 was developed in 2006, and uses a TMPZ84C00-8 CPU at 3.5 MHz in normal mode, or 7.0 MHz in turbo mode. It features 128 KB of RAM, a YM2149F sound chip, and a floppy disk controller, and can run TR-DOS, BASIC 128, or BASIC 48.
The ZXM-Phoenix was introduced in 2008, and uses a KR1858VM1 (Z80A clone) CPU running at 3.5 MHz, or a TMPZ84C00-8 running at 3.5 MHz in normal mode, or 7.0 MHz in turbo mode. It has 1024/2048 KB of RAM, floppy and hard drive controllers, and features mouse support.
The ZXM-Alcyon was developed in late 2015, and is based on the transformation of an Igrosoft slot machine board (which uses a Zilog Z80 microprocessor) into a ZX Spectrum compatible machine.
The ZXM-Jasper was developed in 2016, and is also based on the Igrosoft board, but its goal was to be a Pentagon-compatible machine.
The ZXM-Zephyr is a 2013 development, based on the ZXM-Phoenix. It is Spectrum compatible, and adds a USB connection, and an SD card reader.
Other
AZX-Monstrum
The AZX-Monstrum is a proposal for a vastly modernised ZX Spectrum-compatible computer. The CPU is a Zilog Z380 (a 32-bit version of the Z80, capable of running at 40 MHz); it has its own graphics adapter, an AT keyboard, its own BIOS and extended BASIC ROM, and RAM expandable up to 4 GB linear. The computer is intended to be almost 100% compatible. Standard devices include an HDD controller, DMA and IRQ controllers, ROM task switching, and more. So far only the HDD controller has been produced; the rest exists as drawings. All the plans are freely available.
Just Speccy
A ZX Spectrum clone made by Zaxxon.
Speccybob
The SpeccyBob is a ZX Spectrum clone built entirely from standard 74HC-series logic chips and a programmable EPROM.
ZX Spectrum SE
The ZX Spectrum SE is a proposal for an advanced Spectrum machine, based on the Timex TC 2048 and the ZX Spectrum 128, with Timex graphic modes and 280 KB RAM. It was made by Andrew Owen and Jarek Adamski in 2000. A prototype was created, and this configuration is supported by various emulators.
Planned production models of the ZX Spectrum SE are the Chloe 140SE and the Chloe 280SE.
The design subsequently became an FPGA project not directly related to the ZX Spectrum, which adds a graphics mode 320 pixels wide (instead of 256) and uses a dialect of Microsoft BASIC.
ZX128u+
The ZX128u+ is a Spanish clone with ULAplus display support, using an emulated DivMMC interface as mass storage. The board is based on the Harlequin clone and contains a Z80 processor and an AY chip.
PLD-based clones
These machines are based on programmable logic devices (PLDs) – electronic components used to build reconfigurable digital circuits.
Buryak
A ZX Spectrum-compatible computer with a real Z80 CPU, VGA output, TurboSound, a PS/2 keyboard, and a Kempston joystick port, customised to fit a Raspberry Pi 3B case.
Centoventotto
A ZX Spectrum clone made by Mario Pratto in 2022.
Chrome and Chrome 128
Chrome and Chrome 128 are Spectrum clones featuring a 7 MHz Zilog Z80 CPU, 160+64 KB RAM, a PlusD floppy disk interface, an AY sound chip, and an RGB SCART port.
eLeMeNt ZX
The eLeMeNt ZX was developed by Jan Kučera (a.k.a. LMN128) in 2020, drawing on experience from the development of the universal FPGA interface named MB03+. It is the first (and, as of 2022, the only) clone to combine 100% accurate hardware and display timings with digital video and sound output (including HDMI). It uses a genuine (faster) Z80 CPU switchable from 3.5 MHz up to 20 MHz, which can be overclocked to 30 MHz, or replaced by a T80 core at higher speed. Other logic circuitry is integrated in the Alchitry AU and AU+ FPGA modules, attachable to the eLeMeNt's motherboard.
The eLeMeNt ZX combines the 48K, 128K, +2, and +2A memory models with many Russian ones, including four Pentagon variants and several other Russian models, and the most popular interfaces, such as: K-Mouse; TurboSound FM; the Sound Interface Device (SID); enhanced Covox and Soundrive; DivMMC; Z-Controller; Timex and advanced HiRes 512×192 graphics with attributes, plus planar-based and chunky HGFX graphics modes; ULA+ and indexed true-colour palettes; USB mouse and keyboard; 2 interchangeable SD card slots; 3 joystick slots supporting 2-button Kempston and 8-button Sega controllers; and a USB serial connection to a PC through a standard USB-A cable. The eLeMeNt features the original ZX bus (1x external, 2x internal), a USB-A serial connection, and a rich internal pinout expansion for other modern peripherals.
The eLeMeNt has 2 MB of RAM, which is upgradeable to 4 MB. The ROM system supports 16K to 64 KB ROMs, plus SetUp (BIOS) ROM, Rescue ROM, and the latest version of the modern FAT and POSIX-API based filesystem: esxDOS.
Humble 48
A Spanish clone, introduced in 2017.
Karabas 128, Pro
The Karabas-128 is a ZX Spectrum 128K clone developed by Andy Karpov, based on the Altera EPM7128STC100 CPLD. The Karabas Pro is an FPGA-based clone with FDD and HDD controllers.
N-Go
A clone of the ZX Spectrum Next.
SAM Coupé
SAM Coupé was an advanced 8-bit computer from 1989, compatible with the ZX Spectrum 48K. The design of the disk-drive hardware was based on the MG PlusD interface. SAM BASIC was very similar to the BetaBasic, and was developed by the same author. The Coupé was considered the successor to the ZX Spectrum in the late '80s.
Sizif
The Sizif is a CPLD-based ZX Spectrum clone designed to fit the original rubber-keyed case, developed by Eugene Lozovoy.
Speccy 2010
The Speccy2010 is an FPGA development board by Martin Bórik, built for the implementation of various gaming computers, originally focused on the ZX Spectrum and its clones.
Sprinter
The Sprinter was an FPGA-based Spectrum-compatible computer made by Peters Plus, Ltd., the company behind the Peters MC64 described above.
Superfo (ZX mini, ZX Max, ZX Spider, ZX Nuvo)
These are ZX Spectrum clones by Don "Superfo".
ZX Badaloc
ZX Badaloc was the very first CPLD/FPGA advanced ZX Spectrum clone.
ZX-Evolution (Ts-Conf and Baseconf)
A Spectrum-compatible computer with improved hardware specifications, using modern peripherals. In addition to the basic core (Baseconf), it also has an extended core named TS-Conf, which supports sprites and other extended video modes and has its own memory manager.
ZX Prism
The ZX Prism is a proposal for a modern ZX Spectrum clone.
ZX-Uno(+)
The ZX-Uno is based on an FPGA board focused on replicating ZX Spectrum computer models. It is similar in size to the Raspberry Pi and fits into a RasPi case.
Multi-platform computers with ZX-Spectrum core
DivGMX
A ZX Spectrum interface that can also work as a standalone computer.
ReVerSE
Several projects by mvvproject.
ZX DOS+
ZX DOS is a continuation of the ZX-Uno project. A 1 MB version of the ZXDOS, compatible with the SpecNext core, was released in 2020.
References
External links
Planet Sinclair: Computers: Clones and Variants
Sinclair Nostalgia Products — Sinclair Clones
Old-computers.com - ICE Felix HC-85
Old-computers.com - ICE Felix HC-91
Old-computers.com - ICE Felix HC-2000
Lists of computer hardware
Computer hardware clones | List of ZX Spectrum clones | Technology | 9,044 |
24,150,278 | https://en.wikipedia.org/wiki/C20H28O | {{DISPLAYTITLE:C20H28O}}
The molecular formula C20H28O (molar mass: 284.436 g/mol) may refer to:
Cingestol
Delanterone, a steroidal antiandrogen
Lynestrenol, a progestogen hormone
Retinal, one of the three forms of vitamin A
Tigestol
Vitamin A2 | C20H28O | Chemistry | 87
41,378,784 | https://en.wikipedia.org/wiki/Immunodominance | Immunodominance is the immunological phenomenon in which immune responses are mounted against only a few of the antigenic peptides out of the many produced. That is, despite multiple allelic variations of MHC molecules and multiple peptides presented on antigen presenting cells, the immune response is skewed to only specific combinations of the two. Immunodominance is evident for both antibody-mediated immunity and cell-mediated immunity. Epitopes that are not targeted or targeted to a lower degree during an immune response are known as subdominant epitopes. The impact of immunodominance is immunodomination, where immunodominant epitopes will curtail immune responses against non-dominant epitopes.
Antigen-presenting cells such as dendritic cells, can have up to six different types of MHC molecules for antigen presentation. There is a potential for generation of hundreds to thousands of different peptides from the proteins of pathogens. Yet, the effector cell population that is reactive against the pathogen is dominated by cells that recognize only a certain class of MHC bound to only certain pathogen-derived peptides presented by that MHC class.
Antigens from a particular pathogen can be of variable immunogenicity, with the antigen that stimulates the strongest response being the immunodominant one. The different levels of immunogenicity amongst antigens forms what is known as dominance hierarchy.
Mechanism
CTL immunodominance
The mechanisms of immunodominance are poorly understood. A number of factors, many of them debated, may determine cytotoxic T lymphocyte (CTL) immunodominance. One explanation in particular focuses on the timing of CTL clonal expansion: the dominant CTLs that arise were activated sooner and therefore proliferate faster than subdominant CTLs that were activated later, resulting in a greater number of CTLs for the immunodominant epitope. This is consistent with an additional theory which states that immunodominance may depend on the affinity of the T-cell receptor (TCR) for the immunodominant epitope. That is, T cells with a TCR that has high affinity for its antigen are most likely to be immunodominant. High affinity of the peptide for the TCR contributes to the T cell's survival and proliferation, allowing for greater clonal selection of the immunodominant T cells over the subdominant T cells. Immunodominant T cells also curtail subdominant T cells by outcompeting them for cytokine sources from antigen-presenting cells. This leads to a greater expansion of the T cells that recognize a high-affinity epitope, and is favoured since these cells are likely to clear the infection much more quickly and effectively than their subdominant counterparts. It is important to note, however, that immunodominance is a relative term. If subdominant epitopes are introduced without the dominant epitope, the immune response will be focused on that subdominant epitope. Meanwhile, if the dominant epitope is introduced together with the subdominant epitope, the immune response will be directed against the dominant epitope while silencing the response against the subdominant epitope.
Antibody immunodominance
The mechanism of immunodominance in B cell activation focuses on the affinity of epitope binding to the B-cell receptor (BCR). If an epitope binds very strongly to a B cell's BCR, it will subsequently bind with high affinity to the antibodies produced by that B cell upon activation. These antibodies then out-compete the BCR for the epitope, and thus that B cell lineage will be unavailable for subsequent stimulation. At the opposite end of the scale, where BCRs have low affinity for their epitopes, these B cells are outcompeted for stimulation by B cells whose BCRs have higher affinities for their respective epitopes. Insufficient T cell stimulation by these B cells also leads to their suppression by T cells. The immunodominant epitope is therefore the one whose BCR binding sits at a particular 'goldilocks' level of affinity, determined by the equilibrium binding affinity. This leads to an initial IgM response directed at the strongly binding epitope, and a subsequent IgG response focused on the immunodominant epitope. That is, B cells within the 'goldilocks zone' of affinity will be available for subsequent T helper stimulation, allowing for class switching and affinity maturation, and thus resulting in immunodominance for that particular epitope.
Implications
Having the immune response focused on a specific immunodominant epitope is useful because it allows the strongest immune response against a certain pathogen to dominate, thus eliminating the pathogen quickly and effectively. However, it can also be a hindrance because of potential pathogen escape. In the case of HIV, immunodominance can be unfavourable because of the high mutation rate of HIV. The immunodominant epitope can be mutated in the virus, allowing HIV to avoid the adaptive immune response when reintroduced from latency. This is why the disease persists, as the virus mutates to avoid the antibodies and T cells specific for the immunodominant epitope, which is no longer expressed by the virus.
Immunodominance can also have implications in cancer immunotherapy. Similar to HIV escape, cancer can escape the immune system's detection by antigenic variation. As the immunodominant epitope is mutated and/or lost in the cancer, the immune response no longer has an effective target, allowing the tumour to evade detection.
Immunodominance also has implications in vaccine development. Immunodominant epitopes vary from person to person. This phenomenon is due to the variability of HLA types, which make up the MHC molecules that present the immunodominant epitopes. Therefore, people with different alleles may respond to different epitopes of the same pathogen. With vaccine development, particularly for subunit-based and recombinant vaccines, this may mean that individuals with certain HLA haplotypes fail to respond while others do.
References
Immunology | Immunodominance | Biology | 1,351 |
4,215,135 | https://en.wikipedia.org/wiki/Atomic%20mirror | In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric fields or magnetic fields, electromagnetic waves or just silicon wafer; in the last case, atoms are reflected by the attracting tails of the van der Waals attraction (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small or comparable to the effective depth of the attraction potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors are blazed at the grazing incidence.
At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror).
The set of narrow ridges reduces the van der Waals attraction of atoms to the surfaces and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction.
Such a mirror can be interpreted in terms of the Zeno effect.
We may assume that the atom is "absorbed" or "measured" at the ridges. Frequent measurement (narrowly spaced ridges) suppresses the transition of the particle into the half-space with absorbers, causing specular reflection. At large separations between thin ridges, the reflectivity of the ridged mirror is determined by a dimensionless momentum parameter and does not depend on the origin of the wave; it is therefore suitable for the reflection of atoms.
Applications
Atomic interferometry
See also
Quantum reflection
Ridged mirror
Zeno effect
Atomic nanoscope
Atom laser
References
Atomic, molecular, and optical physics | Atomic mirror | Physics,Chemistry | 335 |
81,553 | https://en.wikipedia.org/wiki/Teotihuacan | Teotihuacan (; Spanish: Teotihuacán, ; ) is an ancient Mesoamerican city located in a sub-valley of the Valley of Mexico, which is located in the State of Mexico, northeast of modern-day Mexico City.
Teotihuacan is known today as the site of many of the most architecturally significant Mesoamerican pyramids built in the pre-Columbian Americas, namely the Pyramid of the Sun and the Pyramid of the Moon. Although close to Mexico City, Teotihuacan was not a Mexica (i.e. Aztec) city, and it predates the Aztec Empire by many centuries. At its zenith, perhaps in the first half of the first millennium (1 CE to 500 CE), Teotihuacan was the largest city in the Americas, with a population of at least 25,000, though estimates run to 125,000 or more, making it at least the sixth-largest city in the world during its epoch.
The city covered a large area, and 80 to 90 percent of the total population of the valley resided in Teotihuacan. Apart from the pyramids, Teotihuacan is also anthropologically significant for its complex, multi-family residential compounds, the Avenue of the Dead, and its vibrant, well-preserved murals. Additionally, Teotihuacan exported fine obsidian tools found throughout Mesoamerica. The city is thought to have been established around 100 BCE, with major monuments continuously under construction until about 250 CE. The city may have lasted until sometime between the 7th and 8th centuries CE, but its major monuments were sacked and systematically burned around 550 CE. Its collapse might be related to the extreme weather events of 535–536.
Teotihuacan began as a religious center in the Mexican Plateau around the first century CE. It became the largest and most populated center in the pre-Columbian Americas. Teotihuacan was home to multi-floor apartment compounds built to accommodate the large population. The term Teotihuacan (or Teotihuacano) is also used to refer to the whole civilization and cultural complex associated with the site.
Although it is a subject of debate whether Teotihuacan was the center of a state empire, its influence throughout Mesoamerica is well documented. Evidence of Teotihuacano presence is found at numerous sites in Veracruz and the Maya region. The later Aztecs saw these magnificent ruins and claimed a common ancestry with the Teotihuacanos, modifying and adopting aspects of their culture. The ethnicity of the inhabitants of Teotihuacan is the subject of debate. Possible candidates are the Nahua, Otomi, or Totonac ethnic groups. Other scholars have suggested that Teotihuacan was multi-ethnic, due to the discovery of cultural aspects connected to the Maya as well as Oto-Pamean people. It is clear that many different cultural groups lived in Teotihuacan during the height of its power, with migrants coming from all over, but especially from Oaxaca and the Gulf Coast.
After the collapse of Teotihuacan, central Mexico was dominated by more regional powers, notably Xochicalco and Tula.
The city and the archeological site are located in what is now the San Juan Teotihuacán municipality in the State of México, to the northeast of Mexico City. The site was designated a UNESCO World Heritage Site in 1987. It is the most-visited archeological site in Mexico, receiving 4,185,017 visitors in 2017.
Etymology
The name was given by the Nahuatl-speaking Aztecs centuries after the fall of the city around 550 CE. The term has been glossed as "birthplace of the gods", or "place where gods were born", reflecting Nahua creation myths that were said to occur in Teotihuacan. Nahuatl scholar Thelma D. Sullivan interprets the name as "place of those who have the road of the gods." This is because the Aztecs believed that the gods created the universe at that site. The name is pronounced in Nahuatl with the stress on the penultimate syllable; by normal Nahuatl orthographic conventions, a written accent mark would not appear in that position. Both this pronunciation and the Spanish pronunciation are used; in Spanish and usually English, the stress falls on the final syllable.
The original name of the city is unknown, but it appears in hieroglyphic texts from the Maya region under a name meaning "Place of Reeds". This suggests that, in the Maya civilization of the Classic period, Teotihuacan was understood as a Place of Reeds similar to other Postclassic Central Mexican settlements that took the name of Tollan, such as Tula-Hidalgo and Cholula.
This naming convention led to much confusion in the early 20th century, as scholars debated whether Teotihuacan or Tula-Hidalgo was the Tollan described by 16th-century chronicles. It now seems clear that Tollan may be understood as a generic Nahua term applied to any large settlement. In the Mesoamerican concept of urbanism, Tollan and its equivalents in other languages serve as a metaphor, linking the bundles of reeds and rushes that formed part of the lacustrine environment of the Valley of Mexico and the large gathering of people in a city.
As of January 23, 2018, the name Teotihuacan has come under scrutiny by experts, who now feel that the site's name may have been changed by Spanish colonizers in the 16th century. Archeologist Verónica Ortega of the National Institute of Anthropology and History states that the city appears to have actually been named Teohuacan, meaning "City of the Sun" rather than "City of the Gods", as the current name suggests.
History
Historical course
The first human establishment in the Teotihuacan area dates back to 600 BCE, and until 200 BCE the site consisted of scattered small villages. The total estimated population of the Teotihuacan Valley during this time was approximately 6,000. From 100 BCE to 750 CE, Teotihuacan evolved into a huge urban and administrative center with cultural influences throughout the broader Mesoamerica region.
The history of Teotihuacan is distinguished by four consecutive periods:
Period I occurred between 200 and 1 BCE and marks the development of a distinctively urban area. During this period, Teotihuacan began to grow into a city as local farmers began coalescing around the abundant springs of Teotihuacan.
Period II lasted between 1 CE to 350 CE. During this era, Teotihuacan exhibited explosive growth and emerged as the largest metropolis in Mesoamerica. Factors influencing this growth include the destruction of other settlements due to volcanic eruptions and the economic pull of the expanding city. This influx of new residents caused a reorganization of urban housing to the unique compound complexes that typify Teotihuacan. This period is notable for its monumental architecture and sculpture, especially the construction of some of the most well-known sites of Teotihuacan, the Pyramids of the Sun and Moon. Further, the shift of political power from the Temple of the Feathered Serpent and its surrounding palace structure to the Avenue of the Dead Complex occurred sometime between CE 250 and 350. Some authors believe that this represents a shift from the centralized, monarchical political system to a more decentralized and bureaucratic organization. Around 300 CE, the Temple of the Feathered Serpent was desecrated and construction in the city proceeded in a more egalitarian direction, focusing on the building of comfortable, stone accommodations for the population.
Period III lasted from 350 to 650 CE and is known as the classical period of Teotihuacan, during which the city reached the apogee of influence in Mesoamerica. Its population is estimated at a minimum of 125,000 inhabitants, and the city was among the largest cities in the ancient world, containing 2,000 buildings within an area of 18 square kilometers. It was also during this high period that Teotihuacan contained approximately half of all people in the Valley of Mexico, becoming a kind of primate city of Mesoamerica. This period saw a massive reconstruction of buildings, and the Temple of the Feathered Serpent, which dates back to the previous period, was covered with a plaza with rich sculptural decoration. Typical artistic artifacts of this period are funeral masks, crafted mainly from green stone and covered with mosaics of turquoise, shell or obsidian. These masks were highly uniform in nature.
Period IV describes the time period between 650 and 750 CE. It marks the end of Teotihuacan as a major power in Mesoamerica. The city's elite housing compounds, clustered around the Avenue of the Dead, bear many burn marks, and archeologists hypothesize that the city experienced civil strife that hastened its decline. Factors that also led to the decline of the city included disruptions in tributary relations, increased social stratification, and power struggles between the ruling and intermediary elites. Following this decline, Teotihuacan continued to be inhabited, though it never reached its previous levels of population.
Origins and foundation
The early history of Teotihuacan is quite mysterious, and the origin of its founders is uncertain. Around 300 BCE, people of the central and southeastern areas of Mesoamerica began to gather into larger settlements. Teotihuacan was the largest urban center of Mesoamerica before the Aztecs, almost 1000 years prior to their epoch. The city was already in ruins by the time of the Aztecs. For many years, archeologists believed it was built by the Toltec. This belief was based on colonial period texts, such as the Florentine Codex, which attributed the site to the Toltecs. However, the Nahuatl word "Toltec" generally means "craftsman of the highest level" and may not always refer to the Toltec civilization centered at Tula, Hidalgo. Since Toltec civilization flourished centuries after Teotihuacan, the people could not have been the city's founders.
In the Late Formative era, a number of urban centers arose in central Mexico. The most prominent of these appears to have been Cuicuilco, on the southern shore of Lake Texcoco. Scholars have speculated that the eruption of the Xitle volcano may have prompted a mass emigration out of the central valley and into the Teotihuacan valley. These settlers may have founded or accelerated the growth of Teotihuacan.
Other scholars have put forth the Totonac people as the founders of Teotihuacan and have suggested that Teotihuacan was a multi-ethnic state since they find diverse cultural aspects connected to the Zapotec, Mixtec, and Maya peoples. The builders of Teotihuacan took advantage of the geography in the Basin of Mexico. From the swampy ground, they constructed raised beds, called chinampas, creating high agricultural productivity despite old methods of cultivation. This allowed for the formation of channels, and subsequently canoe traffic, to transport food from farms around the city. The earliest buildings at Teotihuacan date to about 200 BCE. The largest pyramid, the Pyramid of the Sun, was completed by 100 CE.
Year 378: Conquest of Tikal
Evidence of a king or other authoritarian ruler is strikingly absent in Teotihuacan. Contemporaneous cities in the same region, including Mayan and Zapotec, as well as the earlier Olmec civilization, left ample attestations of dynastic authoritarian sovereignty in the form of royal palaces, ceremonial ball courts, and depictions of war, conquest, and humiliated captives. However, no such artifacts have been found in Teotihuacan. Many scholars have thus concluded that Teotihuacan was led by some sort of "collective governance."
In January 378, the warlord Sihyaj K'ahk' (literally, "born of fire"), depicted with artifacts and the feather-serpent imagery associated with Teotihuacan culture, conquered Tikal, 600 miles away from Teotihuacan, removing and replacing the Maya king, with support from El Peru and Naachtun, as recorded by Stela 31 at Tikal and other monuments in the Maya region. At this time, the Spearthrower Owl ruler was also associated with Teotihuacan culture. Linda R. Manzanilla wrote in 2015:
Year 426: Conquest of Copán and Quiriguá
In 426, the Copán ruling dynasty was created with K'inich Yax K'uk' Mo' as the first king. The Dynasty went on to have sixteen rulers. Copán is located in modern-day Honduras, as described by Copán Altar Q. Soon thereafter, Yax K'uk' Mo' installed Tok Casper as king of Quiriguá, about 50 km north of Copán.
Zenith
The city reached its peak in 450 CE when it was the center of a powerful culture whose influence extended through much of the Mesoamerican region. At this time, the city covered over 30 square kilometers (roughly 11.5 square miles) and perhaps housed a population of 150,000 people, with one estimate reaching as high as 250,000. Various districts in the city housed people from across the Teotihuacan region of influence, which spread south as far as Guatemala. Notably absent from the city are fortifications and military structures.
The nature of political and cultural interactions between Teotihuacan and the centers of the Maya region (as well as elsewhere in Mesoamerica) has been a long-standing and significant area for debate. Substantial exchange and interaction occurred over the centuries from the Terminal Preclassic to the Mid-Classic period. "Teotihuacan-inspired ideologies" and motifs persisted at Maya centers into the Late Classic, long after Teotihuacan itself had declined. However, scholars debate the extent and degree of Teotihuacan influence. Some believe that it had direct and militaristic dominance while others view the adoption of "foreign" traits as part of a selective, conscious, and bi-directional cultural diffusion. New discoveries have suggested that Teotihuacan was not much different in its interactions with other centers from the later empires, such as the Toltec and Aztec. It is believed that Teotihuacan had a major influence on the Preclassic and Classic Maya.
Architectural styles prominent at Teotihuacan are found widely dispersed at a number of distant Mesoamerican sites, which some researchers have interpreted as evidence for Teotihuacan's far-reaching interactions and political or militaristic dominance. A style particularly associated with Teotihuacan is known as talud-tablero, in which an inwards-sloping external side of a structure (talud) is surmounted by a rectangular panel (tablero). Variants of the generic style are found in a number of Maya region sites including Tikal, Kaminaljuyu, Copan, Becan, and Oxkintok, and particularly in the Petén Basin and the central Guatemalan highlands. The talud-tablero style pre-dates its earliest appearance at Teotihuacan in the Early Classic period; it appears to have originated in the Tlaxcala-Puebla region during the Preclassic. Analyses have traced the development into local variants of the talud-tablero style at sites such as Tikal, where its use precedes the 5th-century appearance of iconographic motifs shared with Teotihuacan. The talud-tablero style disseminated through Mesoamerica generally from the end of the Preclassic period, and not specifically, or solely, via Teotihuacano influence. It is unclear how or from where the style spread into the Maya region. During its zenith, the main structures at Teotihuacan, including the pyramids, were painted in impressive shades of dark red, with some small spots persisting to this day.
The city was a center of industry, home to many potters, jewelers, and craftspeople. Teotihuacan is known for producing a great number of obsidian artifacts. No ancient Teotihuacano non-ideographic texts are known to exist (or known to have once existed). Inscriptions from Maya cities show that Teotihuacan nobility traveled to, and perhaps conquered, local rulers as far away as Honduras. Maya inscriptions note an individual named by scholars as "Spearthrower Owl", apparently ruler of Teotihuacan, who reigned for over 60 years and installed his relatives as rulers of Tikal and Uaxactun in Guatemala.
Scholars have based interpretations of Teotihuacan culture on its archeology, murals that adorn the site (and others, like the Wagner Murals, found in private collections), and hieroglyphic inscriptions made by the Maya describing their encounters with Teotihuacan conquerors. The creation of murals, perhaps tens of thousands of murals, reached its height between 450 and 650. The artistry of the painters was unrivaled in Mesoamerica and has been compared with that of painters in Renaissance Florence, Italy.
Collapse
Scholars originally thought that invaders attacked the city in the 7th or 8th century, sacking and burning it. More recent evidence, however, seems to indicate that the burning was limited to the structures and dwellings associated primarily with the ruling class. Some think this suggests that the burning was from an internal uprising and the invasion theory is flawed because early archeological efforts were focused exclusively on the palaces and temples, places used by the upper classes. Because all of these sites showed burning, archeologists concluded that the whole city was burned. Instead, it is now known that the destruction was centered on major civic structures along the Avenue of the Dead. The sculptures inside palatial structures, such as Xalla, were shattered. No traces of foreign invasion are visible at the site.
Evidence for population decline beginning around the 6th century lends some support to the internal unrest hypothesis. The decline of Teotihuacan has been correlated to lengthy droughts related to the climate changes of 535–536, possibly caused by the eruption of the Ilopango volcano in El Salvador. This theory of ecological decline is supported by archeological remains that show a rise in the percentage of juvenile skeletons with evidence of malnutrition during the 6th century, further supporting the hypothesis of famine as one of the more plausible reasons for the decline of Teotihuacan. Urbanized Teotihuacanos would likely have been dependent on agricultural crops such as maize, beans, amaranth, tomatillos, and pumpkins. If climate change affected crop yields, then the harvest would not have been sufficient to feed Teotihuacan's extensive population. However, the two main hypotheses are not mutually exclusive. Drought leading to famine could have led to incursions from smaller surrounding civilizations as well as internal unrest.
As Teotihuacan fell in local prominence, other nearby centers, such as Cholula, Xochicalco, and Cacaxtla, competed to fill the power void. They may have even aligned themselves against Teotihuacan to seize the opportunity to further reduce its influence and power. The art and architecture at these sites emulate Teotihuacan forms but also demonstrate an eclectic mix of motifs and iconography from other parts of Mesoamerica, particularly the Maya region.
The sudden destruction of Teotihuacan was common for Mesoamerican city-states of the Classic and Epi-Classic period. Many Maya states suffered similar fates in subsequent centuries, a series of events often referred to as the Classic Maya collapse. Nearby, in the Morelos valley, Xochicalco was sacked and burned in 900, and Tula met a similar fate around 1150.
Aztec period
During the 1200s CE, Nahua migrants repopulated the area. By the 1300s, it had fallen under the sway of Huexotla, and in 1409 was assigned its own tlatoani, Huetzin, a son of the tlatoani of Huexotla. But his reign was cut short when Tezozomoc, tlatoani of Azcapotzalco, invaded Huexotla and the neighboring Acolhua lands in 1418. Huetzin was deposed by the invaders, and Tezozomoc installed a man named Totomochtzin. Less than a decade later, in 1427, the Aztec Empire formed, and Teotihuacan was vassalized once more by the Acolhua.
Culture
Archeological evidence suggests that Teotihuacan was a multi-ethnic city, and while the predominant language or languages used in Teotihuacan have been lost to history, Totonac and Nahua, early forms of which were spoken by the Aztecs, seem to be highly plausible. This apparent regionally diverse population of Teotihuacan can be traced back to a natural disaster that occurred prior to its population boom. At one point in time, Teotihuacan was rivaled by another basin power, Cuicuilco. Both cities, roughly the same size and hubs for trade, were productive centers of artisans and commerce. Around 100 BCE, however, the power dynamic changed when Mount Xitle, an active volcano, erupted and heavily affected Cuicuilco and the farmland that supported it. It is believed that the later exponential growth of Teotihuacan's population was due to the subsequent migration of those displaced by the eruption. While this eruption is referenced as being the primary cause of the mass exodus, recent advancements in dating have shed light on an even earlier eruption. The eruption of Popocatepetl in the middle of the first century preceded that of Xitle, and is believed to have begun the aforementioned degradation of agricultural lands and structural damage to the city. Xitle's eruption further instigated the abandonment of Cuicuilco.
In the Tzacualli phase (ending around 150 CE), Teotihuacan's population grew to approximately 60,000 to 80,000 people, most of whom are believed to have come from the Mexican basin. Following this growth, however, the influx of new residents slowed, and evidence suggests that, by the Miccaotli phase, the urban population had reached its maximum.
In 2001, Terrence Kaufman presented linguistic evidence suggesting that an important ethnic group in Teotihuacan was of Totonacan or Mixe–Zoquean linguistic affiliation. He uses this to explain general influences from Totonacan and Mixe–Zoquean languages in many other Mesoamerican languages, whose people did not have any known history of contact with either of the abovementioned groups. Other scholars maintain that the largest population group must have been of Otomi ethnicity because the Otomi language is known to have been spoken in the area around Teotihuacan both before and after the Classic period and not during the middle period.
Teotihuacan compounds show evidence of being segregated into three classes: high elites, intermediate elites, and the laboring class. Residential architectural structures seem to be differentiable by the artistry and complexity of the structure itself. Based on the quality of construction materials and sizes of rooms, as well as the quality of assorted objects found in the residences, dwellings radiating outward from the Central district and along the Avenue of the Dead might have been occupied by higher status individuals. However, Teotihuacan overall does not appear to have been organized into discrete zoning districts. The more elite compounds were often decorated with elaborate murals. Thematic elements of these murals included processions of lavishly dressed priests, jaguar figures, the storm god deity, and an anonymous goddess whose hands offer gifts of maize, precious stones, and water. Rulers who may have requested to be immortalized through art are noticeably absent in Teotihuacan artwork. Observed artwork, instead, tends to portray institutionalized offices and deities. This suggests that their art glorified nature and the supernatural and emphasized egalitarian rather than aristocratic values. Also absent from Teotihuacan artwork is writing, despite the city having a strong network of contact with the literate Maya.
The laboring classes, themselves also stratified, consisted of farmers, skilled craftworkers, and the peripheral rural population. The city-dwelling craftspeople of various specialties were housed in apartment complexes distributed throughout the city, known as neighborhood centers, and evidence shows that these centers were the economic and cultural engines of Teotihuacan. Established by the elite to showcase the sumptuary goods that the resident craftsmen provided, the neighborhood centers owed their diversity of goods to the heavy concentration of immigrants from different regions of Mesoamerica. Archeological evidence points to textiles as one of the primary traded items, and craftspeople also capitalized on their mastery of painting, building, musical performance, and military training. These neighborhood centers closely resembled individual compounds, often surrounded by physical barriers separating them from the others. In this way, Teotihuacan developed an internal economic competition that fueled productivity and helped create a social structure of its own, distinct from the larger structure of the city. The repeated actions of the craftworkers left their physical mark. Based on the wear of teeth, archeologists determined that some individuals worked fibers with their front teeth, suggesting that they were involved in making nets like those depicted in mural art. Female skeletons show evidence of sewing or painting for long periods of time, consistent with the headdresses that were created as well as pottery that was fired and painted. Wear on specific joints indicates the carrying of heavy objects over extended periods. Evidence of these heavy materials is found in the copious amounts of imported pottery and raw materials found on-site, such as rhyolitic glass shards, marble, and slate. The rural population of the city lived in enclaves between the middle-class residences or on the periphery of the city, while smaller encampments filled with earthenware from other regions suggest that merchants were housed in encampments of their own.
Religion
In An Illustrated Dictionary of the Gods and Symbols of Ancient Mexico and the Maya, Miller and Taube list eight deities:
The Storm God
The Great Goddess
The Feathered Serpent. An important deity in Teotihuacan; most closely associated with the Feathered Serpent Pyramid (Temple of the Feathered Serpent).
The Old God
The War Serpent. Taube has differentiated two different serpent deities whose depictions alternate on the Feathered Serpent Pyramid: the Feathered Serpent and what he calls the "War Serpent". Other researchers are more skeptical.
The Netted Jaguar
The Pulque God
The Fat God. Known primarily from figurines and so assumed to be related to household rituals.
Esther Pasztory adds one more:
The Flayed God. Known primarily from figurines and so assumed to be related to household rituals.
The consensus among scholars is that the primary deity of Teotihuacan was the Great Goddess of Teotihuacan. The dominant civic architecture is the pyramid. Politics were based on the state religion, and religious leaders were the political leaders. Religious leaders would commission artists to create religious artworks for ceremonies and rituals. The artwork likely commissioned would have been a mural or a censer depicting gods like the Great Goddess of Teotihuacan or the Feathered Serpent. Censers would be lit during religious rituals to invoke the gods including rituals with human sacrifice.
As evidenced by human and animal remains found during excavations of the pyramids in the city, Teotihuacanos practiced human sacrifice. Scholars believe that the people offered human sacrifices as part of a dedication when buildings were expanded or constructed. The victims were probably enemy warriors captured in battle and brought to the city for ritual sacrifice to ensure the city could prosper. Some men were decapitated, some had their hearts removed, others were killed by being hit several times over the head, and some were buried alive. Animals considered sacred, which represented mythical powers and the military, were also buried alive or held captive in cages: cougars, a wolf, eagles, a falcon, an owl, and even venomous snakes.
Numerous stone masks have been found at Teotihuacan and have generally been believed to have been used in a funerary context. However, other scholars call this into question, noting that the masks "do not seem to have come from burials".
Population
Teotihuacan had one of the largest populations, perhaps the largest, of any city in the Basin of Mexico during its occupation. It was a large prehistoric city that underwent massive population growth and sustained it over most of its occupancy. In 100 CE, after 200 years of the city's occupancy, the population is estimated at around 60,000–80,000. The population eventually stabilized at around 100,000 people by about 300 CE.
The population reached its peak around 400 to 500 CE, during the Xolalpan period, when the city's population was estimated at 100,000 to 200,000 people. This figure was derived by estimating that each compound held approximately 60 to 100 residents and that there were some 2,000 compounds. These high numbers continued until the city started to decline between 600 and 700 CE.
One of Teotihuacan's neighborhoods, Teopancazco, was occupied during most of the period in which Teotihuacan itself was. It shows that Teotihuacan was a multiethnic city, broken up into areas of different ethnicities and occupations. The neighborhood is important in two respects: its high infant mortality rate and the role of the different ethnicities. The high infant mortality rate matters for the neighborhood, and the city at large, because a large number of perinatal skeletons were found at Teopancazco. This suggests that the population of Teotihuacan was sustained and grew through people coming into the city rather than through natural reproduction. The influx of people came from surrounding areas, bringing different ethnicities to the city.
Teotihuacan also had two other neighborhoods that prominently depicted this multiethnic city picture. Both neighborhoods contained not only different architecture from the other parts of Teotihuacan but also artifacts and burial practices that began the narrative of these places. Archaeologists have also performed oxygen isotope ratio testing and strontium isotope ratio testing to determine, using the bones and the teeth of the skeletons uncovered, whether these skeletons were native to Teotihuacan or were immigrants to the city. The oxygen ratio testing can be used to determine where someone grew up, and the strontium ratio testing can be used to determine where someone was born and where they were living when they died. These tests revealed a lot of information, but specifically enabled clear distinction between the people living in the ethnic neighborhoods and those native to Teotihuacan.
One neighborhood, called Tlailotlacan, is believed to have been a neighborhood of migrants predominantly from the Oaxaca region. Excavations there prominently featured artifacts in the Zapotec style, including one tomb with an antechamber. Oxygen isotope ratio testing was particularly helpful when analyzing this neighborhood because it painted a clear picture of the initial influx from Oaxaca, followed by routine journeys back to the homeland to maintain the culture and heritage of the following generations. Later oxygen isotope ratio testing also revealed that, of the skeletons tested, four-fifths had either immigrated to the city or been born in the city but spent their childhood in the homeland before returning to Teotihuacan. There was evidence of constant interaction between Teotihuacan and the Oaxacan homeland through journeys taken by children and mothers, keeping the culture and roots of their homeland alive.
The other main neighborhood was called Barrio de Los Comerciantes, or the Merchants' Barrio. There is less information about those who lived here (or perhaps more research needs to be done), but this neighborhood also differed clearly from other areas of the city. The architecture was distinctive, featuring round adobe structures, as well as foreign pottery and artifacts identified as belonging to the Gulf Coast region. Like Tlailotlacan, this neighborhood saw a huge influx of immigrants, as determined by strontium isotope ratio testing of bones and teeth, with people spending a significant part of their lives in Teotihuacan before death.
Writing and literature
A significant find in the La Ventilla district contains over 30 signs and clusters on the floor of a patio. Many of the findings at Teotihuacan suggest that the inhabitants had their own writing style. The figures were made "quickly and show control", suggesting that they were practiced and adequate for the needs of their society. Other societies around Teotihuacan adopted some of the symbols used there, while the inhabitants of Teotihuacan themselves rarely used other societies' symbols and art. Their writing system was unlike those of their neighbors, but the writings show that they must have been aware of the other scripts.
Obsidian workshops
The processing of obsidian was the most developed art and the main source of wealth in Teotihuacan and many other ancient Mesoamerican cultures. The workshops produced obsidian tools and objects of various uses and types (in black and grey), intended for commercial transactions beyond the geographical boundaries of the city, with cities such as Monte Alban in Oaxaca, Mexico, Tikal in Guatemala, and some Mayan states. Figurines, blades, arrowheads, spikes, knife handles, jewelry, masks, and ornaments were among the most notable and common objects produced. Obsidian came mainly from the mines of Pachuca (Teotihuacan), and its processing was the most important industry in the city, which had acquired a monopoly on the obsidian trade in the broader Middle American region. The state heavily monitored the trade, movement, and creation of obsidian tools; the industry was so important to the city that production was limited to the regional workshops where the tools were made. This brittle yet strong rock was mainly formed into objects by flaking pieces off a larger core, though wood and bone tools have also been found to have been used in the process.
Archeological site
Knowledge of the huge ruins of Teotihuacan was never completely lost. After the fall of the city, various squatters lived on the site. During Aztec times, the city was a place of pilgrimage and identified with the myth of Tollan, the place where the sun was created. Today, Teotihuacan is one of the most noted archeological attractions in Mexico.
Excavations and investigations
In the late 17th century Carlos de Sigüenza y Góngora (1645–1700) made some excavations around the Pyramid of the Sun. Minor archeological excavations were conducted in the 19th century. In 1905, Leopoldo Batres, a Mexican archeologist and government official under the regime of Porfirio Díaz, led a major project of excavation and restoration. The Pyramid of the Sun was restored to celebrate the centennial of the Mexican War of Independence in 1910. The site of Teotihuacan was the first to be expropriated for the national patrimony under the Law of Monuments (1897), which gave the Mexican state legal jurisdiction to take control. Some 250 plots were farmed on the site. Peasants who had been farming portions were ordered to leave, and the Mexican government eventually paid some compensation to those individuals. A feeder train line was built to the site in 1908, which allowed the efficient hauling of material from the excavations and later brought tourists to the site. In 1910, the International Congress of Americanists met in Mexico, coinciding with the centennial celebrations, and distinguished delegates, such as its president Eduard Seler and vice president Franz Boas, were taken to the newly finished excavations.
Further excavations at the Ciudadela were carried out in the 1920s, supervised by Manuel Gamio. Between April 26 and July 29, 1932, Swedish anthropologist/archaeologist Sigvald Linné, his wife, and a small crew excavated in the Xolalpan area, part of the municipality of San Juan Teotihuacán. Other sections of the site were excavated in the 1940s and 1950s. The first site-wide project of restoration and excavation was carried out by INAH from 1960 to 1965, supervised by Jorge Acosta. This undertaking had the goals of clearing the Avenue of the Dead, consolidating the structures facing it, and excavating the Palace of Quetzalpapalotl.
During the installation of a "sound and light" show in 1971, workers discovered the entrance to a tunnel and cave system underneath the Pyramid of the Sun. Although scholars long thought this to be a natural cave, more recent examinations have established the tunnel was entirely manmade. The interior of the Pyramid of the Sun has never been fully excavated.
In 1980-82, another major program of excavation and restoration was carried out at the Pyramid of the Feathered Serpent and the Avenue of the Dead complex. Most recently, a series of excavations at the Pyramid of the Moon have greatly expanded evidence of cultural practices.
Recent discoveries
In late 2003 a tunnel beneath the Temple of the Feathered Serpent was accidentally discovered by Sergio Gómez Chávez and Julie Gazzola, archeologists of the National Institute of Anthropology and History (INAH). After days of heavy rainstorms, Gómez Chávez noticed that a nearly three-foot-wide sinkhole had opened near the foot of the temple pyramid.
Trying first to examine the hole with a flashlight from above, Gómez could see only darkness, so he was lowered by several colleagues on a heavy rope tied around his waist; descending into the murk, he realized it was a perfectly cylindrical shaft. At the bottom he came to rest in an apparently ancient construction – a man-made tunnel, blocked in both directions by immense stones. Gómez knew that archeologists had previously discovered a narrow tunnel underneath the Pyramid of the Sun and supposed he was now looking at a similar mirror tunnel, leading to a subterranean chamber beneath the Temple of the Feathered Serpent. He decided first to develop a clear hypothesis and to obtain approval before excavating. Meanwhile, he erected a tent over the sinkhole to protect it from the hundreds of thousands of tourists who visit Teotihuacán. Researchers reported that the tunnel was believed to have been sealed in 200 CE.
Preliminary planning of the exploration and fundraising took more than six years.
Before excavations began, starting in the early months of 2004, Victor Manuel Velasco Herrera of the UNAM Institute of Geophysics, working with ground-penetrating radar (GPR) and a team of some 20 archeologists and workers, determined the approximate length of the tunnel and the presence of internal chambers. They scanned the earth under the Ciudadela, returning every afternoon to upload the results to Gómez's computers. By 2005, the digital map was complete. The archeologists explored the tunnel with a remote-controlled robot called Tlaloc II-TC, equipped with an infrared camera and a laser scanner that generated 3D visualizations, to make a three-dimensional record of the spaces beneath the temple. A small opening was made in the tunnel wall, and the scanner captured the first images, 37 meters into the passage.
In 2009, the government granted Gómez permission to dig. By the end of 2009 archeologists of the INAH located the entrance to the tunnel that leads to galleries under the pyramid, where remains of rulers of the ancient city might have been deposited. In August 2010 Gómez Chávez, now director of Tlalocan Project: Underground Road, announced that INAH's investigation of the tunnel – closed nearly 1,800 years ago by Teotihuacan dwellers – will proceed. The INAH team, consisting of about 30 people supported by national and international advisors at the highest scientific levels, intended to enter the tunnel in September–October 2010. This excavation, the deepest made at the Pre-Hispanic site, was part of the commemorations of the 100th anniversary of archeological excavations at Teotihuacan and its opening to the public.
The underground passage runs beneath the Temple of the Feathered Serpent, and its entrance is located a few meters from the temple at the expected place, deliberately sealed with large boulders nearly 2,000 years ago. The hole that had appeared during the 2003 storms was not the actual entrance; access to the tunnel is through a vertical shaft almost 5 meters on each side. At 14 meters deep, the entrance leads to a nearly 100-meter-long corridor that ends in a series of underground galleries cut into the rock. After archeologists broke ground at the entrance of the tunnel, a staircase and ladders were installed to allow easy access to the subterranean site. Work advanced slowly and with painstaking care; excavation was done manually, with spades. Nearly 1,000 tons of soil and debris were removed from the tunnel. There were large spiral seashells, cat bones, pottery, and fragments of human skin. The rich array of objects unearthed included wooden masks covered with inlaid rock jade and quartz, elaborate necklaces, rings, greenstone crocodile teeth and human figurines, crystals shaped into eyes, beetle wings arranged in a box, sculptures of jaguars, and hundreds of metalized spheres. The mysterious globes lay in both the north and south chambers. Ranging from 40 to 130 millimeters, the balls have a core of clay and are covered with yellow jarosite formed by the oxidation of pyrite. According to George Cowgill of Arizona State University, the spheres are a fascinating find: "Pyrite was certainly used by the Teotihuacanos and other ancient Mesoamerican societies. Originally, the spheres would have shown [sic] brilliantly. They are indeed unique, but I have no idea what they mean." All these artifacts were deposited deliberately and pointedly, as if in offering to appease the gods.
One of the most remarkable findings in the tunnel chambers was a miniature mountainous landscape, 17 meters underground, with tiny pools of liquid mercury representing lakes. The walls and ceiling of the tunnel were found to have been carefully impregnated with mineral powder composed of magnetite, pyrite (fool's gold), and hematite to provide a glittering brightness to the complex, and to create the effect of standing under the stars as a peculiar re-creation of the underworld. At the end of the passage, Gómez Chávez's team uncovered four greenstone statues, wearing garments and beads; their open eyes would have shone with precious minerals. Two of the figurines were still in their original positions, leaning back and appearing to gaze up at the axis where the three planes of the universe meet – likely the founding shamans of Teotihuacan, guiding pilgrims to the sanctuary, and carrying bundles of sacred objects used to perform rituals, including pendants and pyrite mirrors, which were perceived as portals to other realms.
After each new segment was cleared, the 3D scanner documented the progress. By 2015, nearly 75,000 fragments of artifacts had been discovered, studied, cataloged, analyzed and, when possible, restored.
The significance of these new discoveries is publicly explored in a major exhibition at the De Young Museum in San Francisco, which opened in late September 2017.
In 2021, an 1,800-year-old bouquet of flowers was discovered in the tunnel beneath the pyramid dedicated to the feathered serpent deity Quetzalcóatl; the flowers date to between roughly 1 and 200 CE. It is the first time such well-preserved plant matter has been discovered at Teotihuacan.
Monuments of Teotihuacan
The city of Teotihuacan was characterized by large and imposing buildings, which included, apart from the complexes of houses, temples, large squares, stadiums, and palaces of the rulers, nobles, and priests. The city's urban-ceremonial space is considered one of the most impressive achievements of the pre-Columbian New World.
The size and quality of the monuments, the originality of the residential architecture, and the striking iconography of the colored murals in the buildings and of the vases painted with butterflies, eagles, feathered coyotes, and jaguars suggest beyond any doubt a high-level civilization whose cultural influences spread into and were transplanted among all the Mesoamerican populations. The main monuments of the city of Teotihuacan are connected to each other by a central road 45 meters wide and 2 kilometers long, called the "Avenue of the Dead" (Avenida de Los Muertos) because it was believed to have been lined with tombs. To the east is the imposing "Pyramid of the Sun" (Piramide del Sol), the third-largest pyramid in the world, with a volume of 1 million cubic meters. It is a stepped pyramid, with a base of 219.4 x 231.6 meters and a height of 65 meters. At the top of the pyramid there was a huge pedestal, where human sacrifices were made.
At the north end of the city, the Avenue of the Dead ends at the "Pyramid of the Moon" (Piramide de la Luna), flanked by platforms, ramps, and lower pyramids. In the southern part is the Temple of Quetzalcoatl, dedicated to the god in the form of a feathered serpent, who gives life and fertility. A sculptural representation of the god Quetzalcoatl and twelve heads of feathered serpents adorn the two sides of the temple's staircase.
Site layout
The city's broad central avenue, called the "Avenue of the Dead" (a translation of its Nahuatl name Miccaotli), is flanked by impressive ceremonial architecture, including the immense Pyramid of the Sun (third largest in the world after the Great Pyramid of Cholula and the Great Pyramid of Giza). The Pyramid of the Moon and the Ciudadela with the Temple of the Feathered Serpent stand at opposite ends of the Avenue, while the Palace of Quetzalpapálotl, the fourth major structure of the site, is situated between the two main pyramids. Along the Avenue are many smaller talud-tablero platforms as well. The Aztecs believed these were tombs, inspiring the name of the avenue. Scholars have since established that they were ceremonial platforms topped with temples.
The Avenue of the Dead is roughly 40 meters wide and 4 km long. Further down the Avenue of the Dead, after a small river, is the area known as the Citadel, containing the ruined Temple of the Feathered Serpent Quetzalcoatl. This area was a large plaza surrounded by temples that formed the religious and political center of the city. The name "Citadel" was given to it by the Spanish, who believed it was a fort. Most of the common people lived in large apartment buildings spread across the city. Many of the buildings contained workshops where artisans produced pottery and other goods.
The urban layout of Teotihuacan exhibits two slightly different orientations, which resulted from both astronomical and topographic criteria. The central part of the city, including the Avenue of the Dead, conforms to the orientation of the Sun Pyramid, while the southern part reproduces the orientation of the Ciudadela. The two constructions recorded sunrises and sunsets on particular dates, allowing the use of an observational calendar. The orientation of the Sun Pyramid was intended to record "the sunrises on February 11 and October 29 and sunsets on April 30 and August 13. The interval from February 11 and October 29, as well as from August 13 to April 30, is exactly 260 days". The recorded intervals are multiples of 13 and 20 days, which were elementary periods of the Mesoamerican calendar. Furthermore, the Sun Pyramid is aligned to Cerro Gordo to the north, which means that it was purposefully built on a spot where a structure with a rectangular ground plan could satisfy both topographic and astronomical requirements. The artificial cave under the pyramid additionally attests to the importance of this spot.
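The stated 260-day intervals can be checked with a short calculation; the sketch below uses Python's standard datetime module purely as an illustration, with arbitrary non-leap years (2022–2023) standing in for the repeating solar cycle:

from datetime import date

# February 11 to October 29 within one common (non-leap) year
print((date(2023, 10, 29) - date(2023, 2, 11)).days)  # 260

# August 13 to April 30 of the following year (the February crossed is non-leap)
print((date(2023, 4, 30) - date(2022, 8, 13)).days)   # 260

Both intervals come to 260 days, matching the length of the Mesoamerican ritual calendar (13 × 20 days).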
Another example of artificial landscape modifications is the course of the San Juan River, which was modified to bend around the structures as it goes through the center of town eventually returning to its natural course outside of Teotihuacan.
Pecked-cross circles throughout the city and in the surrounding regions served as a way to lay out the urban grid and as a way to read the 260-day calendar. The urban grid had great significance to city planners when constructing Teotihuacan: crosses were pecked into the ground at the Pyramid of the Sun and in specific places throughout Teotihuacan at precise angles over distances of three kilometers. The layout of these crosses, arranged in a rectangular pattern facing the Avenue of the Dead, suggests they served as a grid for the layout of Teotihuacan. The axes of the crosses do not point to astronomical north and south but instead point to the city's own north. Numerology also figures in the cross pecking because of the placement and number of the holes, which sometimes count to 260, the length of the ritual calendrical cycle in days. Some of the pecked-cross circles also resemble an ancient Aztec game called patolli.
These pecked-cross circles can be found not just in Teotihuacan, but also throughout Mesoamerica. The ones found all share certain similarities. These include having the shape of two circles, one being inside of the other. They are all found pecked on the ground or onto rocks. They are all created with a small hammer-like device that produces cuplike markings that are 1 centimeter in diameter and 2 centimeters apart. They all have axes that are in line with the city structures of the region. Because they are aligned with the structures of the cities, they also align with the position of significant astronomical bodies.
The Ciudadela was completed during the Miccaotli phase, and the Pyramid of the Sun underwent a complex series of additions and renovations. The Great Compound was constructed across the Avenue of the Dead, west of Ciudadela. This was probably the city's marketplace. The existence of a large market in an urban center of this size is strong evidence of state organization. Teotihuacan was at that point simply too large and too complex to have been politically viable as a chiefdom.
The Ciudadela is a great enclosed plaza capable of holding 100,000 people. About 700,000 cubic meters (roughly 915,000 cubic yards) of material were used to construct its buildings. Its central feature is the Temple of Quetzalcoatl, which was flanked by upper-class apartments. The entire compound was designed to overwhelm visitors.
Threat from development
The archeological park of Teotihuacan is under threat from development pressures. In 2004, the governor of Mexico state, Arturo Montiel, gave permission for Wal-Mart to build a large store in the third archeological zone of the park. According to Sergio Gómez Chávez, an archeologist and researcher for Mexico's National Institute of Anthropology and History (INAH), fragments of ancient pottery were found where trucks dumped soil from the site.
More recently, Teotihuacan has become the center of controversy over Resplandor Teotihuacan, a massive light and sound spectacular installed to create a nighttime show for tourists. Critics explain that a large number of perforations for the project have caused fractures in stones and irreversible damage, while the project will have limited benefit.
In May 2021, the Secretariat of Culture announced that a construction crew had been bulldozing the northern outskirts of the city ruins in order to develop the land for an amusement park, despite three months' worth of orders from the government to stop work. The report detailed that at least 25 archeological structures were in immediate danger.
Mexican government response
On May 31, 2021, 250 National Guard troops and 60 agents of the Attorney General's Office were sent to the Teotihuacán site to seize parcels of land intended for illegal construction and to forcibly stop further destruction of historical sites. The National Institute of Anthropology and History (INAH) had suspended authorization for those projects in March, yet construction work with heavy machinery and looting of artifacts had continued. The seizure of the land came a week after the International Council on Monuments and Sites (ICOMOS) warned that Teotihuacán was at risk of losing its UNESCO World Heritage designation.
Gallery
See also
Asteroid 293477 Teotihuacan
Cerro de la Estrella, a large Teotihuacano-styled pyramid in what is now part of Mexico City
List of archaeoastronomical sites by country
List of megalithic sites
List of Mesoamerican pyramids
List of World Heritage Sites in Mexico
Robert E. Lee Chadwick, an American anthropologist and archeologist
Spring equinox in Teotihuacán
Giza pyramid complex
References
Further reading
Bueno, Christina. The Pursuit of Ruins: Archeology, History, and the Making of Modern Mexico. Albuquerque: University of New Mexico Press, 2016.
(1992). "Abstraction and the rise of a utopian state at Teotihuacan", in Janet Berlo, ed. Art, Ideology, and the City of Teotihuacan, Dumbarton Oaks, pp. 281–320.
(1959) Un Palacio en la ciudad de los dioses, Teotihuacán, Mexico, Instituto Nacional de Antropología e Historia.
(1962) El Universo de Quetzalcóatl, Fondo de Cultura Económica.
(1966) Arqueología de Teotihuacán, la cerámica, Fondo de Cultura Económica.
(1969) Teotihuacan, métropole de l'Amérique, Paris, F. Maspero.
External links
Teotihuacan Research Guide, academic resources and links, maintained by Temple University
Teotihuacan Teotihuacan information and history
Teotihuacan article by Encyclopædia Britannica
Teotihuacan Multimedia Gallery
360° Panoramic View of the Avenue of the Dead, the Pyramid of the Sun and the Pyramid of the Moon, by Roland Kuczora
Lidar scans of the Teotihuacán Valley reveal how the landscape was engineered centuries ago. Gizmodo Sept 21, 2021
Lost ancient cities and towns
Ancient peoples
Mesoamerican sites
History museums in Mexico
World Heritage Sites in Mexico
Mexico City metropolitan area
Archaeoastronomy
Ancient cities
Former populated places in Mexico
Archaeological sites in the State of Mexico
Museums in the State of Mexico
Archaeological museums in Mexico
Populated places established in the 1st millennium BC | Teotihuacan | Astronomy | 11,380 |
5,140,476 | https://en.wikipedia.org/wiki/Nagata%27s%20conjecture%20on%20curves | In mathematics, the Nagata conjecture on curves, named after Masayoshi Nagata, governs the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities.
History
Nagata arrived at the conjecture via work on the 14th problem of Hilbert, which asks whether the invariant ring of a linear group action on the polynomial ring over some field is finitely generated. Nagata published the conjecture in a 1959 paper in the American Journal of Mathematics, in which he presented a counterexample to Hilbert's 14th problem.
Statement
Nagata Conjecture. Suppose p_1, ..., p_r are very general points in the projective plane P^2 and that m_1, ..., m_r are given positive integers. Then for r > 9, any curve C in P^2 that passes through each of the points p_i with multiplicity m_i must satisfy deg C > (m_1 + ... + m_r) / sqrt(r).
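A compact restatement in LaTeX notation may help; this is a sketch of the standard formulation (the points p_i, the multiplicities m_i, and the curve C are as in the statement above, and deg denotes the degree of the plane curve):

\text{If } r > 9, \text{ then every curve } C \subset \mathbb{P}^2 \text{ with } \operatorname{mult}_{p_i} C \ge m_i \text{ for all } i \text{ satisfies}
\qquad \deg C \;>\; \frac{1}{\sqrt{r}} \sum_{i=1}^{r} m_i .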
The condition r > 9 is necessary: the cases r ≤ 9 and r > 9 are distinguished by whether or not the anti-canonical bundle on the blowup of P^2 at a collection of r points is nef. In the case where r ≤ 9, the cone theorem essentially gives a complete description of the cone of curves of the blow-up of the plane.
Current status
The only case when this is known to hold is when r is a perfect square, which was proved by Nagata. Despite much interest, the other cases remain open. A more modern formulation of this conjecture is often given in terms of Seshadri constants and has been generalised to other surfaces under the name of the Nagata–Biran conjecture.
References
Algebraic curves
Conjectures | Nagata's conjecture on curves | Mathematics | 295 |
43,977,982 | https://en.wikipedia.org/wiki/Cytokine-induced%20killer%20cell | Cytokine-induced killer cells (CIK) cells are a group of immune effector cells featuring a mixed T- and natural killer (NK) cell-like phenotype. They are generated by ex vivo incubation of human peripheral blood mononuclear cells (PBMC) or cord blood mononuclear cells with interferon-gamma (IFN-γ), anti-CD3 antibody, recombinant human interleukin (IL)-1 and recombinant human interleukin (IL)-2.
Typically, immune cells detect major histocompatibility complex (MHC) presented on infected cell surfaces, triggering cytokine release, causing lysis or apoptosis. However, CIK cells have the ability to recognize infected or even malignant cells in the absence of antibodies and MHC, allowing for a fast and unbiased immune reaction. This is of particular importance as harmful cells that are missing MHC markers cannot be tracked and attacked by other immune cells, such as T-lymphocytes. As a special feature, terminally differentiated CD3+CD56+ CIK cells possess the capacity for both MHC-restricted and MHC-unrestricted anti-tumor cytotoxicity.
These properties, inter alia, rendered CIK cells attractive as a potential therapy for cancer and viral infections.
A new subclass of NK cells has been created both in vitro and in vivo. These NK cells, referred to as cytokine-induced memory-like natural killer cells, are induced using cytokines, most commonly a mix of IL-12, IL-15, and IL-18. These cytokines activate the NK cells by simulating an infection and inducing an adaptive immune response. If cocultured with target cells such as tumor targets, these NK cells have memory-like abilities and are more adept and effective at mounting a defense.
Nomenclature
They were given the name “cytokine-induced killer” because cultivation with certain cytokines is mandatory for the maturation into terminally differentiated CIK cells. Several sources also call them natural killer cell-like T cells due to their close relationship to NK cells. Others propose to classify CIK cells as subset of NKT cells.
Mechanism
It has been shown that lymphocytes, when exposed to interferon-gamma, anti-CD3 antibody, interleukin-1 and interleukin 2, are capable of lysing fresh, non-cultured cancer cells, both primary and metastatic. CIK cells respond to these lymphokines, particularly IL-2, by lysing tumor cells that were already known to be resistant to NK cell or LAK cell activity.
Peripheral blood mononuclear cells or cord blood mononuclear cells are extracted from either peripheral blood or cord blood, e.g. by simple blood draw. Extracted cells are ex-vivo exposed to interferon-gamma, anti-CD3 antibody, interleukin-1 and interleukin-2 in a time-sensitive schedule. These cytokines strongly stimulate the proliferation and maturation into CIK cells.
After completed maturation CIK cells are transfused to the donor in autologous settings or to different recipients in allogeneic settings.
Furthermore, it has been shown that CIK cells express relevant levels of FcγRIIIa (CD16a), which can be exploited in combination with clinical-grade monoclonal antibodies (mAbs) to redirect their activity in an antigen-specific manner. Indeed, the engagement of CD16a on CD3+CD56+ cells led to potent antibody-dependent cell-mediated cytotoxicity (ADCC) both in vitro and in vivo against ovarian cancer. Recently, the efficacy of a combined approach (CIK cells plus cetuximab) has been demonstrated against triple-negative breast cancer (TNBC), an aggressive tumor that still requires therapeutic options. Different primary and metastatic TNBC mouse models were established, and treatment with CIK cells plus cetuximab significantly restrained primary tumor growth, in both patient-derived tumor xenografts and MDA-MB-231 cell line models. Moreover, this approach almost completely abolished metastatic spread and dramatically improved survival. The antigen-specific mAb favored infiltration of tumor and metastatic tissue by CIK cells and led to an enrichment of the CD16a+ subset. These data highlight the potential of this novel immunotherapy strategy, in which a nonspecific cytotoxic cell population can be converted into tumor-specific effectors with clinical-grade antibodies, thus providing not only a therapeutic option for TNBC but also a valid alternative to more complex approaches based on chimeric antigen receptor-engineered cells.
Function
The mechanism of CIK cells is distinctive from that of natural killer cells or LAK cells because they can lyse cells that NK cells and LAK cells cannot.
CIK cells have, as a key feature, a double T-cell and NK cell-like phenotype. This unique combination of T-cell and NK-cell capabilities exerts a potent and widely MHC-unrestricted anti-tumor cytotoxicity against a broad range of cancer cells.
To date, the exact mechanisms of tumor recognition and targeted cytotoxicity of CIK cells are not fully understood. Besides recognition via TCR/CD3, NK-cell-like tumor recognition is mediated by the cell-cell contact-dependent receptors NKG2D, DNAM-1, and NKp30. These receptors and surface markers confer the ability to act against cells that do not display the major histocompatibility complex, as shown by the capacity to lyse non-immunogenic, allogeneic, and syngeneic tumors. Solid and hematologic tumor cells in particular tend to overexpress NKG2D ligands, making them a favored target of CIK cell-mediated cytolysis. Recognition is specific to tumor and virus-infected cells, as CIK cells do not display activity against healthy cells.
Immunomodulatory Tregs were shown to inhibit CIK cell function.
Cancer Treatment
CIK cells, along with the administration of IL-2 have been experimentally used to treat cancer in mice and humans with low toxicity.
Clinical trials
In a large number of phase I and phase II studies, autologous and allogeneic CIK cells displayed a high cytotoxic potential against a broad range of varying tumor entities, whereas side effects were only minor. In many cases, CIK cell treatment led to complete remissions of tumor burden, prolonged survival durations and improved quality of life, even in advanced disease stages.
Currently, the utilization of CIK cell treatment is restricted to clinical studies, but this therapeutic approach might also benefit patients as first-line treatment modality in the future.
International Registry on CIK Cells (IRCC)
The international registry on CIK cells (IRCC) was founded in 2011 as an independent organization, dedicated to collect data about clinical trials utilizing CIK cells and subsequent analysis to determine the latest state of clinical CIK cell research. A particular focus is thereby the evaluation of CIK cell efficacy in clinical trials and side effects.
International Society of CIK Cells (ISCC)
The International Society of Cytokine-induced killer cells (ISCC) was founded in 2024 to promote the networking of people who develop and research treatment therapies with CIK cells. In general, the members of the society aim to improve treatment options for cancer patients.
Future trends
In studies, researchers succeeded in transfecting cells ex vivo with cytokine genes, e.g. IL-2. Gene-modified CIK cells showed an increased proliferation rate and enhanced toxicity. Gene-transfected CIK cells were first applied in 1999 for the treatment of ten patients in a metastatic state of disease.
Evidence is growing that the interaction with dendritic cells (DC) or rather vaccinated DCs further improves the anti-tumor efficacy of CIK cells and joint cultivation additionally reduces the number of Tregs within the CIK cell culture, resulting in enhanced expansion and frequency of CD3+CD56+ cells in the amplified cell population.
In-vitro studies revealed that CIK cells, redirected by chimeric antigen receptors with an antibody-defined specificity for different tumor antigens, showed an improved selectivity and activation in targeting antigen-presenting tumor cells.
In-vitro and in-vivo activity of CIK cells in conjunction with bispecific antibodies, cross-linking cytotoxic effector cells with malignant targets, was enhanced compared with CIK cells alone.
History
CIK cells were first described by Ingo G.H. Schmidt-Wolf in 1991, who also performed the first clinical trial with CIK cells in the treatment of cancer patients in 1994.
See also
Natural killer cell
Natural killer T cell
Lymphokine-activated killer cell
Interleukin
Cancer immunotherapy
References
Rosato, Antonio; Sommaggio, Roberta (Aug 2017). "Cytokines for the induction of antitumor effectors: The paradigm of Cytokine-Induced Killer (CIK) cells". Cytokine Growth Factor Rev. 36: 99–105. doi:10.1016/j.cytogfr.2017.06.003.
External links
Cytokine-Induced Killer Cells at the US National Library of Medicine Medical Subject Headings (MeSH)
International Registry on CIK cells (IRCC)
International Society of CIK cells (ISCC)
Definition of CIK cells by the National Cancer Institute
Immune system
Lymphocytes
Cancer treatments | Cytokine-induced killer cell | Biology | 2,027 |
17,705,066 | https://en.wikipedia.org/wiki/Pirimiphos-methyl | Pirimiphos-methyl, marketed as Actellic and Sybol, is a phosphorothioate used as an insecticide. It was originally developed by Imperial Chemical Industries Ltd., now Syngenta, at their Jealott's Hill site and first marketed in 1977, ten years after its discovery.
This is one of several compounds used for vector control of Triatoma. These insects are implicated in the transmission of Chagas disease in the Americas. Pirimiphos-methyl can be applied as an interior surface paint additive, in order to achieve a residual pesticide effect.
Synthesis
Pirimiphos-methyl is manufactured in a two-step process: N,N-diethylguanidine is reacted with ethyl acetoacetate to form a hydroxy-substituted pyrimidine ring, and this hydroxy group is then combined with dimethyl chlorothiophosphate to form the insecticide.
Pirimiphos-ethyl is a related insecticide in which the methoxy groups are replaced with ethoxy groups.
References
External links
Acetylcholinesterase inhibitors
Organothiophosphate esters
Pesticides
Aminopyrimidines
Diethylamino compounds
Methoxy compounds | Pirimiphos-methyl | Biology,Environmental_science | 254 |
21,664 | https://en.wikipedia.org/wiki/Nebula | A nebula (; : nebulae, or nebulas) is a distinct luminescent part of interstellar medium, which can consist of ionized, neutral, or molecular hydrogen and also cosmic dust. Nebulae are often star-forming regions, such as in the Pillars of Creation in the Eagle Nebula. In these regions, the formations of gas, dust, and other materials "clump" together to form denser regions, which attract further matter and eventually become dense enough to form stars. The remaining material is then thought to form planets and other planetary system objects.
Most nebulae are of vast size; some are hundreds of light-years in diameter. A nebula that is visible to the human eye from Earth would appear larger, but no brighter, from close by. The Orion Nebula, the brightest nebula in the sky and occupying an area twice the angular diameter of the full Moon, can be viewed with the naked eye but was missed by early astronomers. Although denser than the space surrounding them, most nebulae are far less dense than any vacuum created on Earth – a nebular cloud the size of the Earth would have a total mass of only a few kilograms. Earth's air has a density of approximately 10¹⁹ molecules per cubic centimeter; by contrast, the densest nebulae can have densities of 10⁴ molecules per cubic centimeter. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse that they can be detected only with long exposures and special filters. Some nebulae are variably illuminated by T Tauri variable stars.
Originally, the term "nebula" was used to describe any diffuse astronomical object, including galaxies beyond the Milky Way. The Andromeda Galaxy, for instance, was once referred to as the Andromeda Nebula (and spiral galaxies in general as "spiral nebulae") before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble, and others. Edwin Hubble discovered that most nebulae are associated with stars and illuminated by starlight. He also helped categorize nebulae based on the type of light spectra they produced.
Observational history
Around 150 AD, Ptolemy recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Muslim Persian astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars (964). He noted "a little cloud" where the Andromeda Galaxy is located. He also cataloged the Omicron Velorum star cluster as a "nebulous star" and other nebulous objects, such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054.
In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. However, the first detailed study of the Orion Nebula was not performed until 1659 by Christiaan Huygens, who also believed he was the first person to discover this nebulosity.
In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 (including eight not previously known) in 1746. From 1751 to 1753, Nicolas-Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown. Charles Messier then compiled a catalog of 103 "nebulae" (now called Messier objects, which included what are now known to be galaxies) by 1781; his interest was detecting comets, and these were objects that might be mistaken for them.
The number of nebulae was then greatly increased by the efforts of William Herschel and his sister, Caroline Herschel. Their Catalogue of One Thousand New Nebulae and Clusters of Stars was published in 1786. A second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity rather than a more distant cluster.
Beginning in 1864, William Huggins examined the spectra of about 70 nebulae. He found that roughly a third of them had the emission spectrum of a gas. The rest showed a continuous spectrum and were thus thought to consist of a mass of stars. A third category was added in 1912 when Vesto Slipher showed that the spectrum of the nebula that surrounded the star Merope matched the spectra of the Pleiades open cluster. Thus, the nebula radiates by reflected star light.
In 1923, following the Great Debate, it became clear that many "nebulae" were in fact galaxies far from the Milky Way.
Slipher and Edwin Hubble continued to collect the spectra from many different nebulae, finding 29 that showed emission spectra and 33 that had the continuous spectra of star light. In 1922, Hubble announced that nearly all nebulae are associated with stars and that their illumination comes from star light. He also discovered that the emission spectrum nebulae are nearly always associated with stars having spectral classifications of B or hotter (including all O-type main sequence stars), while nebulae with continuous spectra appear with cooler stars. Both Hubble and Henry Norris Russell concluded that the nebulae surrounding the hotter stars are transformed in some manner.
Formation
There are a variety of formation mechanisms for the different types of nebulae. Some nebulae form from gas that is already in the interstellar medium while others are produced by stars. Examples of the former case are giant molecular clouds, the coldest, densest phase of interstellar gas, which can form by the cooling and condensation of more diffuse gas. Examples of the latter case are planetary nebulae formed from material shed by a star in late stages of its stellar evolution.
Star-forming regions are a class of emission nebula associated with giant molecular clouds. These form as a molecular cloud collapses under its own weight, producing stars. Massive stars may form in the center, and their ultraviolet radiation ionizes the surrounding gas, making it visible at optical wavelengths. The region of ionized hydrogen surrounding the massive stars is known as an H II region, while the shells of neutral hydrogen surrounding the H II region are known as photodissociation regions. Examples of star-forming regions are the Orion Nebula, the Rosette Nebula and the Omega Nebula. Feedback from star formation, in the form of supernova explosions of massive stars, stellar winds or ultraviolet radiation from massive stars, or outflows from low-mass stars, may disrupt the cloud, destroying the nebula after several million years.
Other nebulae form as the result of supernova explosions, the death throes of massive, short-lived stars. The material thrown off in the supernova explosion is then ionized by the energy of the blast and by the compact object produced from the star's core. One of the best examples of this is the Crab Nebula, in Taurus. The supernova event was recorded in the year 1054 and is labeled SN 1054. The compact object created in the explosion lies at the center of the Crab Nebula and is a neutron star.
Still other nebulae form as planetary nebulae. This is the final stage of a low-mass star's life, like Earth's Sun. Stars with a mass up to 8–10 solar masses evolve into red giants and slowly lose their outer layers during pulsations in their atmospheres. When a star has lost enough material, its temperature increases and the ultraviolet radiation it emits can ionize the surrounding nebula that it has thrown off. The Sun will produce a planetary nebula and its core will remain behind in the form of a white dwarf.
Types
Classical types
Objects named nebulae belong to four major groups. Before their nature was understood, galaxies ("spiral nebulae") and star clusters too distant to be resolved as stars were also classified as nebulae, but no longer are.
H II regions, large diffuse nebulae containing ionized hydrogen
Planetary nebulae
Supernova remnants (e.g., Crab Nebula)
Dark nebulae
Not all cloud-like structures are nebulae; Herbig–Haro objects are an example.
Flux Nebulae
Diffuse nebulae
Most nebulae can be described as diffuse nebulae, which means that they are extended and contain no well-defined boundaries. Diffuse nebulae can be divided into emission nebulae, reflection nebulae and dark nebulae.
Visible light nebulae may be divided into emission nebulae, which emit spectral line radiation from excited or ionized gas (mostly ionized hydrogen) and are often called H II regions (H II referring to ionized hydrogen), and reflection nebulae, which are visible primarily due to the light they reflect.
Reflection nebulae themselves do not emit significant amounts of visible light, but are near stars and reflect light from them. Similar nebulae not illuminated by stars do not exhibit visible radiation, but may be detected as opaque clouds blocking light from luminous objects behind them; they are called dark nebulae.
Although these nebulae have different visibility at optical wavelengths, they are all bright sources of infrared emission, chiefly from dust within the nebulae.
Planetary nebulae
Planetary nebulae are the remnants of the final stages of stellar evolution for mid-mass stars (roughly 0.5 to 8 solar masses). Evolved asymptotic giant branch stars expel their outer layers outwards due to strong stellar winds, thus forming gaseous shells while leaving behind the star's core in the form of a white dwarf. Radiation from the hot white dwarf excites the expelled gases, producing emission nebulae with spectra similar to those of emission nebulae found in star formation regions. They are H II regions, because mostly hydrogen is ionized, but planetary nebulae are denser and more compact than nebulae found in star formation regions.
Planetary nebulae were given their name by the first astronomical observers who were initially unable to distinguish them from planets, which were of more interest to them. The Sun is expected to spawn a planetary nebula about 12 billion years after its formation.
Protoplanetary nebulae
Supernova remnants
A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. The expanding shell of gas forms a supernova remnant, a special diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields.
Examples
Ant Nebula
Barnard's Loop
Boomerang Nebula
Cat's Eye Nebula
Crab Nebula
Eagle Nebula
Eskimo Nebula
Carina Nebula
Fox Fur Nebula
Helix Nebula
Horsehead Nebula
Engraved Hourglass Nebula
Lagoon Nebula
Orion Nebula
Pelican Nebula
Red Square Nebula
Ring Nebula
Rosette Nebula
Tarantula Nebula
Waterfall Nebula
Catalogs
Gum catalog (emission nebulae)
RCW Catalogue (emission nebulae)
Sharpless catalog (emission nebulae)
Messier Catalogue
Caldwell Catalogue
Abell Catalog of Planetary Nebulae
Barnard Catalogue (dark nebulae)
Lynds' Catalogue of Bright Nebulae
Lynds' Catalogue of Dark Nebulae
See also
H I region
H II region
List of largest nebulae
List of diffuse nebulae
Lists of nebulae
Molecular cloud
Magellanic Clouds
Messier object
Nebular hypothesis
Orion molecular cloud complex
Timeline of knowledge about the interstellar and intergalactic medium
References
External links
Nebulae, SEDS Messier Pages
Fusedweb.pppl.gov
Historical pictures of nebulae, digital library of Paris Observatory
Space plasmas
Concepts in astronomy
Interstellar media | Nebula | Physics,Astronomy | 2,510 |
61,575,689 | https://en.wikipedia.org/wiki/National%20Atmospheric%20Deposition%20Program | The National Atmospheric Deposition Program (NADP) is a Cooperative Research Support Program of the State Agricultural Experiment Stations (NRSP-3). Housed at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin–Madison, the NADP is a collaborative effort between many different groups, such as: Federal, state, tribal, local governmental agencies, educational institutions, private companies, and non-governmental agencies. These organizations work together in order to operate monitoring sites and report deposition data. The NADP provides free access to all of its data, including seasonal and annual averages, trend plots, deposition maps, reports, manuals, and educational brochures.
Overview
Established: 1977
Number of sites: ~350
Number of users: >37,000
History
Evolution
The National Atmospheric Deposition Program, or NADP, was initiated by the State Agricultural Experiment Station in 1977 to monitor the effects of atmospheric deposition on crops, rangelands, forests, surface waters, and other natural and cultural resources. The initial goal was to provide regional data for the deposition of acids, nutrients, and base cations (including temporal trends/amounts and geographic distributions).
In 1978, the first NADP sites began collecting weekly precipitation samples. In the early 1980s, the National Acid Precipitation Assessment Program (NAPAP) was established, and began to work in collaboration with NADP in order to sustain a long term, quality-assured precipitation monitoring network. This unification brought on a major expansion as well as newfound federal agency support. Today, the NADP National Trends Network (NTN) has more than 250 sites.
In response to emerging issues, the NADP established an additional two networks in the 1990s: The Atmospheric Integrated Research Monitoring Network (AIRMoN), which collected daily samples at five sites, and the Mercury Deposition Network (MDN), which has more than 80 sites (six of which are located in Canada). The MDN collects wet deposition data for both total and methyl mercury in precipitation.
In 2009, the Atmospheric Mercury Network (AMNet) was formed as a fourth network, and as a subset of some MDN sites. The network uses continuous automatic measurement systems to monitor gaseous and particulate concentrations of atmospheric mercury. The Ammonia Monitoring Network (AMoN) was added as a fifth network in October 2010, and it currently has more than 100 sites. AMoN monitors ammonia gas concentrations across the United States to provide consistent and lasting data. The Mercury Litterfall Network (MLN) was approved as the sixth network in 2021 with 22 sites. MLN provides estimates of mercury dry deposition in forested landscapes using passive collectors.
History of the National Acid Precipitation Assessment Program (NAPAP)
The National Acid Precipitation Assessment Program (NAPAP) was a cooperative federal program that was first authorized in 1981 in order to coordinate acid rain research and report those findings to the U.S. Congress. The research, monitoring, and assessment efforts of NAPAP, and other groups in the 1980s, culminated in Title IV of the 1990 Clean Air Act Amendments (CAAA), also known as the Acid Deposition Control Program. Title IX of the CAAA reauthorized NAPAP to conduct acid rain research and monitoring, and to periodically assess the costs, benefits, and effectiveness of Title IV. The NAPAP member agencies were the U.S. Environmental Protection Agency, the U.S. Department of Energy, the U.S. Department of Agriculture, the U.S. Department of Interior, the National Aeronautics and Space Administration, and the National Oceanic and Atmospheric Administration.
The NAPAP published a total of four reports: 1991 (multiple volumes), 1998, 2005, and 2011. The Program was able to describe and document strong reductions in sulfur dioxide and nitrogen oxide emissions, as well as the resulting atmospheric deposition from 1980 to 2010 as various elements of the CAAA were implemented. The NAPAP officially ended with publication of the last report in 2011. To reflect the federal NAPAP role in the NADP, the network name was changed to NADP National Trends Network (NTN)
Organization
Governance
The organizational structure of the NADP follows the State Agricultural Experiment Station Guidelines for Multi-State Research Activities (SAESD, 2006)1. This framework allows any individual or institution to participate in any segment of NADP, whether it be the monitoring or the research aspect of atmospheric deposition. NADP is managed by two groups. The first is Program Management, which is largely a volunteer group made up of site sponsors and supervisors, policy experts from several agencies (at the federal, state, and local levels), scientists and research specialists, and anyone with an interest in atmospheric deposition. Program management is organized through an Executive Committee, Technical Subcommittees, several advisory subcommittees, science subcommittees, and ad hoc groups. The second group is Program Operations, which is managed by a professional staff housed at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin-Madison. The Program Office oversees day-to-day tasks, including coordinating with the Executive Committee, the individual monitoring networks, the analytical laboratories, the External Quality Assurance Program, and the Network Equipment Depot.
Committees
The NADP is governed by an elected and rotating Executive Committee (8 members). Currently, there are two standing Subcommittees, three standing Advisory Committees, and four Science Committees (highlighted below) that contribute continuous, scheduled suggestions to the Executive Committee. Ad hoc groups and the Program Office also supply crucial input to the Executive Committee.
The Executive Committee (EC) is responsible for considering and, if approved, executing decisions which are often based on the suggestions made by the subcommittees, advisory committees, science committees, and ad hoc groups. In addition, the EC is accountable for financial decisions and securing a balanced, stable, and ongoing program. There are eight voting members, as well as numerous non-voting members, that make decisions and appoint responsibilities to the subcommittees.
The two standing Technical Subcommittees, Education and Outreach Subcommittee (EOS) (formerly the Ecological Response and Outreach Subcommittee) and Network Operations Subcommittee (NOS), provide the technical support necessary to promote the goals of NADP. EOS maintains a platform to coordinate outreach and education activities among the network and scientific subcommittees. With approval and recommendation from the Executive Committee, EOS will provide guidance for outreach efforts and educational materials to the Program Office. EOS will provide a forum to enable communication of outreach and education needs, goals and activities of the subcommittees and networks. The goal is to enhance efficiency in messaging and reaching new audiences. The NOS focuses on equipment, research, sampling methods, collection sites, and the evaluation of the issues that arise from these components.
The three advisory subcommittees include the Budget Advisory Committee (BAC), Quality Assurance Advisory Group (QAAG), and Data Management Advisory Group (DMAG). The role of the BAC is to advise the EC with suggestions pertaining to the budget, and to outline financial planning for current and future years. The QAAG is in charge of ensuring quality management in all aspects of NADP, including the Program Office, networks, and laboratories. To do so, they provide recommendations for manuals and procedures to the EC. The DMAG counsels the EC in data management by reviewing data reports and formats in order to ensure that they are in line with the correct protocols.
The science committees do not directly advise NADP networks, but they are closely affiliated. They assess major atmospheric deposition concerns and track scientific interest and participation. The first scientific committee was the Critical Loads of Atmospheric Deposition (CLAD), and the second was the Total Deposition Science Committee (TDep). CLAD and TDep were approved by the EC in 2010 and 2011, respectively. The goal of the CLAD is to provide a forum, across all levels of government and industry, that encourages the use and discussion of technical information and critical load science. TDep seeks to evaluate pressing issues of atmospheric deposition via a collaboration between a wide range of groups. TDep also aims to improve the ability to measure and model wet and dry deposition. To do so, they are working to advance the techniques and procedures which are used to estimate deposition of sulfur, nitrogen, and mercury. In October 2017, the Aeroallergen Monitoring Science Committee (AMSC) was added as the third science committee. AMSC seeks to utilize emerging technologies to advance the science of aeroallergen monitoring, enhance the understanding of quality data collection and evaluation methods, and provide lasting data for national networks. A fourth science committee, the Mercury in the Environment and Links to Deposition Science Committee (MELD), was formed in 2020 to improve our understanding of atmospherically-derived mercury sources, pathways, processes, and effects on the environment.
All NADP operations are administered at the NADP Program Office, which is currently located at the Wisconsin State Laboratory of Hygiene at the University of Wisconsin–Madison. The five main functions of the Program Office are network administration, management, meetings and trainings, data and publications, and quality assurance and management.
Network administration involves overseeing the endeavors of all five networks, managing sample analysis, and coordinating data storage and user availability. These functions are executed from the two analytical laboratories housed at WSLH: The Central Analytical Lab (CAL), which analyses samples from the NTN and AMoN networks, and the Mercury (Hg) Analytical Laboratory (HAL). The HAL was previously housed at Eurofins Frontier Global Sciences, Inc. in Bothell, Washington. In May 2023, the CAL and the HAL were renamed the NADP Analytical Laboratory (NAL). In addition, the Network Equipment Depot, located at the WSLH, provides spare parts for NADP field equipment and troubleshoots site operation problems.
Cooperating agencies
More than 80 sponsors support the NADP: Private companies and other non-governmental organizations, universities, local and state government agencies (i.e. state agricultural experiment stations), national laboratories, Native American environmental organizations, Canadian government agencies, the National Oceanic and Atmospheric Administration, the U.S. Environmental Protection Agency, the U.S. Geological Survey, the National Park Service, the U.S. Fish & Wildlife Service, the Bureau of Land Management, the U.S. Forest Service, the U.S. Department of Agriculture-Agricultural Research Service, the National Science Foundation, and the U.S. Department of Energy.
Networks
NTN
The NTN has over 250 sites that focus on wet deposition chemistry by collecting weekly precipitation samples nationwide. The samples are sent to the NADP Analytical Laboratory (NAL) at the Wisconsin State Lab of Hygiene for analysis and are then used to determine geographic distribution and annual trends. The sample collection and handling methods follow strict clean-handling procedures in order to ensure accurate results. The analytes monitored are: Free acidity (H⁺ as pH), conductance, calcium (Ca²⁺), magnesium (Mg²⁺), sodium (Na⁺), potassium (K⁺), sulfate (SO₄²⁻), nitrate (NO₃⁻), chloride (Cl⁻), and ammonium (NH₄⁺). The NAL also measures orthophosphate, but only for quality assurance as an indicator of sample contamination.
MDN
The MDN measures total mercury concentrations on a weekly basis (methyl mercury is measured monthly at some sites), which provides wet deposition data for surface waters and other waterways. The goal is to deliver accurate information that allows researchers to evaluate the linkage between mercury and health, which is strengthened by its large spatial and temporal footprint.
AMNet
The AMNet consists of approximately 15 sites across the U.S. and Canada. The function of these sites is to measure ambient air concentrations of gaseous oxidized mercury (GOM), particulate bound mercury (PBM2.5), and gaseous elemental mercury (GEM). This network works to monitor and report atmospheric mercury that causes dry and total deposition of mercury at select MDN sites. AMNet produces high-resolution data to determine atmospheric mercury trends and models, the ecological consequences of mercury discharging sources, and how to adequately control mercury levels.
AMoN
The AMoN measures ambient ammonia gas concentrations over a two-week period via a Radiello®-passive sampler, which is a simple diffusive sampler that offers higher capacity and faster sampling rates than other devices. Therefore, AMoN can provide reliable data to aid in meeting air quality policies and administration needs. AMoN collects data biweekly to determine the spatial variability and seasonality of ammonia concentrations.
MLN
The MLN provides estimates of litterfall mercury, an important component of mercury dry deposition to forested landscapes. The importance of litterfall mercury data for quantifying atmospheric mercury deposition to forests was demonstrated in studies at NADP sites in the eastern USA covering 2007–2009 and 2007–2014.
Closed Networks
AIRMoN
The AIRMoN sites were primarily used to assess the impacts of emission changes such as potential effects from new sources, federal Clean Air Act controls, and source-receptor relationships in atmospheric models. The network measured the same contaminants as the NTN, but sampling occurred daily during precipitation to provide greater temporal resolution. This consistent, high-resolution sampling improved the researchers’ ability to evaluate the data and, therefore, provide reliable results. The network was discontinued in September 2019.
Products
Tabular data products
Reports
Brochures
Annual Data Summaries
Quality Assurance Reports
CLAD Science Committee Reports
TDep Science Committee Reports
AMSC Study Plan
MELD Science Committee Reports
Other helpful sites
Rocky Mountain Research Station - Air, soil, and water resources and quality
NRSP3: The National Atmospheric Deposition Program (NADP)
Standard Operating Procedures (SOP)
Accurate and consistent measurement of gases and deposition at every monitoring site is of the utmost importance to the NADP. This is accomplished, in part, by ensuring that all sites adhere to specific standard operating procedures. This provides consistent methodology at all sites within the networks. The SOPs can be viewed here:
http://nadp.slh.wisc.edu/siteops/
Other Deposition Monitoring Groups
Acid Deposition Monitoring Program in East Asia (EANET)
Canadian Air and Precipitation Monitoring Network (CAPMoN)
Clean Air Status and Trends Network (CASTNET)
Great Lakes National Program Office (GLNPO)
Asia-Pacific Mercury Monitoring Network (APMMN)
References
a. 1SAESD (State Agricultural Experiment Station Directors). 2013. Guidelines for Multistate Research Activities. Developed by SAESD in cooperation with the Cooperative State Research, Education, and Extension Service, USDA (NIFA) and the Experiment Station Committee on Organization and Policy (ESCOP). Approved September 26, 2000, updated August 15, 2013. http://escop.ncsu.edu/docs/MRF Guidelines Revised 08 1 513.pdf
b. NADP Governance Handbook
c. https://nadp.slh.wisc.edu/
External links
Soil and crop science organizations
University of Wisconsin–Madison
1977 establishments in Wisconsin
Rain
Air pollution
Environmental chemistry | National Atmospheric Deposition Program | Chemistry,Environmental_science | 3,057 |
41,118,471 | https://en.wikipedia.org/wiki/Armitage%20%28computing%29 | Armitage is a graphical cyber attack management tool for the Metasploit Project that visualizes targets and recommends exploits. It is a free and open source network security tool notable for its contributions to red team collaboration allowing for: shared sessions, data, and communication through a single Metasploit instance. Armitage is written and supported by Raphael Mudge.
History
Armitage is a GUI front-end for the Metasploit Framework developed by Raphael Mudge with the goal of helping security professionals better understand hacking and to help them realize the power of Metasploit. It was originally made for Cyber Defense Exercises, but has since expanded its user base to other penetration testers.
Features
Armitage is a scriptable red team collaboration tool built on top of the Metasploit Framework. Through Armitage, a user may launch scans and exploits, get exploit recommendations, and use the advanced features of the Metasploit Framework's meterpreter.
References
External links
Cobalt Strike (Strategic Cyber LLC)
Computer security exploits
Computer security software
Cross-platform free software
Free security software
Injection exploits
Software testing
Unix network-related software
Software using the BSD license | Armitage (computing) | Technology,Engineering | 240 |
290,441 | https://en.wikipedia.org/wiki/Cram%C3%A9r%27s%20conjecture | In number theory, Cramér's conjecture, formulated by the Swedish mathematician Harald Cramér in 1936, is an estimate for the size of gaps between consecutive prime numbers: intuitively, that gaps between consecutive primes are always small, and the conjecture quantifies asymptotically just how small they must be. It states that
$p_{n+1} - p_n = O\left((\log p_n)^2\right),$
where $p_n$ denotes the $n$th prime number, $O$ is big O notation, and "log" is the natural logarithm. While this is the statement explicitly conjectured by Cramér, his heuristic actually supports the stronger statement
$\limsup_{n\rightarrow\infty} \frac{p_{n+1}-p_n}{(\log p_n)^2} = 1,$
and sometimes this formulation is called Cramér's conjecture. However, this stronger version is not supported by more accurate heuristic models, which nevertheless support the first version of Cramér's conjecture.
The strongest form of all, which was never claimed by Cramér but is the one used in experimental verification computations and the plot in this article, is simply
$p_{n+1} - p_n < (\log p_n)^2.$
None of the three forms has yet been proven or disproven.
Conditional proven results on prime gaps
Cramér gave a conditional proof of the much weaker statement that
$p_{n+1} - p_n = O\left(\sqrt{p_n}\,\log p_n\right)$
on the assumption of the Riemann hypothesis. The best known unconditional bound is
$p_{n+1} - p_n = O\left(p_n^{0.525}\right),$
due to Baker, Harman, and Pintz.
In the other direction, E. Westzynthius proved in 1931 that prime gaps grow more than logarithmically. That is,
$\limsup_{n\rightarrow\infty} \frac{p_{n+1}-p_n}{\log p_n} = \infty.$
His result was improved by R. A. Rankin, who proved that
$\limsup_{n\rightarrow\infty} \frac{p_{n+1}-p_n}{\log p_n \cdot \frac{\log\log p_n \,\log\log\log\log p_n}{(\log\log\log p_n)^2}} > 0.$
Paul Erdős conjectured that the left-hand side of the above formula is infinite, and this was proven in 2014 by Kevin Ford, Ben Green, Sergei Konyagin, and Terence Tao, and independently by James Maynard. The two sets of authors eliminated one of the factors of $\log\log\log p_n$ later that year, showing that, infinitely often,
$p_{n+1} - p_n > \frac{c \,\log p_n \,\log\log p_n \,\log\log\log\log p_n}{\log\log\log p_n},$
where $c$ is some constant.
Heuristic justification
Cramér's conjecture is based on a probabilistic model—essentially a heuristic—in which the probability that a number of size x is prime is 1/log x. This is known as the Cramér random model or Cramér model of the primes.
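A minimal simulation sketch of this heuristic follows (illustrative only; the bound, the seed, and the function name are choices made here, not taken from Cramér's work). It draws a random set in which each integer $n$ is kept with probability $1/\log n$ and compares its largest gap with $(\log x)^2$, the size the model predicts for maximal gaps.

```python
import math
import random

def cramer_model(limit, seed=0):
    """Draw a random set imitating the primes: each integer n >= 3 is kept
    independently with probability 1/log n (the Cramér random model)."""
    random.seed(seed)
    return [n for n in range(3, limit) if random.random() < 1 / math.log(n)]

# Compare the largest gap in one realisation with (log limit)^2,
# the size Cramér's heuristic predicts for maximal gaps.
limit = 1_000_000
xs = cramer_model(limit)
max_gap = max(b - a for a, b in zip(xs, xs[1:]))
print(f"largest gap in the model: {max_gap}")
print(f"(log limit)^2           : {math.log(limit) ** 2:.1f}")
```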
In the Cramér random model,
$\limsup_{n\rightarrow\infty} \frac{p_{n+1}-p_n}{(\log p_n)^2} = 1$
with probability one. However, as pointed out by Andrew Granville, Maier's theorem shows that the Cramér random model does not adequately describe the distribution of primes on short intervals, and a refinement of Cramér's model taking into account divisibility by small primes suggests that the limit should not be 1, but a constant ($2e^{-\gamma} \approx 1.1229$), where $\gamma$ is the Euler–Mascheroni constant. János Pintz has suggested that the limit sup may be infinite, and similarly Leonard Adleman and Kevin McCurley write
As a result of the work of H. Maier on gaps between consecutive primes, the exact formulation of Cramér's conjecture has been called into question [...] It is still probably true that for every constant $c > 2$, there is a constant $d > 0$ such that there is a prime between $x$ and $x + d(\log x)^c$.
Similarly, Robin Visser writes
In fact, due to the work done by Granville, it is now widely believed that Cramér's conjecture is false. Indeed, there [are] some theorems concerning short intervals between primes, such as Maier's theorem, which contradict Cramér's model.
(internal references removed).
Related conjectures and heuristics
Daniel Shanks conjectured the following asymptotic equality, stronger than Cramér's conjecture, for record gaps:
$G(x) \sim \log^2 x,$
where $G(x)$ is the largest gap between consecutive primes below $x$.
J.H. Cadwell has proposed the formula for the maximal gaps:
$G(x) \sim \log^2 x - \log x \,\log\log x,$
which is formally identical to the Shanks conjecture but suggests a lower-order term.
Marek Wolf has proposed the formula for the maximal gaps $G(x)$ expressed in terms of the prime-counting function $\pi(x)$:
$G(x) \sim \frac{x}{\pi(x)}\left(2\log\pi(x) - \log x + c\right),$
where $c = \log C_2$ and $C_2 \approx 1.3203$ is the twin primes constant. This is again formally equivalent to the Shanks conjecture but suggests lower-order terms
$G(x) \sim \log^2 x - 2\log x \,\log\log x.$
Thomas Nicely has calculated many large prime gaps. He measures the quality of fit to Cramér's conjecture by measuring the ratio
$R = \frac{\log p_n}{\sqrt{p_{n+1}-p_n}}.$
He writes, "For the largest known maximal gaps, $R$ has remained near 1.13."
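The behaviour described above can be checked directly on small primes. The following Python sketch is illustrative only (the sieve, the bound of one million, and the function names are choices made here, not taken from the article or from Nicely's work); it lists the record gaps below one million together with the ratio $g/(\log p)^2$ and Nicely's ratio $R = \log p / \sqrt{g}$.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def record_gaps(limit):
    """Yield (p, gap) each time a new maximal prime gap appears below `limit`."""
    ps = primes_up_to(limit)
    best = 0
    for p, q in zip(ps, ps[1:]):
        if q - p > best:
            best = q - p
            yield p, best

if __name__ == "__main__":
    for p, g in record_gaps(1_000_000):
        log_p = math.log(p)
        # Ratio g / log^2 p (should stay below 1 if the strongest form holds)
        # and Nicely's ratio R = log p / sqrt(g).
        print(f"p={p:>8}  gap={g:>3}  g/log^2(p)={g / log_p**2:5.3f}  "
              f"R={log_p / math.sqrt(g):5.3f}")
```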
See also
Prime number theorem
Legendre's conjecture and Andrica's conjecture, much weaker but still unproven upper bounds on prime gaps
Firoozbakht's conjecture
Maier's theorem on the numbers of primes in short intervals for which the model predicts an incorrect answer
References
External links
Analytic number theory
Conjectures about prime numbers
Unsolved problems in number theory | Cramér's conjecture | Mathematics | 897 |
42,010,769 | https://en.wikipedia.org/wiki/StatsDirect | StatsDirect is a statistical software package designed for biomedical, public health, and general health science uses. The second generation of the software was reviewed in general medical and public health journals.
Features and use
StatsDirect's interface is menu driven and has editors for spreadsheet-like data and reports. The function library includes common medical statistical methods that can be extended by users via an XML-based description that can embed calls to native StatsDirect numerical libraries, R scripts, or algorithms in any of the .NET languages (such as C#, VB.Net, J#, or F#).
Common statistical misconceptions are challenged by the interface. For example, users can perform a chi-square test on a two-by-two table, but they are asked whether the data are from a cohort (prospective) or case-control (retrospective) study before the result is delivered. Both paths produce a chi-square test result, but more emphasis is put on the appropriate statistic for the inference, which is the odds ratio for retrospective studies and the relative risk for prospective studies.
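To make the distinction concrete, the following minimal Python sketch (an illustration only; it is not StatsDirect code, and the function name and example counts are invented here) returns the relative risk for cohort data and the odds ratio for case-control data from the same two-by-two counts.

```python
def two_by_two_summary(a, b, c, d, design):
    """
    Summarise a 2x2 table
                outcome+   outcome-
      exposed       a          b
      unexposed     c          d
    returning the measure appropriate to the study design:
    relative risk for prospective (cohort) data,
    odds ratio for retrospective (case-control) data.
    """
    odds_ratio = (a * d) / (b * c)
    relative_risk = (a / (a + b)) / (c / (c + d))
    if design == "cohort":
        return {"measure": "relative risk", "value": relative_risk}
    if design == "case-control":
        return {"measure": "odds ratio", "value": odds_ratio}
    raise ValueError("design must be 'cohort' or 'case-control'")

# The same counts give different preferred summaries depending on design.
print(two_by_two_summary(20, 80, 10, 90, design="cohort"))        # relative risk = 2.0
print(two_by_two_summary(20, 80, 10, 90, design="case-control"))  # odds ratio = 2.25
```

With these example counts the two measures differ (2.0 versus 2.25), which is why the software asks about the study design before reporting a result.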
Origins
Professor Iain Buchan, formerly of the University of Manchester, wrote a doctoral thesis on the foundational work and is credited as the creator of the software. Buchan said he wished to address the problem of clinicians lacking the statistical knowledge to select and interpret statistical functions correctly, and often misusing software written by and for statisticians as a result.
The software debuted in 1989 as Arcus, then Arcus ProStat in 1993, both written for the DOS platform. Arcus Quickstat for Windows followed in 1999. In 2000, an expanded version, StatsDirect, was released for Microsoft Windows. In 2013, the third generation of this software was released, written in C# for the .NET platform.
StatsDirect reports embed the metadata necessary to replay calculations, which may be needed if the original data is ever updated. The reproducible report technology follows the research object approach for replaying in "eLabs".
References
External links
StatsDirect Home page
Biostatistics
Health informatics in the United Kingdom
Statistical software
University of Manchester
Windows-only proprietary software
26,927,417 | https://en.wikipedia.org/wiki/Project%20Sabre%20II | Project Sabre II was the Pakistan Air Force's program to develop a feasible and low-cost multirole combat jet based on an existing design—the Chengdu F-7 Skybolt, a Chinese variant of the MiG–21. The Pakistan Air Force (PAF) initiated Project Sabre II in 1987, hiring the American aerospace firm Grumman, to provide crucial expertise to refine the baseline aircraft design along with specialists from the PAF and the Chinese People's Liberation Army Air Force (PLAAF).
After studying the Sabre II concept with Grumman, the PAF terminated the program as unfeasible on economic grounds. Grumman withdrew from the project after sanctions were imposed by the United States on China after Beijing's suppression of the Tiananmen Square student protests in 1989. An embargo on military aid to Pakistan imposed by the United States further hampered the Sabre II development effort in the 1990s. In 1995, Pakistan and China began a collaboration which led to the successful JF-17 Thunder program.
Program overview
Origins
In 1982, the Indian Air Force (IAF) procured the MiG-29 Fulcrum from the Soviet Union to modernize its fighter aircraft fleet. As a result, the Pakistan Air Force (PAF) began looking for new technology to replace its aging fighters. By 1984, the PAF's F-7P (Chengdu F-7) fighters were equipped with western electronic systems. The PAF began developing an improved version of the F-7M to replace its large fleet of F-6 fighters.
The Pakistan Air Force started looking for a new fighter to replace their large fleet of Shenyang F-6 fighters, as they approached the end of their service lives in the late 1980s. After showing interest in the F-7M, the Air HQ of the Pakistan Air Force initiated Project Sabre II to develop a low-cost multirole fighter jet modeled on the F-7M.
Design feasibility
In January 1987, the Pakistan Air Force commissioned New York-based Grumman Aerospace to conduct studies and assess the feasibility of the Sabre II design concept with Pakistani aerial specialists and the Chinese Chengdu Aircraft Corporation. After five to seven months, the group concluded that the financial risks posed by very high project costs, together with the availability of more cost-effective options, outweighed the potential benefits of technology transfer from the US to the Pakistan Aeronautical Complex and the experience and technical knowledge it would gain.
Grumman, the Pakistan Air Force and the Chinese People's Liberation Army Air Force (PLAAF) created the Sabre II concept by radically upgrading the F-7M. Changes included upgrades to its avionics suite, radar system and engine, and a redesigned forward fuselage. The PAF stated that the Sabre II would replace around 150 F-6s in combat service. A picture showed that the F-7's nose inlet had been replaced with a solid nose radome and a new pair of air inlets were mounted on the sides of the fuselage under the cockpit.
Under Project Sabre II and the Chinese Super-7 project, the F-7 airframe was redesigned with angled air intakes on the sides of the fuselage so a solid radome nose could house radar and other avionics from Northrop's F-20 "Tigershark" fighter. The Chinese WP-7 turbojet engine was to be replaced with a modern turbofan engine, either the GE F404 or PW1120, to improve performance. The resulting aircraft, designated F-7M Sabre-II, would have looked much like the Guizhou JL-9 (or FTC-2000) jet trainer / fighter aircraft.
Pratt & Whitney's PW1216, an afterburning derivative of the J52-P-409 turbojet producing of thrust, was also proposed as the Sabre II's engine. Its afterburner was designed in China. Fitting the APG-66 radar was also planned.
U.S. objections and termination
As the Soviet Union was withdrawing from neighboring Afghanistan, American interest in Pakistan lessened. The PAF terminated the Sabre II project after Grumman's warning that it was financially risky and less feasible than other options. Worsening of the US–Chinese relations after Beijing's suppression of the Tiananmen Square student protests also hurt the project, as U.S. sanctions prevented transfer of American technology to China. Grumman Aerospace withdrew from the project shortly thereafter.
At the same time, the US Congress imposed an embargo on economic and military exports to Pakistan when Congressional leaders became aware of Pakistan's atomic bomb program. A panic ensued among the Pakistani military, as its nuclear bomb program impacted the Super-7 project. The US government tolerated Pakistan's nuclear program during the 1980s due to a desire for the country's cooperation in defeating the USSR in the Soviet–Afghan War. Once the Soviet forces retreated, Pakistani cooperation was no longer required and military and economic sanctions were imposed on Pakistan under the Pressler amendment in 1990. This prevented F-16 aircraft already paid for by the PAF during the Afghan war from being delivered. Efforts by the PAF to find a replacement failed (see Pakistan Air Force 1990–2001, the lost decade).
Evolution into JF-17
The Pakistan Air Force decided on a much less expensive solution for replacement of the F-6, the F-7P Skybolt, an upgraded version of the F-7M Airguard. The F-7P fleet was to be supported by a fleet of over 100 advanced F-16s from the United States, 40 of which had been delivered during the 1980s. The PAF launched a secretive project, ROSE, to procure as many second-hand Dassault Mirages as possible and upgrade their electronic systems. In March 1990 it was reported that after being rejected by the PAF, Sabre II had been superseded by the Super 7 and China was considering continuing its development. The American arms embargo had forced the PAF to come up with innovative solutions to keep all its combat infrastructure operational.
The PAF hired Russia's Mikoyan Group as consultants, and design studies for the project began at the Pakistan Aeronautical Complex. In 1995, the Pakistan Air Force decided to resume the program and quickly reached out to China. Memoranda of Understanding were reached between both countries towards developing new aircraft to fill the role of the Super 7. This led to the successful development of the JF-17 Thunder, introduced in 2003.
References
Chengdu aircraft
Pakistan Aeronautical Complex aircraft
Abandoned military aircraft projects of China
Cancelled military aircraft projects of Pakistan
Mid-wing aircraft
Single-engined jet aircraft
Aircraft with retractable tricycle landing gear
China–Pakistan military relations
China–United States relations
Pakistan–United States military relations
History of science and technology in Pakistan
Programs of the Ministry of Defence (Pakistan)
Pakistan Air Force
1987 in Pakistan
Secret military programs | Project Sabre II | Engineering | 1,414 |
73,505,016 | https://en.wikipedia.org/wiki/Glossary%20of%20arthropod%20cuticle | This is a glossary of terms used in the description of arthropod cuticle, including that of insects such as ants. For reasons still under investigation, these animals can have surface textures spanning and combining cracks, excavations, imbrications, mealiness, punctures, reticulations, roughness, scratches, spots, wrinkles, and more (generically, 'sculpturing' or 'microsculpture'). As such, hundreds of technical terms have been adapted for use in description of individual specimens from which taxa are defined.
A
C
D
E
F
G
H
I
L
M
N
O
P
R
S
T
U
V
See also
References
External links
Antkey glossary
PIAkey glossary
arthropod anatomy
Ants
myrmecology
Wikipedia glossaries using description lists | Glossary of arthropod cuticle | Biology | 166 |
60,578,758 | https://en.wikipedia.org/wiki/Sean%20Dougherty | Sean Dougherty is a Canadian astrophysicist who has been involved in a large number of radio astronomical facilities, both Canadian and international.
Dougherty obtained a degree in mathematics and physics from the University of Nottingham in 1983, and after that he pursued a doctorate in astrophysics at the University of Calgary, where he obtained his Ph.D. in 1993.
Dougherty has more than 20 years of expertise in radio astronomy, managing and representing Canadian contributions to international radio astronomical facilities, and also research and development projects.
Dougherty has also led the construction and delivery of the WIDAR correlator to the Karl G. Jansky Very Large Array (JVLA). He also led an international consortium that designed the correlator (Central Signal Processor) of the Square Kilometre Array (SKA) Phase 1 mid-frequency telescope (SKA1-Mid).
Dougherty was selected for the position of ALMA Director in July 2017 for a five-year period starting February 21, 2018. In January 2023, his term as ALMA Director was extended by an additional five years, from February 2023 to January 2028.
Dougherty was previously the director of the Dominion Radio Astrophysical Observatory (DRAO), the national facility for radio astronomy of Canada. DRAO is administrated by the NRC Herzberg Astronomy and Astrophysics. He was a member of the ALMA Board representing the North American executive for four years, and has been the president for the ALMA Budget Committee for two years.
Dougherty initiated the efforts for the establishment of the ALMA 2030 Roadmap, which resulted in the launch of the Wideband Sensitivity Upgrade, the first large update of the facility since the inauguration of ALMA in March 2013.
Dougherty has more than 180 publications as of January 2024, of which around 85 are refereed.
References
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Canadian astrophysicists
Alumni of the University of Nottingham
University of Calgary alumni
Radio astronomers | Sean Dougherty | Astronomy | 407 |
60,802,756 | https://en.wikipedia.org/wiki/Fast%20Pair | The Google Fast Pair Service, or simply Fast Pair, is Google's proprietary standard for quickly pairing Bluetooth devices when they come in close proximity for the first time using Bluetooth Low Energy (BLE). It was announced in October 2017 and initially designed for connecting audio devices such as speakers, headphones and car kits with the Android operating system. In 2018, Google added support for ChromeOS devices, and in 2019, Google announced that Fast Pair connections could now be synced with other Android devices on the same Google Account, a feature which Google expanded to ChromeOS devices in December 2023. Google has partnered with Bluetooth SoC designers including Qualcomm, Airoha Technology, and BES Technic to add Fast Pair support to their SDKs. In May 2019, Qualcomm announced their Smart Headset Reference Design, Qualcomm QCC5100, QCC3024 and QCC3034 SoC series with support for Fast Pair and Google Assistant. In July 2019, Google announced True Wireless Features (TWF), Find My Device and enhanced Connected Device Details.
References
Google
Bluetooth
Telecommunications standards | Fast Pair | Technology | 232 |
302,441 | https://en.wikipedia.org/wiki/Hexagram | A hexagram (Greek) or sexagram (Latin) is a six-pointed geometric star figure with the Schläfli symbol {6/2}, 2{3}, or {{3}}. The term is used to refer to a compound figure of two equilateral triangles. The intersection is a regular hexagon.
The hexagram is part of an infinite series of shapes which are compounds of two n-dimensional simplices. In three dimensions, the analogous compound is the stellated octahedron, and in four dimensions the compound of two 5-cells is obtained.
It has been historically used in various religious and cultural contexts and as decorative motifs. The symbol was used as a decorative motif in medieval Christian churches and Jewish synagogues. In the medieval period, a Muslim mystical symbol known as the Seal of Solomon was depicted as either a hexagram or pentagram.
Group theory
In mathematics, the root system for the simple Lie group G2 is in the form of a hexagram, with six long roots and six short roots.
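As an illustration (not part of the original text), the sketch below lists the twelve roots of G2 in one conventional planar realization, an assumed but standard coordinate choice, and prints their lengths and angles: the six short roots and six long roots alternate every 30° with length ratio √3, which is exactly the pattern of a regular hexagram's inner and outer vertices.

```python
import math

SQ3 = math.sqrt(3)

# Twelve roots of G2 in a conventional planar realization (assumed coordinates):
# six short roots of length 1 and six long roots of length sqrt(3).
short_roots = [(1, 0), (-1, 0), (0.5, SQ3 / 2), (-0.5, SQ3 / 2),
               (0.5, -SQ3 / 2), (-0.5, -SQ3 / 2)]
long_roots = [(0, SQ3), (0, -SQ3), (1.5, SQ3 / 2), (-1.5, SQ3 / 2),
              (1.5, -SQ3 / 2), (-1.5, -SQ3 / 2)]

for label, roots in (("short", short_roots), ("long", long_roots)):
    for x, y in sorted(roots, key=lambda r: math.atan2(r[1], r[0]) % (2 * math.pi)):
        length = math.hypot(x, y)
        angle = math.degrees(math.atan2(y, x)) % 360
        print(f"{label:5} root ({x:5.2f}, {y:5.2f})  length={length:.3f}  angle={angle:5.1f}°")
# Short roots sit at 0°, 60°, ..., 300°; long roots at 30°, 90°, ..., 330°.
# The two families alternate every 30° with length ratio sqrt(3), matching the
# outer-point and inner-vertex pattern of a regular hexagram.
```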
Construction by compass and a straight edge
A six-pointed star, like a regular hexagon, can be created using a compass and a straight edge:
Make a circle of any size with the compass.
Without changing the radius of the compass, set its pivot on the circle's circumference, and find one of the two points where a new circle would intersect the first circle.
With the pivot on the last point found, similarly find a third point on the circumference, and repeat until six such points have been marked.
With a straight edge, join alternate points on the circumference to form two overlapping equilateral triangles.
Construction by linear algebra
A regular hexagram can be constructed by orthographically projecting any cube onto a plane through three vertices that are all adjacent to the same vertex. The twelve midpoints of the edges of the cube project to a hexagram. For example, consider the projection of the unit cube with vertices at the eight possible binary vectors in three dimensions onto the plane x + y + z = 1 (the plane through the three vertices adjacent to the origin). The midpoints are (1/2, 0, 0), (1/2, 0, 1), (1/2, 1, 0), (1/2, 1, 1), and all points resulting from these by applying a permutation to their entries. These 12 points project to a hexagram: six vertices around the outer hexagon and six on the inner.
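The projection can be checked numerically. The following Python sketch is illustrative (the orthonormal basis chosen for the projection plane is one arbitrary valid choice, and projecting onto any plane perpendicular to (1, 1, 1) gives the same figure up to translation): it prints the polar coordinates of the twelve projected midpoints, showing six points on an outer circle and six on an inner circle smaller by a factor of √3 and offset by 30°.

```python
import itertools
import math

# Orthonormal basis of the plane through the origin perpendicular to (1, 1, 1);
# the particular basis vectors are an arbitrary valid choice.
u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
v = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))

def project(p):
    """Coordinates of the orthographic projection of p in the (u, v) plane."""
    return (sum(a * b for a, b in zip(p, u)), sum(a * b for a, b in zip(p, v)))

# The 12 edge midpoints of the unit cube: one coordinate equals 1/2,
# the other two are each 0 or 1.
midpoints = set()
for pos in range(3):
    for rest in itertools.product((0.0, 1.0), repeat=2):
        coords = list(rest)
        coords.insert(pos, 0.5)
        midpoints.add(tuple(coords))

# With this basis the cube centre (1/2, 1/2, 1/2) projects to the origin,
# so radii below are measured from the centre of the resulting figure.
for m in sorted(midpoints):
    x, y = project(m)
    radius = math.hypot(x, y)
    angle = math.degrees(math.atan2(y, x)) % 360
    print(f"{m}  radius={radius:.3f}  angle={angle:6.1f}°")
# Output: six points at radius ~0.707 on angles 0°, 60°, ..., 300° (the outer
# star points) and six at radius ~0.408 = 0.707/sqrt(3) on angles 30°, 90°, ...
# (the inner hexagon), i.e. a regular hexagram.
```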
Origins and shape
As a derivative of two overlapping triangles, the hexagram may have developed from different peoples with no direct correlation to one another.
The mandala symbol called yantra, found on ancient South Indian Hindu temples, is a geometric toolset that incorporates hexagrams into its framework. It symbolizes the nara-narayana, or perfect meditative state of balance achieved between Man and God, and if maintained, results in "moksha," or "nirvana" (release from the bounds of the earthly world and its material trappings).
Some researchers have theorized that the hexagram represents the astrological chart at the time of David's birth or anointment as king. The hexagram is also known as the "King's Star" in astrological circles.
In antique papyri, pentagrams, together with stars and other signs, are frequently found on amulets bearing the Jewish names of God, and used to guard against fever and other diseases. Curiously the hexagram is not found among these signs. In the Greek Magical Papyri (Wessely, l.c. pp. 31, 112) at Paris and London there are 22 signs side by side, and a circle with twelve signs, but neither a pentagram nor a hexagram.
Religious usage
Indian religions
Six-pointed stars have also been found in cosmological diagrams in Hinduism, Buddhism, and Jainism. The reasons behind this symbol's common appearance in Indic religions and the West are unknown. One possibility is that they have a common origin. The other possibility is that artists and religious people from several cultures independently created the hexagram shape, which is a relatively simple geometric design.
Within Indic lore, the shape is generally understood to consist of two triangles—one pointed up and the other down—locked in harmonious embrace. The two components are called "Om" and the "Hrim" in Sanskrit, and symbolize man's position between earth and sky. The downward triangle symbolizes Shakti, the sacred embodiment of femininity, and the upward triangle symbolizes Shiva, or Agni Tattva, representing the focused aspects of masculinity. The mystical union of the two triangles represents Creation, occurring through the divine union of male and female. The two locked triangles are also known as 'Shanmukha'—the six-faced, representing the six faces of Shiva & Shakti's progeny Kartikeya. This symbol is also a part of several yantras and has deep significance in Hindu ritual worship and history.
In Buddhism, some old versions of the Bardo Thodol, also known as The "Tibetan Book of the Dead", contain a hexagram with a swastika inside. It was made up by the publishers for this particular publication. In Tibetan, it is called the "origin of phenomenon" (chos-kyi 'byung-gnas). It is especially connected with Vajrayogini, and forms the center part of her mandala. In reality, it is in three dimensions, not two, although it may be portrayed either way.
The Shatkona is a symbol used in Hindu yantra that represents the union of both the masculine and feminine form. More specifically it is supposed to represent Purusha (the supreme being), and Prakriti (mother nature, or causal matter). Often this is represented as Shiva – Shakti.
Anahata or heart chakra is the fourth primary chakra, according to Hindu Yogic, Shakta and Buddhist Tantric traditions. In Sanskrit, anahata means "unhurt, unstruck, and unbeaten". Anahata Nad refers to the Vedic concept of unstruck sound (the sound of the celestial realm). Anahata is associated with balance, calmness, and serenity.
Judaism
The Magen David is a generally recognized symbol of Judaism and Jewish identity and is also known colloquially as the Jewish Star or "Star of David." Its usage as a sign of Jewish identity began in the Middle Ages, though its religious usage began earlier, with the current earliest archeological evidence being a stone bearing the shield from the arch of a 3–4th century synagogue in the Galilee.
Christianity
The first and the most important Armenian Cathedral of Etchmiadzin (303 AD, built by the founder of Christianity in Armenia) is decorated with many types of ornamented hexagrams and so is the tomb of an Armenian prince of the Hasan-Jalalyan dynasty of Khachen (1214 AD) in the Gandzasar Church of Artsakh.
The hexagram may be found in some Churches and stained-glass windows. In Christianity, it is sometimes called the star of creation. A very early example, noted by Nikolaus Pevsner, can be found in Winchester Cathedral, England in one of the canopies of the choir stalls, circa 1308.
Latter-day Saints (Mormons)
The Star of David is also used less prominently by the Church of Jesus Christ of Latter-day Saints, in the temples and in architecture. It symbolizes God reaching down to man and man reaching up to God, the union of Heaven and earth. It may also symbolize the Tribes of Israel and friendship and their affinity towards the Jewish people. Additionally, it is sometimes used to symbolize the quorum of the twelve apostles, as in Revelation 12, wherein the Church of God is symbolized by a woman wearing a crown of twelve stars. It is also sometimes used to symbolize the Big Dipper, which points to the North Star, a symbol of Jesus Christ.
Islam
The symbol is known in Arabic as Khātem Sulaymān (Seal of Solomon; ) or Najmat Dāwūd (Star of David; ). The "Seal of Solomon" may also be represented by a five-pointed star or pentagram.
In the Qur'an, it is written that David and King Solomon (Arabic, Suliman or Sulayman) were prophets and kings, and are figures revered by Muslims. The Medieval pre-Ottoman Hanafi Anatolian beyliks of the Karamanids and Jandarids used the star on their flag. The symbol is also used on the Hayreddin Barbarossa flag. Today the six-pointed star can be found in mosques and on other Arabic and Islamic artifacts.
Usage in heraldry
In heraldry and vexillology, a hexagram is a fairly common charge employed, though it is rarely called by this name. In Germanic regions it is known simply as a "star." In English and French heraldry, however, the hexagram is known as a "mullet of six points," where mullet is a French term for a spur rowel which is shown with five pointed arms by default unless otherwise specified. In Albanian heraldry and vexillology, hexagram has been used since classical antiquity and it is commonly referred to as sixagram. The coat of arms of the House of Kastrioti depicts the hexagram on a pile argent over the double headed eagle.
Usage in Theosophy
The Star of David is used in the seal and the emblem of the Theosophical Society (founded in 1875). Although it is more pronounced, it is used along with other religious symbols. These include the Swastika, the Ankh, the Aum, and the Ouroboros. The star of David is also known as the Seal of Solomon, which was its original name, being in regular use until around 50 years ago.
Usage in occultism
The hexagram, like the pentagram, was and is used in practices of the occult and ceremonial magic and is attributed to the 7 "old" planets outlined in astrology.
The six-pointed star is commonly used both as a talisman and for conjuring spirits and spiritual forces in diverse forms of occult magic. In the book The History and Practice of Magic, Vol. 2, the six-pointed star is called the talisman of Saturn and it is also referred to as the Seal of Solomon. Details are given in this book on how to make these symbols and the materials to use.
Traditionally, the Hexagram can be seen as the combination of the four elements. Fire is symbolized as an upwards pointing triangle, while Air (its elemental opposite) is also an upwards pointing triangle, but with a horizontal line through its center. Water is symbolized as a downwards pointing triangle, while Earth (its elemental opposite) is also a downwards pointing triangle, but with a horizontal line through its center. Combining the symbols of fire and water creates a hexagram (six-pointed star). The same follows when combining the symbols of air and earth. Both hexagrams combined are called a double-hexagram. Thus, a combination of the elements is created.
In Rosicrucian and Hermetic magic, the seven traditional planets correspond with the angles and the center of the hexagram, in the same pattern as they appear on the Sephiroth and on the Tree of Life. Saturn, although formally attributed to the Sephira of Binah, within this framework nonetheless occupies the position of Daath.
In alchemy, the two triangles represent the reconciliation of the opposites of fire and water.
The hexagram is used as a sign for quintessence, the fifth element.
Usage in Freemasonry
The hexagram is featured within and on the outside of many Masonic temples as a decoration. It may have been found within the structures of King Solomon's temple, from which Freemasons are inspired in their philosophies and studies. Like many other symbols in Freemasonry, the deciphering of the hexagram is non-dogmatic and left to the interpretation of the individual.
Other uses
Flags
The flag of Australia had a six-pointed star to represent the six federal states from 1901 to 1908.
The Ulster Banner flag of Northern Ireland, used from 1953 to 1972, bears a six-pointed star representing the six counties that make up Northern Ireland. The star of the Ulster Banner is not the compound of two equilateral triangles; its intersection is not a regular hexagon.
A flag used by rebels during the Whiskey Insurrection in South-Western Pennsylvania, 1794.
A hexagram appears on the Dardania Flag, proposed for Kosovo by the Democratic League of Kosovo.
From 1914 to 1960, the flag of Nigeria depicted a green hexagram surrounding a crown, with the white word "Nigeria" under it, on a red disc.
The flag of Israel has a blue hexagram in the middle.
Other symbolic uses
A six-point star of interlocking triangles has been used for thousands of years as an indication that a sword was made, and "proofed", in the Damascus area of the Middle East. Still today, it is a required proof mark on all official UK and United States military swords, though the blades themselves no longer come from the Middle East.
In southern Germany the hexagram can be found as part of tavern anchors. It is a symbol for the tapping of beer and a sign of the brewer's guild. In German this is called "Bierstern" (beer star) or "Brauerstern" (brewer's star).
A six-point star is used as an identifying mark of the Folk Nation alliance of US street gangs.
The Indian sage and seer Sri Aurobindo used it (e.g. on the cover of his books) as a symbol of the aspiration of humanity calling to the Divine to descend into life (the triangle with the point at the top), and of the descent of the Divine into the Earth's atmosphere and all individuals in response to that calling (the triangle with the point at the bottom). This was explained by the Mother, his spiritual partner, in her 14-volume Agenda, and elsewhere by Sri Aurobindo in his writings.
Man-made and natural occurrences
The main runways and taxiways of Heathrow Airport were arranged roughly in the shape of a hexagram.
A hexagram in a circle is incorporated prominently in the supports of Worthing railway station's platform 2 canopy (UK).
An extremely large, free-standing wood hexagram stands in the central park of the Municipality of El Tejar, Guatemala. Additionally, every year at Christmastime the residents of El Tejar erect a giant artificial Christmas tree in front of their municipal building, with a hexagram sitting at its peak.
Unicode
In Unicode, the "Star of David" symbol ✡ is encoded at code point U+2721.
Other hexagrams
The figure {6/3} can be shown as a compound of three digons.
Other hexagrams can be constructed as a continuous path.
See also
Pentagram
Star of Bethlehem
Star of David
Seal of Solomon
Heptagram
The Thelemic Unicursal hexagram
Pascal's mystic hexagram
Hexagram (I Ching)
Sacred Geometry
Footnotes
References
Graham, Dr. O.J. The Six-Pointed Star: Its Origin and Usage 4th ed. Toronto: The Free Press 777, 2001.
Grünbaum, B. and G. C. Shephard; Tilings and patterns, New York: W. H. Freeman & Co., (1987), .
Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed T. Bisztriczky et al., Kluwer Academic (1994) pp. 43–70.
Wessely, l.c. pp. 31, 112
External links
Hexagram (MathWorld)
The Archetypal Mandala of India
Thesis from Munich University on hexagram as brewing symbol
Art history
Church architecture
Iconography
Ornaments
6 (number)
Rotational symmetry
Synagogue architecture
Visual motifs
06 | Hexagram | Physics,Mathematics | 3,351 |
14,245,428 | https://en.wikipedia.org/wiki/More%20O%27Ferrall%E2%80%93Jencks%20plot | More O’Ferrall–Jencks plots are two-dimensional representations of multiple reaction coordinate potential energy surfaces for chemical reactions that involve simultaneous changes in two bonds. As such, they are a useful tool to explain or predict how changes in the reactants or reaction conditions can affect the position and geometry of the transition state of a reaction for which there are possible competing pathways.
Brief history
These plots were first introduced in a 1970 paper by R. A. More O’Ferrall to discuss mechanisms of β-eliminations and later adopted by W. P. Jencks in an attempt to clarify the finer details involved in the general acid-base catalysis of reversible addition reactions to carbon electrophiles such as the hydration of carbonyls.
Description
In this type of plot (Figure 1), each axis represents a unique reaction coordinate, the corners represent local minima along the potential surface such as reactants, products or intermediates and the energy axis projects vertically out of the page. Changing a single reaction parameter can change the height of one or more of the corners of the plot. These changes are transmitted across the surface such that the position of the transition state (the saddle point) is altered.
Consider a generic example in which the initial transition state along a concerted pathway is represented by a black dot on a red diagonal (Figure 1). Changing the height of the corners can have two effects on the position of the transition state: it can move along the diagonal, reflecting a change in the Gibbs free energy of the reaction (ΔG°), or perpendicular to it, reflecting a change in the energy of competing pathways. Thus, in accordance with the Hammond postulate, the transition state moves along the diagonal towards the corner that is raised in energy (a Hammond effect) and perpendicular to the diagonal towards the corner that is lowered (an anti-Hammond effect). In this example, R is raised in energy and I(2) is lowered in energy. The transition state moves accordingly and the vector sum of both movements gives the real change in its position.
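To make the vector decomposition concrete, the following short Python sketch is a toy model only, not taken from the original papers: the unit-square corner layout and the perturbation magnitudes 0.10 and 0.15 are arbitrary illustrative choices. It computes the new transition-state position as the sum of a Hammond shift along the R-P diagonal and an anti-Hammond shift perpendicular to it.

    import numpy as np

    # Corners of a unit-square More O'Ferrall-Jencks plot (assumed layout):
    # reactants R at top left, products P at bottom right,
    # intermediates I1 at bottom left and I2 at top right.
    R, P = np.array([0.0, 1.0]), np.array([1.0, 0.0])
    I1, I2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])

    ts = (R + P) / 2                      # initial transition state on the R-P diagonal

    # Unit vectors along the diagonal (toward R) and perpendicular to it (toward I2).
    along = (R - P) / np.linalg.norm(R - P)
    perp = (I2 - I1) / np.linalg.norm(I2 - I1)

    # Illustrative perturbation magnitudes, not physical values.
    hammond = 0.10 * along                # R raised in energy: TS slides toward R
    anti_hammond = 0.15 * perp            # I2 lowered in energy: TS slides toward I2

    ts_new = ts + hammond + anti_hammond  # vector sum of both movements
    print(ts, "->", ts_new)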
Applications
Elimination reactions
Initially, More O’Ferrall introduced this type of analysis to discuss the continuity between concerted and step-wise β-elimination reaction mechanisms. The model also provided a framework within which to explain the effects of substituents and reaction conditions on the mechanism. The appropriate lower energy species were placed at the corners of the two dimensional plot (Figure 2). These were the reactants (top left), the products (bottom right) and the intermediates of the two possible stepwise reactions: the carbocation for E1 (bottom left) and the carbanion for E1cB (top-right). Thus, the horizontal axes represent the extent of deprotonation (C-H bond distance) and the vertical axes represent the extent of leaving group departure (C-LG distance). By applying the Hammond and anti-Hammond effects, he predicted the effects of various changes in the reactants or reaction conditions. For example, the effects of introducing a better leaving group on a substrate that initially eliminates via an E2 mechanism are illustrated in Figure 2. A better leaving group increases the energy of the reactants and of the carbanion intermediate. Thus, the transition state moves towards the reactants and away from the carbanion intermediate.
The model does not predict any change in leaving group departure at the transition state. Instead the extent of deprotonation is expected to decrease. This can be explained by the fact that a better leaving group needs less assistance from a developing neighbouring negative charge in order to depart. The true change predicts more carbocation character at the transition state and a mechanism that is more E1-like. These observations can be correlated with Hammett ρ-values. Poor leaving groups correlate with large positive ρ-values. Gradually increasing the leaving group ability decreases the ρ-value until it becomes large and negative, indicating the development of positive charge in the transition state.
Substitution reactions
A similar analysis, done by J. M. Harris, has been applied to the competing SN1 and SN2 nucleophilic aliphatic substitution pathways. The effects of increasing the nucleophilicity of the nucleophile are shown as an example in Figure 3. Agreement with Hammett ρ-values is also apparent in this application.
Addition to carbonyls
Finally, this type of plot can readily be drawn to illustrate the effects of changing parameters in the acid-catalyzed nucleophilic addition to carbonyls. The example in Figure 4 demonstrates the effects of increasing the strength of the acid. In this case, the extent of protonation is the α-value in the Brønsted catalysis equation. The fact that the α-value remains unchanged explains the linearity of Brønsted plots for such a reaction.
Ultimately, the More O’Ferrall–Jencks plots have qualitative predictive and explanatory power regarding the effects of changing substituents and reaction conditions for a wide variety of reactions.
See also
Potential energy surface
Reaction coordinate
Transition state
References
Plots (graphics)
Chemical kinetics
Physical organic chemistry | More O'Ferrall–Jencks plot | Chemistry | 1,069 |
1,228,297 | https://en.wikipedia.org/wiki/Epidermal%20growth%20factor | Epidermal growth factor (EGF) is a protein that stimulates cell growth and differentiation by binding to its receptor, EGFR. Human EGF is 6-kDa and has 53 amino acid residues and three intramolecular disulfide bonds.
EGF was originally described as a secreted peptide found in the submaxillary glands of mice and in human urine. EGF has since been found in many human tissues, including platelets, submandibular gland (submaxillary gland), and parotid gland. Initially, human EGF was known as urogastrone.
Structure
In humans, EGF has 53 amino acids (sequence NSDSECPLSHDGYCLHDGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR), with a molecular mass of around 6 kDa. It contains three disulfide bridges (Cys6-Cys20, Cys14-Cys31, Cys33-Cys42).
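As a quick consistency check (an illustrative snippet, not part of the original article), the stated 53-residue sequence can be verified to place cysteines at exactly the six positions implied by the listed disulfide bridges:

    # Check residue count and cysteine positions against the bridges
    # Cys6-Cys20, Cys14-Cys31 and Cys33-Cys42 quoted above.
    egf = "NSDSECPLSHDGYCLHDGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR"
    assert len(egf) == 53
    cys = [i + 1 for i, aa in enumerate(egf) if aa == "C"]  # 1-based residue numbers
    print(cys)                                              # [6, 14, 20, 31, 33, 42]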
Function
EGF, via binding to its cognate receptor, results in cellular proliferation, differentiation, and survival.
Salivary EGF, which seems to be regulated by dietary inorganic iodine, also plays an important physiological role in the maintenance of oro-esophageal and gastric tissue integrity. The biological effects of salivary EGF include healing of oral and gastroesophageal ulcers, inhibition of gastric acid secretion, stimulation of DNA synthesis as well as mucosal protection from intraluminal injurious factors such as gastric acid, bile acids, pepsin, and trypsin and to physical, chemical and bacterial agents.
Biological sources
Epidermal growth factor can be found in platelets, urine, saliva, milk, tears, and blood plasma. It can also be found in the submandibular and parotid glands. The production of EGF has been found to be stimulated by testosterone.
Mechanism
EGF acts by binding with high affinity to epidermal growth factor receptor (EGFR) on the cell surface. This stimulates ligand-induced dimerization, activating the intrinsic protein-tyrosine kinase activity of the receptor (see the second diagram). The tyrosine kinase activity, in turn, initiates a signal transduction cascade that results in a variety of biochemical changes within the cell – a rise in intracellular calcium levels, increased glycolysis and protein synthesis, and increases in the expression of certain genes including the gene for EGFR – that ultimately lead to DNA synthesis and cell proliferation.
EGF-family / EGF-like domain
EGF is the founding member of the EGF-family of proteins. Members of this protein family have highly similar structural and functional characteristics. Besides EGF itself other family members include:
Heparin-binding EGF-like growth factor (HB-EGF)
transforming growth factor-α (TGF-α)
Amphiregulin (AR)
Epiregulin (EPR)
Epigen
Betacellulin (BTC)
neuregulin-1 (NRG1)
neuregulin-2 (NRG2)
neuregulin-3 (NRG3)
neuregulin-4 (NRG4).
All family members contain one or more repeats of the conserved amino acid sequence:
CX7CX4-5CX10-13CXCX8GXRC
Where C is cysteine, G is glycine, R is arginine, and X represents any amino acid.
This sequence contains six cysteine residues that form three intramolecular disulfide bonds. Disulfide bond formation generates three structural loops that are essential for high-affinity binding between members of the EGF-family and their cell-surface receptors.
Interactions
Epidermal growth factor has been shown to interact with epidermal growth factor receptors.
Medical uses
Recombinant human epidermal growth factor, sold under the brand name Heberprot-P, is used to treat diabetic foot ulcers. It can be given by injection into the wound site, or may be used topically. Tentative evidence shows improved wound healing. Safety has been poorly studied.
EGF is used to modify synthetic scaffolds for manufacturing of bioengineered grafts by emulsion electrospinning or surface modification methods.
Bone regeneration
EGF enhances the osteogenic differentiation of dental pulp stem cells (DPSCs) because it is capable of increasing extracellular matrix mineralization. A low concentration of EGF (10 ng/ml) is sufficient to induce morphological and phenotypic changes. These data suggest that DPSCs in combination with EGF could be an effective stem cell-based therapy for bone tissue engineering applications in periodontics and oral implantology.
History
EGF was the second growth factor to be identified. Initially, human EGF was known as urogastrone. Stanley Cohen discovered EGF while working with Rita Levi-Montalcini at the Washington University in St. Louis during experiments researching nerve growth factor. For these discoveries Levi-Montalcini and Cohen were awarded the 1986 Nobel Prize in Physiology or Medicine.
References
Further reading
External links
Shaanxi Zhongbang Pharma-Tech Co., Ltd.-Supply of Epidermal Growth Factor
EGF at the Human Protein Reference Database .
EGF model in BioModels database
Growth factors
Morphogens | Epidermal growth factor | Chemistry,Biology | 1,158 |
45,601,124 | https://en.wikipedia.org/wiki/NGC%2085 | NGC 85 is an interacting spiral or lenticular galaxy estimated to be about 200 million light-years away in the constellation of Andromeda. It was discovered by Ralph Copeland in 1873 and its apparent magnitude is 15.7. The galaxy appears to be interacting with the companion spiral IC 1546.
References
External links
0085
Andromeda (constellation)
18731115
Discoveries by Ralph Copeland
Lenticular galaxies | NGC 85 | Astronomy | 82 |
51,308,636 | https://en.wikipedia.org/wiki/Spirodiclofen | Spirodiclofen is an acaricide and insecticide used in agriculture to control mites and San Jose scale. In the United States, it is used on citrus, grapes, pome fruit, stone fruit, and tree nut crops.
Spirodiclofen belongs to the tetronic acid class and acts by inhibiting lipid biosynthesis, specifically acetyl CoA carboxylase, and is in IRAC group 23.
References
Acaricides
Insecticides
Spiro compounds
Chloroarenes
Gamma-lactones | Spirodiclofen | Chemistry | 114 |
2,286,665 | https://en.wikipedia.org/wiki/Enterprise%20asset%20management | Enterprise asset management (EAM) involves the management of the maintenance of physical assets of an organization throughout each asset's lifecycle. EAM is used to plan, optimize, execute, and track the needed maintenance activities with the associated priorities, skills, materials, tools, and information. This covers the design, construction, commissioning, operations, maintenance and decommissioning or replacement of plant, equipment and facilities. The goal of EAM is to maximize the value and efficiency of these assets while minimizing associated costs and risks.
"Enterprise" refers to the scope of the assets in an Enterprise across departments, locations, facilities and, potentially, supporting business functions. Various assets are managed by the modern enterprises at present. The assets may be fixed assets like buildings, plants, machineries or moving assets like vehicles, ships, moving equipments etc. The lifecycle management of the high value physical assets require regressive planning and execution of the work.
History
EAM arose as an extension of the computerized maintenance management system (CMMS) which is usually defined as a system for the computerisation of the maintenance of physical assets.
Enterprise asset management software
Enterprise asset management software is computer software that handles every aspect of running a public works or asset-intensive organization. Enterprise asset management (EAM) software applications include features such as asset life-cycle management, preventive maintenance scheduling, warranty management, integrated mobile wireless handheld options, and a portal-based software interface. The rapid development and availability of mobile devices have also affected EAM software, which now often supports mobile enterprise asset management.
EAM Solution Applications in Power Generation
EAM solution applications are used in power generation, including nuclear power plants. EAM solutions are used in these industries for managing asset portfolios and operational efficiency. They are recognized for their role in enhancing asset utilization and reducing costs, with a focus on compliance with regulatory guidelines and on meeting consumer and client needs.
Features and applications for solutions include, but are not limited to:
Standardization of Work Processes: EAM solution applications are designed to streamline work processes in power generation operations. This includes improving worker productivity and asset return on investment by aiming to increase asset availability, reduce planned outage time, and enhance reliability.
Asset Performance Management (APM): This component of EAM solution applications offers software and services for optimizing asset performance and operational and maintenance efficiency. Features include proprietary analytics and work process automation (e.g., Work Orders, Procurement Processes, Material Requests, etc.).
Application in Power Generation: EAM solutions are often tailored for use in power generation and other industries with complex, mission-critical environments. The focus is on addressing challenges where operational failure can lead to significant consequences.
Asset Management in Nuclear Power: Asset management is a key component in nuclear power plants, particularly in competitive electricity markets. Asset Suite EAM aims to support decision-making processes by balancing financial performance, operational performance, and risk. These applications are essential for identifying and tracking changes to plant-specific controlled equipment and documentation.
Industry Usage: The software is noted for its application in the utility, transmission, and fossil or nuclear power industry. It is reportedly used by a significant portion of global nuclear fleets.
Modules for Nuclear Plants: EAM solution applications offer modules such as Procurement Engineering, Inventory Management, Total Exposure, Material Request and Receipt, Engineering Changes, and Work Orders which are geared towards the needs of nuclear plants.
Standardizing: Currently, there is a large movement to isolate and distribute a single EAM solution for managing industrial assets across the nuclear power generation industry for commercial electricity production (i.e., Asset Suite/Passport). This deployment is aimed at standardizing practices across multiple nuclear power plants, although not every plant uses the same software. As plants and corporations continue to expand and modernize, industries are moving from Asset Suite/Passport to Maximo EAM, another EAM solution application currently tailored for the utility, transmission, and nuclear industries.
See also
Building lifecycle management
References
Sources
Physical Asset Management(Springer publication) Nicholas Anthony John,2010.
Pascual, R. "El Arte de Mantener", Pontificia Universidad Católica de Chile, Santiago, Chile, 2015.
Asset management
Business software
Wireless locating | Enterprise asset management | Technology | 871 |
2,037,178 | https://en.wikipedia.org/wiki/Concurrent%20ML | Concurrent ML (CML) is a multi-paradigm, general-purpose, high-level, functional programming language. It is a dialect of the programming language ML which is a concurrent extension of the Standard ML language, characterized by its ability to allow creating composable communication abstractions that are first-class rather than built into the language. The design of CML and its primitive operations have been adopted in several other programming languages, such as GNU Guile, Racket, and Manticore.
Concepts
Many programming languages that support concurrency offer communication channels that allow the exchange of values between processes or threads running concurrently in a system. Communications established between processes may follow a specific protocol, requiring the programmer to write functions to establish the required pattern of communication. Meanwhile, a communicating system often requires establishing multiple channels, such as to multiple servers, and then choosing between the available channels when new data is available. This can be accomplished using polling, such as with the select operation on Unix systems.
Combining both application-specific protocols and multi-party communication may be complicated due to the need to introduce polling and checking for blocking within a pre-existing protocol. Concurrent ML solves this problem by reducing this coupling of programming concepts by introducing synchronizable events. Events are a first-class abstraction that can be used with a synchronization operation (called sync in both CML and Racket) in order to potentially block and then produce some value resulting from communication (for example, data transmitted on a channel).
In CML, events can be combined or manipulated using a number of primitive operations. Each primitive operation constructs a new event rather than modifying the event in-place, allowing for the construction of compound events that represent the desired communication pattern. For example, CML allows the programmer to combine several sub-events in order to create a compound event that can then make a non-deterministic choice of one of the sub-events. Another primitive creates a new event that will modify the value resulting from synchronization to the original event. These events embody patterns of communication that, in a non-CML language, would typically be handled using a polling loop or function with handlers for each kind of event.
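To make the combinator idea concrete, below is a rough Python sketch, an illustration only and not CML itself: CML events do not rely on polling, and the names Event, receive_evt, choose, wrap and sync here merely mirror the CML primitives. It shows how a compound event can be built from sub-events and then synchronized on later, which is the key difference from an ad hoc polling loop.

    import queue
    import threading
    import time

    class Event:
        """A first-class 'event' that can later be synchronized on (illustrative only)."""
        def __init__(self, poll):           # poll() returns a value or raises queue.Empty
            self.poll = poll
        def wrap(self, f):                  # new event whose result is post-processed by f
            return Event(lambda: f(self.poll()))

    def receive_evt(q):                     # event representing "receive from channel q"
        return Event(q.get_nowait)

    def choose(*events):                    # event making a choice among ready sub-events
        def poll():
            for e in events:
                try:
                    return e.poll()
                except queue.Empty:
                    pass
            raise queue.Empty
        return Event(poll)

    def sync(event, spin=0.01):             # block until the composite event is ready
        while True:
            try:
                return event.poll()
            except queue.Empty:
                time.sleep(spin)

    # Usage: wait on whichever of two channels produces data first, tagging the source.
    a, b = queue.Queue(), queue.Queue()
    threading.Thread(target=lambda: b.put("pong"), daemon=True).start()
    print(sync(choose(receive_evt(a).wrap(lambda v: ("a", v)),
                      receive_evt(b).wrap(lambda v: ("b", v)))))  # ('b', 'pong')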
Hello world
Here is a "Hello, World!" program that prints to the system console. It spawns one thread with a channel for strings, and another thread which prints a string received on the channel. It uses Standard ML of New Jersey (SML/NJ) and CML. (On non linux-x86 platforms, the heap name will differ; the line with "cml_test.x86-linux" may need changing to something different.)
External links
References
High-level programming languages
Functional languages
Concurrent programming languages
ML programming language family
Programming constructs
Programming language design
Programming languages created in 1991 | Concurrent ML | Engineering | 575 |
61,915,279 | https://en.wikipedia.org/wiki/OneZoom | The OneZoom Tree of Life Explorer is a web-based phylogenetic tree software. It aims to map the evolutionary connection of all known life. As of 2023 it includes over 2.2 million species.
Organisation
OneZoom was originally invented by James Rosindell and is a charity registered in London. It is sponsored by individuals such as Richard Dawkins.
Tree of Life Explorer
The design is based on the Pythagoras tree; besides the default spiral design there are other layout options, such as polytomy.
Leaves and nodes provide links to other websites, such as Wikipedia, Encyclopedia of Life or the NCBI taxonomy browser. The leaves representing single species are colour-coded according to their IUCN extinction risk, with red indicating a threatened species, black representing a recently extinct species, and grey representing species with unknown extinction risk.
See also
List of phylogenetic tree visualization software
References
External links
OneZoom Tree of Life Explorer
Interview with Luke Harmon in Utah Public Radio
Charities based in London
Phylogenetics software
Visualization software
Tree of life (biology)
Educational charities based in the United Kingdom
International charities | OneZoom | Biology | 223 |
2,001,378 | https://en.wikipedia.org/wiki/Bryostatin | Bryostatins are a group of macrolide lactones from (bacterial symbionts of) the marine organism Bugula neritina that were first collected and provided to JL Hartwell’s anticancer drug discovery group at the National Cancer Institute (NCI) by Jack Rudloe. Bryostatins are potent modulators of protein kinase C. They have been studied in clinical trials as anti-cancer agents, as anti-AIDS/HIV agents and in people with Alzheimer's disease.
Biological effects
Bryostatin 1 is a potent modulator of protein kinase C (PKC).
It showed activity in laboratory tests in cells and model animals, so it was brought into clinical trials. As of 2014 over thirty clinical trials had been conducted, using bryostatin alone and in combination with other agents, in both solid tumors and blood tumors; it did not show a good enough risk:benefit ratio to be advanced further.
It showed enough promise in animal models of Alzheimer's disease that a Phase II trial was started by 2010; the trial was sponsored by the Blanchette Rockefeller Neurosciences Institute. Scientists from that institute started a company called Neurotrope, and launched another clinical trial in Alzheimer's disease, preliminary results of which were released in 2017.
Bryostatin has also been studied in people with HIV.
Chemistry
Bryostatin 1 was first isolated in the 1960s by George Pettit from extracts of a species of bryozoan, Bugula neritina, based on research from samples originally provided by Jack Rudloe to Jonathan L. Hartwell’s anticancer drug discovery group at the National Cancer Institute (NCI). The structure of bryostatin 1 was determined in 1982. As of 2010 20 different bryostatins had been isolated.
The low concentration in bryozoans (to extract one gram of bryostatin, roughly one tonne of the raw bryozoans is needed) makes extraction unviable for large scale production. Due to the structural complexity, total synthesis has proved difficult, with only a few total syntheses reported so far. Total syntheses have been published for bryostatins 1, 2, 3, 7, 9 and 16. Among them, Wender’s total synthesis of bryostatin 1 is the shortest synthesis of any bryostatin reported, to date.
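For a sense of scale (a back-of-the-envelope calculation, not a quoted figure): one gram of bryostatin per tonne of raw bryozoans corresponds to 1 g / 10^6 g = 10^-6, roughly one part per million or about 0.0001% by mass, which illustrates why harvesting wild colonies cannot supply large quantities.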
A number of structurally simpler synthetic analogs also have been prepared which exhibit similar biological profile and in some cases greater potency, which may provide a practical supply for clinical use.
Biosynthesis
In B. neritina, bryostatin biosynthesis is carried out through a type I polyketide synthase (PKS) cluster, bry. BryR is the secondary-metabolism homolog of HMG-CoA synthase (HMGS) from bacterial primary metabolism. In the bryostatin pathway, the BryR module catalyzes β-Branching between a local acetoacetyl acceptor acyl carrier protein (ACP-a) and an appropriate donor BryU acetyl-ACP (ACP-d).
The first step involves the loading of a malonyl unit onto a discrete BryU ACP-d within an initial BryA module. The extended BryU product in BryA is then loaded onto a cysteine sidechain of BryR for interaction with ACP-a. Upon interaction, BryR then catalyzes β-Branching, facilitating an aldol reaction between the alpha-carbon of the BryU unit and the β-ketone of ACP-a, yielding a product similar to HMGS products in primary metabolism. After β-Branching, subsequent dehydration by a BryT enoyl-CoA hydratase homolog (ECH), as well as BryA O-methylation and BryB double bond isomerization of the generated HMGS product, are carried out in specific domains of the bry cluster. These post-β-Branching steps generate the vinyl methylester moieties which are found in all natural product bryostatins. Finally, BryC and BryD are responsible for further extension, pyran ring closure, and cyclization of the HMGS product to produce the novel bryostatin product.
In the presence of BryR, ACP-d conversion to holo-ACP-d was observed prior to β-Branching. BryR was shown to have high specificity for ACP-d only after this conversion. Specificity for these protein-bound groups is a feature that differentiates the HMGS homologs found in primary metabolism, where HMGS typically acts on substrates linked to Coenzyme A, from those found in non-ribosomal peptide synthase (NRPS) or PKS pathways such as the bryostatin pathway.
References
Further reading
External links
Experimental cancer drugs
Macrolides
Total synthesis | Bryostatin | Chemistry | 1,022 |
48,160,075 | https://en.wikipedia.org/wiki/Skolem%20problem | In mathematics, the Skolem problem is the problem of determining whether the values of a constant-recursive sequence include the number zero. The problem can be formulated for recurrences over different types of numbers, including integers, rational numbers, and algebraic numbers. It is not known whether there exists an algorithm that can solve this problem.
A linear recurrence relation expresses the values of a sequence of numbers as a linear combination of earlier values; for instance, the Fibonacci numbers may be defined from the recurrence relation
F(n) = F(n-1) + F(n-2)
together with the initial values F(0) = 0 and F(1) = 1.
The Skolem problem is named after Thoralf Skolem, because of his 1933 paper proving the Skolem–Mahler–Lech theorem on the zeros of a sequence satisfying a linear recurrence with constant coefficients. This theorem states that, if such a sequence has zeros, then with finitely many exceptions the positions of the zeros repeat regularly. Skolem proved this for recurrences over the rational numbers, and Mahler and Lech extended it to other systems of numbers. However, the proofs of the theorem do not show how to test whether there exist any zeros.
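Stated more precisely (a standard formulation of the theorem, not a quotation from the works cited above): if s(n) is a sequence satisfying a linear recurrence with constant coefficients, then its zero set {n : s(n) = 0} is the union of a finite set and finitely many full arithmetic progressions {a + d·m : m = 0, 1, 2, ...} with d ≥ 1; the finitely many "exceptional" zeros are those lying outside every progression.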
There does exist an algorithm to test whether a constant-recursive sequence has infinitely many zeros, and if so to construct a decomposition of the positions of those zeros into periodic subsequences, based on the algebraic properties of the roots of the characteristic polynomial of the given recurrence. The remaining difficult part of the Skolem problem is determining whether the finite set of non-repeating zeros is empty or not.
Partial solutions to the Skolem problem are known, covering the special case of the problem for recurrences of degree at most four. However, these solutions do not apply to recurrences of degree five or more.
For integer recurrences, the Skolem problem is known to be NP-hard.
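The difficulty can be made concrete with a brute-force search, sketched below in Python. This is an illustration only, not a decision procedure: because no general bound on the position of the first zero is known, such a search can confirm zeros below a chosen bound but can never certify that none exist, which is exactly the open part of the problem.

    def find_zeros(coeffs, initial, bound):
        """coeffs = [c1, ..., ck] for s(n) = c1*s(n-1) + ... + ck*s(n-k)."""
        s = list(initial)
        zeros = [i for i, v in enumerate(s) if v == 0]
        for n in range(len(s), bound):
            s.append(sum(c * s[n - 1 - i] for i, c in enumerate(coeffs)))
            if s[n] == 0:
                zeros.append(n)
        return zeros

    # s(n) = s(n-1) + s(n-2) with s(0) = 1, s(1) = -1 gives 1, -1, 0, -1, -1, -2, ...
    print(find_zeros([1, 1], [1, -1], 100))  # [2]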
See also
Constant-recursive sequence
Skolem–Mahler–Lech theorem
References
External links
Recurrence relations
Unsolved problems in mathematics | Skolem problem | Mathematics | 433 |
2,929,013 | https://en.wikipedia.org/wiki/The%20Age%20of%20the%20Pussyfoot | The Age of the Pussyfoot is a science fiction novel by American writer Frederik Pohl, first published as a serial in Galaxy Science Fiction in three parts, starting in October 1965. It was later published as a standalone novel in 1969.
Inspiration
The novel was inspired by Pohl's own experiences in a local volunteer fire department and by the early computer time sharing systems, along with advances in medicine, such as transplants, extrapolated to the point where anyone with enough money can command huge resources and essentially live forever. There are some unusual social consequences of these advances, however.
Plot summary
Charles Dalgleish Forrester is revived from cryopreservation in the year 2527, having been killed in a fire 500 years earlier. Thanks to his insurance, after the expenses of his revival are paid he has a quarter of a million dollars, a fortune in his eyes. He can afford the luxuries of 26th century life, such as a Joymaker, a scepter-like portable computer terminal with some extra features like a drug dispenser.
After a heavy night partying, with some distant memory of an argument with somebody, he wakes in his new apartment, and over a 20th-century breakfast, checks in with his Joymaker. The Joymaker communicates by voice, and addresses him always as "Man Forrester". He is informed that he has a message from a woman whose name he doesn't recognize, and that someone called Heinzlichen Jura de Syrtis Major has taken out a hunting license on him. Baffled, he eventually encounters the woman from the party who he believes is called Tip. She maintains she is Adne Bensen, the woman whose messages he has been ignoring. Apparently she got on so well with Forrester that she is ready to begin a relationship. She takes him back to her apartment, where he finds she has two children, around 8 years old, who seem somewhat precocious for their years. With names like "Tunt" and "Mim" flying around, he mistakenly assumes those are the children's names, baffling them when he uses them that way.
Later Forrester encounters Heinzlichen with a few friends, who without much ado beat him up so badly he goes to the hospital. It transpires that Heinzlichen's hunting license allows him to kill Forrester providing he pays for the revival, and the whole vendetta is over some insult at the party, which Forrester can barely remember. It's possible Forrester trod on Heinzlichen's foot. Since Heinzlichen is from the human colonies on Mars and is adapted to low gravity, this is a major faux pas.
Mistake follows on mistake, compounding confusion. Forrester comes to believe that Adne is attempting to entrap him in fatherhood, presumably for his money, when she leaves a message saying that "we have to choose a name". He is equally disdainful of a friend who keeps asking him to join his Club, expecting that also to be just a ruse to get at his money.
Eventually Adne sets him straight. Firstly, he'll be broke soon because, unbeknownst to him, all the 20th century foods he likes are very expensive, as are all the other Joymaker functions he enjoys. He needs to get a high-paying job. Secondly, the "name" she was asking about was a "reciprocal name," one used only between two close friends or intimates. Each uses it only to address the other, as "Tunt" was the children's name for each other, and "Mim" was the name used between Adne and the children. Tip was the name she and another close friend used, so Forrester could not use it. The friend wanting Forrester to join his Club was in fact offering him a paying position in the organization, though Forrester is not sure he likes what the Club stands for.
However, Forrester's woes are not over. He first takes a high-paying job for what turns out to be an alien life form. The alien is known to all as a Sirian, but only because that's the star system in which he was captured (Sirius). Earth is in a state of preparation for a Sirian attack it is expecting. When the alien ship was first encountered, the Earth ship shot first and asked questions later. The only thing stopping an attack, it is believed, is that the Sirian's home world population has no idea where Earth is. The captured Sirians live on Earth in a state of virtual house arrest, with their movements restricted and monitored.
Forrester's job is to be the Sirian's guide to Earth culture and history, and he is paid handsomely for it. Unfortunately Adne and the others shun him for working for the alien.
The Sirian asks many questions about seemingly arbitrary topics of human history. When Forrester fails to respond to one of the Sirian's requests in time, he is promptly fired by the Sirian.
He then takes a high-paying job which is an apparent sinecure, watching over some machinery, until he learns that all the previous holders of the post are in cryopreservation after being blasted with radiation. Against the warnings of the Joymaker, he quits in the middle of a shift. In this time, this is a huge error, and all his funds are taken in fines. He is reduced to nothing, and forced to live with all the other bums on Skid Row.
The existence is actually quite comfortable. Nobody can afford a Joymaker, but rich people pass through doling out money to 26th century panhandlers, and there are cash-only eating places with coin-operated Joymakers at the tables. However, there are also people looking for thrills on the cheap, wanting to kill someone without having to pay for the revival. After a near miss, he runs into the Sirian again, who drugs and hypnotizes him. Under the delusion that he is helping Adne take a trip, Forrester places the Sirian in control of a spacecraft.
The ship heads into space and escapes, but not before the whole world learns that the alien has escaped, though not Forrester's role in the affair. The entire human population goes into a panic. Most commit suicide in order to hide in the cryopreservation banks. Heinzlichen comes after Forrester one more time, and Forrester kills him. This was simply Heinzlichen's way of getting into the freezer. Eventually Forrester is almost alone.
At this point the Club he had been asked to join goes into action. They are a 26th-century version of Luddites and are bent on dismantling the world's technological base by subverting central computing systems, believing this will improve human welfare. Ominously, they are "helped" by the Sirians in doing so. In the end, medical technicians and the Luddites are the only people left awake. Forrester learns about the conspiracy of the Luddites. Forrester cannot reach the technicians because all the computer terminals have been programmed against him. His only hope is to kill himself. He walks up to one of the automated medics, and cuts his throat.
Fortunately the medic, reacting to its programming to save lives over that set up by the Luddites, gets him to a medical facility in time, and he is able to abort the revolution. Eventually people start being revived, and he is reunited with Adne.
Characters
Charles Dalgliesh Forrester is a man born in the 20th century, revived 500 years later. He initially believes he is rich because he has a quarter of a million dollars in his investment, a considerable amount at the time of writing. He eventually discovers that although that amount would have supported him comfortably in a 20th-century lifestyle, his use of the joymaker and all that it can bring him means that he is rapidly running out of money.
Adne Bensen is a liberated 26th century woman. She styles herself as "natural flow", meaning that she does not use artificial hormones and is only receptive to men at certain times. She is quite open about this, much to Forrester's chagrin. Her profession is that of "Reacter". This means that she is employed as a consultant of sorts, giving her response to consumer products, and is rated by the number of potential customers she represents, and the chance that a favorable reaction from her will mean that the product is going to sell that many copies. She has two children, who are equally frank about her emotional and physical needs.
Heinzlichen Jura de Syrtis Major is a Martian colonist visiting Earth. He speaks English with a strong accent that Forrester identifies as German, though Heinzlichen insists that all Martians speak that way because the thinner atmosphere on Mars eliminates some of the higher audio frequencies. He also has no idea what "German" signifies. He takes out a "hunting license" on Forrester because of a perceived insult when Forrester accidentally stepped on his feet during a party. The license is very expensive, requiring Heinzlichen to post bonds and undertake not to damage Forrester's brain, as well as paying all medical costs. In the later sections of the novel, Forrester encounters another Martian, Kevin O'Rourke na Solis Lacis, who also sounds "German" to Forrester despite his Irish name, and who also has no idea what is meant by "German", or "Irish".
Joymaker
The story's joymaker bears a remarkable resemblance to devices in common use in the years following the start of the 21st century.
The remote-access computer transponder called the "joymaker" is your most valuable single possession in your new life. If you can imagine a combination of telephone, credit card, alarm clock, pocket bar, reference library, and full-time secretary, you will have sketched some of the functions provided by your joymaker. - from the novel
It was conceived by Pohl in the 1960s after he saw one of the earliest time sharing computer systems. These allowed multiple users spread over a wide area, connected by good quality telephone or data lines, to simultaneously use one or more large (for the time) computers for a variety of purposes.
In its basic form, the Joymaker is a remote time-sharing terminal which uses radio communications instead of wire lines, and interacts with its user via voice rather than a keyboard and text output. It is small and light enough to be worn or carried, resembling in some cases a small sceptre. It can also dispense various medications, stimulants etc. from reservoirs within it.
The story concerns a 20th-century man placed in what came to be called cryopreservation, revived in the 25th century, and coming to terms with life in an era of massive computer power, accessed via the Joymaker. Unlike early 21st century portable devices, the Joymaker had little or no innate computing power.
The Joymaker had the following uses:
Access to basic computing power, for money management etc.
Access to libraries at any time, in any place.
Educating children, each of whom has a special Joymaker.
Health - the Joymaker can sense heartbeat, respiration etc. and the central computer can order it to dispense medication, or it can send help.
Message Store and forward, later known as voice mail. This becomes the bedrock of social interaction in the story.
Ordering food and drink, whether at home or in public. All payment is done using the central computer.
Ordering other goods for delivery. Since payment is automatic, the expense of items is not always apparent to the buyers. The protagonist rapidly depletes his "fortune".
Public Address system - any group of people can hear a public announcement on their Joymakers, removing the need for loudspeakers in public places.
Locating people. The central computer can track the position of any Joymaker, and by extension, its owner. This information can be made available at the owner's discretion.
Jobs not requiring physical presence. One character is a "Reacter," someone who samples new products and reports her reactions using the Joymaker. The central computer analyzes her reactions in the light of her known psychological makeup and is able to statistically predict how well the product will sell.
Relation to actual devices
Pohl himself, in an afterword to the novel, made the following statement about the world he foresaw:
"I do not really think it will be that long. Not five centuries. Perhaps not even five decades."
Forty years after the publication of the novel, most people of 2005 would recognize the functions of the Joymaker in the cellphone, laptop computer, and personal digital assistant. By 2015 the ubiquitous smartphones provided most of the functions in a single package.
Only the medical capabilities are missing from devices carried by people in industrialized nations in the early 21st century. These devices, however usually have far more computing power than the Joymaker as conceived, and more even than the 1960s mainframe computers that provided the inspiration. Some of the actual social effects of portable communication and computing parallel those predicted in the novel.
See also
Barlowe's Guide to Extraterrestrials
External links
Frederik Pohl Got Computers Right - The Joymaker
1966 American novels
1966 science fiction novels
American science fiction novels
Fiction about suspended animation
Fictional computers
Novels by Frederik Pohl
Novels first published in serial form
Works originally published in Galaxy Science Fiction | The Age of the Pussyfoot | Technology | 2,783 |
24,104,531 | https://en.wikipedia.org/wiki/Cleaning%20card | Cleaning cards are disposable products designed to clean the interior contact points of a device that facilitates an electronic information transaction (point of sale terminal, automated teller machine, remote deposit check scanners, micr readers, magnetic stripe reader, bill acceptor, bill validator, access control locks, etc.). In order for the cleaning card to work properly in the device, the card resembles or mimics the material of the transaction media – such as a credit card, check, or currency. As the cleaning card is inserted and passed through the device, it will clean components that would normally come in contact with the transaction media such as readers, lenses, read/write chip and pins, belts, rollers, and paths. Cleaning card products are widely accepted and endorsed by device manufacturers and industry professionals. Many have developed their own cleaning cards to better clean their particular devices.
A typical cleaning card is much like a wiper or sponge that can get into areas that are not readily accessible. Typically, the cleaning card has a solid core covered by a soft wipe-like material. The product is then saturated with a cleaning solution recommended by the device manufacturer and then placed in a sealed pouch to maintain the saturation level and cleanliness of the card.
Invention and Evolution
The cleaning card was originally patented by Stanley H. Eyler and the patent (US#5525417 A) was assigned to his employer, the Clean Team Company. The Clean Team Company later changed its name to KICTeam, Inc., which continues to be the leading manufacturer of their brand's Waffletechnology cleaning cards. The cleaning card has evolved with the equipment they need to clean. A good example is the bill acceptor. Initially, the bill acceptor was designed for vending machines as a means of selling candy to the public. It includes a device that recognizes that a US one dollar bank note has been inserted. The cleaning card was required to be the same shape as US currency in order to be accepted into the device to clean it. Vending machines began accepting higher denominations as well as having the ability to make change. Specialized sensors were introduced into the bill acceptors to recognize multiple denominations and to only accept media that contained bank note characteristics. The bill acceptor cleaning card was redeveloped to contain magnetic ink and bank note characteristics so as to be accepted by the equipment. The development of bill acceptors for slot machines in the gaming and casino industry required the bill acceptor to be more sophisticated. The bill validators needed to validate currency of multiple denominations up to a one hundred dollar bank note. Fraud was now a critical issue and was addressed by multiple sensors and optics throughout the inserted currency pathway. These sensors and optics were recessed so as to keep currency from running across them with each insertion and wearing down sensitive lenses.
Area of application
Cleaning cards are used in the gaming, wagering, vending, hotel, retail, lottery, petroleum, manufacturing, shipping, auto ID, card printing, and banking industries. For example, this includes all places where credit cards or cash are inserted into a machine to make payments.
Cleaning of the magnetic head
The magnetic head inside the POS terminal is a fixed component, and for this reason it can only be cleaned by cleaning cards that are flexible enough to clean the leading, center, and trailing edges of the rounded reader head. Cleaning the magnetic head is very important because the head is responsible for reading the card and therefore determines acceptance or rejection of the inserted card.
The cleaning card cleans not only the reading area of the magnetic head; there is also a cleaning process within the device along the card path. Cleaning these high dirt build-up areas is especially important and ensures efficient cleaning of the card reader.
Cleaning of chip reading contacts
Chip cards are also known as smart cards and EMV cards. There are two different types of EMV card readers: friction and landing. Contaminated contacts can result in rejection of the inserted payment or authorization card, and the built-up minerals can damage electronics. The cleaning card ensures optimal cleaning of the chip reading contacts.
Cleaning of motorized card readers
Motorized readers are built into, for example, ATMs. The credit/debit card is inserted into the card slot, where the first magnetic head is placed. If a magnetic stripe is recognized, a shutter opens and the card is transported to the second magnetic head by rollers. The card is read, so the device knows whether the transaction will go over the microchip or the magnetic stripe. If no microchip is present on the card, the transaction goes directly over the magnetic head. If the data indicate a transaction over the microchip, the card is placed on the chip reading contact and stopped. The chip reading contacts engage the chip, and the transaction begins. If reading by the magnetic stripe or by the microchip is not possible, the card will be declined.
A cleaning card for a motorized card reader will need a magnetic stripe built into it to activate the acceptance shutter.
Cleaning of check and document scanners
Check scanners are used by banks or businesses through remote deposit capture programs to take a digital image of the check and send the information to the bank for deposit. This is where the image of a check on your bank statement originates. If a check scanner is not properly cleaned, financial institutions risk increased transaction failures, equipment malfunctions, personnel costs to reconcile poor images, equipment repair or exchange, and non-compliance due to poor image quality. A cleaning card designed to clean a specific model of check scanner is run through the device the same way the operator would run a check through it. The cleaning card makes contact with the optical lenses, MICR reader, transport belts and rollers, and print heads, and clears the check path.
Transactions Terminology
Transactions are any action that has a monetary implication or transfer information from one media to another. The most commonly thought of transactions are the use of credit or debit cards through a card reader of some type. Card readers are also widely used for hotel door locks or access control devices. Another of the most common is a currency transaction via vending, slot machines, or self-checkout kiosk where a bill acceptor takes currency or a currency detector tabulates quantity. Many printers are transaction devices such as cashless ticket printers in the gaming industry.
See also
Check 21 Act
Magnetic stripe card
References
Banking technology
Retail point of sale systems
Credit cards | Cleaning card | Technology | 1,306 |
1,567,506 | https://en.wikipedia.org/wiki/Thiourea | Thiourea () is an organosulfur compound with the formula and the structure . It is structurally similar to urea (), except that the oxygen atom is replaced by a sulfur atom (as implied by the thio- prefix); however, the properties of urea and thiourea differ significantly. Thiourea is a reagent in organic synthesis. Thioureas are a broad class of compounds with the general structure .
Structure and bonding
Thiourea is a planar molecule. The C=S bond distance is 1.71 Å and the C-N distances average 1.33 Å. The weakening of the C=S bond by C-N pi-bonding is indicated by comparison with thiobenzophenone, in which the C=S bond is shorter, at 1.63 Å.
Thiourea occurs in two tautomeric forms, of which the thione form predominates in aqueous solutions; the equilibrium constant for this tautomerization has been calculated. The thiol form, which is also known as an isothiourea, can be encountered in substituted compounds such as isothiouronium salts.
Production
The global annual production of thiourea is around 10,000 tonnes. About 40% is produced in Germany, another 40% in China, and 20% in Japan. Thiourea can be produced from ammonium thiocyanate, but more commonly it is manufactured by the reaction of hydrogen sulfide with calcium cyanamide in the presence of carbon dioxide.
Applications
Thiox precursor
Thiourea per se has few applications. It is mainly consumed as a precursor to thiourea dioxide, which is a common reducing agent in textile processing.
Fertilizers
Recently thiourea has been investigated for its multiple desirable properties as a fertilizer especially under conditions of environmental stress. It may be applied in various capacities, such as a seed pretreatment (for priming), foliar spray or medium supplementation.
Other uses
Other industrial uses of thiourea include production of flame retardant resins, and vulcanization accelerators.
Thiourea is used as an auxiliary agent in diazo paper, light-sensitive photocopy paper and almost all other types of copy paper.
It is also used to tone silver-gelatin photographic prints (see Sepia Toning).
Thiourea is used in the Clifton-Phillips and Beaver bright and semi-bright electroplating processes. It is also used in a solution with tin(II) chloride as an electroless tin plating solution for copper printed circuit boards.
Thioureas are used (usually as hydrogen-bond donor catalysts) in a research theme called thiourea organocatalysis. Thioureas are often found to be stronger hydrogen-bond donors (i.e., more acidic) than ureas.
Reactions
The material has the unusual property of converting to ammonium thiocyanate upon heating. Upon cooling, the ammonium salt converts back to thiourea.
Reductant
Thiourea reduces peroxides to the corresponding diols. The intermediate of the reaction is an unstable endoperoxide.
Thiourea is also used in the reductive workup of ozonolysis to give carbonyl compounds. Dimethyl sulfide is also an effective reagent for this reaction, but it is highly volatile and has an obnoxious odor, whereas thiourea is odorless and conveniently non-volatile (reflecting its polarity).
Source of sulfide
Thiourea is employed as a source of sulfide, such as for converting alkyl halides to thiols. The reaction capitalizes on the high nucleophilicity of the sulfur center and easy hydrolysis of the intermediate isothiouronium salt:
RX + SC(NH2)2 → [RSC(NH2)2]+X−
[RSC(NH2)2]+X− + NaOH → RSH + OC(NH2)2 + NaX
In this example, ethane-1,2-dithiol is prepared from 1,2-dibromoethane:
BrCH2CH2Br + 2 SC(NH2)2 → [C2H4(SC(NH2)2)2]Br2
[C2H4(SC(NH2)2)2]Br2 + 2 NaOH → HSCH2CH2SH + 2 OC(NH2)2 + 2 NaBr
Like other thioamides, thiourea can serve as a source of sulfide upon reaction with metal ions. For example, mercury sulfide forms when mercuric salts in aqueous solution are treated with thiourea:
These sulfiding reactions, which have been applied to the synthesis of many metal sulfides, require water and typically some heating.
Precursor to heterocycles
Thioureas are building blocks to pyrimidine derivatives. Thus thioureas condense with β-dicarbonyl compounds. The amino group on the thiourea initially condenses with a carbonyl, followed by cyclization and tautomerization. Desulfurization delivers the pyrimidine.
Similarly, aminothiazoles can be synthesized by the reaction of α-haloketones and thiourea.
The pharmaceuticals thiobarbituric acid and sulfathiazole are prepared using thiourea. 4-Amino-3-hydrazino-5-mercapto-1,2,4-triazole is prepared by the reaction of thiourea and hydrazine.
Silver polishing
According to their labels, the liquid silver-cleaning products TarnX and Silver Dip contain thiourea, along with a warning that thiourea is a chemical on California's list of carcinogens. A lixiviant for gold and silver leaching can be created by selectively oxidizing thiourea, bypassing the steps of cyanide use and smelting.
Kurnakov reaction
Thiourea is an essential reagent in the Kurnakov test used to differentiate cis- and trans- isomers of certain square planar platinum complexes. The reaction was discovered in 1893 by Russian chemist Nikolai Kurnakov and is still performed as an assay for compounds of this type.
Safety
The LD50 for thiourea is for rats (oral).
A goitrogenic effect (enlargement of the thyroid gland) has been reported for chronic exposure, reflecting the ability of thiourea to interfere with iodide uptake.
A cyclic derivative of thiourea, thiamazole, is used to treat an overactive thyroid (hyperthyroidism).
See also
Thioureas
References
Further reading
External links
INCHEM assessment of thiourea
International Chemical Safety Card 0680
Functional groups | Thiourea | Chemistry | 1,457 |
63,659,864 | https://en.wikipedia.org/wiki/Krist%C3%ADn%20Vala%20Ragnarsd%C3%B3ttir | Kristín Vala Ragnarsdóttir (born 1954) is an Icelandic Earth and sustainability scientist and activist who is Professor of Sustainability Science in the Faculty of Earth Sciences and the Institute of Earth Sciences at the University of Iceland. She was the first woman to be a full professor in Earth Sciences at the University of Bristol in the UK and at the same time the first woman to become a full professor in the Science Faculty there. She was also the first woman to serve as Dean of a School at the University of Iceland.
Kristín Vala is a member of Academia Europaea (since 2012), the Norwegian Academy of Science and Letters, and the Icelandic Academy of Science. She is a fellow of the Royal Society of Arts, distinguished fellow of the Schumacher Institute, and a member of the Wellbeing Economy Alliance. She is a member of the sustainability think tanks the Balaton Group and the Club of Rome.
Career
Appointments
Kristín Vala was on the faculty of the University of Bristol for 20 years from 1989, starting as a research fellow in the Department of Geology, becoming professor of Environmental Geochemistry in the Department of Earth Sciences in 2001 and professor of Environmental Sustainability from 2006 to 2008. She moved to the University of Iceland as Professor of Sustainability Science in the Faculty of Earth Sciences in 2008, and was Dean of the School of Engineering and Natural Sciences from 2008 to 2012.
Board memberships
Kristín Vala was a board member of the Geological Society of London, the European Association of Geochemistry, and the Schumacher Society (now Schumacher Institute). She was a member of the steering committee of the Balaton Group, and the Alliance for Sustainability and Prosperity (ASAP). She was also on the board/steering group of TreeSisters, Pyramid2030, 17Goals, Health Empowerment Through Nutrition, Framtíðarlandið (FutureIceland), Initiative for Equality, Landvernd (Nature Protection) and Landsvirkjun (National Energy).
Kristín Vala is a scientific advisor to the Ecological Sequestration Trust, serves on the global council of Wellbeing Economy Alliance (WEAll) and is a board member of Breiddalssetur Science and Culture Centre, and the Red Cross.
Editorial memberships
Kristín Vala was a member of the editorial boards of eEarth, Geochemical Transactions, Geochimica et Cosmochimica Acta and Chemical Geology.
Currently, she is a member of the editorial boards of Anthropocene Review, System Change, BioPhysical Economics and Sustainability (previously BioPhysical Economics and Resource Quality), and Solutions (for a Sustainable and Desirable Future).
Background
Training
Kristín Vala trained in geochemistry and petrology at the University of Iceland and geological sciences at Northwestern University, Evanston, Illinois.
Awards
Kristín Vala received the Award of Excellence Furthering Sustainability and Equality Learning from the Schumacher Institute. She was co-recipient of the Times Higher Education Supplement (THES) Award to the University of Bristol for Outstanding Contribution to Sustainable Development.
Expert member panels
Kristín Vala was a member of the UN Environment Program Depleted Uranium Scientific Assessment Teams, Kosovo (2000) and Bosnia Herzegovina (2002). She was a member of the International Expert Working Group of the Government of Bhutan on the New Development Paradigm (2013) and represented Academia Europaea in the European Academies Scientific Advisory Council (EASAC) working group on the Circular Economy (2016).
In Iceland, Kristín Vala has advised the government on issues relating to higher education and research, education for sustainability, climate strategy, prosperity, quality of life and wellbeing, and energy policy.
Research
During her career, Kristín Vala has published over 100 research articles, book chapters, and books and has been awarded prizes and memberships/fellowships by academies and sustainability think tanks.
Among many other topics, Kristín Vala has published work on geothermal systems, mineral solubility, mineral dissolution kinetics, structure and coordination of aqueous species, sorption of aqueous species to mineral surfaces, backfill materials for radioactive waste disposal, link between environment and health, bacterial and fungal weathering, and critical zone processes.
At the turn of the century, Kristín Vala's research turned to issues related to transdisciplinary sustainability science, including city carbon emission management, natural resource availability and management, soil sustainability, sustainable tourism, and achieving the UN Sustainability Goals through the wellbeing economy.
Politics
Kristín Vala is a member of the Pirate Party and has been influential in developing its policies relating to environment, climate, and sustainability. She was instrumental in facilitating the participation of the Icelandic government in joining the Wellbeing Economy Governments (WEGo).
Selected bibliography
Books
Ragnarsdottir K.V. and Banwart S.A. (editors) (2016) Soil: The Life Supporting Skin on Earth. eBook University of Iceland and University of Sheffield.
Plant J.A., Voulvoulis N. and Ragnarsdottir K.V. (editors) (2011) Pollutants, Human Health and the Environment. A Risk Approach. Wiley Blackwell, 356 pages.
Hancock P.L. and Skinner B.J. (editors), D.l. Dineley (associate editor) and Dawson A.G., Ragnarsdottir K.V. and Steward I.S. (subject editors) (2000) The Oxford Companion to the Earth, 1174 pp. Oxford University Press.
Book chapters
Lohrenz U., Sverdrup H.U. and Ragnarsdottir K.V. (2018) Global megatrends and resource use - A systemic reflection. In H. Lehmann (ed) Factor X. Eco-Efficiency in Industry and Science vol 32. Springer, Berlin.
Thorarinsdottir, R., Coaten, D., Pantanella, E., Shultz, C., Stander, H. and Ragnarsdottir, K.V. (2017) Renewable energy use for aquaponics development on global scale towards sustainable food production. In J. Bundschuh, G. Chen, D. Chandrasekharam, J. Piechocki (Eds.) Geothermal, Wind and Solar Energy Applications in Agriculture and Aquaculture, Sustainable Energy Development Series, CRC Press, 362 pages.
References
Kristín Vala Ragnarsdóttir
Kristín Vala Ragnarsdóttir
Women earth scientists
Sustainability scientists
Kristín Vala Ragnarsdóttir
Academics of the University of Bristol
Northwestern University alumni
Kristín Vala Ragnarsdóttir
Members of Academia Europaea
Members of the Norwegian Academy of Science and Letters
Living people
1954 births | Kristín Vala Ragnarsdóttir | Chemistry,Environmental_science | 1,367 |
53,718,600 | https://en.wikipedia.org/wiki/Rayleigh%20problem | In fluid dynamics, the Rayleigh problem, also known as Stokes' first problem, is the problem of determining the flow created by a sudden movement of an infinitely long plate from rest, named after Lord Rayleigh and Sir George Stokes. It is considered one of the simplest unsteady problems that have an exact solution for the Navier–Stokes equations. The impulsive motion of a semi-infinite plate was studied by Keith Stewartson.
Flow description
Consider an infinitely long plate which is suddenly made to move with constant velocity U in the x direction, located at y = 0 in an infinite domain of fluid which is at rest initially everywhere. The incompressible Navier–Stokes equations reduce to
∂u/∂t = ν ∂²u/∂y²
where ν is the kinematic viscosity. The initial and the no-slip condition on the wall are
u(y, 0) = 0,   u(0, t) = U,   u(y → ∞, t) = 0;
the last condition is due to the fact that the motion at y = 0 is not felt at infinity. The flow is only due to the motion of the plate, there is no imposed pressure gradient.
Self-Similar solution
The problem on the whole is similar to the one-dimensional heat conduction problem. Hence a self-similar variable can be introduced, η = y/(2√(νt)), writing the velocity as u = U f(η).
Substituting this into the partial differential equation reduces it to the ordinary differential equation
f″(η) + 2η f′(η) = 0
with boundary conditions f(0) = 1 and f(η → ∞) = 0.
The solution to the above problem can be written in terms of the complementary error function:
u(y, t) = U erfc(y/(2√(νt))).
The force per unit area exerted on the plate is F = μU/√(πνt), where μ is the dynamic viscosity of the fluid.
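As a quick numerical check of these closed-form results, a minimal Python sketch is given below; the fluid properties and plate speed are illustrative values chosen here, not values from the text.

```python
import numpy as np
from scipy.special import erfc

def rayleigh_velocity(y, t, U, nu):
    """Velocity profile u(y, t) = U * erfc(y / (2*sqrt(nu*t))) above the plate."""
    eta = y / (2.0 * np.sqrt(nu * t))   # self-similar variable
    return U * erfc(eta)

def wall_shear(t, U, nu, mu):
    """Magnitude of the force per unit area on the plate, mu*U/sqrt(pi*nu*t)."""
    return mu * U / np.sqrt(np.pi * nu * t)

# Illustrative values: a water-like fluid and a plate speed of 1 m/s
U, nu, mu = 1.0, 1.0e-6, 1.0e-3
y = np.linspace(0.0, 5.0e-3, 6)                   # heights above the plate [m]
print(rayleigh_velocity(y, t=10.0, U=U, nu=nu))   # decays from U at the wall toward 0
print(wall_shear(t=10.0, U=U, nu=nu, mu=mu))      # decays like 1/sqrt(t)
```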
Arbitrary wall motion
Instead of using a step boundary condition for the wall movement, the velocity of the wall can be prescribed as an arbitrary function of time, i.e., . Then the solution is given by
Rayleigh's problem in cylindrical geometry
Rotating cylinder
Consider an infinitely long cylinder of radius starts rotating suddenly at time with an angular velocity . Then the velocity in the direction is given by
where is the modified Bessel function of the second kind. As , the solution approaches that of a rigid vortex. The force per unit area exerted on the cylinder is
where is the modified Bessel function of the first kind.
Sliding cylinder
Exact solution is also available when the cylinder starts to slide in the axial direction with constant velocity . If we consider the cylinder axis to be in direction, then the solution is given by
See also
Stokes problem
References
Fluid dynamics | Rayleigh problem | Chemistry,Engineering | 435 |
32,658,232 | https://en.wikipedia.org/wiki/Pease%201 | Pease 1 is a planetary nebula located within the globular cluster M15 33,600 light years away in the constellation Pegasus. It was the first planetary nebula known to exist within a globular cluster when it was discovered in 1928 (for Francis G. Pease), and just four more have been found (in other clusters) since. At magnitude 15.5, it requires telescopes with an aperture of at least to be detected.
References
External links
Pease 1: Planetary Nebula in Messier 15, SEDS Messier pages
Planetary nebulae
Pegasus (constellation) | Pease 1 | Astronomy | 118 |
25,369,256 | https://en.wikipedia.org/wiki/Peres%20metric | In mathematical physics, the Peres metric is defined by the proper time
for any arbitrary function f. If f is a harmonic function with respect to x and y, then the corresponding Peres metric satisfies the Einstein field equations in vacuum. Such a metric is often studied in the context of gravitational waves. The metric is named for Israeli physicist Asher Peres, who first defined it in 1959.
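For reference, one common way of writing the Peres line element is sketched below in LaTeX; sign conventions and the choice of null coordinate (t + z versus t − z) vary between sources, so this should be read as an illustrative form rather than a unique definition.

```latex
% One conventional form of the Peres metric (conventions differ between sources):
ds^{2} = dt^{2} - dx^{2} - dy^{2} - dz^{2} - 2 f(x, y, t+z)\,(dt + dz)^{2}
% The vacuum Einstein equations then reduce to the harmonicity condition stated above:
\frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}} = 0 .
```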
See also
Introduction to the mathematics of general relativity
Stress–energy tensor
Metric tensor (general relativity)
References
Metric tensors
Spacetime
Coordinate charts in general relativity
General relativity
Gravity | Peres metric | Physics,Mathematics,Engineering | 115 |
3,053,507 | https://en.wikipedia.org/wiki/Crystal%20growth | A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists of the addition of new atoms, ions, or polymer strings into the characteristic arrangement of the crystalline lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present.
The action of crystal growth yields a crystalline solid whose atoms or molecules are close packed, with fixed positions in space relative to each other.
The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow.
Overview
After successful formation of a stable nucleus, a growth stage ensues in which free particles (atoms or molecules) adsorb onto the nucleus and propagate its crystalline structure outwards from the nucleating site. This process is significantly faster than nucleation. The reason for such rapid growth is that real crystals contain dislocations and other defects, which act as a catalyst for the addition of particles to the existing crystalline structure. By contrast, perfect crystals (lacking defects) would grow exceedingly slowly. On the other hand, impurities can act as crystal growth inhibitors and can also modify crystal habit.
Nucleation
Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign particles act as a scaffold for the crystal to grow on, thus eliminating the necessity of creating a new surface and the incipient surface energy requirements.
Heterogeneous nucleation can take place by several methods. Some of the most typical are small inclusions, or cuts, in the container the crystal is being grown on. This includes scratches on the sides and bottom of glassware. A common practice in crystal growing is to add a foreign substance, such as a string or a rock, to the solution, thereby providing nucleation sites for facilitating crystal growth and reducing the time to fully crystallize.
The number of nucleating sites can also be controlled in this manner. If a brand-new piece of glassware or a plastic container is used, crystals may not form because the container surface is too smooth to allow heterogeneous nucleation. On the other hand, a badly scratched container will result in many lines of small crystals. To achieve a moderate number of medium-sized crystals, a container which has a few scratches works best. Likewise, adding small previously made crystals, or seed crystals, to a crystal growing project will provide nucleating sites to the solution. The addition of only one seed crystal should result in a larger single crystal.
Mechanisms of growth
The interface between a crystal and its vapor can be molecularly sharp at temperatures well below the melting point. An ideal crystalline surface grows by the spreading of single layers, or equivalently, by the lateral advance of the growth steps bounding the layers. For perceptible growth rates, this mechanism requires a finite driving force (or degree of supercooling) in order to lower the nucleation barrier sufficiently for nucleation to occur by means of thermal fluctuations. In the theory of crystal growth from the melt, Burton and Cabrera have distinguished between two major mechanisms:
Non-uniform lateral growth
The surface advances by the lateral motion of steps which are one interplanar spacing in height (or some integral multiple thereof). An element of surface undergoes no change and does not advance normal to itself except during the passage of a step, and then it advances by the step height. It is useful to consider the step as the transition between two adjacent regions of a surface which are parallel to each other and thus identical in configuration—displaced from each other by an integral number of lattice planes. Note here the distinct possibility of a step in a diffuse surface, even though the step height would be much smaller than the thickness of the diffuse surface.
Uniform normal growth
The surface advances normal to itself without the necessity of a stepwise growth mechanism. This means that in the presence of a sufficient thermodynamic driving force, every element of surface is capable of a continuous change contributing to the advancement of the interface. For a sharp or discontinuous surface, this continuous change may be more or less uniform over large areas for each successive new layer. For a more diffuse surface, a continuous growth mechanism may require changes over several successive layers simultaneously.
Non-uniform lateral growth is a geometrical motion of steps—as opposed to motion of the entire surface normal to itself. Alternatively, uniform normal growth is based on the time sequence of an element of surface. In this mode, there is no motion or change except when a step passes via a continual change. The prediction of which mechanism will be operative under any set of given conditions is fundamental to the understanding of crystal growth. Two criteria have been used to make this prediction:
Whether or not the surface is diffuse: a diffuse surface is one in which the change from one phase to another is continuous, occurring over several atomic planes. This is in contrast to a sharp surface for which the major change in property (e.g. density or composition) is discontinuous, and is generally confined to a depth of one interplanar distance.
Whether or not the surface is singular: a singular surface is one in which the surface tension as a function of orientation has a pointed minimum. Growth of singular surfaces is known to require steps, whereas it is generally held that non-singular surfaces can continuously advance normal to themselves.
Driving force
Consider next the necessary requirements for the appearance of lateral growth. It is evident that the lateral growth mechanism will be found when any area in the surface can reach a metastable equilibrium in the presence of a driving force. It will then tend to remain in such an equilibrium configuration until the passage of a step. Afterward, the configuration will be identical except that each part of the step will have advanced by the step height. If the surface cannot reach equilibrium in the presence of a driving force, then it will continue to advance without waiting for the lateral motion of steps.
Thus, Cahn concluded that the distinguishing feature is the ability of the surface to reach an equilibrium state in the presence of the driving force. He also concluded that for every surface or interface in a crystalline medium, there exists a critical driving force, which, if exceeded, will enable the surface or interface to advance normal to itself, and, if not exceeded, will require the lateral growth mechanism.
Thus, for sufficiently large driving forces, the interface can move uniformly without the benefit of either a heterogeneous nucleation or screw dislocation mechanism. What constitutes a sufficiently large driving force depends upon the diffuseness of the interface, so that for extremely diffuse interfaces, this critical driving force will be so small that any measurable driving force will exceed it. Alternatively, for sharp interfaces, the critical driving force will be very large, and most growth will occur by the lateral step mechanism.
Note that in a typical solidification or crystallization process, the thermodynamic driving force is dictated by the degree of supercooling.
Morphology
It is generally believed that the mechanical and other properties of the crystal are also pertinent to the subject matter, and that crystal morphology provides the missing link between growth kinetics and physical properties. The necessary thermodynamic apparatus was provided by Josiah Willard Gibbs' study of heterogeneous equilibrium. He provided a clear definition of surface energy, by which the concept of surface tension is made applicable to solids as well as liquids. He also appreciated that an anisotropic surface free energy implied a non-spherical equilibrium shape, which should be thermodynamically defined as the shape which minimizes the total surface free energy.
It may be instructional to note that whisker growth provides the link between the mechanical phenomenon of high strength in whiskers and the various growth mechanisms which are responsible for their fibrous morphologies. (Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known). Some mechanisms produce defect-free whiskers, while others may have single screw dislocations along the main axis of growth—producing high strength whiskers.
The mechanism behind whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including mechanically induced stresses, stresses induced by diffusion of different elements, and thermally induced stresses. Metal whiskers differ from metallic dendrites in several respects. Dendrites are fern-shaped like the branches of a tree, and grow across the surface of the metal. In contrast, whiskers are fibrous and project at a right angle to the surface of growth, or substrate.
Diffusion-control
Very commonly when the supersaturation (or degree of supercooling) is high, and sometimes even when it is not high, growth kinetics may be diffusion-controlled, which means the transport of atoms or molecules to the growing nucleus is limiting the velocity of crystal growth. Assuming the nucleus in such a diffusion-controlled system is a perfect sphere, the growth velocity, corresponding to the change of the radius with time , can be determined with Fick’s Laws.
1. Fick's first law: J = −D (∂c/∂x),
where J is the flux of atoms (number per unit area per unit time), D is the diffusion coefficient and ∂c/∂x is the concentration gradient.
2. Fick's second law: ∂c/∂t = D (∂²c/∂x²),
where ∂c/∂t is the change of the concentration with time.
The first Law can be adjusted to the flux of matter onto a specific surface, in this case the surface of the spherical nucleus:
,
where now is the flux onto the spherical surface in the dimension of and being the area of the spherical nucleus. can also be expressed as the change of number of atoms in the nucleus over time, with the number of atoms in the nucleus being:
,
where is the volume of the spherical nucleus and is the atomic volume. Therefore, the change of the number of atoms in the nucleus over time will be:
Combining both equations for the following expression for the growth velocity is obtained:
From Fick's second law for spheres, the equation below can be obtained:
Assuming that the diffusion profile does not change over time but is only shifted with the growing radius it can be said that , which leads to being constant. This constant can be indicated with the letter and integrating will result in the following equation:
,
where is the radius of the nucleus, is the distance from the nucleus where the equilibrium concentration is recovered and is the concentration right at the surface of the nucleus. Now the expression for can be found by:
Therefore, the growth velocity for a diffusion-controlled system can be described as:
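In the commonly quoted limiting case of this derivation, the growth velocity reduces to dr/dt ≈ D·Ω·Δc/r, where Δc is the difference between the bulk and surface concentrations, so the radius grows roughly as the square root of time. A minimal Python sketch under that assumption follows; the parameter values and the function name are illustrative placeholders, not data from the text.

```python
import numpy as np

def diffusion_controlled_radius(t, r0, D, omega, c_bulk, c_surf):
    """Radius r(t) of a spherical nucleus growing under diffusion control.

    Integrates dr/dt = D * omega * (c_bulk - c_surf) / r, giving
    r(t) = sqrt(r0**2 + 2 * D * omega * (c_bulk - c_surf) * t),
    i.e. the characteristic r ~ sqrt(t) behaviour of diffusion-limited growth.
    """
    k = D * omega * (c_bulk - c_surf)        # growth constant [m^2/s]
    return np.sqrt(r0**2 + 2.0 * k * t)

# Illustrative numbers only: a 10 nm nucleus in a supersaturated solution
t = np.linspace(0.0, 1.0, 5)                 # time [s]
r = diffusion_controlled_radius(t, r0=1.0e-8, D=1.0e-9, omega=3.0e-29,
                                c_bulk=1.0e27, c_surf=9.0e26)  # concentrations in atoms/m^3
print(r)                                     # radius grows roughly as sqrt(t)
```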
Under such diffusion-controlled conditions, the polyhedral crystal form will be unstable; it will sprout protrusions at its corners and edges, where the degree of supersaturation is at its highest level. The tips of these protrusions will clearly be the points of highest supersaturation. It is generally believed that the protrusion will become longer (and thinner at the tip) until the effect of interfacial free energy in raising the chemical potential slows the tip growth and maintains a constant value for the tip thickness.
In the subsequent tip-thickening process, there should be a corresponding instability of shape. Minor bumps or "bulges" should be exaggerated—and develop into rapidly growing side branches. In such an unstable (or metastable) situation, minor degrees of anisotropy should be sufficient to determine directions of significant branching and growth. The most appealing aspect of this argument, of course, is that it yields the primary morphological features of dendritic growth.
See also
Abnormal grain growth
Chvorinov's rule
Cloud condensation nuclei
Crystal structure
Czochralski process
Dendrite (metal)
Diana's Tree
Fractional crystallization
Ice nucleus
Laser-heated pedestal growth
Manganese nodule
Micro-pulling-down
Monocrystalline whisker
Protocrystalline
Recrystallization (chemistry)
Seed crystal
Single crystal
Whisker (metallurgy)
Simulation
Kinetic Monte Carlo surface growth method
References
Crystallography
Crystals
Materials science
Mineralogy
Articles containing video clips | Crystal growth | Physics,Chemistry,Materials_science,Engineering | 2,594 |
52,440,533 | https://en.wikipedia.org/wiki/Zhang%20Benren | Zhang Benren (; 28 May 1929 – 1 November 2016) was a Chinese geochemist.
Born on 28 May 1929 in Huaiyuan County, Anhui, Zhang studied geology at Nanjing University, graduating in 1952. He then earned a degree from the Beijing Institute of Geology in 1956 and later became a faculty member. Zhang was elected an academician of the Chinese Academy of Sciences in 1999 and received the State Natural Science Award.
Zhang died at the age of 87 in 2016.
References
1929 births
2016 deaths
Chemists from Anhui
China University of Geosciences alumni
Academic staff of China University of Geosciences
Chinese geochemists
Educators from Anhui
Members of the Chinese Academy of Sciences
Nanjing University alumni
20th-century Chinese chemists | Zhang Benren | Chemistry | 150 |
87,837 | https://en.wikipedia.org/wiki/Ratio | In mathematics, a ratio () shows how many times one number contains another. For example, if there are eight oranges and six lemons in a bowl of fruit, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to the ratio 4:3). Similarly, the ratio of lemons to oranges is 6:8 (or 3:4) and the ratio of oranges to the total amount of fruit is 8:14 (or 4:7).
The numbers in a ratio may be quantities of any kind, such as counts of people or objects, or such as measurements of lengths, weights, time, etc. In most contexts, both numbers are restricted to be positive.
A ratio may be specified either by giving both constituting numbers, written as "a to b" or "a:b", or by giving just the value of their quotient a/b. Equal quotients correspond to equal ratios.
A statement expressing the equality of two ratios is called a proportion.
Consequently, a ratio may be considered as an ordered pair of numbers, a fraction with the first number in the numerator and the second in the denominator, or as the value denoted by this fraction. Ratios of counts, given by (non-zero) natural numbers, are rational numbers, and may sometimes be natural numbers.
A more specific definition adopted in physical sciences (especially in metrology) for ratio is the dimensionless quotient between two physical quantities measured with the same unit. A quotient of two quantities that are measured with units may be called a rate.
Notation and terminology
The ratio of numbers A and B can be expressed as:
the ratio of A to B
A:B
A is to B (when followed by "as C is to D"; see below)
a fraction with A as numerator and B as denominator that represents the quotient (i.e., A divided by B, or A/B). This can be expressed as a simple or a decimal fraction, or as a percentage, etc.
When a ratio is written in the form A:B, the two-dot character is sometimes the colon punctuation mark. In Unicode, this is U+003A (colon), although Unicode also provides a dedicated ratio character, U+2236 (ratio).
The numbers A and B are sometimes called terms of the ratio, with A being the antecedent and B being the consequent.
A statement expressing the equality of two ratios A:B and C:D is called a proportion, written as A:B = C:D or A:B∷C:D. This latter form, when spoken or written in the English language, is often expressed as
(A is to B) as (C is to D).
A, B, C and D are called the terms of the proportion. A and D are called its extremes, and B and C are called its means. The equality of three or more ratios, like A:B = C:D = E:F, is called a continued proportion.
Ratios are sometimes used with three or even more terms, e.g., the proportion for the edge lengths of a "two by four" that is ten inches long is therefore 2:4:10
(unplaned measurements; the first two numbers are reduced slightly when the wood is planed smooth)
a good concrete mix (in volume units) is sometimes quoted as 1:2:4 for cement to sand to gravel.
For a (rather dry) mixture of 4/1 parts in volume of cement to water, it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement.
The meaning of such a proportion of ratios with more than two terms is that the ratio of any two terms on the left-hand side is equal to the ratio of the corresponding two terms on the right-hand side.
History and etymology
It is possible to trace the origin of the word "ratio" to the Ancient Greek (logos). Early translators rendered this into Latin as ratio ("reason"; as in the word "rational"). A more modern interpretation of Euclid's meaning is more akin to computation or reckoning. Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios.
Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables.
The existence of multiple theories seems unnecessarily complex since ratios are, to a large extent, identified with quotients and their prospective values. However, this is a comparatively recent development, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold: first, there was the previously mentioned reluctance to accept irrational numbers as true numbers, and second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as alternative until the 16th century.
Euclid's definitions
Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a part of a quantity is another quantity that "measures" it and conversely, a multiple of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one—and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity.
Euclid does not define the term "measure" as used here. However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity measures the second. These definitions are repeated, nearly word for word, as definitions 3 and 5 in book VII.
Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities of the same type, so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists, when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities p and q, if there exist integers m and n such that mp>q and nq>p. This condition is known as the Archimedes property.
Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but such a definition would have been meaningless to Euclid. In modern notation, Euclid's definition of equality is that given quantities p, q, r and s, p:q∷r:s if and only if, for any positive integers m and n, np<mq, np=mq, or np>mq according as nr<ms, nr=ms, or nr>ms, respectively. This definition has affinities with Dedekind cuts as, with n and q both positive, np stands to mq as p/q stands to the rational number m/n (dividing both terms by nq).
Definition 6 says that quantities that have the same ratio are proportional or in proportion. Euclid uses the Greek ἀναλόγον (analogon); this has the same root as λόγος and is related to the English word "analog".
Definition 7 defines what it means for one ratio to be less than or greater than another and is based on the ideas present in definition 5. In modern notation it says that given quantities p, q, r and s, p:q>r:s if there are positive integers m and n so that np>mq and nr≤ms.
As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms p, q and r to be in proportion when p:q∷q:r. This is extended to four terms p, q, r and s as p:q∷q:r∷r:s, and so on. Sequences that have the property that the ratios of consecutive terms are equal are called geometric progressions. Definitions 9 and 10 apply this, saying that if p, q and r are in proportion then p:r is the duplicate ratio of p:q and if p, q, r and s are in proportion then p:s is the triplicate ratio of p:q.
Number of terms and use of fractions
In general, a comparison of the quantities of a two-entity ratio can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount, size, volume, or quantity of the first entity is 2/3 that of the second entity.
If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason.
Fractions can also be inferred from ratios with more than two entities; however, a ratio with more than two entities cannot be completely converted into a single fraction, because a fraction can only compare two quantities. A separate fraction can be used to compare the quantities of any two of the entities covered by the ratio: for example, from a ratio of 2:3:7 we can infer that the quantity of the second entity is 3/7 that of the third entity.
Proportions and percentage ratios
If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce terms to the lowest common denominator, or to express them in parts per hundred (percent).
If a mixture contains substances A, B, C and D in the ratio 5:9:4:2 then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D. As 5+9+4+2=20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100, we have converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10).
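The conversion described in this paragraph is simple arithmetic; a short Python sketch (the function name is just an illustration) reproduces the percentages quoted above:

```python
def ratio_to_percentages(*parts):
    """Convert the parts of a ratio (e.g. 5:9:4:2) into percentages of the whole."""
    total = sum(parts)
    return [100.0 * p / total for p in parts]

print(ratio_to_percentages(5, 9, 4, 2))   # [25.0, 45.0, 20.0, 10.0]
```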
If the two or more ratio quantities encompass all of the quantities in a particular situation, it is said that "the whole" contains the sum of the parts: for example, a fruit basket containing two apples and three oranges and no other fruit is made up of two parts apples and three parts oranges. In this case, 2/5, or 40%, of the whole is apples and 3/5, or 60%, of the whole is oranges. This comparison of a specific quantity to "the whole" is called a proportion.
If the ratio consists of only two values, it can be represented as a fraction, in particular as a decimal fraction. For example, older televisions have a 4:3 aspect ratio, which means that the width is 4/3 of the height (this can also be expressed as 1.33:1 or just 1.33 rounded to two decimal places). More recent widescreen TVs have a 16:9 aspect ratio, or 1.78 rounded to two decimal places. One of the popular widescreen movie formats is 2.35:1 or simply 2.35. Representing ratios as decimal fractions simplifies their comparison. When comparing 1.33, 1.78 and 2.35, it is obvious which format offers the wider image. Such a comparison works only when the values being compared are consistent, like always expressing width in relation to height.
Reduction
Ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. As for fractions, the simplest form is considered that in which the numbers in the ratio are the smallest possible integers.
Thus, the ratio 40:60 is equivalent in meaning to the ratio 2:3, the latter being obtained from the former by dividing both quantities by 20. Mathematically, we write 40:60 = 2:3, or equivalently 40:60∷2:3. The verbal equivalent is "40 is to 60 as 2 is to 3."
A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms.
Sometimes it is useful to write a ratio in the form 1:x or x:1, where x is not necessarily an integer, to enable comparisons of different ratios. For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4). Alternatively, it can be written as 0.8:1 (dividing both sides by 5).
Where the context makes the meaning clear, a ratio in this form is sometimes written without the 1 and the ratio symbol (:), though, mathematically, this makes it a factor or multiplier.
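Reducing a ratio to lowest terms and rewriting it in the 1:x form, as described in this section, can be expressed compactly in code; a minimal Python sketch (function names are illustrative):

```python
from functools import reduce
from math import gcd

def reduce_ratio(*terms):
    """Reduce an integer ratio to lowest terms, e.g. 40:60 -> 2:3."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

def as_one_to_x(a, b):
    """Rewrite a:b in the form 1:x for easy comparison, e.g. 4:5 -> 1:1.25."""
    return (1, b / a)

print(reduce_ratio(40, 60))   # (2, 3)
print(as_one_to_x(4, 5))      # (1, 1.25)
```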
Irrational ratios
Ratios may also be established between incommensurable quantities (quantities whose ratio, as value of a fraction, amounts to an irrational number). The earliest discovered example, found by the Pythagoreans, is the ratio of the length of the diagonal to the length of a side of a square, which is the square root of 2, formally √2:1. Another example is the ratio of a circle's circumference to its diameter, which is called π, and is not just an irrational number, but a transcendental number.
Also well known is the golden ratio of two (mostly) lengths a and b, which is defined by the proportion
a : b = (a + b) : a
or, equivalently
a/b = (a + b)/a.
Taking the ratios as fractions, and taking a/b to have the value x, yields the equation
x = 1 + 1/x, or equivalently x² − x − 1 = 0,
which has the positive, irrational solution x = (1 + √5)/2 ≈ 1.618.
Thus at least one of a and b has to be irrational for them to be in the golden ratio. An example of an occurrence of the golden ratio in math is as the limiting value of the ratio of two consecutive Fibonacci numbers: even though all these ratios are ratios of two integers and hence are rational, the limit of the sequence of these rational ratios is the irrational golden ratio.
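The statement about consecutive Fibonacci numbers can be verified numerically; the short Python sketch below (illustrative only) prints successive ratios and their distance from (1 + √5)/2:

```python
def fibonacci_ratios(n):
    """Yield the ratios of consecutive Fibonacci numbers, which approach the golden ratio."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
        yield b / a

golden = (1 + 5 ** 0.5) / 2              # (1 + sqrt(5))/2 ~ 1.6180339887...
for r in fibonacci_ratios(10):
    print(r, abs(r - golden))            # the difference shrinks toward zero
```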
Similarly, the silver ratio of a and b is defined by the proportion
a : b = (2a + b) : a,
corresponding to
x = 2 + 1/x, that is, x² − 2x − 1 = 0.
This equation has the positive, irrational solution x = 1 + √2, so again at least one of the two quantities a and b in the silver ratio must be irrational.
Odds
Odds (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen to every three chances that it will happen. The probability of success is 30%. In every ten trials, there are expected to be three wins and seven losses.
Units
Ratios may be unitless, as is the case when they relate quantities in units of the same dimension, even if their units of measurement are initially different.
For example, the ratio one minute : 40 seconds can be reduced by changing the first value to 60 seconds, so the ratio becomes 60 seconds : 40 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2.
On the other hand, there are non-dimensionless quotients, also known as rates (sometimes also as ratios).
In chemistry, mass concentration ratios are usually expressed as weight/volume fractions.
For example, a concentration of 3% w/v usually means 3 g of substance in every 100 mL of solution. This cannot be converted to a dimensionless ratio, as in weight/weight or volume/volume fractions.
Triangular coordinates
The locations of points relative to a triangle with vertices A, B, and C and sides AB, BC, and CA are often expressed in extended ratio form as triangular coordinates.
In barycentric coordinates, a point with coordinates α, β, γ is the point upon which a weightless sheet of metal in the shape and size of the triangle would exactly balance if weights were put on the vertices, with the ratio of the weights at A and B being α : β, the ratio of the weights at B and C being β : γ, and therefore the ratio of weights at A and C being α : γ.
In trilinear coordinates, a point with coordinates x:y:z has perpendicular distances to side BC (across from vertex A) and side CA (across from vertex B) in the ratio x:y, distances to side CA and side AB (across from C) in the ratio y:z, and therefore distances to sides BC and AB in the ratio x:z.
Since all information is expressed in terms of ratios (the individual numbers denoted by α, β, γ, x, y, and z have no meaning by themselves), a triangle analysis using barycentric or trilinear coordinates applies regardless of the size of the triangle.
See also
Cross ratio
Dilution ratio
Displacement–length ratio
Dimensionless quantity
Financial ratio
Fold change
Interval (music)
Odds ratio
Parts-per notation
Price–performance ratio
Proportionality (mathematics)
Ratio distribution
Ratio estimator
Rate (mathematics)
Ratio (Twitter)
Rate ratio
Relative risk
Rule of three (mathematics)
Scale (map)
Scale (ratio)
Sex ratio
Superparticular ratio
Slope
References
Further reading
"Ratio" The Penny Cyclopædia vol. 19, The Society for the Diffusion of Useful Knowledge (1841) Charles Knight and Co., London pp. 307ff
"Proportion" New International Encyclopedia, Vol. 19 2nd ed. (1916) Dodd Mead & Co. pp270-271
"Ratio and Proportion" Fundamentals of practical mathematics, George Wentworth, David Eugene Smith, Herbert Druery Harper (1922) Ginn and Co. pp. 55ff
D.E. Smith, History of Mathematics, vol 2 Ginn and Company (1925) pp. 477ff. Reprinted 1958 by Dover Publications.
External links
Elementary mathematics
Algebra
Quotients | Ratio | Mathematics | 4,055 |
13,711 | https://en.wikipedia.org/wiki/Hydroxide | Hydroxide is a diatomic anion with chemical formula OH−. It consists of an oxygen and hydrogen atom held together by a single covalent bond, and carries a negative electric charge. It is an important but usually minor constituent of water. It functions as a base, a ligand, a nucleophile, and a catalyst. The hydroxide ion forms salts, some of which dissociate in aqueous solution, liberating solvated hydroxide ions. Sodium hydroxide is a multi-million-ton per annum commodity chemical.
The corresponding electrically neutral compound HO• is the hydroxyl radical. The corresponding covalently bound group –OH of atoms is the hydroxy group.
Both the hydroxide ion and hydroxy group are nucleophiles and can act as catalysts in organic chemistry.
Many inorganic substances which bear the word hydroxide in their names are not ionic compounds of the hydroxide ion, but covalent compounds which contain hydroxy groups.
Hydroxide ion
The hydroxide ion is naturally produced from water by the self-ionization reaction:
H3O+ + OH− ⇌ 2 H2O
The equilibrium constant for this reaction, defined as
Kw = [H+][OH−]
has a value close to 10−14 at 25 °C, so the concentration of hydroxide ions in pure water is close to 10−7 mol∙dm−3, to satisfy the equal charge constraint. The pH of a solution is equal to the decimal cologarithm of the hydrogen cation concentration; the pH of pure water is close to 7 at ambient temperatures. The concentration of hydroxide ions can be expressed in terms of pOH, which is close to (14 − pH), so the pOH of pure water is also close to 7. Addition of a base to water will reduce the hydrogen cation concentration and therefore increase the hydroxide ion concentration (decrease pH, increase pOH) even if the base does not itself contain hydroxide. For example, ammonia solutions have a pH greater than 7 due to the reaction NH3 + H+ ⇌ NH4+, which decreases the hydrogen cation concentration, which increases the hydroxide ion concentration. pOH can be kept at a nearly constant value with various buffer solutions.
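The arithmetic relating Kw, pH, pOH and the hydroxide concentration described above is easy to check; a minimal Python sketch, assuming the room-temperature value Kw ≈ 1.0 × 10⁻¹⁴ used in the text:

```python
import math

KW = 1.0e-14   # ionic product of water near 25 °C, as used in the text

def pOH_from_pH(pH):
    """pH + pOH = -log10(Kw), i.e. about 14 at 25 °C."""
    return -math.log10(KW) - pH

def hydroxide_concentration(pH):
    """[OH-] in mol/dm^3 obtained from the pH via Kw = [H+][OH-]."""
    hydrogen = 10.0 ** (-pH)
    return KW / hydrogen

print(pOH_from_pH(7.0))               # 7.0 for pure water
print(hydroxide_concentration(7.0))   # ~1e-7 mol/dm^3
print(hydroxide_concentration(9.0))   # a basic solution: ~1e-5 mol/dm^3
```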
In an aqueous solution the hydroxide ion is a base in the Brønsted–Lowry sense as it can accept a proton from a Brønsted–Lowry acid to form a water molecule. It can also act as a Lewis base by donating a pair of electrons to a Lewis acid. In aqueous solution both hydrogen and hydroxide ions are strongly solvated, with hydrogen bonds between oxygen and hydrogen atoms. Indeed, the bihydroxide ion has been characterized in the solid state. This compound is centrosymmetric and has a very short hydrogen bond (114.5 pm) that is similar to the length in the bifluoride ion (114 pm). In aqueous solution the hydroxide ion forms strong hydrogen bonds with water molecules. A consequence of this is that concentrated solutions of sodium hydroxide have high viscosity due to the formation of an extended network of hydrogen bonds as in hydrogen fluoride solutions.
In solution, exposed to air, the hydroxide ion reacts rapidly with atmospheric carbon dioxide, acting as an acid, to form, initially, the bicarbonate ion.
OH− + CO2 ⇌ HCO3−
The equilibrium constant for this reaction can be specified either as a reaction with dissolved carbon dioxide or as a reaction with carbon dioxide gas (see Carbonic acid for values and details). At neutral or acid pH, the reaction is slow, but is catalyzed by the enzyme carbonic anhydrase, which effectively creates hydroxide ions at the active site.
Solutions containing the hydroxide ion attack glass. In this case, the silicates in glass are acting as acids. Basic hydroxides, whether solids or in solution, are stored in airtight plastic containers.
The hydroxide ion can function as a typical electron-pair donor ligand, forming such complexes as tetrahydroxoaluminate/tetrahydroxidoaluminate [Al(OH)4]−. It is also often found in mixed-ligand complexes of the type [MLx(OH)y]z+, where L is a ligand. The hydroxide ion often serves as a bridging ligand, donating one pair of electrons to each of the atoms being bridged. As illustrated by [Pb2(OH)]3+, metal hydroxides are often written in a simplified format. It can even act as a 3-electron-pair donor, as in the tetramer [PtMe3(OH)]4.
When bound to a strongly electron-withdrawing metal centre, hydroxide ligands tend to ionise into oxide ligands. For example, the bichromate ion [HCrO4]− dissociates according to
[O3CrO–H]− ⇌ [CrO4]2− + H+
with a pKa of about 5.9.
Vibrational spectra
The infrared spectra of compounds containing the OH functional group have strong absorption bands in the region centered around 3500 cm−1. The high frequency of molecular vibration is a consequence of the small mass of the hydrogen atom as compared to the mass of the oxygen atom, and this makes detection of hydroxyl groups by infrared spectroscopy relatively easy. A band due to an OH group tends to be sharp. However, the band width increases when the OH group is involved in hydrogen bonding. A water molecule has an HOH bending mode at about 1600 cm−1, so the absence of this band can be used to distinguish an OH group from a water molecule.
When the OH group is bound to a metal ion in a coordination complex, an M−OH bending mode can be observed. For example, in [Sn(OH)6]2− it occurs at 1065 cm−1. The bending mode for a bridging hydroxide tends to be at a lower frequency as in [(bipyridine)Cu(OH)2Cu(bipyridine)]2+ (955 cm−1). M−OH stretching vibrations occur below about 600 cm−1. For example, the tetrahedral ion [Zn(OH)4]2− has bands at 470 cm−1 (Raman-active, polarized) and 420 cm−1 (infrared). The same ion has a (HO)–Zn–(OH) bending vibration at 300 cm−1.
Applications
Sodium hydroxide solutions, also known as lye and caustic soda, are used in the manufacture of pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2004 was approximately 60 million tonnes. The principal method of manufacture is the chloralkali process.
Solutions containing the hydroxide ion are generated when a salt of a weak acid is dissolved in water. Sodium carbonate is used as an alkali, for example, by virtue of the hydrolysis reaction
CO32− + H2O ⇌ HCO3− + OH− (pKa2 = 10.33 at 25 °C and zero ionic strength)
Although the base strength of sodium carbonate solutions is lower than a concentrated sodium hydroxide solution, it has the advantage of being a solid. It is also manufactured on a vast scale (42 million tonnes in 2005) by the Solvay process. An example of the use of sodium carbonate as an alkali is when washing soda (another name for sodium carbonate) acts on insoluble esters, such as triglycerides, commonly known as fats, to hydrolyze them and make them soluble.
Bauxite, a basic hydroxide of aluminium, is the principal ore from which the metal is manufactured. Similarly, goethite (α-FeO(OH)) and lepidocrocite (γ-FeO(OH)), basic hydroxides of iron, are among the principal ores used for the manufacture of metallic iron.
Inorganic hydroxides
Alkali metals
Aside from NaOH and KOH, which enjoy very large scale applications, the hydroxides of the other alkali metals also are useful. Lithium hydroxide (LiOH) is used in breathing gas purification systems for spacecraft, submarines, and rebreathers to remove carbon dioxide from exhaled gas.
2 LiOH + CO2 → Li2CO3 + H2O
The hydroxide of lithium is preferred to that of sodium because of its lower mass. Sodium hydroxide, potassium hydroxide, and the hydroxides of the other alkali metals are also strong bases.
Alkaline earth metals
Beryllium hydroxide Be(OH)2 is amphoteric. The hydroxide itself is insoluble in water, with a solubility product log K*sp of −11.7. Addition of acid gives soluble hydrolysis products, including the trimeric ion [Be3(OH)3(H2O)6]3+, which has OH groups bridging between pairs of beryllium ions making a 6-membered ring. At very low pH the aqua ion [Be(H2O)4]2+ is formed. Addition of hydroxide to Be(OH)2 gives the soluble tetrahydroxoberyllate or tetrahydroxidoberyllate anion, [Be(OH)4]2−.
The solubility in water of the other hydroxides in this group increases with increasing atomic number. Magnesium hydroxide Mg(OH)2 is a strong base (up to the limit of its solubility, which is very low in pure water), as are the hydroxides of the heavier alkaline earths: calcium hydroxide, strontium hydroxide, and barium hydroxide. A solution or suspension of calcium hydroxide is known as limewater and can be used to test for the weak acid carbon dioxide. The reaction Ca(OH)2 + CO2 ⇌ Ca2+ + HCO3− + OH− illustrates the basicity of calcium hydroxide. Soda lime, which is a mixture of the strong bases NaOH and KOH with Ca(OH)2, is used as a CO2 absorbent.
Boron group elements
The simplest hydroxide of boron B(OH)3, known as boric acid, is an acid. Unlike the hydroxides of the alkali and alkaline earth hydroxides, it does not dissociate in aqueous solution. Instead, it reacts with water molecules acting as a Lewis acid, releasing protons.
B(OH)3 + H2O ⇌ [B(OH)4]− + H+
A variety of oxyanions of boron are known, which, in the protonated form, contain hydroxide groups.
Aluminium hydroxide Al(OH)3 is amphoteric and dissolves in alkaline solution.
Al(OH)3 (solid) + OH− (aq) ⇌ [Al(OH)4]− (aq)
In the Bayer process for the production of pure aluminium oxide from bauxite minerals this equilibrium is manipulated by careful control of temperature and alkali concentration. In the first phase, aluminium dissolves in hot alkaline solution as [Al(OH)4]−, but other hydroxides usually present in the mineral, such as iron hydroxides, do not dissolve because they are not amphoteric. After removal of the insolubles, the so-called red mud, pure aluminium hydroxide is made to precipitate by reducing the temperature and adding water to the extract, which, by diluting the alkali, lowers the pH of the solution. Basic aluminium hydroxide AlO(OH), which may be present in bauxite, is also amphoteric.
In mildly acidic solutions, the hydroxo/hydroxido complexes formed by aluminium are somewhat different from those of boron, reflecting the greater size of Al(III) vs. B(III). The concentration of the species [Al13(OH)32]7+ is very dependent on the total aluminium concentration. Various other hydroxo complexes are found in crystalline compounds. Perhaps the most important is the basic hydroxide AlO(OH), a polymeric material known by the names of the mineral forms boehmite or diaspore, depending on crystal structure. Gallium hydroxide, indium hydroxide, and thallium(III) hydroxide are also amphoteric. Thallium(I) hydroxide is a strong base.
Carbon group elements
Carbon forms no simple hydroxides. The hypothetical compound C(OH)4 (orthocarbonic acid or methanetetrol) is unstable in aqueous solution:
C(OH)4 → HCO3− + H3O+
HCO3− + H+ ⇌ H2CO3
Carbon dioxide is also known as carbonic anhydride, meaning that it forms by dehydration of carbonic acid H2CO3 (OC(OH)2).
Silicic acid is the name given to a variety of compounds with a generic formula [SiOx(OH)4−2x]n. Orthosilicic acid has been identified in very dilute aqueous solution. It is a weak acid with pKa1 = 9.84, pKa2 = 13.2 at 25 °C. It is usually written as H4SiO4, but the formula Si(OH)4 is generally accepted. Other silicic acids such as metasilicic acid (H2SiO3), disilicic acid (H2Si2O5), and pyrosilicic acid (H6Si2O7) have been characterized. These acids also have hydroxide groups attached to the silicon; the formulas suggest that these acids are protonated forms of polyoxyanions.
Few hydroxo complexes of germanium have been characterized. Tin(II) hydroxide Sn(OH)2 was prepared in anhydrous media. When tin(II) oxide is treated with alkali the pyramidal hydroxo complex [Sn(OH)3]− is formed. When solutions containing this ion are acidified, the ion [Sn3(OH)4]2+ is formed together with some basic hydroxo complexes. The structure of [Sn3(OH)4]2+ has a triangle of tin atoms connected by bridging hydroxide groups. Tin(IV) hydroxide is unknown but can be regarded as the hypothetical acid from which stannates, with a formula [Sn(OH)6]2−, are derived by reaction with the (Lewis) basic hydroxide ion.
Hydrolysis of Pb2+ in aqueous solution is accompanied by the formation of various hydroxo-containing complexes, some of which are insoluble. The basic hydroxo complex [Pb6O(OH)6]4+ is a cluster of six lead centres with metal–metal bonds surrounding a central oxide ion. The six hydroxide groups lie on the faces of the two external Pb4 tetrahedra. In strongly alkaline solutions soluble plumbate ions are formed, including [Pb(OH)6]2−.
Other main-group elements
In the higher oxidation states of the pnictogens, chalcogens, halogens, and noble gases there are oxoacids in which the central atom is attached to oxide ions and hydroxide ions. Examples include phosphoric acid H3PO4, and sulfuric acid H2SO4. In these compounds one or more hydroxide groups can dissociate with the liberation of hydrogen cations as in a standard Brønsted–Lowry acid. Many oxoacids of sulfur are known and all feature OH groups that can dissociate.
Telluric acid is often written with the formula H2TeO4·2H2O but is better described structurally as Te(OH)6.
Ortho-periodic acid can lose all its protons, eventually forming the orthoperiodate ion [IO6]5−. It can also be protonated in strongly acidic conditions to give the octahedral ion [I(OH)6]+, completing the isoelectronic series, [E(OH)6]z, E = Sn, Sb, Te, I; z = −2, −1, 0, +1. Other acids of iodine(VII) that contain hydroxide groups are known, in particular in salts such as the mesoperiodate ion that occurs in K4[I2O8(OH)2]·8H2O.
As is common outside of the alkali metals, hydroxides of the elements in lower oxidation states are complicated. For example, phosphorous acid H3PO3 predominantly has the structure OP(H)(OH)2, in equilibrium with a small amount of P(OH)3.
The oxoacids of chlorine, bromine, and iodine have the formula O(n−1)/2A(OH), where n is the oxidation number: +1, +3, +5, or +7, and A = Cl, Br, or I. The only oxoacid of fluorine is F(OH), hypofluorous acid. When these acids are neutralized the hydrogen atom is removed from the hydroxide group.
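As a quick check of that general formula (worked here purely for illustration): chlorine in the +7 oxidation state gives O(7−1)/2Cl(OH) = O3Cl(OH), i.e. perchloric acid HClO4, while the +1 state gives Cl(OH), hypochlorous acid HOCl.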
Transition and post-transition metals
The hydroxides of the transition metals and post-transition metals usually have the metal in the +2 (M = Mn, Fe, Co, Ni, Cu, Zn) or +3 (M = Fe, Ru, Rh, Ir) oxidation state. None are soluble in water, and many are poorly defined. One complicating feature of the hydroxides is their tendency to undergo further condensation to the oxides, a process called olation. Hydroxides of metals in the +1 oxidation state are also poorly defined or unstable. For example, silver hydroxide Ag(OH) decomposes spontaneously to the oxide (Ag2O). Copper(I) and gold(I) hydroxides are also unstable, although stable adducts of CuOH and AuOH are known. The polymeric compounds M(OH)2 and M(OH)3 are in general prepared by increasing the pH of aqueous solutions of the corresponding metal cations until the hydroxide precipitates out of solution. Conversely, the hydroxides dissolve in acidic solution. Zinc hydroxide Zn(OH)2 is amphoteric, forming the tetrahydroxidozincate ion [Zn(OH)4]2− in strongly alkaline solution.
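The pH at which such a precipitation begins follows from the solubility product. As an illustration with assumed, merely typical numbers (the Ksp and concentration below are chosen for the example and are not data from the source): for a divalent hydroxide M(OH)2 with Ksp = 1 × 10−15 and [M2+] = 0.01 mol/L, precipitation starts once
\[ [\mathrm{OH^-}] = \sqrt{\frac{K_\mathrm{sp}}{[\mathrm{M^{2+}}]}} = \sqrt{\frac{10^{-15}}{10^{-2}}} \approx 3\times10^{-7}\ \mathrm{mol/L}, \]
i.e. at pOH ≈ 6.5, or pH ≈ 7.5 at 25 °C.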
Numerous mixed ligand complexes of these metals with the hydroxide ion exist. In fact, these are in general better defined than the simpler derivatives. Many can be made by deprotonation of the corresponding metal aquo complex.
LnM(OH2) + B ⇌ LnM(OH) + BH+ (L = ligand, B = base)
Vanadic acid H3VO4 shows similarities with phosphoric acid H3PO4 though it has a much more complex vanadate oxoanion chemistry. Chromic acid H2CrO4, has similarities with sulfuric acid H2SO4; for example, both form acid salts A+[HMO4]−. Some metals, e.g. V, Cr, Nb, Ta, Mo, W, tend to exist in high oxidation states. Rather than forming hydroxides in aqueous solution, they convert to oxo clusters by the process of olation, forming polyoxometalates.
Basic salts containing hydroxide
In some cases, the products of partial hydrolysis of metal ions, described above, can be found in crystalline compounds. A striking example is found with zirconium(IV). Because of the high oxidation state, salts of Zr4+ are extensively hydrolyzed in water even at low pH. The compound originally formulated as ZrOCl2·8H2O was found to be the chloride salt of a tetrameric cation [Zr4(OH)8(H2O)16]8+ in which there is a square of Zr4+ ions with two hydroxide groups bridging between Zr atoms on each side of the square and with four water molecules attached to each Zr atom.
The mineral malachite is a typical example of a basic carbonate. The formula, Cu2CO3(OH)2 shows that it is halfway between copper carbonate and copper hydroxide. Indeed, in the past the formula was written as CuCO3·Cu(OH)2. The crystal structure is made up of copper, carbonate and hydroxide ions. The mineral atacamite is an example of a basic chloride. It has the formula, Cu2Cl(OH)3. In this case the composition is nearer to that of the hydroxide than that of the chloride CuCl2·3Cu(OH)2. Copper forms hydroxyphosphate (libethenite), arsenate (olivenite), sulfate (brochantite), and nitrate compounds. White lead is a basic lead carbonate, (PbCO3)2·Pb(OH)2, which has been used as a white pigment because of its opaque quality, though its use is now restricted because it can be a source for lead poisoning.
Structural chemistry
The hydroxide ion appears to rotate freely in crystals of the heavier alkali metal hydroxides at higher temperatures so as to present itself as a spherical ion, with an effective ionic radius of about 153 pm. Thus, the high-temperature forms of KOH and NaOH have the sodium chloride structure, which gradually freezes in a monoclinically distorted sodium chloride structure at temperatures below about 300 °C. The OH groups still rotate even at room temperature around their symmetry axes and, therefore, cannot be detected by X-ray diffraction. The room-temperature form of NaOH has the thallium iodide structure. LiOH, however, has a layered structure, made up of tetrahedral Li(OH)4 and (OH)Li4 units. This is consistent with the weakly basic character of LiOH in solution, indicating that the Li–OH bond has much covalent character.
The hydroxide ion displays cylindrical symmetry in hydroxides of divalent metals Ca, Cd, Mn, Fe, and Co. For example, magnesium hydroxide Mg(OH)2 (brucite) crystallizes with the cadmium iodide layer structure, with a kind of close-packing of magnesium and hydroxide ions.
The amphoteric hydroxide Al(OH)3 has four major crystalline forms: gibbsite (most stable), bayerite, nordstrandite, and doyleite.
All these polymorphs are built up of double layers of hydroxide ions – with the aluminium atoms occupying two-thirds of the octahedral holes between the two layers – and differ only in the stacking sequence of the layers. The structures are similar to the brucite structure. However, whereas the brucite structure can be described as a close-packed structure, in gibbsite the OH groups on the underside of one layer rest on the OH groups of the layer below. This arrangement led to the suggestion that there are directional bonds between OH groups in adjacent layers. This is an unusual form of hydrogen bonding since the two hydroxide ions involved would be expected to point away from each other. The hydrogen atoms have been located by neutron diffraction experiments on α-AlO(OH) (diaspore). The O–H–O distance is very short, at 265 pm; the hydrogen is not equidistant between the oxygen atoms and the short OH bond makes an angle of 12° with the O–O line. A similar type of hydrogen bond has been proposed for other amphoteric hydroxides, including Be(OH)2, Zn(OH)2, and Fe(OH)3.
A number of mixed hydroxides are known with stoichiometry A3MIII(OH)6, A2MIV(OH)6, and AMV(OH)6. As the formula suggests these substances contain M(OH)6 octahedral structural units. Layered double hydroxides may be represented by the formula [Mz+1−xM3+x(OH)2]q+(Xn−)q/n·yH2O. Most commonly, z = 2, and M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+; hence q = x.
In organic reactions
Potassium hydroxide and sodium hydroxide are two well-known reagents in organic chemistry.
Base catalysis
The hydroxide ion may act as a base catalyst. The base abstracts a proton from a weak acid to give an intermediate that goes on to react with another reagent. Common substrates for proton abstraction are alcohols, phenols, amines, and carbon acids. The pKa value for dissociation of an unactivated C–H bond is extremely high (around 50 for a simple alkane), but the pKa of the alpha hydrogens of a carbonyl compound is roughly 30 log units lower. Typical pKa values are 16.7 for acetaldehyde and 19 for acetone. Dissociation can occur in the presence of a suitable base.
RC(O)CH2R' + B ⇌ RC(O)CH−R' + BH+
The base should have a pKa value not more than about 4 log units smaller, or the equilibrium will lie almost completely to the left.
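That rule of thumb follows directly from the definition of the acid dissociation constant; the relation below is a standard derivation rather than a statement from the source. For the proton transfer HA + B ⇌ A− + BH+,
\[ K = \frac{K_a(\mathrm{HA})}{K_a(\mathrm{BH^+})} = 10^{\,\mathrm{p}K_a(\mathrm{BH^+})\,-\,\mathrm{p}K_a(\mathrm{HA})}, \]
so a base whose conjugate acid is 4 pKa units more acidic than the substrate gives K ≈ 10−4 and the equilibrium lies almost completely to the left, as stated above. For the ethoxide ion discussed below (pKa of ethanol ≈ 16) acting on acetone (pKa ≈ 19), K ≈ 10−3 — small, but workable for catalysis.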
The hydroxide ion by itself is not a strong enough base, but it can be converted into one by adding sodium hydroxide to ethanol
OH− + EtOH ⇌ EtO− + H2O
to produce the ethoxide ion. The pKa for self-dissociation of ethanol is about 16, so the alkoxide ion is a strong enough base. The addition of an alcohol to an aldehyde to form a hemiacetal is an example of a reaction that can be catalyzed by the presence of hydroxide. Hydroxide can also act as a Lewis-base catalyst.
As a nucleophilic reagent
The hydroxide ion is intermediate in nucleophilicity between the fluoride ion F− and the amide ion NH2−. Ester hydrolysis under alkaline conditions (also known as base hydrolysis)
R1C(O)OR2 + OH− ⇌ R1C(O)OH + −OR2 → R1CO2− + HOR2
is an example of a hydroxide ion serving as a nucleophile.
Early methods for manufacturing soap treated triglycerides from animal fat (the ester) with lye.
Other cases where hydroxide can act as a nucleophilic reagent are amide hydrolysis, the Cannizzaro reaction, nucleophilic aliphatic substitution, nucleophilic aromatic substitution, and in elimination reactions. The reaction medium for KOH and NaOH is usually water but with a phase-transfer catalyst the hydroxide anion can be shuttled into an organic solvent as well, for example in the generation of the reactive intermediate dichlorocarbene.
Notes
References
Bibliography
Oxyanions
Water chemistry | Hydroxide | Chemistry | 5,644 |
15,003,815 | https://en.wikipedia.org/wiki/Glutaminolysis | Glutaminolysis (glutamine + -lysis) is a series of biochemical reactions by which the amino acid glutamine is lysed to glutamate, aspartate, CO2, pyruvate, lactate, alanine and citrate.
The glutaminolytic pathway
Glutaminolysis partially recruits reaction steps from the citric acid cycle and the malate-aspartate shuttle.
Reaction steps from glutamine to α-ketoglutarate
The conversion of the amino acid glutamine to α-ketoglutarate takes place in two reaction steps:
1. Hydrolysis of the side-chain amide group of glutamine, yielding glutamate and ammonium.
Catalyzing enzyme: glutaminase (EC 3.5.1.2)
2. Glutamate can be excreted or can be further metabolized to α-ketoglutarate.
For the conversion of glutamate to α-ketoglutarate three different reactions are possible:
Catalyzing enzymes:
glutamate dehydrogenase (GlDH), EC 1.4.1.2
glutamate pyruvate transaminase (GPT), also called alanine transaminase (ALT), EC 2.6.1.2
glutamate oxaloacetate transaminase (GOT), also called aspartate transaminase (AST), EC 2.6.1.1 (component of the malate aspartate shuttle)
Recruited reaction steps of the citric acid cycle and malate aspartate shuttle
α-ketoglutarate + NAD+ + CoASH → succinyl-CoA + NADH+H+ + CO2
catalyzing enzyme: α-ketoglutarate dehydrogenase complex
succinyl-CoA + GDP + Pi → succinate + GTP
catalyzing enzyme: succinyl-CoA-synthetase, EC 6.2.1.4
succinate + FAD → fumarate + FADH2
catalyzing enzyme: succinate dehydrogenase, EC 1.3.5.1
fumarate + H2O → malate
catalyzing enzyme: fumarase, EC 4.2.1.2
malate + NAD+ → oxaloacetate + NADH + H+
catalyzing enzyme: malate dehydrogenase, EC 1.1.1.37 (component of the malate aspartate shuttle)
oxaloacetate + acetyl-CoA + H2O → citrate + CoASH
catalyzing enzyme: citrate synthase, EC 2.3.3.1
Reaction steps from malate to pyruvate and lactate
The conversion of malate to pyruvate and lactate is catalyzed by
NAD(P) dependent malate decarboxylase (malic enzyme; EC 1.1.1.39 and 1.1.1.40) and
lactate dehydrogenase (LDH; EC 1.1.1.27)
according to the following equations:
malate + NAD(P)+→ pyruvate + NAD(P)H + H+ + CO2
pyruvate + NADH + H+ → lactate + NAD+
Intracellular compartmentalization of the glutaminolytic pathway
The reactions of the glutaminolytic pathway take place partly in the mitochondria and to some extent in the cytosol (compare the metabolic scheme of the glutaminolytic pathway).
An important energy source in tumor cells
Glutaminolysis takes place in all proliferating cells, such as lymphocytes, thymocytes, colonocytes, adipocytes and especially in tumor cells. Glutaminolysis has been targeted for therapeutic purposes. In tumor cells the citric acid cycle is truncated due to an inhibition of the enzyme aconitase (EC 4.2.1.3) by high concentrations of reactive oxygen species (ROS). Aconitase catalyzes the conversion of citrate to isocitrate.
On the other hand, tumor cells overexpress phosphate-dependent glutaminase and NAD(P)-dependent malate decarboxylase, which in combination with the remaining reaction steps of the citric acid cycle from α-ketoglutarate to citrate open up a new energy-producing pathway: the degradation of the amino acid glutamine to glutamate, aspartate, pyruvate, CO2, lactate and citrate.
Besides glycolysis, glutaminolysis is another main pillar of energy production in tumor cells. High extracellular glutamine concentrations stimulate tumor growth and are essential for cell transformation. Conversely, a reduction of glutamine correlates with phenotypical and functional differentiation of the cells.
Energy efficacy of glutaminolysis in tumor cells
The degradation of one glutamine by this route yields:
one ATP by direct phosphorylation of GDP
two ATP from oxidation of FADH2
three ATP at a time for the NADH + H+ produced within the α-ketoglutarate dehydrogenase reaction, the malate dehydrogenase reaction and the malate decarboxylase reaction.
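Summing the contributions listed above — using the classical whole-number P/O ratios the list itself assumes (3 ATP per NADH, 2 per FADH2), a simplification rather than a figure from the source:
\[ 1\ (\mathrm{GTP}) + 2\ (\mathrm{FADH_2}) + 3\times3\ (\mathrm{NADH}) = 12\ \mathrm{ATP} \]
per round of the pathway as counted above.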
Due to low glutamate dehydrogenase and glutamate pyruvate transaminase activities, in tumor cells the conversion of glutamate to alpha-ketoglutarate mainly takes place via glutamate oxaloacetate transaminase.
Advantages of glutaminolysis in tumor cells
Glutamine is the most abundant amino acid in the plasma and an additional energy source in tumor cells especially when glycolytic energy production is low due to a high amount of the dimeric form of M2-PK.
Glutamine and its degradation products glutamate and aspartate are precursors for nucleic acid and serine synthesis.
Glutaminolysis is insensitive to high concentrations of reactive oxygen species (ROS).
Due to the truncation of the citric acid cycle the amount of acetyl-CoA infiltrated in the citric acid cycle is low and acetyl-CoA is available for de novo synthesis of fatty acids and cholesterol. The fatty acids can be used for phospholipid synthesis or can be released.
Fatty acids represent an effective storage vehicle for hydrogen. Therefore, the release of fatty acids is an effective way to get rid of cytosolic hydrogen produced within the glycolytic glyceraldehyde 3-phosphate dehydrogenase (GAPDH; EC 1.2.1.12) reaction.
Glutamate and fatty acids are immunosuppressive. The release of both metabolites may protect tumor cells from immune attacks.
It has been discussed that the glutamate pool may drive the endergonic uptake of other amino acids by system ASC.
Glutamine can be converted to citrate without NADH production, uncoupling NADH production from biosynthesis.
See also
Citric acid cycle
Malate-aspartate shuttle
References
External links
The glutaminolytic pathway
Metabolism
Biochemistry | Glutaminolysis | Chemistry,Biology | 1,549 |
66,440,883 | https://en.wikipedia.org/wiki/Vericiguat | Vericiguat, sold under the brand name Verquvo, is a medication used to reduce the risk of cardiovascular death and hospitalization in certain patients with heart failure after a recent acute decompensation event. It is taken by mouth. Vericiguat is a soluble guanylate cyclase (sGC) stimulator.
Common side effects include low blood pressure and low red cell count (anemia).
It was approved for medical use in the United States in January 2021, and for use in the European Union in July 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Medical uses
Vericiguat is indicated to reduce the risk of cardiovascular death and hospitalization for heart failure following a prior hospitalization for heart failure or need for outpatient intravenous diuretics, in adults with symptomatic chronic heart failure and an ejection fraction of less than 45%.
Vericiguat is usually given orally once a day with food. No dose adjustments are required in the elderly, in people with mild-to-moderate liver failure, or in those with impaired kidney function. As of 2024, no data are available for patients with severely impaired kidney function or severe liver failure, or for those on dialysis.
Vericiguat is contraindicated in pregnancy. While there are no studies on its safety when used by pregnant women, animal studies suggest higher rates of birth defects, as well as an increased number of abortions and resorptions. It may also pass into breast milk, but the effects on breastfed infants are unknown. The manufacturer advises that patients of child-bearing age should be on contraception and assessed for pregnancy before starting treatment.
Adverse effects
The most common side effects of vericiguat include symptomatic low blood pressure and anemia. Patients taking other soluble guanylate cyclase stimulators should not take vericiguat.
Pharmacology
Vericiguat is a direct stimulator of soluble guanylate cyclase, an important enzyme in vascular smooth muscle cells. Specifically, vericiguat binds to the beta-subunit of the target site on the soluble guanylate cyclase enzyme. Soluble guanylate cyclase catalyzes the formation of cyclic GMP upon interaction with nitric oxide to activate a number of downstream signaling cascades, which can compensate for defects in this pathway and resulting losses in regulatory myocardial and vascular cellular processes due to cardiovascular complications.
Pharmacokinetics
After vericiguat is administered (10 mg by mouth once daily), the average steady-state Cmax and AUC for patients with heart failure are 350 mcg/L and 6,680 mcg·h/L, with a Tmax of about one hour. Vericiguat has a positive food effect, and therefore patients are advised to take the drug with food, giving an oral bioavailability of 93%. Vericiguat is extensively protein bound in plasma. It is primarily metabolized via phase 2 conjugation reactions, with a minor CYP-mediated oxidative metabolite. The major metabolite is glucuronidated and inactive. The typical half-life in patients with heart failure is about 30 hours. Vericiguat shows decreased clearance in patients with systolic heart failure.
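For orientation, a roughly 30-hour half-life implies modest accumulation with once-daily dosing. The following is a back-of-envelope estimate assuming simple first-order elimination; it is not a figure from the label:
\[ R_{\mathrm{ac}} = \frac{1}{1 - e^{-k\tau}},\qquad k=\frac{\ln 2}{t_{1/2}}\approx\frac{0.693}{30\ \mathrm{h}}\approx 0.023\ \mathrm{h^{-1}},\quad \tau = 24\ \mathrm{h}\ \Rightarrow\ R_{\mathrm{ac}}\approx 2.3, \]
i.e. steady-state exposure roughly 2- to 2.5-fold higher than after a single dose.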
History
The U.S. Food and Drug Administration (FDA) approved vericiguat based on evidence from a clinical trial (NCT02861534) which consisted of 5,050 participants aged 23 to 98 years old with worsening heart failure. The trial was conducted at 694 sites in 42 countries in Europe, Asia, North and South America. The trial enrolled participants with symptoms of worsening heart failure. Participants were randomly assigned to receive vericiguat or a placebo pill once a day. Neither the participants nor the health care professionals knew if the participants were given vericiguat or placebo pills until after the trial was complete. It was awarded a fast track designation on 19 January 2021.
Society and culture
Legal status
On 20 May 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for vericiguat, intended for the treatment of symptomatic chronic heart failure in adults with reduced ejection fraction. The applicant for this medicinal product is Bayer AG. Vericiguat was approved for medical use in the European Union in July 2021.
References
Further reading
External links
Soluble guanylate cyclase stimulators
Pyrazolopyridines
Fluoroarenes
Pyrimidines
Carbamates
Amines
Drugs developed by Merck & Co.
2-Fluorophenyl compounds | Vericiguat | Chemistry | 1,016 |
10,622,169 | https://en.wikipedia.org/wiki/King%20post | A king post (or king-post or kingpost) is a central vertical post used in architectural or bridge designs, working in tension to support a beam below from a truss apex above (whereas a crown post, though visually similar, supports items above from the beam below).
In aircraft design a strut called a king post acts in compression, similarly to an architectural crown post. Usage in mechanical plant and marine engineering differs again, as noted below.
Architecture
A king post extends vertically from a crossbeam (the tie beam) to the apex of a triangular truss. The king post, itself in tension, connects the apex of the truss with its base, holding up the tie beam (also in tension) at the base of the truss. The post can be replaced with an iron rod called a king rod (or king bolt) and thus a king rod truss. The king post truss is also called a "Latin truss".
In traditional timber framing, a crown post looks similar to a king post, but it is very different structurally: whereas the king post is in tension, usually supporting the tie beam as a truss, the crown post is supported by the tie beam and is in compression. The crown post rises to a crown plate immediately below collar beams which it supports; it does not rise to the apex like a king post. Historically a crown post was called a king post in England but this usage is obsolete.
An alternative truss construction uses two queen posts (or queen-posts). These vertical posts, positioned along the base of the truss, are supported by the sloping sides of the truss, rather than reaching its apex. A development adds a collar beam above the queen posts, which are then termed queen struts. A section of the tie beam between the queen posts may be removed to create a hammerbeam roof.
King post truss
The king post truss is used for simple roof trusses and short-span bridges. It is the simplest form of truss in that it is constructed of the fewest truss members (individual lengths of wood or metal). The truss consists of two diagonal members that meet at the apex of the truss, one horizontal beam that serves to tie the bottom end of the diagonals together, and the king post which connects the apex to the horizontal beam below. For a roof truss, the diagonal members are called rafters, and the horizontal member may serve as a ceiling joist. A bridge would require two king post trusses with the driving surface between them. A roof usually uses many side-by-side trusses depending on the size of the structure.
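To make the tension and compression roles concrete, consider an idealized pin-jointed king post truss with rafters pitched at angle θ and a single load W hung from the foot of the king post — a textbook simplification for illustration, not an analysis taken from the sources above:
\[ T_{\text{king post}} = W,\qquad C_{\text{rafter}} = \frac{W}{2\sin\theta},\qquad T_{\text{tie beam}} = \frac{W}{2\tan\theta}. \]
For θ = 30° and W = 10 kN this gives 10 kN of compression in each rafter and about 8.7 kN of tension in the tie beam, while the king post carries the full 10 kN in tension — which is why it can be replaced by a slender iron king rod.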
Pont-y-Cafnau, the world's first iron railway bridge, is of the king post type.
History
King posts were used in timber-framed roof construction in Roman buildings, and in medieval architecture in buildings such as parish churches and tithe barns. The oldest surviving roof truss in the world is a king post truss in Saint Catherine's Monastery, Egypt, built between 548 and 565.
King posts also appear in Gothic Revival architecture, Queen Anne style architecture and occasionally in modern construction. King post trusses are also used as a structural element in wood and metal bridges.
A painting by Karl Blechen circa 1833 illustrating construction of the second Devil's Bridge (Teufelsbrücke) in the Schöllenen Gorge shows multiple king posts suspended from the apex of the falsework upon which the masonry arch has been laid. In this example, beams in compression are supported by each king post several feet below the apex, and the bottom of the king posts can clearly be seen to be unsupported.
Norman truss
Architectural historians in the French colonial cities St Louis, Missouri and New Orleans, Louisiana use the term "Norman roof" to refer to a steeply pitched roof; it is supported by what they call a "Norman truss" which is similar to a king post truss. This is a through-purlin truss consisting of a tie beam and paired truss blades, with a central king post to support the roof ridge. The name derives from a belief that this system of construction was introduced to North America by settlers from Normandy in northern France, but it is really a misnomer as the system was more widely used than that. The difference between a Norman truss and a king post truss is that the tie beam in a Norman truss is technically a collar beam (a beam between the rafters above the rafter feet), whereas in a king post truss the rafters land on top of a tie beam.
Aviation
King posts are also used in the construction of some wire-braced aircraft, where a king post supports the top cables or "ground wires" supporting the wing. These wires from the king post are in tension only on the ground; in the air, under positive-g flight, they are unloaded.
Mechanical plant
The very robust hinge connecting the boom to the chassis in a backhoe, similar in function and appearance to a large automotive kingpin, is called a king post.
Marine engineering
On a cargo ship or oiler a king post is an upright with cargo-handling or fueling rig devices attached to it. On a cargo vessel king posts are designed for handling cargo, and so are located at the forward or after end of a hatch. For an oiler they are located over the fuel transfer lines.
See also
Strut
Cabane strut
Queen post
Timber roof truss
References
Notes
Bibliography
External links
Bridge Basics
Timber roofs
Crown post roofs
King and Queen post roofs on the former mansion at Parlington, near Aberford in Yorkshire, England
An Illustrated Roof Glossary (archived)
Architectural elements
Structural engineering
Trusses
Timber framing
Truss bridges by type | King post | Technology,Engineering | 1,141 |
9,853,620 | https://en.wikipedia.org/wiki/In-circuit%20testing | In-circuit testing (ICT) is an example of white box testing where an electrical probe tests a populated printed circuit board (PCB), checking for shorts, opens, resistance, capacitance, and other basic quantities which will show whether the assembly was correctly fabricated. It may be performed with a "bed of nails" test fixture and specialist test equipment, or with a fixtureless in-circuit test setup. In-Circuit Test (ICT) is a widely used and cost-efficient method for testing medium- to high-volume electronic printed circuit board assemblies (PCBAs). It has maintained its popularity over the years due to its ability to diagnose component-level faults and its operational speed.
In-circuit test fixtures are an effective way of keeping testing consistent and repeatable. They can also help reduce production downtime by identifying faults early in the testing process, ensuring that defective assemblies are removed from the production line and repaired.
Fixtures for in-circuit testing
A common form of in-circuit testing uses a bed-of-nails tester. This is a fixture that uses an array of spring-loaded pins known as "pogo pins". When a printed circuit board is aligned with and pressed down onto the bed-of-nails tester, the pins make electrical contact with locations on the circuit board, allowing them to be used as test points for in-circuit testing. Bed-of-nails testers have the advantage that many tests may be performed at a time, but have the disadvantage of placing substantial strain on the PCB.
An alternative is the use of flying probes, which place less mechanical strain on the boards being tested. Their advantages and disadvantages are the opposite of bed-of-nails testers: the flying probes must be moved between tests, but they place much less strain on the PCB.
A range of companies build and manufacture in-circuit test systems, including Teradyne and Keysight. Independent fixture houses such as INGUN (which provides fixture kits), Forwessun, and Rematek supply and manufacture in-circuit test fixtures.
Example test sequence
Discharging capacitors and especially electrolytic capacitors (for safety and measurement stability, this test sequence must be done first before testing any other items)
Contact test (to verify the test system is connected to the Unit Under Test (UUT))
Shorts testing (Test for solder shorts and opens)
Analog tests (Test all analog components for placement and correct value)
Test for defective open pins on devices
Test for capacitor orientation defects
Power up UUT
Powered analog (Test for correct operation of analog components such as regulators and opamps)
Powered digital (Test the operation of digital components and Boundary scan devices)
JTAG boundary scan tests
Flash memory, EEPROM, and other device programming
Discharging capacitors as UUT is powered down
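The ordering above is normally enforced by the tester's sequencing software, which aborts as soon as a step fails so that a faulty board is pulled from the line early. A minimal sketch of such a sequencer is given below in Python; the fixture object and every method name on it are hypothetical placeholders, not the API of any real ICT system.

# Hypothetical ICT sequencer sketch; step names mirror the example sequence above.
def run_ict_sequence(fixture, uut_id):
    steps = [
        ("discharge capacitors", fixture.discharge_capacitors),
        ("contact test", fixture.contact_test),
        ("shorts/opens test", fixture.shorts_test),
        ("unpowered analog test", fixture.analog_test),
        ("power up UUT", fixture.power_up),
        ("powered analog test", fixture.powered_analog_test),
        ("powered digital / boundary scan", fixture.boundary_scan_test),
        ("device programming", fixture.program_devices),
        ("power down and discharge", fixture.power_down),
    ]
    results = []
    for name, step in steps:
        passed = step()            # each step is assumed to return True on pass
        results.append((name, passed))
        if not passed:             # stop at the first failure
            break
    return uut_id, results

A real sequencer would also record the measured value and test limits for every step, so that failures can be diagnosed down to a specific component.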
While in-circuit testers are typically limited to testing the above devices, it is possible to add additional hardware to the test fixture to allow different solutions to be implemented. Such additional hardware includes:
Cameras to test for presence and correct orientation of components
Photodetectors to test for LED color and intensity
External timer counter modules to test very high frequencies (over 50 MHz) crystals and oscillators
Signal waveform analysis, e.g. slew rate measurement, envelope curve etc.
External equipment can be used for high-voltage measurements (above roughly 100 V DC, beyond the voltages the tester itself can supply) or as an AC source, provided it has an interface to the PC acting as the ICT controller
Bead probe technology to access small traces that cannot be accessed by traditional means
Limitations
While in-circuit test is a very powerful tool for testing PCBs, it has these limitations:
Parallel components can often only be tested as one component if the components are of the same type (i.e. two resistors); though different components in parallel may be testable using a sequence of different tests - e.g. a DC voltage measurement versus a measurement of AC injection current at a node.
Electrolytic components can be tested for polarity only in specific configurations (e.g. if not parallel connected to power rails) or with a specific sensor
The quality of electrical contacts can not be tested unless extra test points and/or a dedicated extra cable harness are provided.
It is only as good as the design of the PCB. If no test access has been provided by the PCB designer then some tests will not be possible. See Design For Test guidelines.
Related technologies
The following are related technologies and are also used in electronic production to test for the correct operation of Electronics Printed Circuit boards:
PCB electrical test of bare PCBs
AXI Automated x-ray inspection
JTAG Joint Test Action Group (Boundary Scan Technology)
AOI automated optical inspection
Functional testing (see Acceptance testing and FCT)
References
External links
In-Circuit Test Tutorial
Printed circuit board manufacturing
Hardware testing
Electronic test equipment | In-circuit testing | Technology,Engineering | 1,023 |
19,151,674 | https://en.wikipedia.org/wiki/Interaction%20technique | An interaction technique, user interface technique or input technique is a combination of hardware and software elements that provides a way for computer users to accomplish a single task. For example, one can go back to the previously visited page on a Web browser by either clicking a button, pressing a key, performing a mouse gesture or uttering a speech command. It is a widely used term in human-computer interaction. In particular, the term "new interaction technique" is frequently used to introduce a novel user interface design idea.
Definition
Although there is no general agreement on the exact meaning of the term "interaction technique", the most popular definition is from the computer graphics literature:
A more recent variation is:
The computing view
From the computer's perspective, an interaction technique involves:
One or several input devices that capture user input,
One or several output devices that display user feedback,
A piece of software that:
interprets user input into commands the computer can understand,
produces user feedback based on user input and the system's state.
Consider for example the process of deleting a file using a contextual menu. This assumes the existence of a mouse (input device), a screen (output device), and a piece of code that paints a menu and updates its selection (user feedback) and sends a command to the file system when the user clicks on the "delete" item (interpretation). User feedback can be further used to confirm that the command has been invoked.
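A minimal sketch of that structure in Python is given below. The event and screen objects and their methods are hypothetical stand-ins for the input device, output device and user feedback described above; only os.remove is a real library call.

import os

def context_menu_delete(event, screen):
    """Interpret a right-click on a file icon as a possible delete command."""
    if event.type == "right_click" and event.target_is_file:
        choice = screen.show_menu(["open", "rename", "delete"], at=event.position)
        screen.highlight(choice)                      # user feedback: selected item
        if choice == "delete":
            os.remove(event.target_path)              # command sent to the file system
            screen.show_message("Deleted " + event.target_path)  # confirmation feedback

The point of the sketch is the division of labour: the same task (deleting a file) could equally be bound to a keyboard shortcut or a speech command by swapping the interpretation code while keeping the command and feedback parts unchanged.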
The user's view
From the user's perspective, an interaction technique is a way to perform a single computing task and can be informally expressed with user instructions or usage scenarios. For example, "to delete a file, right-click on the file you want to delete, then click on the delete item".
The designer's view
From the user interface designer's perspective, an interaction technique is a well-defined solution to a specific user interface design problem. Interaction techniques as conceptual ideas can be refined, extended, modified and combined. For example, contextual menus are a solution to the problem of rapidly selecting commands. Pie menus are a radial variant of contextual menus. Marking menus combine pie menus with gesture recognition.
Level of granularity
One extant cause of confusion in the general discussion of interaction is a lack of clarity about levels of granularity. Interaction techniques are usually characterized at a low level of granularity—not necessarily at the lowest level of physical events, but at a level that is technology-, platform-, and/or implementation-dependent. For example, interaction techniques exist that are specific to mobile devices, touch-based displays, traditional mouse/keyboard inputs, and other paradigms—in other words, they are dependent on a specific technology or platform. In contrast, viewed at higher levels of granularity, interaction is not tied to any specific technology or platform. The interaction of 'filtering', for example, can be characterized in a way that is technology-independent—e.g., performing an action such that some information is hidden and only a subset of the original information remains. Such an interaction could be implemented using any number of techniques, and on any number of platforms and technologies. See also the discussion of #interaction patterns below.
Interaction tasks and domain objects
An interaction task is "the unit of an entry of information by the user", such as entering a piece of text, issuing a command, or specifying a 2D position. A similar concept is that of domain object, which is a piece of application data that can be manipulated by the user.
Interaction techniques are the glue between physical I/O devices and interaction tasks or domain objects. Different types of interaction techniques can be used to map a specific device to a specific domain object. For example, different gesture alphabets exist for pen-based text input.
In general, the less compatible the device is with the domain object, the more complex the interaction technique. For example, using a mouse to specify a 2D point involves a trivial interaction technique, whereas using a mouse to rotate a 3D object requires more creativity to design the technique and more lines of code to implement it.
A current trend is to avoid complex interaction techniques by matching physical devices with the task as close as possible, such as exemplified by the field of tangible computing. But this is not always a feasible solution. Furthermore, device/task incompatibilities are unavoidable in computer accessibility, where a single switch can be used to control the whole computer environment.
Interaction style
Interaction techniques that share the same metaphor or design principles can be seen as belonging to the same interaction style. General examples are command line and direct manipulation user interfaces.
Interaction patterns
While interaction techniques are typically technology-, platform-, and/or implementation-dependent (see #level of granularity above), human-computer or human-information interactions can be characterized at higher levels of abstraction that are independent of particular technologies and platforms. At such levels of abstraction, the concern is not precisely how an interaction is performed; rather, the concern is a conceptual characterization of what the interaction is, and what the general utility of the interaction is for the user(s). Thus, any single interaction pattern may be instantiated by any number of interaction techniques, on any number of different technologies and platforms. Interaction patterns are more concerned with the timeless, invariant qualities of an interaction.
Visualization technique
Interaction techniques essentially involve data entry and manipulation, and thus place greater emphasis on input than output. Output is merely used to convey affordances and provide user feedback. The use of the term input technique further reinforces the central role of input. Conversely, techniques that mainly involve data exploration and thus place greater emphasis on output are called visualization techniques. They are studied in the field of information visualization.
Research and innovation
A large part of research in human-computer interaction involves exploring easier-to-learn or more efficient interaction techniques for common computing tasks. This includes inventing new (post-WIMP) interaction techniques, possibly relying on methods from user interface design, and assessing their efficiency with respect to existing techniques using methods from experimental psychology. Examples of scientific venues in these topics are the UIST and the CHI conferences. Other research focuses on the specification of interaction techniques, sometimes using formalisms such as Petri nets for the purposes of formal verification.
See also
3D interaction techniques
Interaction styles
Types of user interface
Input devices
Interaction Design
Interactivity
Information Visualization
Visual Analytics
Widget (GUI)
References
External links
UIST video archive
Patterns for effective interaction design
User interfaces
Graphical user interfaces
Human–computer interaction
technique | Interaction technique | Technology,Engineering | 1,337 |
26,442,981 | https://en.wikipedia.org/wiki/Lactarius%20semisanguifluus | Lactarius semisanguifluus is a species of fungus in the family Russulaceae.
See also
List of Lactarius species
References
semisanguifluus
Fungi described in 1950
Fungi of Europe
Edible fungi
Fungus species | Lactarius semisanguifluus | Biology | 49 |
61,155,496 | https://en.wikipedia.org/wiki/Umbilicaric%20acid | Umbilicaric acid is an organic polyphenolic carboxylic acid made by several species of lichen. It is named after Umbilicaria. Umbilicaric acid is a tridepside, containing three phenol rings in orsellinic acid moieties.
Identification of unbilicaric acid can be important in the identification of lichen species.
See also
Gyrophoric acid
References
Polyphenols
Benzoic acids
Benzoate esters
Lichen products | Umbilicaric acid | Chemistry | 106 |
4,236,543 | https://en.wikipedia.org/wiki/Ashvini | Ashvini (अश्विनी, ) is the first nakshatra (lunar mansion) in Indian astronomy having a spread from 0°-0'-0" to 13°-20', corresponding to the head of Aries, including the stars β and γ Arietis. The name aśvinī is used by Varahamihira (6th century). The older name of the asterism, found in the Atharvaveda (AVS 19.7; in the dual) and in Panini (4.3.36), was aśvayúja, "harnessing horses". This nakshatra belongs to Mesha Rasi. Notable personalities born in this nakshatra are Sania Mirza, Bhimsen Joshi, Yukta Mookhey.
Astrology
Ashvini is ruled by Ketu, the descending lunar node. In electional astrology, Ashvini is classified as a small constellation, meaning that it is believed to be advantageous to begin works of a precise or delicate nature while the moon is in Ashvini. Ashvini is ruled by the Ashvinas, the heavenly twin brother gods who served as physicians to the gods and goddesses. Ashvini is represented by the bee hive.
Traditional Indian names are determined by which pada (quarter) of a nakshatra the Ascendant was in at the time of birth. In the case of Ashvini, the given name would begin with the following syllables: Chu, Che, Cho, La.
See also
List of Nakshatras
References
Nakshatra | Ashvini | Astronomy | 332 |
74,022,275 | https://en.wikipedia.org/wiki/Revisionist%20just%20war%20theory | Revisionist just war theory is a development of just war theory that, unlike traditional just war theory, seeks to integrate jus ad bellum and jus in bello, therefore rejecting many traditional beliefs such as moral equality of combatants. Opposing traditionalists such as Michael Walzer, revisionists include Jeff McMahan, Cécile Fabre, Bradley J. Strawser, and David Rodin.
References
Further reading
Just war theory
20th century in philosophy
21st century in philosophy | Revisionist just war theory | Biology | 96 |
5,478,813 | https://en.wikipedia.org/wiki/T-stage | T-stage is a British term for a compressor used in a particular concept for a variable cycle combat engine. The T-stage is part of the HP rotor in this concept.
A US concept for a variable cycle combat engine also uses a similar compressor arrangement as part of the HP rotor. It is called a core driven fan stage (CDFS) by General Electric Aviation in their Variable Cycle Engine (VCE) which ran in 1981.
Alternative concepts including an LP driven stage are shown in the US patent "Variable Cycle Gas Turbine Engines" filed in 1975.
References
Jet engines | T-stage | Technology | 117 |
33,430,669 | https://en.wikipedia.org/wiki/Porpolomopsis%20calyptriformis | Porpolomopsis calyptriformis, commonly known as the pink wax cap, ballerina waxcap or salmon waxy cap, is a species of agaric (gilled mushroom) in the family Hygrophoraceae. The species has a European distribution, occurring mainly in agriculturally unimproved grassland. Threats to its habitat have resulted in the species being assessed as globally "vulnerable" on the IUCN Red List of Threatened Species. A similar but as yet unnamed species occurs in North America.
Taxonomy
The species was first described in 1838 by the Rev. Miles Joseph Berkeley as Agaricus calyptraeformis (so spelt), based on specimens he collected locally in England. In 1889, Swiss mycologist Victor Fayod moved it to the genus Hygrocybe. The specific epithet comes from Greek καλὐπτρα (= a woman's veil) + Latin forma (= shape), hence "veil-shaped".
In 2008, Bresinsky proposed the genus Porpolomopsis to accommodate the species. Recent molecular research, based on cladistic analysis of DNA sequences, found that Porpolomopsis calyptriformis does not belong in Hygrocybe sensu stricto and confirmed its removal to the genus Porpolomopsis.
Description
Basidiocarps are agaricoid, up to 125mm (5 in) tall, the cap narrowly conical at first, retaining an acute umbo when expanded, up to 75mm (3 in) across, often splitting when expanded, the margins turning upwards. The cap surface is smooth to fibrillose, slightly shiny or greasy, pale rose-pink to lilac-pink (rarely white). The lamellae (gills) are widely spaced, waxy, cap-coloured or whiter. The stipe (stem) is smooth, white to pale cap-coloured, lacking a ring. The spore print is white, the spores (under a microscope) smooth, inamyloid, ellipsoid, c. 6.5 to 8.0 by 4.5 to 5.5μm.
The species can normally be distinguished in the field, thanks to its shape and colour. No other European waxcap is pink with a pointed cap.
Distribution and habitat
The Pink Waxcap is widespread but generally rare throughout Europe, with its "stronghold" in the United Kingdom where it is not uncommon. Like other waxcaps, it occurs in old, agriculturally unimproved, short-sward grassland (pastures and lawns). The species has been reported from North America, but specimens that have been DNA-sequenced are not the same as the European P. calyptriformis.
Recent research suggests waxcaps are neither mycorrhizal nor saprotrophic but may be associated with mosses.
Conservation
Porpolomopsis calyptriformis is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. As a result, the species is of global conservation concern and is listed as "vulnerable" on the IUCN Red List of Threatened Species. It is also one of 33 larger fungi proposed for international protection under the Bern Convention. Porpolomopsis calyptriformis also appears on the official or provisional national red lists of threatened fungi in several European countries, including Austria, Bulgaria, the Czech Republic, Denmark, France, Germany (Bavaria), Hungary, Italy, Poland, Slovakia, Spain, and Switzerland.
References
Fungi of Europe
Fungi described in 1838
Hygrophoraceae
Taxa named by Miles Joseph Berkeley
Fungus species | Porpolomopsis calyptriformis | Biology | 746 |
12,045,078 | https://en.wikipedia.org/wiki/Excision%20repair%20cross-complementing | Excision repair cross-complementing (ERCC) is a set of proteins which are involved in DNA repair.
In humans, ERCC proteins are transcribed from the following genes:
ERCC1, ERCC2, ERCC3, ERCC4, ERCC5, ERCC6, and ERCC8.
Members 1 through 5 are associated with xeroderma pigmentosum.
Members 6 and 8 are associated with Cockayne syndrome.
References
DNA repair | Excision repair cross-complementing | Chemistry,Biology | 97 |
66,742,200 | https://en.wikipedia.org/wiki/Tricholoma%20batschii | Tricholoma batschii is a species of fungus belonging to the family Tricholomataceae.
It is found in Europe.
References
batschii
Fungi described in 1969
Fungi of Europe
Fungus species | Tricholoma batschii | Biology | 43 |
68,010,706 | https://en.wikipedia.org/wiki/Set%20Decorators%20Society%20of%20America%20Awards | The Set Decorators Society of America (SDSA) Awards are awards honoring the best set decorators in film and television. The inaugural SDSA Film Awards were held on March 31, 2021, and nominations were announced March 11, 2021. The first SDSA Television Awards took place on July 30, 2021, and the nominations were unveiled on June 16, 2021.
Categories
Film
Best Achievement in Decor/Design of a Feature Film – Period
Best Achievement in Decor/Design of a Feature Film – Science Fiction or Fantasy
Best Achievement in Decor/Design of a Feature Film – Contemporary
Best Achievement in Decor/Design of a Feature Film – Musical or Comedy
Television
Best Achievement in Decor/Design of a One Hour Contemporary Series
Best Achievement in Decor/Design of a One Hour Fantasy or Science Fiction Series
Best Achievement in Decor/Design of a One Hour Period Series
Best Achievement in Decor/Design of a Television Movie or Limited Series
Best Achievement in Decor/Design of a Half-Hour Single-Camera Series
Best Achievement in Decor/Design of a Half-Hour Multi-Camera Series
Best Achievement in Decor/Design of a Short Format: Webseries, Music Video or Commercial
Best Achievement in Decor/Design of a Variety, Reality or Competition Series
Best Achievement in Decor/Design of a Variety Special
Best Achievement in Decor/Design of a Daytime Series
Ceremonies
2020
2021
2022
2023
2024
References
External links
Entertainment industry societies
Film organizations in the United States
Guilds in the United States
Scenic design
Set decorators | Set Decorators Society of America Awards | Engineering | 298 |
70,131,321 | https://en.wikipedia.org/wiki/Deposit%20gauge | A deposit gauge is a large, funnel-like scientific instrument used for capturing and measuring atmospheric particulates, notably soot, carried in air pollution and deposited back down to ground.
Design and construction
Deposit gauges are similar to rain gauges. They have a large circular funnel on top, made of stone so as not to be corroded by acid rain and mounted on a simple wooden or metal stand, which drains down into a collection bottle beneath. Typically the funnel has a wire-mesh screen around its perimeter to deter perching birds. Most are made to a standardized design, known as a standard deposit gauge, introduced in 1916 and formalized in a British Standard in 1951, which means the pollution collected in different places can be systematically studied and compared. The bottle is removed after a month and the contents taken away for analysis of water (such as rain, fog, and snow), insoluble matter (such as soot), and soluble matter.
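To show how a monthly collection converts into a deposition figure, here is a purely illustrative calculation — the collecting area and mass below are assumptions for the example, not measurements from the sources cited. A gauge with a collecting area of 0.05 m² that traps 0.4 g of insoluble matter in a month corresponds to
\[ \frac{0.4\ \mathrm{g}}{0.05\ \mathrm{m^2}} = 8\ \mathrm{g\,m^{-2}}\ \text{per month} \equiv 8\ \mathrm{tonnes\,km^{-2}}\ \text{per month}, \]
since 1 g m−2 is the same areal density as 1 tonne km−2.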
Early history
The first gauges of this type were developed in the early 20th century by W.J. Russell of St Bartholomew's Hospital and the Coal Smoke Abatement Society. Between 1910 and 1916, the design was refined and standardized by the Committee for the Investigation of Atmospheric Pollution, a group of expert, volunteer scientists studying air pollution of which Sir Napier Shaw, first director of the Met Office, was chair. The first scientific paper featuring deposit gauge measurements was titled "The Sootfall of London: Its Amount, Quality, and Effects" and published in The Lancet in January 1912. Thanks to the introduction of the deposit gauge, air quality in Britain was monitored systematically from 1914 onward and this played an important role in determining the effectiveness of efforts to control pollution. By 1927, some deposit gauges were already showing 50 percent reductions in "deposited matter", although air pollution remained a major problem.
Over the next few decades, deposit gauges were deployed in many British towns and cities, allowing rough comparisons to be made of pollution in different parts of the country. According to pollution historian Stephen Mosley, by 1949, some 177 gauges had been deployed across Britain, so creating the world's first large-scale pollution monitoring network, but the number increased dramatically after the Great London Smog of 1952, reaching 615 in 1954 and 1066 in 1966.
Modern use
Although deposit gauges were inaccurate and their limitations were well known from the start, their widespread introduction still represented a considerable advance in the study and comparison of pollution at different times of the year and in different places. In his book State, Science and the Skies: Governmentalities of the British Atmosphere, Mark Whitehead, a geography lecturer at Aberystwyth University, has described the deposit gauge as "perhaps the most important technological device in the history of Britain's air pollution monitoring". Even so, from the mid-20th century, it was gradually superseded by more accurate instruments and better methods of data collection and analysis.
Today, although air pollution is more likely to be measured with automated electronic sensors, deposit gauges are still occasionally used. Modern variants of the standard deposit gauge include the so-called "frisbee" gauge, in which the deposit collector is shaped like an inverted frisbee. Other variants include the directional deposit gauge, which has four tall, removable bottles to collect deposits arriving from different directions.
See also
Rain gauge
Air pollution measurement
References
Further reading
Air pollution
Atmospheric chemistry
Measuring instruments
Scientific instruments | Deposit gauge | Chemistry,Technology,Engineering | 699 |
869,797 | https://en.wikipedia.org/wiki/Neurospora%20crassa | Neurospora crassa is a type of red bread mold of the phylum Ascomycota. The genus name, meaning 'nerve spore' in Greek, refers to the characteristic striations on the spores. The first published account of this fungus was from an infestation of French bakeries in 1843.
Neurospora crassa is used as a model organism because it is easy to grow and has a haploid life cycle that makes genetic analysis simple since recessive traits will show up in the offspring. Analysis of genetic recombination is facilitated by the ordered arrangement of the products of meiosis in Neurospora ascospores. Its entire genome of seven chromosomes has been sequenced.
Neurospora was used by Edward Tatum and George Wells Beadle in their experiments for which they won the Nobel Prize in Physiology or Medicine in 1958. Beadle and Tatum exposed N. crassa to x-rays, causing mutations. They then observed failures in metabolic pathways caused by errors in specific enzymes. This led them to propose the "one gene, one enzyme" hypothesis that specific genes code for specific proteins. Their hypothesis was later elaborated to enzyme pathways by Norman Horowitz, also working on Neurospora. As Norman Horowitz reminisced in 2004, "These experiments founded the science of what Beadle and Tatum called 'biochemical genetics'. In actuality, they proved to be the opening gun in what became molecular genetics and all developments that have followed from that."
In the 24 April 2003 issue of Nature, the genome of N. crassa was reported as completely sequenced. The genome is about 43 megabases long and includes approximately 10,000 genes. There is a project underway to produce strains containing knockout mutants of every N. crassa gene.
In its natural environment, N. crassa lives mainly in tropical and sub-tropical regions. It can be found growing on dead plant matter after fires.
Neurospora is actively used in research around the world. It is important in the elucidation of molecular events involved in circadian rhythms, epigenetics and gene silencing, cell polarity, cell fusion, development, as well as many aspects of cell biology and biochemistry.
The sexual cycle
Sexual fruiting bodies (perithecia) can only be formed when two mycelia of different mating type come together (see Figure). Like other Ascomycetes, N. crassa has two mating types that, in this case, are symbolized by A and a. There is no evident morphological difference between the A and a mating type strains. Both can form abundant protoperithecia, the female reproductive structure (see Figure). Protoperithecia are formed most readily in the laboratory when growth occurs on solid (agar) synthetic medium with a relatively low source of nitrogen. Nitrogen starvation appears to be necessary for expression of genes involved in sexual development. The protoperithecium consists of an ascogonium, a coiled multicellular hypha that is enclosed in a knot-like aggregation of hyphae. A branched system of slender hyphae, called the trichogyne, extends from the tip of the ascogonium projecting beyond the sheathing hyphae into the air. The sexual cycle is initiated (i.e. fertilization occurs) when a cell (usually a conidium) of opposite mating type contacts a part of the trichogyne (see Figure). Such contact can be followed by cell fusion leading to one or more nuclei from the fertilizing cell migrating down the trichogyne into the ascogonium. Since both A and a strains have the same sexual structures, neither strain can be regarded as exclusively male or female. However, as a recipient, the protoperithecium of both the A and a strains can be thought of as the female structure, and the fertilizing conidium can be thought of as the male participant.
The subsequent steps following fusion of A and a haploid cells have been outlined by Fincham and Day and Wagner and Mitchell. After fusion of the cells, the further fusion of their nuclei is delayed. Instead, a nucleus from the fertilizing cell and a nucleus from the ascogonium become associated and begin to divide synchronously. The products of these nuclear divisions (still in pairs of unlike mating type, i.e. A/a) migrate into numerous ascogenous hyphae, which then begin to grow out of the ascogonium. Each of these ascogenous hyphae bends to form a hook (or crozier) at its tip and the A and a pair of haploid nuclei within the crozier divide synchronously. Next, septa form to divide the crozier into three cells. The central cell in the curve of the hook contains one A and one a nucleus (see Figure). This binuclear cell initiates ascus formation and is called an "ascus-initial" cell. Next the two uninucleate cells on either side of the first ascus-forming cell fuse with each other to form a binucleate cell that can grow to form a further crozier that can then form its own ascus-initial cell. This process can then be repeated multiple times.
After formation of the ascus-initial cell, the A and a nuclei fuse with each other to form a diploid nucleus (see Figure). This nucleus is the only diploid nucleus in the entire life cycle of N. crassa. The diploid nucleus has 14 chromosomes formed from the two fused haploid nuclei that had 7 chromosomes each. Formation of the diploid nucleus is immediately followed by meiosis. The two sequential divisions of meiosis lead to four haploid nuclei, two of the A mating type and two of the a mating type. One further mitotic division leads to four A and four a nuclei in each ascus. Meiosis is an essential part of the life cycle of all sexually reproducing organisms, and in its main features, meiosis in N. crassa seems typical of meiosis generally.
As the above events are occurring, the mycelial sheath that had enveloped the ascogonium develops as the wall of the perithecium, becomes impregnated with melanin, and blackens. The mature perithecium has a flask-shaped structure.
A mature perithecium may contain as many as 300 asci, each derived from identical fusion diploid nuclei. Ordinarily, in nature, when the perithecia mature the ascospores are ejected rather violently into the air. These ascospores are heat resistant and, in the lab, require heating at 60 °C for 30 minutes to induce germination. For normal strains, the entire sexual cycle takes 10 to 15 days. In a mature ascus containing eight ascospores, pairs of adjacent spores are identical in genetic constitution, since the last division is mitotic, and since the ascospores are contained in the ascus sac that holds them in a definite order determined by the direction of nuclear segregations during meiosis. Since the four primary products are also arranged in sequence, a first division segregation pattern of genetic markers can be distinguished from a second division segregation pattern.
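The distinction between first- and second-division segregation is what makes these ordered asci useful for mapping: a second-division pattern means a crossover has occurred between the gene and its centromere. By the standard relation of ordered-tetrad analysis (a textbook formula, not one stated in the source),
\[ d_{\text{gene--centromere}}\ (\text{map units}) \approx \frac{1}{2}\times\frac{\text{number of second-division asci}}{\text{total asci}}\times 100, \]
the factor of one half arising because only half of the chromatids in a second-division ascus are recombinant.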
Fine structure genetic analysis
Because of the above features N. crassa was found to be very useful for the study of genetic events occurring in individual meioses. Mature asci from a perithecium can be separated on a microscope slide and the spores experimentally manipulated. These studies usually involved the separate culture of individual ascospores resulting from a single meiotic event and determining the genotype of each spore. Studies of this type, carried out in several different laboratories, established the phenomenon of "gene conversion" (e.g. see references).
As an example of the gene conversion phenomenon, consider genetic crosses of two N. crassa mutant strains defective in gene pan-2. This gene is necessary for the synthesis of pantothenic acid (vitamin B5), and mutants defective in this gene can be experimentally identified by their requirement for pantothenic acid in their growth medium. The two pan-2 mutations B5 and B3 are located at different sites in the pan-2 gene, so that a cross of B5 × B3 yields wild-type recombinants at low frequency. An analysis of 939 asci in which the genotypes of all meiotic products (ascospores) could be determined found 11 asci with an exceptional segregation pattern. These included six asci in which there was one wild-type meiotic product but no expected reciprocal double-mutant (B5B3) product. Furthermore, in three asci the ratio of meiotic products was 1B5:3B3, rather than the expected 2:2 ratio. This study, as well as numerous additional studies in N. crassa and other fungi (reviewed by Whitehouse), led to an extensive characterization of gene conversion. It became clear from this work that gene conversion events arise when a molecular recombination event happens to occur near the genetic markers under study (e.g. pan-2 mutations in the above example). Thus studies of gene conversion allowed insight into the details of the molecular mechanism of recombination. Over the decades since the original observations of Mary Mitchell in 1955, a sequence of molecular models of recombination has been proposed based on both emerging genetic data from gene conversion studies and studies of the reaction capabilities of DNA. Current understanding of the molecular mechanism of recombination is discussed in the Wikipedia articles Gene conversion and Genetic recombination. An understanding of recombination is relevant to several fundamental biologic problems, such as the role of recombination and recombinational repair in cancer (see BRCA1) and the adaptive function of meiosis (see Meiosis).
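As a back-of-the-envelope check on the figures quoted above (an illustrative sketch, not part of the original study; the "other" count is simply the remainder of the 11 exceptional asci), the frequency of aberrant segregation near pan-2 can be tallied directly:
# Illustrative tally of the pan-2 cross described in the text.
total_asci = 939
aberrant = {
    "one wild-type product, no reciprocal double mutant": 6,
    "1B5:3B3 ratio among meiotic products": 3,
    "other exceptional patterns": 2,  # remainder of the 11 exceptional asci
}
exceptional = sum(aberrant.values())
print(f"exceptional asci: {exceptional}/{total_asci} "
      f"= {100 * exceptional / total_asci:.1f}% of scored meioses")
for kind, count in aberrant.items():
    print(f"  {kind}: {count}")
So roughly one meiosis in a hundred showed aberrant segregation at this locus, which is why large numbers of asci had to be dissected to characterize the phenomenon.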
Adaptive function of mating type
That mating in N. crassa can only occur between strains of different mating types suggests that some degree of outcrossing is favored by natural selection. In haploid multicellular fungi, such as N. crassa, meiosis occurring in the brief diploid stage is one of their most complex processes. Although physically much larger than the diploid stage, the haploid multicellular vegetative stage characteristically has a simple modular construction with little differentiation. In N. crassa, recessive mutations affecting the diploid stage of the life cycle are quite frequent in natural populations. These mutations, when homozygous in the diploid stage, often cause spores to have maturation defects or to produce barren fruiting bodies with few ascospores (sexual spores). Most of these homozygous mutations cause abnormal meiosis (e.g., disturbed chromosome pairing or pachytene or diplotene). The number of genes affecting the diploid stage was estimated to be at least 435 (about 4% of the total number of 9,730 genes). Thus, outcrossing, promoted by the necessity for the union of opposite mating types, likely provides the benefit of masking recessive mutations that would otherwise be harmful to sexual spore formation (see Complementation (genetics)).
Current research
Neurospora crassa is not only a model organism for studying the phenotypes of knock-out variants, but is also widely used in computational biology and in research on the circadian clock. It has a natural circadian cycle of approximately 22 hours that is influenced by external factors such as light and temperature. Knock-out variants of wild-type N. crassa are widely studied to determine the influence of particular genes (see Frequency (gene)).
See also
Notes and references
References
External links
Neurospora crassa genome
Montenegro-Montero A. (2010) "The Almighty Fungi: The Revolutionary Neurospora crassa". A historical view of the many contributions of this organism to molecular biology.
Sordariales
Fungal models
Fungus genetics
Fungi in cultivation
Meat substitutes
Fungus species | Neurospora crassa | Biology | 2,529 |
18,752,982 | https://en.wikipedia.org/wiki/Hammond%20Clock%20Company | The Hammond Clock Company of Chicago (Illinois) produced electric clocks between 1928 and 1941. It was one of the ventures of Laurens Hammond, the inventor of the famous Hammond organ.
Invention of the Hammond clock motor
As Stuyvesant Barry reports in his biography of Laurens Hammond, Hammond himself acknowledged that his invention of the clock that was to bear his name was inspired by the success of Henry Warren's Telechron clocks. Upon discovering the Telechron technology, Hammond designed a motor that was synchronous, like Warren's, that is to say, it rotated at a speed that was tied to the frequency of the current supplied by the power grid. In this way, any clock operated by such a motor would run with great precision as long as the operators of the power grid kept the current's frequency constant. This had become possible since the introduction of the Warren master clock, an innovation of which Hammond took full advantage with his own invention. Hammond's motor, however, differed from Warren's in a number of respects: above all, it ran more slowly and was not self-starting. (Warren had patented his self-starting technology.) The latter Hammond did not consider to be a disadvantage; he believed that people would be misled by their clocks if they restarted automatically after a power outage. As Hammond's new clock motor was not self-starting, his clocks possessed a characteristic little knob on the back that one had to spin to start the motor.
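Because a synchronous motor advances in lockstep with the alternating current, the time such a clock displays is simply the number of grid cycles counted since it was started, divided by the nominal frequency. The short Python sketch below is illustrative only; the 60 Hz figure and the example deviation are assumptions for the example, not Hammond specifications.
# A synchronous clock effectively counts grid cycles and divides by the
# nominal frequency; deviations in grid frequency become clock error.
NOMINAL_HZ = 60.0  # assumed North American grid frequency

def clock_error_seconds(actual_hz, hours):
    """Seconds fast (+) or slow (-) after `hours` at a constant actual frequency."""
    elapsed = hours * 3600.0
    cycles_counted = actual_hz * elapsed
    displayed = cycles_counted / NOMINAL_HZ
    return displayed - elapsed

# Running 0.05 Hz fast for a day makes the clock gain about 72 seconds,
# which is why utilities regulated frequency (Warren's master clock) so that
# the long-term average stayed exact.
print(round(clock_error_seconds(60.05, 24), 1))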
The company
The Hammond Clock Company was founded in 1928 to produce and market clocks that were equipped with Hammond's new motor. The Hammond clock factory manufactured more than 100 different clock models, some simple and cheap, others made from expensive materials such as marble and onyx. Hammond employed well-paid toolmakers who created sophisticated tools to stamp out the various components of his clocks, which could then be assembled in a belt operation by unskilled laborers. In addition, Hammond licensed his invention to other clock makers such as Waterbury, Sessions, and Ingraham.
In 1932, the economic troubles of the Great Depression threatened the clock-making industry; about 150 clock companies went out of business. To make matters worse, Hammond's licensees discovered that Hammond's patent on his motor was invalid, due to an earlier German invention of the same technology. In this situation, Hammond attempted to save his factory by starting the production of an electric bridge table. This proved to be nothing but a fleeting success. Hammond did finally manage to save his company in 1931 with a $75,000.00 contract from the Postal Telegraph Company, for putting their company name on large electric wall clocks. These clocks were to replace old key-wind clocks in railroad stations. What further saved the company was his invention of the Hammond organ. His first model, the Model A console organ, was released in 1935, the year in which his company was renamed "The Hammond Organ Company" to reflect the new emphasis. The production of clocks was discontinued entirely in 1941.
Notes
Further reading
There is less literature on the Hammond clocks than on the Telechrons. Apart from some websites, such as the ones referred to in the notes, one may consult Spin to Start, the newsletter of the Synchronous Society, which was devoted to the collection of Hammond clocks. Only two issues have appeared, however: vol. 1, no. 1 (October 1996) and vol. 1, no. 2 (February 1998).
External links
American companies established in 1928
Electronics industry
Clock brands
Manufacturing companies established in 1928 | Hammond Clock Company | Technology | 721 |
37,135,857 | https://en.wikipedia.org/wiki/C20H28N4O2 | {{DISPLAYTITLE:C20H28N4O2}}
The molecular formula C20H28N4O2 may refer to:
AB-CHMINACA, an indazole-based synthetic cannabinoid
Rolofylline, an experimental diuretic which acts as a selective adenosine A1 receptor antagonist
Molecular formulas | C20H28N4O2 | Physics,Chemistry | 74 |
31,906,445 | https://en.wikipedia.org/wiki/Siah%20interacting%20protein%20N-terminal%20domain | In molecular biology the protein domain, Siah interacting protein N-terminal domain is found at the N-terminal of the protein, Siah interacting protein (SIP). It has a helical hairpin structure with a hydrophobic core which is further stabilised by an arrangement of side chains contributed by the two amphipathic helices. The function of this domain remains to be fully elucidated, but it is known to be vital for interactions with Siah. It has also been hypothesised that SIP can dimerise through this N-terminal domain.
Function
SIP protein
The SIP protein acts as an adaptor protein: it links the E3 ubiquitin ligase activity of Siah-1 with Skp1 and the Ebi F-box protein in the degradation of beta-catenin, a transcriptional activator of TCF/LEF genes. This linkage is important for signalling that the protein needs to undergo proteolysis at the 26S proteasome.
N-terminal domain of SIP protein
More specifically, the N-terminal domain of the SIP protein is a dimerisation domain. Its precise function is yet to be elucidated. More recent studies have shown that when the N-terminal domain is shortened, or in other words truncated by a nonsense mutation, it results in an increase in the import of SIP into the nucleus and enhances its proapoptotic effect (programmed cell death). These findings indicate that the SIP protein and in particular the N-terminal domain may provide important information about drug resistance.
Structure
The N terminal domain is a dimer. Each monomer has an alpha helical hairpin in which the two alpha helices are connected by a tight 3-residue turn. Hairpins from two monomers associate as a four-helix bundle.
References
Protein domains | Siah interacting protein N-terminal domain | Biology | 376 |
37,441,088 | https://en.wikipedia.org/wiki/Field%20Information%20Agency%2C%20Technical | The Field Information Agency, Technical (FIAT) was a US Army agency for securing the "major, and perhaps only, material reward of victory, namely, the advancement of science and the improvement of production and standards of living in the United Nations, by proper exploitation of German methods in these fields"; FIAT ended in 1947, when Operation Paperclip began functioning.
The United States organization was designated FIAT (Field Investigation Agency-Technical). FIAT continued TIIC's work, combing the mountains of papers found in factories and hideaways, to ferret out the scientific secrets, the mechanical devices, and the special techniques developed by the Germans in the decades following World War I. Important papers, documented wherever possible by observations, drawings and photographs, were flown back to Washington."
Organization
Early in 1945, foreseeing a vastly increased military and civilian interest after hostilities ended in Germany, Secretary of War Stimson had sent his scientific consultant, E. L. Bowles, to Europe to help set up a single high-level scientific and technological intelligence organization. Later, in April, among his other assignments, General Clay had acquired the job of working with Bowles in carrying out the mission from the Secretary of War. Since the new organization would have to be combined for as long as SHAEF existed, Clay had selected as its chief Brig. R. J. Maunsell (British), who was already chief of the Special Sections Subdivision, and as the deputy chief Col. Ralph M. Osborne (US). Clay also gave the organization a name, Field Information Agency, to which Maunsell added the word "Technical" to make a pronounceable acronym, FIAT. FIAT was from the first conceived as a post-hostilities agency. It would inherit from the Special Sections Subdivision a military mission and, in the search for information to use against Japan, also a wartime mission; but in the long run it would be oriented at least equally toward civilian interests. Chief among its interests would be "the securing of the major, and perhaps only, material reward of victory, namely, the advancement of science and the improvement of production and standards of living in the United Nations by proper exploitation of German methods in these fields." FIAT's scope was therewith extended to take in scientific and industrial processes and patents having civilian as well as military applications. Although Clay, Bowles, and Maunsell envisioned FIAT as having exclusive "control and actual handling of operations concerning enemy personnel, documents, and equipment of scientific and industrial interest," they discovered before long that to set up an agency with such sweeping authority in the bureaucratic thickets of SHAEF was not possible. Direct control of operations was already in the hands of various long-established SHAEF elements and would remain there-except for Operation DUSTBIN, which came under FIAT along with its parent agency, the Special Sections Subdivision, on 1 July, and the 6800 T Force, which by the time it passed to FIAT (on 1 August) had practically finished assessing its assigned and uncovered targets. The one new T Force operation in the FIAT period was conducted in Berlin in July and August. In its charter, issued at the end of May, FIAT was authorized to "coordinate, integrate, and direct the activities of the various missions and agencies" interested in scientific and technical intelligence but prohibited from collecting and exploiting such information on its own responsibility.
Never the high-powered intelligence unit Stimson had wanted and, after SHAEF was dissolved, an orphan shared administratively by the US Group Control Council and USFET without being adopted by either, FIAT eventually came by its distinctive role in the occupation almost inadvertently. In the summer of 1945, from its office in Frankfurt and branches in Paris, London, and Berlin, it provided accreditation, support, and services to civilian investigators from the Technical Industrial Intelligence Committee (Foreign Economic Administration) then arriving in Europe in large numbers to comb German plants and laboratories for information on everything from plastics to shipbuilding and building materials to chemicals. As military units that had been engaged in gathering technical intelligence were redeployed beginning in the late summer, FIAT frequently also became the custodian of the documents and equipment they had collected.
Meanwhile, in June, President Truman had established the Publications Board under the Director of War Mobilization and Reconversion and instructed it to review all scientific and technical information developed with government funds during the war with a view toward declassifying and publishing it. In August, after V-J Day, the President also ordered "prompt, public and general dissemination" of scientific and industrial information obtained from the enemy and assigned this responsibility as well to the Publication Board. At first informally and later, in December, by War Department order, FIAT acquired the responsibility for the Publication Board program in Germany and a mission, which was the same one in fact that had been foreseen for it in June, namely, to exploit Germany's scientific and industrial secrets for the benefit of the world. As the military intelligence projects were completed and phased out in late 1945 and early 1946, the volume of civilian investigations increased; FIAT microfilming teams ranged across Germany, and the Frankfurt office screened, edited, and translated reports before shipping them to the United States. By the end of the first year of the occupation, FIAT had processed over 23,000 reports, shipped 108 items of equipment (whole plants sometimes were counted as single items), and collected 53 tons of documents.
The earliest Joint Intelligence Objectives search teams were followed by others, which were to dig out industrial and scientific secrets in particular. The Technical Industrial Intelligence Committee was one group of these, composed of three hundred and eighty civilians representing seventeen American industries. Later came the teams of the Office of the Publication Board itself and many new groups direct from private industry. Of the latter—called, in Germany, Field Intelligence Agencies, Technical (FIAT) – there have been over five hundred; of one to ten members each, operating by invitation and under the aegis of the OPB.
Today the search still goes on. The Office of Technical Services has a European staff of four to five hundred. At Hoechst, it has one hundred abstractors who struggle feverishly to keep ahead of the forty OTS document-recording cameras which route to them each month over one hundred thousand feet of microfilm.
Administratively, the story started in 1944, when the Combined Intelligence Objectives Subcommittee (CIOS) was organized in London by authority of the British and American chiefs of staff. The American membership of CIOS was represented by the War, Navy and State Departments, Army Air Forces, Foreign Economic Administration, Office of Strategic Services, and Office of Scientific Research and Development. It was CIOS who organized the first teams of experts and started them toward predetermined objectives on the European continent. To make the most of the fact-finding thrusts, the Technical Industrial Intelligence Committee (TIIC) was organized in Washington. TIIC was under the joint chiefs of staff, and its job was to help government agencies get needed information as it was collected from the liberated countries. CIOS folded up last July, along with SHAEF, and its functions were largely taken over by Field Information Agency, Technical (FIAT). TIIC, however, was still on the job, getting information from its own investigations, from FIAT, and from a scattering of other sources. The TIIC was headed by Howland H. Sargeant, of the US Alien Properties Custodian.
Results
The main result of FIAT's work was to make public many of the technical and scientific advances made by the German and Axis governments during World War II. Some of the results showed that publications in Germany were hampered not by lack of talent, but by lack of paper for printing. Max von Laue wrote in April 1948 that the FIAT Reviews indicated what German scientists were doing during the war. He admitted that scientific journals lapsed, one after another, toward the end of the war. But this was not from a lack of articles to be published, he asserted. The lapse of publication in science was due mostly to lack of paper, bomb damage to printing houses, and other economic strictures. For example, he cites the Zeitschrift für Physik, which had sixty articles waiting for publication at the end of the war and could have accepted 86 more.
The scientists at the Kaiser Wilhelm Institutes were world leaders in plant breeding efforts, and sent some of their colleagues out with the army to "acquire" specimens. These scientists were part of the SS Ahnenerbe, which included specialists in race, biologists, physicians, historians, botanists, zoologists, geneticists and plant breeders. Like the ERR, these units had an armed "Sammelkommando" or "Collecting Commando", to secure what was needed in the occupied countries. Near the end of the war, Hitler ordered that this collection be destroyed, but the order was disobeyed. Although Heinz Brücher cooperated with the Allies at the end of the war, and even wrote some articles for the US Army Field Information Assistance Technical Unit (FIAT), the seed collections remained hidden away until he could retrieve them in 1947, and take them first to Sweden, then to South America.
Many of the German research organizations were temporarily suspended during the Allied occupation of the country. And part of their duties were to inventory the loss of research materials, including books, during the war. A report made by the Field Information Agency, Technical (FIAT) in August 1945 noted that many of the library books were removed to the basements of the University of Heidelberg buildings. These buildings were at that time used for billets for the American troops, and the need to replace the books was evident.
"...A subsequent ACC Law, No. 29, ordered that "any of the four powers in occupation of Germany... may request in writing an authenticated copy of any book, paper, statement, account, writing or other document from the files of any German industrial, business or commercial enterprise." This law enabled the Allies to confiscate virtually every document, regardless of whether it belonged to the state or a private individual. The Americans in particular took advantage of Law No. 29. Branches of the Field Information Agency, Technical, USA (FIAT), processed over 29,000 reports, confiscated 55 tons of documents, and made over 3,400 trips within Germany to investigate the so-called "targets" of interest through June 30, 1946. Considering the transatlantic transportation problems they were already experiencing, the Americans preferred to seize "lightweight" goods, such as documents, as compensation for their participation in World War II. On the other hand, the Soviet Military Administration looked, as previously shown, for machinery and equipment. It is obvious that, at the beginning, they underestimated the value of patents, trademarks and blueprints. They even left behind the library of the Reichspatentamt in Berlin, which the Americans confiscated then when they arrived in mid-1945."
The desired result was to allow Allied businesses and industry to take advantage of Axis research before and during the war, when the isolation of German science had denied the rest of the world the benefits of their research. These published results also helped scientists and researchers in the former Axis countries, who also had been denied access to German research. In an announcement in 1947, the Petroleum Times announces that the CIOS, BIOS and FIAT reports were available for purchase from HM Stationery Office. They also announced that there would be a display of the reports at a number of cities around the country. Also, the BIOS "...has access to a considerable number of site reports on German factories and research establishments, original German documents and miscellaneous items of information which, by their nature, are not suitable for reproduction and publication..."
References
Publications
Field Information Agency, Technical. "FIAT Review of German Science, Allied Edition." A series of reports and comprehensive library on the status of German science in various disciplines published after the war, from the time when Nazi control of information cut off the flow of scientific publications. Copies of the FIAT Review are still available from NTIS.
Bartels, Julius (1899–1964), [and others], 1948, Geophysics. [n.p.] Office of the Military Government for Germany, Field Information Agencies Technical, British, French, U.S., 1948. 2 volumes, illustrations, maps (part fold.); 23 cm. Series: FIAT review of German science, 1939–1946. Notes: Text in German. Includes bibliographies. OCLC: 04286518.
Eppler, W. F., 1947, Bearing jewels of hardened synthetic spinel. [n.p.] Field Information Agency, Technical [1947] 40 pages including 9 tables; 27 cm. Notes: At head of title: Office of Military Government for Germany (US) ... OCLC: 12367985.
Josephson, G. W., 1946, Kyanite and synthetic sillimanite in Germany. [S.l.]: Office of Military Government for Germany (US), Field Information Agency, Technical, 1946. 13 pages; 27 cm. Series: FIAT final report; no. 803. Notes: "26. April, 1946." Added Entry: Germany (Territory under Allied occupation, 1945–1955 : U.S. Zone). Office of Military Government. Field Information Agency, Technical. OCLC: 27926539.
Merker, Leon, 1947, The synthetic stone industry of Germany. [n.p.] Field Information Agency, Technical [1947] 24 pages; 27 cm. Notes: At head of title: Office of military government for Germany (US). OCLC: 12364098.
Mügge, Ratje (1896– ), et al., 1948, Meteorology and physics of the atmosphere. [n.p.] Off. of Military Govt. for Germany, Field Information Agencies Technical, British, French, U.S., 1948. 291 pages, illustrations; 22 cm. Series: FIAT review of German science, 1939–1946. Notes: Text in German. Includes bibliographies. OCLC: 13374025.
Rüger, Ludwig (1896– ), et al., 1948, Geology and palaeontology. [n.p.] Office of Military Govt. for Germany, Field Information Agencies Technical, British, French, U.S., 1948. 246 pages, folded map; 23 cm. Series: FIAT review of German science, 1939–1946. Notes: Text in German. Includes Bibliographies. OCLC: 13234581.
Scheumann, Karl Hermann (1881– ), 1948, Petrography. [S.l.]: Office of Military Government for Germany, Field Information Agencies Technical, British, French, U.S., 1948. 2 volumes; 22 cm. Series: FIAT review of German science, 1939–1946. Notes: Text in German. Includes bibliographies. Part 1- Minerals. Part 2- Minerals and ores. OCLC: 01814899.
Steinmetz, Hermann (1879– ), Berek, M., [et al.], 1948, Mineralogy. [S.l.]: Office of the Military Government for Germany, Field Information Agencies Technical, British, French, U.S., 1948. 304 pages, tables; 23 cm. Series: FIAT review of German science, 1939–1946. Notes: Text in German. "Mineralogische Lehrbücher": pages [291]-294. Bibliographical footnotes. OCLC: 4339330.
Wissmann, Hermann von (1895– ), J. Blüthgen [and others], 1948, Geography. [n.p.] Office of Military Government for Germany, Field Information Agencies Technical, British French, U.S., 1948. 4 volumes in 1. Series: FIAT review of German science, 1939–1946. Notes: Text in German. Includes bibliographies. OCLC: 03375791.
Field Information Agency, Technical (FIAT). 1945. "German Universities and Technical High schools." 21 August 1945. Issued by the UD Group Control Council (Germany), office of the Director of Intelligence, FIAT. Air Force Historical Research Agency (AFHRA), Maxwell Air Force Base, AL. IRIS #115705.
Bibliography
"German Scientific Work in 1939–1945." The Military Engineer. March–April 1949. Page 139. Descriptions of the FIAT Reviews of German science.
Mumford, Russell W., McAllister, Malcolm H., Smith, Joseph P., Into, A. Norman and Gloss, Gunter H. Office of Military Government for Germany (US). "The Mining and Refining of Potash in the American and British Zones of Germany." FIAT Final Report #1045. Field Information Agency, Technical (FIAT), Technical Industrial Intelligence Division, US Department of Commerce. March 5, 1947.
Walker, C. Lester, October 1946, "German War Secrets by the Thousands: Secrets By The Thousands ." Harper's Magazine. Pages 329–336.
German-American history
Aftermath of World War II in the United States
Science and technology during World War II
United States intelligence operations
Code names
Science in Nazi Germany | Field Information Agency, Technical | Technology | 3,562 |
67,046,481 | https://en.wikipedia.org/wiki/Ministry%20of%20Oil%20and%20Mineral%20Resources%20%28Syria%29 | The Ministry of Oil and Mineral Resources is a department of the Cabinet of Syria. It is led by the Minister of Oil.
History
In 2020, United States sanctions were placed on the ministry.
Responsibilities
The functions and competencies of the Ministry of Petroleum and Mineral Resources were defined under Legislative Decree No. 121 of 1970 and Act No. 45 of 30 June 2001, so that its terms of reference are the following:
Supervision of the institutions and enterprises of the ministry.
Supervision of prospecting, production, and investment, and the efficient management of oil and mineral resources.
Setting policy for all aspects of activity relating to oil, gas, and mineral resources.
Supervision of the implementation of development projects and activities related to both oil and mineral resources.
Adoption of the development plans of the ministry's institutions and companies, and follow-up of their implementation.
Preparation of the studies and plans required for the development and modernization of the ministry, with a view to keeping pace with developments in the oil industry and mineral resources worldwide.
Coordination among the institutions and enterprises of the ministry, and working to resolve all differences.
Working to secure funding for its production and investment projects in cooperation with the concerned authorities.
Institutions and companies affiliated with the Ministry
General Oil Corporation
Syrian Petroleum Company
Syrian Gas Company
Syrian Company for Oil Transport
Al Furat Oil Company
Deir Ezzor Oil Company
Kawkab Oil Company
Hayyan Oil Company
Ebla Oil Company
Degla Oil Company
Al-Rasheed Oil Company
General Corporation for Oil Refining and Distribution of Petroleum Derivatives
State Company for Homs Refinery
State Company for Baniyas Refinery
General Corporation of Geology and Mineral Resources
The General Company for Phosphates and Mines
National Seismic Center
Institute of Petroleum and Mineral Professions
Ministers for Oil and Mineral Resources
See also
Petroleum industry in Syria
Ministry of Petroleum
References
Energy in Syria
Ministries established in 1966
1966 establishments in Syria
Energy ministries
Mining ministries
Oil | Ministry of Oil and Mineral Resources (Syria) | Engineering | 376 |
12,467,386 | https://en.wikipedia.org/wiki/Ticket%20Granting%20Ticket | In some computer security systems, a Ticket Granting Ticket or Ticket to Get Tickets (TGT) is a small, encrypted identification file with a limited validity period. After authentication, this file is granted to a user for data traffic protection by the key distribution center (KDC) subsystem of authentication services such as Kerberos. The TGT file contains the session key, its expiration date, and the user's IP address, which protects the user from man-in-the-middle attacks.
The TGT is used to obtain a service ticket from the Ticket Granting Service (TGS). The user is granted access to network services only after this service ticket is provided.
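The two exchanges can be summarized in code. The Python sketch below is a greatly simplified, illustrative outline of the Kerberos-style flow; all class, function, and field names are hypothetical, and real Kerberos messages carry additional fields such as nonces, flags, realms, and checksums.
# Illustrative outline of the TGT / service-ticket flow described above.
from dataclasses import dataclass
import time

@dataclass
class Ticket:
    client: str
    service: str          # "krbtgt" for a TGT, otherwise the target service
    session_key: bytes
    client_address: str
    expires_at: float

def authentication_service(kdc, user, address):
    """AS exchange: after the user proves knowledge of their long-term key,
    the KDC issues a TGT (encrypted under the TGS key) plus a session key."""
    tgt = Ticket(user, "krbtgt", kdc.new_session_key(), address,
                 time.time() + 8 * 3600)
    return kdc.encrypt_for_tgs(tgt), tgt.session_key

def ticket_granting_service(kdc, encrypted_tgt, authenticator, service):
    """TGS exchange: the KDC decrypts and validates the TGT, checks the
    authenticator against the session key and client address, then issues
    a service ticket that the client presents to the target service."""
    tgt = kdc.decrypt_from_tgs(encrypted_tgt)
    if time.time() > tgt.expires_at:
        raise PermissionError("TGT expired; the client must re-authenticate")
    kdc.verify_authenticator(authenticator, tgt.session_key, tgt.client_address)
    service_ticket = Ticket(tgt.client, service, kdc.new_session_key(),
                            tgt.client_address, time.time() + 3600)
    return kdc.encrypt_for_service(service_ticket, service)
Checking the client address recorded in the ticket is what provides the limited protection against man-in-the-middle reuse mentioned above.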
Key management
Computer access control protocols
Authentication protocols
Key transport protocols
Computer network security | Ticket Granting Ticket | Engineering | 157 |
319,341 | https://en.wikipedia.org/wiki/Guidance%20system | A guidance system is a virtual or physical device, or a group of devices, that controls the movement of a ship, aircraft, missile, rocket, satellite, or any other moving object. Guidance is the process of calculating the changes in position, velocity, altitude, and/or rotation rates of a moving object required to follow a certain trajectory and/or altitude profile based on information about the object's state of motion.
A guidance system is usually part of a Guidance, navigation and control system, whereas navigation refers to the systems necessary to calculate the current position and orientation based on sensor data like those from compasses, GPS receivers, Loran-C, star trackers, inertial measurement units, altimeters, etc. The output of the navigation system, the navigation solution, is an input for the guidance system, among others like the environmental conditions (wind, water, temperature, etc.) and the vehicle's characteristics (i.e. mass, control system availability, control systems correlation to vector change, etc.). In general, the guidance system computes the instructions for the control system, which comprises the object's actuators (e.g., thrusters, reaction wheels, body flaps, etc.), which are able to manipulate the path and orientation of the object without direct or continuous human control.
One of the earliest examples of a true guidance system is that used in the German V-1 during World War II. The navigation system consisted of a simple gyroscope, an airspeed sensor, and an altimeter. The guidance instructions were target altitude, target velocity, cruise time, and engine cut off time.
A guidance system has three major sub-sections: Inputs, Processing, and Outputs. The input section includes sensors, course data, radio and satellite links, and other information sources. The processing section, composed of one or more CPUs, integrates this data and determines what actions, if any, are necessary to maintain or achieve a proper heading. This is then fed to the outputs which can directly affect the system's course. The outputs may control speed by interacting with devices such as turbines, and fuel pumps, or they may more directly alter course by actuating ailerons, rudders, or other devices.
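In code, the input → processing → output structure reduces to a control loop that repeatedly compares the navigation solution against the desired course and issues corrections. The Python sketch below is an illustrative, greatly simplified heading-hold step; the interfaces, gain, and actuator limits are assumptions for the example, not a description of any particular system.
# Minimal illustrative guidance step: read sensors, compute a correction,
# command the actuators.
def guidance_step(nav, desired_heading_deg, actuators, gain=0.8):
    # Inputs: the navigation solution (current heading from the sensors).
    current = nav.heading_deg()
    # Processing: smallest signed heading error, wrapped into [-180, 180).
    error = (desired_heading_deg - current + 180.0) % 360.0 - 180.0
    # Outputs: a rudder/aileron command proportional to the error,
    # clamped to the actuator's assumed mechanical limits.
    command = max(-30.0, min(30.0, gain * error))
    actuators.set_rudder_deg(command)
    return error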
History
Inertial guidance systems were originally developed for rockets. American rocket pioneer Robert Goddard experimented with rudimentary gyroscopic systems. Dr. Goddard's systems were of great interest to contemporary German pioneers including Wernher von Braun. The systems entered more widespread use with the advent of spacecraft, guided missiles, and commercial airliners.
US guidance history centers around 2 distinct communities. One driven out of Caltech and NASA Jet Propulsion Laboratory, the other from the German scientists that developed the early V2 rocket guidance and MIT. The GN&C system for V2 provided many innovations and was the most sophisticated military weapon in 1942 using self-contained closed loop guidance. Early V2s leveraged 2 gyroscopes and lateral accelerometer with a simple analog computer to adjust the azimuth for the rocket in flight. Analog computer signals were used to drive 4 external rudders on the tail fins for flight control. Von Braun engineered the surrender of 500 of his top rocket scientists, along with plans and test vehicles, to the Americans. They arrived in Fort Bliss, Texas in 1945 and were subsequently moved to Huntsville, Alabama, in 1950 (aka Redstone arsenal). Von Braun's passion was interplanetary space flight. However his tremendous leadership skills and experience with the V-2 program made him invaluable to the US military. In 1955 the Redstone team was selected to put America's first satellite into orbit putting this group at the center of both military and commercial space.
The Jet Propulsion Laboratory traces its history from the 1930s, when Caltech professor Theodore von Karman conducted pioneering work in rocket propulsion. Funded by Army Ordnance in 1942, JPL's early efforts would eventually involve technologies beyond those of aerodynamics and propellant chemistry. The result of the Army Ordnance effort was JPL's answer to the German V-2 missile, named MGM-5 Corporal, first launched in May 1947. On December 3, 1958, two months after the National Aeronautics and Space Administration (NASA) was created by Congress, JPL was transferred from Army jurisdiction to that of this new civilian space agency. This shift was due to the creation of a military focused group derived from the German V2 team. Hence, beginning in 1958, NASA JPL and the Caltech crew became focused primarily on unmanned flight and shifted away from military applications with a few exceptions. The community surrounding JPL drove tremendous innovation in telecommunication, interplanetary exploration and earth monitoring (among other areas).
In the early 1950s, the US government wanted to insulate itself against over dependency on the German team for military applications. Among the areas that were domestically "developed" was missile guidance. In the early 1950s the MIT Instrumentation Laboratory (later to become the Charles Stark Draper Laboratory, Inc.) was chosen by the Air Force Western Development Division to provide a self-contained guidance system backup to Convair in San Diego for the new Atlas intercontinental ballistic missile. The technical monitor for the MIT task was a young engineer named Jim Fletcher who later served as the NASA Administrator. The Atlas guidance system was to be a combination of an on-board autonomous system, and a ground-based tracking and command system. This was the beginning of a philosophic controversy, which, in some areas, remains unresolved. The self-contained system finally prevailed in ballistic missile applications for obvious reasons. In space exploration, a mixture of the two remains.
In the summer of 1952, Dr. Richard Battin and Dr. J. Halcombe ("Hal") Laning Jr., researched computational based solutions to guidance as computing began to step out of the analog approach. As computers of that time were very slow (and missiles very fast) it was extremely important to develop programs that were very efficient. Dr. J. Halcombe Laning, with the help of Phil Hankins and Charlie Werner, initiated work on MAC, an algebraic programming language for the IBM 650, which was completed by early spring of 1958. MAC became the work-horse of the MIT lab. MAC is an extremely readable language having a three-line format, vector-matrix notations and mnemonic and indexed subscripts. Today's Space Shuttle (STS) language called HAL, (developed by Intermetrics, Inc.) is a direct offshoot of MAC. Since the principal architect of HAL was Jim Miller, who co-authored with Hal Laning a report on the MAC system, it is a reasonable speculation that the space shuttle language is named for Jim's old mentor, and not, as some have suggested, for the electronic superstar of the Arthur Clarke movie "2001-A Space Odyssey." (Richard Battin, AIAA 82–4075, April 1982)
Hal Laning and Richard Battin undertook the initial analytical work on the Atlas inertial guidance in 1954. Other key figures at Convair were Charlie Bossart, the Chief Engineer, and Walter Schweidetzky, head of the guidance group. Walter had worked with Wernher von Braun at Peenemuende during World War II.
The initial "Delta" guidance system assessed the difference in position from a reference trajectory. A velocity to be gained (VGO) calculation is made to correct the current trajectory with the objective of driving VGO to zero. The mathematics of this approach were fundamentally valid, but the approach was dropped because of the challenges in accurate inertial navigation (e.g. IMU accuracy) and analog computing power. The challenges faced by the "Delta" efforts were overcome by the "Q system" of guidance. The "Q" system's revolution was to bind the challenges of missile guidance (and associated equations of motion) in the matrix Q. The Q matrix represents the partial derivatives of the velocity with respect to the position vector. A key feature of this approach allowed for the components of the vector cross product (v × dv/dt) to be used as the basic autopilot rate signals, a technique that became known as "cross-product steering." The Q-system was presented at the first Technical Symposium on Ballistic Missiles held at the Ramo-Wooldridge Corporation in Los Angeles on June 21 and 22, 1956. The "Q System" was classified information through the 1960s. Derivations of this guidance are used for today's military missiles. The CSDL team remains a leader in military guidance and is involved in projects for most divisions of the US military.
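The two ideas can be written down compactly: in the VGO formulation the vehicle steers so as to drive the velocity-to-be-gained, v_go = v_required − v_current, to zero, and in cross-product steering the autopilot rate signal is formed from the cross product of v_go with its time derivative, which vanishes when thrust is shrinking v_go along its own direction. The Python sketch below is illustrative only; the example vectors are made up for demonstration and do not come from the source.
import numpy as np

def velocity_to_be_gained(v_required, v_current):
    """VGO: the velocity still to be gained; guidance tries to drive this to zero."""
    return np.asarray(v_required, dtype=float) - np.asarray(v_current, dtype=float)

def cross_product_steering(v_go, v_go_rate):
    """Cross-product steering signal v_go x (dv_go/dt); a zero vector means the
    thrust direction is already collapsing v_go along itself."""
    return np.cross(v_go, v_go_rate)

# Made-up example vectors (m/s and m/s^2), purely for illustration.
v_go = velocity_to_be_gained([7800.0, 0.0, 0.0], [7400.0, 50.0, 0.0])
steer = cross_product_steering(v_go, [-20.0, -2.0, 0.0])
print(v_go, steer)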
On August 10 of 1961 NASA awarded MIT a contract for preliminary design study of a guidance and navigation system for Apollo program. (see Apollo on-board guidance, navigation, and control system, Dave Hoag, International Space Hall of Fame Dedication Conference in Alamogordo, N.M., October 1976 ). Today's space shuttle guidance is named PEG4 (Powered Explicit Guidance). It takes into account both the Q system and the predictor-corrector attributes of the original "Delta" System (PEG Guidance). Although many updates to the shuttles navigation system have taken place over the last 30 years (ex. GPS in the OI-22 build), the guidance core of today's Shuttle GN&C system has evolved little. Within a manned system, there is a human interface needed for the guidance system. As Astronauts are the customer for the system, many new teams are formed that touch GN&C as it is a primary interface to "fly" the vehicle. For the Apollo and STS (Shuttle system) CSDL "designed" the guidance, McDonnell Douglas wrote the requirements and IBM programmed the requirements.
Much system complexity within manned systems is driven by "redundancy management" and the support of multiple "abort" scenarios that provide for crew safety. Manned US Lunar and Interplanetary guidance systems leverage many of the same guidance innovations (described above) developed in the 1950s. So while the core mathematical construct of guidance has remained fairly constant, the facilities surrounding GN&C continue to evolve to support new vehicles, new missions and new hardware. The center of excellence for the manned guidance remains at MIT (CSDL) as well as the former McDonnell Douglas Space Systems (in Houston).
See also
Automotive navigation system
Autopilot
Guide rail
List of missiles
Robotic navigation
Precision-guided munition
Guided bomb
Missile
Missile guidance
Terminal guidance
Proximity sensor
Artillery fuze
Magnetic proximity fuze
Proximity fuze
References
Further reading
An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition (AIAA Education Series) Richard Battin, May 1991
Space Guidance Evolution-A Personal Narrative, Richard Battin, AIAA 82–4075, April 1982
Military technology
Uncrewed vehicles
Applications of control engineering
NASA spin-off technologies
7,799,969 | https://en.wikipedia.org/wiki/Sprung%20floor | A sprung floor is a floor that absorbs shocks, giving it a softer feel. Such floors are considered the best kind for dance and indoor sports and physical education, and can enhance performance and greatly reduce injuries. Modern sprung floors are supported by foam backing or rubber feet, while traditional floors provide their spring through bending woven wooden battens. Sprung floors have been used in dance halls and performance venues since the 19th century, and are also used in gymnastics, cheerleading, and other athletic activities that require a cushioned surface. The construction of sprung floors can vary, but they generally consist of a performance surface layer on top of a sprung sub-floor with shock-absorbing materials. Sprung floors provide benefits such as injury reduction, enhanced performance, and appropriate traction for users.
One of the earliest on-record sprung-floor ballrooms is Papanti's for dance lessons in Boston, built in 1837. There was also one in the New Zealand Premier House, when expanded in 1872–73.
Dance halls with sprung hard wood floors date back to the late 19th century. The sprung floor at Blackpool Tower Ballroom dates from 1894. The UK's Accrington Conservative Club, built in 1890, had a Grand Ballroom with a sprung floor.
Many other historical dance halls have sprung hard wood floors, such as the Spanish Ballroom at Glen Echo Park, Maryland (1933), Willowbrook Ballroom in Chicago (1921), the Crystal Ballroom in Portland, Oregon (1914), the Carrillo Ballroom in Santa Barbara, California (1914), and Younger Hall (1929) in St Andrews, Scotland.
Modern sprung floors are designed to dampen bounce and so are sometimes called semi-sprung. A spring floor on the other hand is a type of floor designed to provide bounce; they are used for floor exercises in gymnastics or for cheerleading.
Terminology
A sprung floor is also sometimes referred to as a floating floor. That term, though, more often refers to a floor that insulates against noise or a raised floor with ducts and wires underneath, as in computer facilities.
The top layer of a sprung floor is a performance surface. This can be either a natural material such as solid wood, engineered wood or rubber, or it can be a synthetic surface such as vinyl, linoleum, or polyurethane.
A sprung floor excluding the surface is often referred to as the sub-floor. Most sprung floors require a level sub-floor to be installed.
The term speed refers to the traction (kinetic friction) of performance surfaces: fast describes a slippery surface, and slow describes a higher-traction surface, like a gym floor.
Requirements
The basic requirements for a sports floor or a dance floor are the same. They should encourage optimum performance and be safe. There are many differences between what would be the best floor for various sports and forms of dance. However, the requirements are similar enough that one can have a floor suitable for general use; exceptions, such as judo, generally involve the use of additional mats on top of the flooring.
This article deals mainly with requirements which are common across different disciplines. The performance surface article deals more with customization for different activities.
These basic requirements are covered in more detail in the standards listed below.
It should have just the right amount of give; it should not be too hard, which causes repetitive strain injuries, or too soft, which is tiring.
It should be even and flat with only small variation in characteristics across the surface.
It should be springy and return energy to lift the feet when moving, but not too springy like a trampoline.
It should absorb the energy of falls and reduce injuries.
It should have appropriate traction: too much and the foot might twist when turning, too little and it can be dangerously slippery.
There should not be any sideways movement. Sideways movement hampers balance, which is why very thick pile carpeting can be dangerous for the elderly (thick underlay, however, is good).
It should be primarily area elastic rather than point elastic. It should depress more like a wooden floor than a sponge rubber one, but the effect should not extend too far, and the surface layer can be point elastic.
It should be easy to see action on the floor: it should not be too light or dark.
It should be neither too noisy nor too quiet in use.
It should not become dangerous if liquid is spilled on it, and it should be easy to clean up such spillages. This is a major cause of injury.
Additionally, many such floors are multipurpose. For instance, a community hall might be used for play groups and old-age groups, for dances, aerobics and sports, and for seating for plays. The floor may have to support heavy objects like pianos. There may also be requirements for ease of cleaning and maintenance. Cost of repair after damage by vandals or stiletto heels is also a consideration. Note that the necessity to serve multiple purposes can often be eased by the use of a gym floor cover to protect the floor.
There is no combined safety standard applicable to multiple situations, such as for playground surfaces, sprung floors, or use in old-age centers, but a specification that conforms to a minimum sports or dance standard should be adequate to prevent serious injuries (e.g., broken bones) for children falling from , as from a toddler's table, or hip injuries in the elderly.
Construction of a sprung floor
Sprung floors come in a few major types:
Traditional wood basket-weave
Wood with high durometer neoprene pads. Sometimes both basket-weave and neoprene pads are used.
Foam rubber with a wood or other area elastic layer on top
A few sprung floors use actual springs - the special spring floors used by cheerleaders and tumblers often have coil springs under them.
The construction may be built into the area, or it may be composed of modules that slot together and can be disassembled for tours.
Performance halls should be designed and built with sprung floors in mind. A depth of at least should be allowed for the floor. This can be a major constraint when laying a sprung floor in a hall not designed for it. Most can accommodate a maximum of , and some sprung floors designed for refurbishments are as low as . Ramps for wheelchairs will be needed at the doors. If the ramp is outside the hall, the doors will need the bottoms trimmed off (easing) and their height will therefore be reduced. Ramps can have a 1:12 incline at most, and they may also need a safety zone around them. Thus if the floor is 5 cm deep, the ramp should be long or more. The underfloor needs to be made flat either with levelling cement, very careful trowelling, or by using shims and a layer of masonite. Any new cement must be allowed to dry for at least a month. A membrane vapour barrier should be used to prevent moisture from the ground.
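The ramp-length arithmetic follows directly from the 1:12 rule: the ramp must run at least twelve times the height it climbs, which for a 5 cm rise works out to 0.6 m before any surrounding safety zone is added. A minimal illustrative sketch (the 5 cm figure is the floor depth mentioned above; everything else is an assumption for the example):
def min_ramp_length_m(floor_depth_m, max_gradient=1 / 12):
    """Shortest ramp run for a given rise at the steepest allowed gradient."""
    return floor_depth_m / max_gradient

# A 5 cm deep sprung floor needs a ramp of at least 0.6 m.
print(min_ramp_length_m(0.05))  # 0.6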
A semi-traditional floor would have wood battens laid on pads made of neoprene, which is more durable than rubber. Pads are typically laid apart and are thick. Then more wood battens are put on top at right angles, halfway between the pads. A traditional floor might have three layers of this springing. Then two layers of plywood are placed on top, offset by 45–90 degrees so that the joints do not match up. The plywood spreads the load. Finally, the actual surface is made from a layer of strong, durable wood like oak, beech or maple, or other types of wood that are covered with a vinyl surface. There may also be provision to prevent the floor from depressing too much if a very heavy weight is placed on it.
There should normally be a fairly wide gap between the floor and the wall to allow for expansion and to allow air to circulate. This is often covered by a skirting board or molding, to make the gap less apparent. It is because the floor is free-standing, rather than connected to walls or joists, that it is also referred to as a floating floor.
The performance surface is normally of vinyl or hardwood, engineered wood or laminate. For dance the surface may be replaceable, so that a theatre can adapt easily to either ballet or tap dance.
Generating power from dance
A number of green nightclubs, including Rotterdam's Club Watt, have installed sprung floors which help generate power for their music and lightshows. The floors are suspended on transducers that act like shock absorbers. To absorb the energy produced by dancers, piezoelectric crystals are used. When compressed, these crystals charge nearby batteries.
Open and closed cells
The neoprene pads used in sprung floors may be described as having open cells or closed cells. A cell is a void inside the neoprene, which may be a single cell or a network of small ones.
A closed cell is like a balloon, where the air inside cannot escape and the pad is bouncy and returns most of the energy put in. A pad with many small, closed cells may also be referred to as a foam, but typically only a single large closed cell is used, as the cell can expand sideways and so provides characteristics more like a long spring.
Open cells have small holes which let the air inside escape and tend to dissipate the energy input. A pad with many open cells may also be referred to as a sponge.
As with everything to do with sprung floors, a combination of types is often used. A core of softer durometer may have a harder outer layer shaped so that heavy falls encounter more resistance instead of 'bottoming out' to a concrete subfloor. This also protects against deformation by heavy weights like pianos.
Portable dance floors
Portable dance floors are mobile dance floors which provide a temporary surface for dancing. They can be installed quickly in any area by laying down panels and placed in a cart for ease of storage. They can be designed for both indoor and outdoor use.
A portable dance floor is typically about , and consists of many panels to create the desired size. There is trim edging around the border, allowing users to enter the floor safely. The portable panels are constructed from either oak parquet, for indoor use, or honeycomb aluminum and laminate for indoor and outdoor use. Other portable dance floors are made out of a polypropylene base with a commercial grade laminate top surface. These floors are extremely durable and often used in the event-planning and hospitality industries. While older-style portable dance floors feature solid-wood construction, many portable dance floors use an interlocking system to connect the pieces. High-quality, weather-resistant, portable dance floors are also engineered to be used for both indoor and outdoor applications.
Portable dance floors are commonly used for events like parties and weddings or conferences where there is no permanent dance floor available. They may also be used on cruise ships or in schools as a way to utilize limited space.
Standards
The same standards are applicable to dance as to sport. These describe minimum standards suitable for a general purpose hall. The ranges of parameters are wide enough to cover optimizing most special purpose halls as well:
EN 14904 is a new European standard which will replace European national standards. This was used for the World Cup in Germany, and covers both sports and dance halls. It also deals explicitly with some special purpose floors.
DIN 18032 part 2 was the German standard and was for a long time considered best practice.
BS 7044 part 4 was the British standard for artificial sports surfaces. This has been superseded by BS EN 14904: 2006.
History
There does not seem to be a researched history of sprung floors. Indeed, their use has gone out of fashion with the advent of proprietary systems. There would not have been much perceived need until recently, when concrete slabs started being generally used for sub-floors. Before then floors were mainly either earthen or used wood on joists, both of which provide some cushioning from shocks. Early sprung floors often used leaf or coil springs, whence the name; these floors tended to bounce, but modern floors have suppressed this 'trampoline' effect and so are often called semi-sprung. Other materials have also been used; a notable example is the sprung floor in Danceland, Manitou Beach, which was constructed in 1928 using coiled horsehair springs under a maple floor.
The earliest references on the web seem to be:
There was a ballroom with a sprung floor built in 1837 on Tremont Street in Boston by the Italian Lorenzo Papanti for dancing lessons, according to The Proper Bostonians, which lasted until 1899. This was stated to be the first such floor in America, suggesting that Papanti had brought knowledge of sprung floors from Europe. There was another at Boston's Odd Fellows Hall in 1885.
The New Zealand Prime Ministerial home was expanded 1872–73. The build included a ballroom with a sprung floor and New Zealand's first elevator.
Cosmopolitan Hall, a dance hall with a sprung floor was built in the Over-the-Rhine area of Cincinnati in 1885.
Many sprung floors were installed for dance soon after 1900 in places like embassies, hotels, and private clubs. Use of sprung floors exploded with the opening of large public dance halls between 1920 and 1945.
The use of sprung floors for sport date to the 1936 Olympics in Berlin; before then floor exercises were performed on grass. Spring floors for professional acrobats probably date long before this.
Benefits
Modern sprung floors are considered the best kind for dance and indoor sports and physical education, as they enhance performance and significantly reduce injuries. A study by Smith et al. (2015) found that dancers using sprung floors reported fewer injuries compared to those on non-sprung surfaces. Additionally, research by the International Association of Dance Medicine and Science (IADMS) supports the effectiveness of sprung floors in minimizing joint stress.
Sprung floors provide numerous advantages for dancers compared to traditional flooring. The cushioning and shock absorption of a sprung floor helps to reduce the risk of injuries such as ankle sprains, shin splints, and joint pain that can occur from the high-impact movements of dance. The "give" of a sprung floor also allows dancers to land jumps and perform other acrobatic elements with less stress on their bodies. Additionally, the increased energy return of a sprung floor can enhance a dancer's performance by providing a springy surface that propels their movements. Many professional dance companies and studios therefore prioritize having sprung floors installed to protect their dancers' health and enable them to perform at their best.
See also
Dance floor (disambiguation)
Floor (gymnastics)
Performance surface
References
External links
, rec.arts.dance newsgroup FAQ list
Dance and health
Dance venues
Floor construction
Sports equipment
Dance equipment | Sprung floor | Engineering | 3,006 |
55,757,890 | https://en.wikipedia.org/wiki/Maughanasilly%20Stone%20Row | Maughanasilly Stone Row is a stone row and National Monument located in County Cork, Ireland.
Location
The stone row is located to the northeast of Lough Atooreen, on the eastern slopes of Knockbreteen, north of Kealkill. Another stone circle is at Illane, NNE of Maughanasilly.
History
Maughanasilly Stone Row was erected during the Bronze Age, c. 1600–1500 BC, making it contemporary with the Indo-Aryan migrations and the rise of Shang China, the New Kingdom of Egypt and Mycenaean Greece. It was used for archaeoastronomical purposes, for making observations of lunar standstills and equinoxes.
It was excavated in 1977 by Ann Lynch. Shallow pits were found with quartz pebbles scattered around. Two flint scrapers were also found.
Description
There are five standing stones and one prostrate stone, aligned approximately NE–SW. The tallest stone is high and weighs about 8 tonnes.
References
National monuments in County Cork
Archaeoastronomy
Archaeological sites in County Cork
16th-century BC works | Maughanasilly Stone Row | Astronomy | 228 |
7,330,969 | https://en.wikipedia.org/wiki/Microphthalmia-associated%20transcription%20factor | Microphthalmia-associated transcription factor also known as class E basic helix-loop-helix protein 32 or bHLHe32 is a protein that in humans is encoded by the MITF gene.
MITF is a basic helix-loop-helix leucine zipper transcription factor involved in lineage-specific pathway regulation of many cell types, including melanocytes, osteoclasts, and mast cells. The term "lineage-specific", as it relates to MITF, refers to genes or traits found only in a certain cell type. MITF may therefore be involved in the rewiring of signaling cascades that are specifically required for the survival and physiological function of their normal cell precursors.
MITF, together with transcription factor EB (TFEB), TFE3 and TFEC, belongs to a subfamily of related bHLHZip proteins, termed the MiT-TFE family of transcription factors. The factors are able to form stable DNA-binding homo- and heterodimers. The gene that encodes MITF resides at the mi locus in mice, and its protumorigenic targets include factors involved in cell death, DNA replication, repair, mitosis, microRNA production, membrane trafficking, and mitochondrial metabolism, among other processes. Mutation of this gene results in deafness, bone loss, small eyes, and poorly pigmented eyes and skin. In humans, because MITF controls the expression of various genes essential for normal melanin synthesis in melanocytes, mutations of MITF can lead to diseases such as melanoma, Waardenburg syndrome, and Tietz syndrome. Its function is conserved across vertebrates, including in fishes such as zebrafish and Xiphophorus.
An understanding of MITF is necessary to understand how certain lineage-specific cancers and other diseases progress. In addition, current and future research can lead to potential avenues to target this transcription factor mechanism for cancer prevention.
Clinical significance
Mutations
As mentioned above, changes in MITF can result in serious health conditions. For example, mutations of MITF have been implicated in both Waardenburg syndrome and Tietz syndrome.
Waardenburg syndrome is a rare genetic disorder. Its symptoms include deafness, minor defects, and abnormalities in pigmentation. Mutations in the MITF gene have been found in certain patients with Waardenburg syndrome, type II. Mutations that change the amino acid sequence of MITF and result in an abnormally small protein have been identified. These mutations disrupt dimer formation and, as a result, cause insufficient development of melanocytes. The shortage of melanocytes causes some of the characteristic features of Waardenburg syndrome.
Tietz syndrome, first described in 1923, is a congenital disorder often characterized by deafness and leucism. It is caused by a mutation in the MITF gene that deletes or changes a single amino acid in the basic motif region of the MITF protein. The altered MITF protein is unable to bind to DNA, so melanocyte development and subsequently melanin production are impaired. A reduced number of melanocytes can lead to hearing loss, and decreased melanin production can account for the light skin and hair color that make Tietz syndrome so noticeable.
Melanoma
Melanocytes are commonly known as the cells responsible for producing the pigment melanin, which gives coloration to the hair, skin, and nails. The exact mechanisms by which melanocytes become cancerous are relatively unclear, but there is ongoing research into the process. For example, it has been found that the DNA of certain genes is often damaged in melanoma cells, most likely as a result of UV radiation, which in turn increases the likelihood of developing melanoma. Specifically, a large percentage of melanomas carry mutations in the B-RAF gene, which lead to melanoma by activating the MEK-ERK kinase cascade. In addition to B-RAF, MITF is also known to play a crucial role in melanoma progression: as a transcription factor involved in the regulation of genes related to invasiveness, migration, and metastasis, it can drive the progression of melanoma.
Target genes
MITF recognizes E-box (CAYRTG) and M-box (TCAYRTG or CAYRTGA) sequences in the promoter regions of target genes. A number of target genes of this transcription factor have been confirmed by at least two independent sources, and a microarray study, which confirmed those targets, identified additional ones.
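To make the motif notation above concrete, the sketch below expands the IUPAC ambiguity codes used in the E-box and M-box definitions (Y = C/T, R = A/G) into regular expressions and scans a DNA string for matches. The promoter fragment is an invented example for illustration only, not an actual MITF target sequence.

```python
import re

# IUPAC ambiguity codes appearing in the motif definitions above
IUPAC = {"Y": "[CT]", "R": "[AG]"}

def motif_to_regex(motif: str) -> str:
    """Expand IUPAC codes (Y, R) into regex character classes."""
    return "".join(IUPAC.get(base, base) for base in motif)

MOTIFS = {
    "E-box": "CAYRTG",
    "M-box (5' extended)": "TCAYRTG",
    "M-box (3' extended)": "CAYRTGA",
}

# Hypothetical promoter fragment, for illustration only
promoter = "GGCTTCATGTGACCTGTCACGTGATTTCACATGAAC"

for name, motif in MOTIFS.items():
    pattern = motif_to_regex(motif)
    hits = [(m.start(), m.group()) for m in re.finditer(pattern, promoter)]
    print(f"{name}: {hits}")
```

Running the sketch reports the start positions and matched sequences of each motif, which is essentially what promoter-scanning tools do on real target-gene sequences.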
The LysRS-Ap4A-MITF signaling pathway
The LysRS-Ap4A-MITF signaling pathway was first discovered in mast cells, in which the mitogen-activated protein kinase (MAPK) pathway is activated upon allergen stimulation. The binding of immunoglobulin E to the high-affinity IgE receptor (FcεRI) provides the stimulus that starts the cascade.
Lysyl-tRNA synthetase (LysRS) normally resides in the multisynthetase complex. This complex consists of nine different aminoacyl-tRNA synthetases and three scaffold proteins and has been termed the "signalosome" due to its non-catalytic signalling functions. After activation, LysRS is phosphorylated on serine 207 in a MAPK-dependent manner. This phosphorylation causes LysRS to change its conformation, detach from the complex and translocate into the nucleus, where it associates with histidine triad nucleotide-binding protein 1 (HINT1), thus forming the MITF-HINT1 inhibitory complex. The conformational change also switches LysRS activity from aminoacylation of lysine tRNA to diadenosine tetraphosphate (Ap4A) production. Ap4A, which is an adenosine joined to another adenosine through a 5'-5' tetraphosphate bridge, binds to HINT1, and this releases MITF from the inhibitory complex, allowing it to transcribe its target genes. Specifically, Ap4A causes polymerization of the HINT1 molecule into filaments. The polymerization blocks the interface for MITF and thus prevents the binding of the two proteins. This mechanism is dependent on the precise length of the phosphate bridge in the Ap4A molecule, so other nucleotides such as ATP or AMP will not affect it.
MITF is also an integral part of melanocytes, where it regulates the expression of a number of proteins with melanogenic potential. Continuous expression of MITF at a certain level is one of the factors necessary for melanoma cells to proliferate, survive, and avoid detection by host immune cells through T-cell recognition of the melanoma-associated antigen melan-A. Post-translational modifications of HINT1 have been shown to affect MITF gene expression as well as the binding of Ap4A. Mutations in HINT1 itself have been shown to cause axonal neuropathies. The regulatory mechanism relies on the enzyme diadenosine tetraphosphate hydrolase, a member of the Nudix type 2 enzymatic family (NUDT2), to cleave Ap4A, allow the binding of HINT1 to MITF, and thus suppress the expression of MITF-transcribed genes. NUDT2 itself has also been shown to be associated with human breast carcinoma, where it promotes cellular proliferation. The enzyme is about 17 kDa in size and can freely diffuse between the nucleus and cytosol, explaining its presence in the nucleus. It has also been shown to be actively transported into the nucleus by directly interacting with the N-terminal domain of importin-β upon immunological stimulation of mast cells. Growing evidence indicates that the LysRS-Ap4A-MITF signalling pathway is an integral aspect of controlling MITF transcriptional activity.
Activation of the LysRS-Ap4A-MITF signalling pathway by isoproterenol has been confirmed in cardiomyocytes. A heart specific isoform of MITF is a major regulator of cardiac growth and hypertrophy responsible for heart growth and for the physiological response of the cardiomyocytes to beta-adrenergic stimulation.
Phosphorylation
MITF is phosphorylated on several serine and tyrosine residues. Serine phosphorylation is regulated by several signaling pathways including MAPK/BRAF/ERK, receptor tyrosine kinase KIT, GSK-3 and mTOR. In addition, several kinases including PI3K, AKT, SRC and P38 are also critical activators of MITF phosphorylation. In contrast, tyrosine phosphorylation is induced by the presence of the KIT oncogenic mutation D816V. This KITD816V pathway is dependent on SRC protein family activation signaling. The induction of serine phosphorylation by the frequently altered MAPK/BRAF pathway and the GSK-3 pathway in melanoma regulates MITF nuclear export, thereby decreasing MITF activity in the nucleus. Similarly, tyrosine phosphorylation mediated by the presence of the KIT oncogenic mutation D816V also increases the presence of MITF in the cytoplasm.
Interactions
Most transcription factors function in cooperation with other factors by protein–protein interactions. Association of MITF with other proteins is a critical step in the regulation of MITF-mediated transcriptional activity. Some commonly studied MITF interactions include those with MAZR, PIAS3, Tfe3, hUBC9, PKC1, and LEF1. Looking at the variety of structures gives insight into MITF's varied roles in the cell.
The Myc-associated zinc-finger protein related factor (MAZR) interacts with the Zip domain of MITF. When expressed together, both MAZR and MITF increase promoter activity of the mMCP-6 gene. MAZR and MITF together transactivate the mMCP-6 gene. MAZR also plays a role in the phenotypic expression of mast cells in association with MITF.
PIAS3 is a transcriptional inhibitor that acts by inhibiting STAT3's DNA binding activity. PIAS3 directly interacts with MITF, and STAT3 does not interfere with the interaction between PIAS3 and MITF. PIAS3 functions as a key molecule in suppressing the transcriptional activity of MITF. This is important when considering mast cell and melanocyte development.
MITF, TFE3 and TFEB are part of the basic helix-loop-helix leucine zipper family of transcription factors, and each protein encoded by this family can bind DNA. MITF is necessary for melanocyte and eye development, and research suggests that TFE3 is also required for osteoclast development, a function redundant with MITF. The combined loss of both genes results in severe osteopetrosis, pointing to an interaction between MITF and other members of its transcription factor family. In turn, TFEB has been termed the master regulator of lysosome biogenesis and autophagy. Separate roles for MITF, TFEB and TFE3 in modulating starvation-induced autophagy have been described in melanoma. Moreover, MITF and TFEB directly regulate each other's mRNA and protein expression, while their subcellular localization and transcriptional activity are subject to similar modulation, for example by the mTOR signaling pathway.
UBC9 is a ubiquitin-conjugating enzyme that associates with MITF. Although hUBC9 is known to act preferentially on SENTRIN/SUMO1, an in vitro analysis demonstrated a stronger association with MITF. hUBC9 is a critical regulator of melanocyte differentiation; to carry out this role, it targets MITF for proteasome degradation.
Protein kinase C-interacting protein 1 (PKC1) associates with MITF, and their association is reduced upon cell activation; when this happens, MITF disengages from PKC1. PKC1 by itself, found in the cytosol and nucleus, has no known physiological function. It does, however, have the ability to suppress MITF transcriptional activity and can function as an in vivo negative regulator of MITF-induced transcriptional activity.
The functional cooperation between MITF and the lymphoid enhancing factor (LEF-1) results in a synergistic transactivation of the dopachrome tautomerase gene promoter, which is an early melanoblast marker. LEF-1 is involved in the process of regulation by Wnt signaling. LEF-1 also cooperates with MITF-related proteins like TFE3. MITF is a modulator of LEF-1, and this regulation ensures efficient propagation of Wnt signals in many cells.
Translational regulation
Translational regulation of MITF is still an unexplored area, with only two peer-reviewed papers (as of 2019) highlighting its importance. During glutamine starvation of melanoma cells, ATF4 transcript levels increase, as does translation of the mRNA, due to eIF2α phosphorylation. This chain of molecular events leads to two levels of MITF suppression: first, ATF4 protein binds and suppresses MITF transcription, and second, eIF2α blocks MITF translation, possibly through the inhibition of eIF2B by eIF2α.
MITF can also be directly translationally regulated by the RNA helicase DDX3X. The 5' UTR of MITF contains an internal ribosome entry site (IRES) regulatory element that is recognized, bound, and activated by DDX3X. Although the 5' UTR of MITF consists of a stretch of only 123 nucleotides, this region is predicted to fold into energetically favorable RNA secondary structures, including multibranched loops and asymmetric bulges, that are characteristic of IRES elements. Activation of this cis-regulatory sequence by DDX3X promotes MITF expression in melanoma cells.
See also
Microphthalmia
Splashed white
References
External links
Transcription factors
Gene expression
Human proteins | Microphthalmia-associated transcription factor | Chemistry,Biology | 3,006 |
8,248,238 | https://en.wikipedia.org/wiki/Dortmund%20Data%20Bank | The Dortmund Data Bank (short DDB) is a factual data bank for thermodynamic and thermophysical data. Its main usage is the data supply for process simulation where experimental data are the basis for the design, analysis, synthesis, and optimization of chemical processes. The DDB is used for fitting parameters for thermodynamic models like NRTL or UNIQUAC and for many different equations describing pure component properties, e.g., the Antoine equation for vapor pressures. The DDB is also used for the development and revision of predictive methods like UNIFAC and PSRK.
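As an illustration of the kind of pure-component correlation mentioned above, the sketch below evaluates the Antoine equation, log10(P) = A − B/(C + T), for the saturated vapor pressure of water. The coefficients are commonly tabulated illustrative values valid roughly between 1 and 100 °C; they are not taken from the DDB itself.

```python
def antoine_pressure_mmHg(T_celsius: float, A: float, B: float, C: float) -> float:
    """Antoine equation: log10(P) = A - B / (C + T), with P in mmHg and T in deg C."""
    return 10 ** (A - B / (C + T_celsius))

# Commonly quoted Antoine coefficients for water (approx. 1-100 deg C range);
# illustrative values only, not DDB-fitted parameters.
A, B, C = 8.07131, 1730.63, 233.426

for T in (25.0, 60.0, 100.0):
    P = antoine_pressure_mmHg(T, A, B, C)
    print(f"T = {T:5.1f} C  ->  P_sat = {P:8.2f} mmHg")
```

At 100 °C the expression returns roughly 760 mmHg (1 atm), as expected for boiling water, which is the kind of consistency check performed when fitting such equations to experimental data sets.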
Contents
Mixture properties
Phase equilibria data (vapor–liquid, liquid–liquid, solid–liquid), data on azeotropy and zeotropy
Mixing enthalpies
Gas solubilities
Activity coefficients at infinite dilution
Heat capacities and excess heat capacities
Volumes, densities, and excess volumes (volume effect of mixing)
Salt solubilities
Octanol-water partition coefficients
Critical data
The mixture data banks contain approximately 308,000 data sets with 2,157,000 data points for 10,750 components, building 84,870 different binary, ternary, and higher systems/combinations.
Pure component properties
Saturated vapor pressures
Saturated densities
Viscosities
Thermal conductivities
Critical data (Tc, Pc, Vc)
Triple points
Melting points
Heat capacities
Heats of fusion, sublimation and vaporization
Heats of formation and combustion
Heats and temperatures of transitions for solids
Speed of sound
P-v-T data including virial coefficients
Energy functions
Enthalpies and entropies
Surface tensions
The pure component properties data bank contains approximately 157,000 data sets with 1,080,000 data points for 16,700 different components.
Data sources
The DDB is a collection of experimental data published by the original authors. All data are referenced, and a sizeable literature data bank is part of the DDB, currently containing more than 92,000 articles, books, private communications, deposited documents from Russia (VINITI), Ukraine (Ukrniiti) and other former USSR states, company reports (mainly from former GDR companies), theses, patents, and conference contributions.
Secondary sources like data collections are normally neglected and only used as a literature source. Derived data are also not collected with the main exception of the azeotropic data bank which is built partly from evaluated vapor–liquid equilibrium data.
History
The Dortmund Data Bank was founded in the 1970s at the University of Dortmund in Germany. The original reason for starting a vapor–liquid phase equilibria data collection was the development of the group contribution method UNIFAC, which allows the estimation of vapor pressures of mixtures.
The DDB has since been extended to many other properties and has grown dramatically in size, in part because of intensive German government funding. That funding has ended, and further development and maintenance are performed by DDBST GmbH, a company founded by members of the industrial chemistry chair of the Carl von Ossietzky University of Oldenburg, Germany.
Additional contributors are the DECHEMA, the FIZ CHEMIE (Berlin), the Technical University in Tallinn, and others.
Availability
The Dortmund Data Bank is distributed by DDBST GmbH as in-house software. Many parts of the Dortmund Data Bank are also distributed as part of the DETHERM data bank which is also available online.
See also
Beilstein database
Elektrolytdatenbank Regensburg
References
External links
DDBST GmbH
DDB Online Search
DECHEMA
Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data). Including a Thermodynamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis - University of Alicante
Thermodynamics
Chemical databases
Technical University of Dortmund
University of Oldenburg
Thermodynamics databases | Dortmund Data Bank | Physics,Chemistry,Mathematics | 794 |
964,083 | https://en.wikipedia.org/wiki/Messier%2062 | Messier 62 or M62, also known as NGC 6266 or the Flickering Globular Cluster, is a globular cluster of stars in the south of the equatorial constellation of Ophiuchus. It was discovered in 1771 by Charles Messier, then added to his catalogue eight years later.
M62 is about from Earth and from the Galactic Center. It is among the ten most massive and luminous globular clusters in the Milky Way, showing an integrated absolute magnitude of −9.18. It has an estimated mass of and a mass-to-light ratio of in the core visible light band, the V band. It has a projected ellipticity of 0.01, meaning it is essentially spherical. The density profile of its member stars suggests it has not yet undergone core collapse. It has a core radius of , a half-mass radius of , and a half-light radius of . The stellar density at the core is per cubic parsec. It has a tidal radius of .
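For readers unfamiliar with the magnitude scale, the short sketch below converts the quoted integrated absolute magnitude of −9.18 into a luminosity in solar units, assuming the commonly adopted solar absolute V magnitude of about +4.83. It is a back-of-the-envelope illustration, not a value from the cited literature.

```python
M_cluster = -9.18   # integrated absolute V magnitude quoted above
M_sun_V = 4.83      # commonly adopted absolute V magnitude of the Sun (assumed here)

# Each 2.5 magnitudes corresponds to a factor of 10 in luminosity
luminosity_solar = 10 ** (0.4 * (M_sun_V - M_cluster))
print(f"M62 luminosity ~ {luminosity_solar:.2e} L_sun")  # roughly 4e5 solar luminosities
```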
The cluster shows at least two distinct populations of stars, which most likely represent two separate episodes of star formation. Of the main sequence stars in the cluster, are from the first generation and from the second. The second is enriched by elements released by the first. In particular, abundances of helium, carbon, magnesium, aluminium, and sodium differ between these two.
Indications are that this is an Oosterhoff type I, or "metal-rich", system. A 2010 study identified 245 variable stars in the cluster's field, of which 209 are RR Lyrae variables, four are Type II Cepheids, 25 are long period variables, and one is an eclipsing binary. The cluster may prove to be the galaxy's richest in terms of RR Lyrae variables. It has ten binary millisecond pulsars, including one (M62B) that displays eclipsing behavior from gas streaming off its companion, and one (M62H) with an orbiting exoplanet about three times the mass of Jupiter. There are multiple X-ray sources, including 50 within the half-mass radius. 47 blue straggler candidates have been identified, formed from the merger of two stars in a binary system; these are preferentially concentrated near the core region.
It is hypothesized that this cluster may host an intermediate-mass black hole (IMBH) – it is considered well suited for searching for such an object. A brief study, before 2013, of the proper motion of stars within of the core found that an IMBH was not required to explain the observations. However, simulations cannot rule out one with a mass of a few thousand in M62's core. For example, based upon radial velocity measurements within an arcsecond of the core, Kiselev et al. (2008) made the claim of an IMBH in M15, likewise with mass of .
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 62, Galactic Globular Clusters Database page
M62 on willig.net
Messier 062
Messier 062
062
Messier 062
Discoveries by Charles Messier | Messier 62 | Astronomy | 657 |
76,235,042 | https://en.wikipedia.org/wiki/Mid-Holocene%20hemlock%20decline | The mid-Holocene hemlock decline was an abrupt decrease in Eastern Hemlock (Tsuga canadensis) populations noticeable in fossil pollen records across the tree's range. It has been estimated to have occurred approximately 5,500 calibrated radiocarbon years before 1950 AD. The decline has been linked to insect activity and to climate factors. Post-decline pollen records indicate changes in other tree species' populations after the event and an eventual recovery of hemlock populations over a period of about 1000-2000 years at some sites.
Causes
Some relatively early studies on this event link it to insect outbreaks (e.g. the hemlock looper), while more recent research has argued for climate change as the driving factor in this decline. Evidence used to point towards an insect outbreak includes the sudden nature of the event and the debated assertion that similar trends were not shown in other species. Fossil evidence used to support the insect pathogen argument includes the presence of fossil hemlock looper and spruce budworm head capsules, and an unusually high abundance of hemlock needle macrofossils showing evidence of feeding by the hemlock looper. Arguments for climate change as the driving factor of this event include linking the decline in hemlock fossil pollen to trends in other tree species and to lake-level reconstructions from sediment cores and ground-penetrating radar that indicate a change to drier conditions. These climate changes may have been associated with shifts in atmospheric and ocean circulation. While its causes have been debated, this event may be used to provide insight into how modern forests may respond to pathogen outbreaks or to anthropogenic climate change.
Post-decline dynamics
Increases in the fossil pollen of other tree species such as birch have been found at some sites following the decline in hemlock pollen. In some areas, hemlock fossil pollen indicates a recovery of the population that took place over the period from about 1000-2000 years after the decline, while in other areas, fossil pollen indicates that the hemlock population never fully recovered or that forest composition was forever altered following the event. A continuation of drought conditions may have delayed hemlock recovery in some areas.
References
Wikipedia Student Program
Paleoecology | Mid-Holocene hemlock decline | Biology | 437 |
72,071,878 | https://en.wikipedia.org/wiki/ISO/IEC%2021838 | ISO/IEC 21838 is a multi-part standard published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), with its first parts issued in 2021, which outlines requirements for top-level ontology development and describes several top-level ontologies that satisfy those requirements, including Basic Formal Ontology (BFO), Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), and TUpper. ISO/IEC 21838 is intended to promote interoperability among lower-level, domain-specific ontologies, and to foster coherent ontology design, for example through the coordinated re-engineering of legacy ontologies which have been developed using heterogeneous top-level categories.
Background
ISO/IEC 21838 was developed by Subcommittee 32 for Data Management and Interchange, of the ISO and IEC Joint Technical Committee 1 for Information Technology. The standard consists thus far of four parts:
ISO/IEC 21838-1:2021 Information technology - Top-level ontologies (TLO) - Part 1: Requirements, which describes characteristics required of domain-neutral top-level ontologies for use with lower-level domain ontologies to support data exchange, retrieval, discovery, integration and analysis.
ISO/IEC 21838-2:2021 Information technology - Top-level ontologies (TLO) - Part 2: Basic Formal Ontology (BFO) describes Basic Formal Ontology (BFO), as an ontology conformant to the requirements specified in ISO/IEC 21838-1.
ISO/IEC 21838-3:2023 Information Technology - Top-level ontologies (TLO) - Part 3: Descriptive ontology for linguistic and cognitive engineering (DOLCE) describes Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), as an ontology conformant to the requirements specified in ISO/IEC 21838-1.
ISO/IEC 21838-4:2023 Information Technology - Top-level ontologies (TLO) - Part 4: TUpper describes TUpper, as an ontology conformant to the requirements specified in ISO/IEC 21838-1.
Top-Level Ontology Requirements
ISO/IEC 21838-1 prescribes the following requirements for any top-level ontology.
A TLO shall include a textual artifact represented by a natural language document providing:
A list of domain-neutral terms and relational expressions
Identification of primitive terms, i.e. terms that cannot be defined without circularity.
Definitions of all non-primitive terms and relational expressions that are non-circular, form a consistent set, and are concise, i.e. contain no redundant elements.
The signature of the TLO shall contain no terms or relational expressions that are used exclusively in one or in a restricted group of domains.
In addition the TLO shall be made available via at least one machine-readable axiomatization in either:
OWL 2 with the direct semantics, or
a Description Logic (DL) that is designated by the World Wide Web Consortium as a successor of OWL 2
The TLO shall further be made available via a Common Logic (CL) axiomatization conforming to ISO/IEC 24707.
The ontology documentation specified above shall be made publicly available and consist of:
A natural language document designed to support use and maintenance of the ontology by human users
An axiomatization of the ontology in OWL 2 with direct semantics designed to support computational reasoning
Where relevant, a CL axiomatization of the ontology in an ISO/IEC 24707 conformant language
Supplementary documentation shall be made publicly available:
Specifying how the ontology is used or is intended to be used
Specifying how the OWL axiomatization is logically derivable from the CL axiomatization
Demonstrating the breadth of coverage of the ontology
Documenting policies for ontology management
Demonstrating Breadth of Coverage
To demonstrate a sufficiently broad coverage domain and thus to show that it is a true top-level ontology, each candidate TLO is required to show that it has a very wide range of application, ideally one that covers all entities in the universe. Given that the main purpose of the TLO is to enhance the data in a range of databases in such a way as to promote their integration and discoverability, it suffices to demonstrate that coverage domain of the candidate TLO extends across a very broad and diverse range of types of data which the terms in the ontology may then be used to annotate. The strategy for demonstrating breadth of coverage accordingly rests on the provision in ISO/IEC 21838-1 of a list of types of data, including data about:
Space and time; space and place; time and change
Parts, wholes, unity and boundaries
Scale and granularity
Qualities and other attributes (such as dispositions and roles)
Quantities and mathematical entities
Causality, processes and events
Constitution
Information and reference
Artifacts and socially constructed entities (such as money)
Mental entities
Each candidate TLO is required to specify how it will deal with data under all, or nearly all, of these headings, or to specify ontologies built using this TLO which already serve this purpose. Basic Formal Ontology, for example (see below), has no native term for information entities such as sentences or data items or publications. These terms are however supplied by the BFO-conformant Information Artifact Ontology (IAO).
Basic Formal Ontology as a Top-Level Ontology
ISO/IEC 21838-2 describes how Basic Formal Ontology (BFO) satisfies the requirements of ISO/IEC 21838-1. BFO is an ontology developed by Barry Smith and his collaborators. A BFO textbook was published in 2015 to promote interoperability among the very large number of domain ontologies built using its terms and relational expressions.
The BFO ontology is documented in the ISO Standards Maintenance Portal here. This includes:
A natural language document providing domain-neutral terms and relational expressions accompanied by concise, consistent, and non-circular definitions for all non-primitive terms
A signature containing no terms or relational expressions used exclusively in one or in a restricted group of domains.
An axiomatization in OWL 2 with the direct semantics.
An axiomatization in CL.
Specification of the logical derivability of the OWL axiomatization from the CL axiomatization, and breadth of ontology coverage.
A specification of the different ways in which developers of ontologies at lower levels can demonstrate conformity to BFO.
Reference to the statement of principles of use and management of the ontology provided by the Open Biological and Biomedical Ontologies (OBO) Foundry, principles which are adopted by the BFO developer community.
In addition, the community of BFO developers and users has provided:
A publicly available ontology developers' guide,
Publicly available repositories for the OWL 2 and CL axiomatizations incorporating commentary on these axiomatizations and identifying candidate areas for revision.
A publicly available list of ontologies reusing BFO and of organizations using BFO in their ontology development work, available at https://basic-formal-ontology.org/users.html.
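As a minimal sketch of how the OWL 2 axiomatization described above can be consumed programmatically, the snippet below loads the BFO OWL file with the rdflib library and counts the declared classes. The PURL used here is the one commonly cited for BFO and is given as an assumption (the ISO Standards Maintenance Portal remains the authoritative location); the code also assumes network access.

```python
from rdflib import Graph, RDF, RDFS, OWL

# Commonly cited PURL for the BFO OWL 2 axiomatization (assumption; verify against
# the ISO Standards Maintenance Portal for the authoritative artifact).
BFO_URL = "http://purl.obolibrary.org/obo/bfo.owl"

g = Graph()
g.parse(BFO_URL, format="xml")  # RDF/XML serialization of the ontology

classes = sorted(set(g.subjects(RDF.type, OWL.Class)))
print(f"Loaded {len(g)} triples; {len(classes)} OWL classes declared")

# Print the human-readable labels of the first few classes
for cls in classes[:5]:
    for label in g.objects(cls, RDFS.label):
        print(cls, "->", label)
```

The same pattern applies to any conformant TLO that publishes an OWL 2 axiomatization, which is precisely the kind of machine-readable reuse the standard is meant to enable.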
See Also
Basic Formal Ontology
Formal ontology
Semantic interoperability
Barry Smith (ontologist)
References
Further reading
ISO standards
IEC standards
2001 establishments | ISO/IEC 21838 | Technology | 1,485 |
12,885,993 | https://en.wikipedia.org/wiki/AIGO | The Australian International Gravitational Observatory (AIGO) is a research facility located near Gingin, north of Perth in Western Australia. It is part of a worldwide effort to directly detect gravitational waves. Note that these are a major prediction of the general theory of relativity and are not to be confused with gravity waves, a phenomenon studied in fluid mechanics.
It is operated by the Australian International Gravitational Research Centre (AIGRC) through the University of Western Australia under the auspices of the Australian Consortium for Interferometric Gravitational Astronomy (ACIGA).
The current aim of the facility is to develop advanced techniques for improving the sensitivity of interferometric gravitational wave detectors such as LIGO and VIRGO. A study of operational interferometric gravitational wave detectors shows that AIGO is situated in almost the ideal location to complement existing detectors in the Northern hemisphere.
Current facilities
Current facilities (AIGO Stage I) consist of an L-shaped ultra-high-vacuum system measuring 80 m on each side, forming an interferometer for detecting gravitational waves.
LIGO-Australia
LIGO-Australia was a proposed plan (AIGO Stage II) to install an Advanced LIGO interferometer at AIGO, forming a triangle of three Advanced LIGO detectors. It was to consist of an L-shaped interferometer, measuring 5 km on each side, with vacuum pipes about 700 mm in diameter.
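To give a sense of the measurement challenge such instruments face, the sketch below converts a representative gravitational-wave strain into the approximate arm-length change for both the 80 m AIGO Stage I facility and the proposed 5 km LIGO-Australia arms. The strain value of 1e-21 is a typical order of magnitude quoted in the literature, not a figure from the AIGO or LIGO-Australia proposals, and the relation delta_L ~ h * L is used only as an order-of-magnitude estimate.

```python
# Approximate arm-length change produced by a gravitational wave:
# delta_L ~ h * L, where h is the dimensionless strain amplitude.
h = 1e-21  # representative strain amplitude (illustrative order of magnitude)

for name, arm_length_m in (("AIGO Stage I (80 m)", 80.0),
                           ("LIGO-Australia (5 km)", 5000.0)):
    delta_L = h * arm_length_m
    print(f"{name}: delta_L ~ {delta_L:.1e} m")
```

The factor-of-60 difference in arm length is the main reason kilometre-scale detectors are needed for astrophysical sensitivity, while the 80 m facility serves as a technology test bed.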
A 2010 developmental roadmap issued by the Gravitational Wave International Committee (GWIC) for the field of gravitational-wave astronomy recommended that an expansion of the global array of interferometric detectors be pursued as a highest priority. In its roadmap, GWIC identified the Southern Hemisphere as one of the key locations in which a gravitational-wave interferometer could most effectively complement existing detectors. The AIGO facility in Western Australia was well-located to work with the existing and planned components of the global network, and already possessed an active gravitational-wave community.
The LIGO-Australia plan was approved by LIGO's US funding agency, the National Science Foundation, contingent on the understanding that it involved no increase in LIGO's total budget. The cost of building, operating and staffing the interferometer would have rested entirely with the Australian government. After a year-long effort, the LIGO Laboratory reluctantly acknowledged that the proposed relocation of an Advanced LIGO detector to Australia was not to occur. The Australian government had committed itself to a balanced budget and this precluded any new starts in science. The deadline for a response from Australia passed on 1 October 2011.
The proposal was then moved to India, where the Indian Initiative in Gravitational-wave Observations obtained some government support to pursue a similar plan, named LIGO-India, as AIGO had attempted. India is not quite as good a location as Australia, but provides most of the benefit.
Co-located facilities
AIGO is on the same grounds as the Gravity Discovery Centre and the GDC Observatory, both of which are educational and instructional facilities open to the general public. It is also the site of the Geoscience Australia Gingin Magnetic Observatory, one of a network of nine observatories monitoring the Earth's magnetic field.
See also
Interferometric gravitational-wave detector
References
External links
LIGO-Australia Home Page
ACIGA Home Page
Gravity Discovery Center Home Page
Interferometers
Gravitational-wave telescopes
Astronomical observatories in Western Australia
Shire of Gingin | AIGO | Technology,Engineering | 686 |
6,591,301 | https://en.wikipedia.org/wiki/Macintosh%20User%20Group | A Macintosh User Group (MUG) is a users' group of people who use Macintosh computers made by Apple Inc. or other manufacturers and who use the Macintosh operating system (OS). These groups are primarily locally situated and meet regularly to discuss Macintosh computers, the Mac OS, software and peripherals that work with these computers. Some groups focus on the older versions of Mac OS, up to Mac OS 9, but the majority now focus on the current version of Mac operating system, macOS.
Activities
Macintosh user groups are independent organizations that elect their own leaders, develop and present topics at group meetings, schedule special events, frequently have a newsletter and/or web page, and other activities. MUGs generally have an affiliation with Apple Inc., which maintains a User Group Advisory Board consisting of MUG officers and members, who advise Apple on user group matters and relationships. Apple also maintains a MUG locator service on their website. MUGs may be community groups, government agencies, corporations, schools, universities, online, professional organizations, or software specific. Another website, the MUG Center, provides a variety of resources to MUGs and Mac users, including a comprehensive list of links to MUG websites.
Users' groups have been around since the early days of Apple, when computers were often just kits and the user groups met to learn how to put the computers together. Many early users' groups were Apple user groups that became MUGs when Apple started the Macintosh line of computers in 1984.
The following is a 2005 list from the Apple User Group Locator of 19 still-active MUGs with initial meeting dates in 1975–1978 (note that these groups supported Apple products before the Macintosh computer line existed, with original meeting dates that predate it):
Apple Computer Information & Data Exchange of Rochester, Inc.,
Apple Corps of Dallas,
Apple Macintosh Users Group (Sydney),
Apple Pugetsound Program Library Exchange,
AppleRock,
AppleSiders of Cincinnati,
Apple Squires of the Ozarks,
Apple Users' Society of Melbourne (AUSOM Inc),
Charlotte Apple Computer Club,
Colorado Macintosh User Group,
The Northwest of Us Macintosh User Group Chicago, Northwest,
Dallas Macintosh Users Group,
Denver Apple Pi,
Houston Area Apple User Group,
Louisville Apple Users Group,
Macintosh User Group of the Southern Tier,
Maryland Apple Corps., Inc.,
North Orange County Computer Club, MacIntosh SIG,
Pennsylvania Macintosh User Group,
The Michigan Apple,
The Minnesota Apple Computer User Group,
Washington Apple Pi, Ltd.
MUGs exploded in size in the 1980s and were a primary method of distributing freeware and shareware software. Many MUGs had a "Disk-of-the-Month" and large newsletters for members. Computer hardware and software companies found MUGs to be a valuable place to provide information about their products, and their representatives were often speakers at MUG meetings. While many of these companies still speak at MUGs, the Internet has replaced many of the tools for information and software access that were not as available to the public before the late 1990s. Many MUGs have had to reinvent themselves to focus on what a user group can still provide best, mostly education and hands-on experience. Today's MUGs are generally smaller, but have seen some revitalization with the increased popularity of Apple Inc. products in the mid-2000s. Another educational competitor to MUGs has been Apple Inc.'s retail stores, which provide customers with professional assistance through the Genius Bar. The largest MUG was the Berkeley Macintosh Users Group, closely followed by the Macintosh SIG of the Boston Computer Society, and a friendly rivalry between the two groups energized both throughout the 1990s.
See also
Apple community
References
External links
Apple's User Group page
MUG search page
Blog for Apple's User Group Advisory Board, Sandy Foderick - editor, Tom Piper - Vendor Relations
The MUG Center
Complete list of Mac User Groups in the UK
Computer clubs
User groups
Macintosh platform
Macintosh websites
Apple Inc. user groups | Macintosh User Group | Technology | 815 |
51,199,133 | https://en.wikipedia.org/wiki/Claudio%20Luchinat | Claudio Luchinat (born February 15, 1952, in Florence) is an Italian chemist. He is the author of about 550 publications in bioinorganic chemistry, NMR and structural biology, and of four books. According to Google Scholar, his h-index is 90 and his papers have been cited more than 33,000 times.
He earned a PhD in Chemistry from the University of Florence and was full professor of Chemistry at the University of Bologna (1986–96).
He is currently full professor of Chemistry at the University of Florence (since 1996), working at CERM and the Department of Chemistry. He is a member of the Italian Chemical Society, the New York Academy of Sciences, and the American Association for the Advancement of Science.
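For readers unfamiliar with the metric cited above, the h-index is the largest number h such that the author has h papers each cited at least h times. A minimal sketch, using an invented citation list rather than Luchinat's actual publication record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only
print(h_index([120, 64, 33, 20, 12, 7, 3, 1]))  # -> 6
```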
References
21st-century Italian chemists
Bioinorganic chemistry
University of Florence alumni
1952 births
Living people | Claudio Luchinat | Chemistry,Biology | 176 |