id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,540,711 | https://en.wikipedia.org/wiki/Phantom%20energy | Phantom energy is a hypothetical form of dark energy satisfying the equation of state w < −1, where w is the ratio of its pressure to its energy density. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The idea of phantom energy is often dismissed, as it would suggest that the vacuum is unstable, with negative-mass particles bursting into existence. The concept is hence tied to emerging theories of a continuously created negative-mass dark fluid, in which the cosmological constant can vary as a function of time.
Big Rip mechanism
The existence of phantom energy could cause the expansion of the universe to accelerate so quickly that a scenario known as the Big Rip, a possible end to the universe, occurs. The expansion of the universe would reach an infinite degree in finite time, with the acceleration increasing without bound. The recession of distant objects necessarily exceeds the speed of light (since it involves expansion of the universe itself, not particles moving within it), causing more and more objects to leave our observable universe, as light and information emitted from distant stars and other cosmic sources can no longer "catch up" with the expansion. As the expansion continues, objects will be unable to interact with each other via fundamental forces, and eventually the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe and making distances between individual particles infinite.
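A rough, order-of-magnitude sketch of the time remaining before a hypothetical Big Rip, using the widely cited approximation from Caldwell, Kamionkowski and Weinberg (listed under Further reading below); the values chosen for w, the Hubble constant and the matter density are illustrative assumptions, not measurements:

```python
import math

# Illustrative assumptions (not measured values):
w = -1.5          # phantom equation-of-state parameter, w < -1
H0 = 70.0         # Hubble constant, km/s/Mpc
omega_m = 0.3     # present matter density parameter

# Convert the Hubble constant to 1/s, then to a Hubble time in years.
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7
H0_per_s = H0 / KM_PER_MPC
hubble_time_yr = 1.0 / H0_per_s / SECONDS_PER_YEAR   # roughly 13.9 billion years

# Caldwell et al. approximation: t_rip - t_0 ~ 2 / (3 |1+w| H0 sqrt(1 - Omega_m))
t_rip_yr = 2.0 / (3.0 * abs(1.0 + w) * math.sqrt(1.0 - omega_m)) * hubble_time_yr
print(f"Time to Big Rip: about {t_rip_yr / 1e9:.0f} billion years")  # ~22 billion years
```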
One application of phantom energy, proposed in 2007, was to a cyclic model of the universe, which reverses its expansion extremely shortly before the would-be Big Rip. In a more elaborate version of this cyclic model, the mass–energy at every point in the universe becomes dense enough to collapse into black-hole-like matter, which bounces after reaching a maximum threshold of compression and triggers the next big bang (the overall scenario is considered highly unlikely).
See also
Quintessence (physics)
References
Further reading
Robert R. Caldwell et al.: Phantom Energy and Cosmic Doomsday
Dark energy
Physical cosmological concepts | Phantom energy | [
"Physics",
"Astronomy"
] | 398 | [
"Physical cosmological concepts",
"Unsolved problems in astronomy",
"Physical quantities",
"Concepts in astrophysics",
"Concepts in astronomy",
"Unsolved problems in physics",
"Energy (physics)",
"Dark energy",
"Wikipedia categories named after physical quantities"
] |
1,540,855 | https://en.wikipedia.org/wiki/Translation%20management%20system | A translation management system (TMS), formerly globalization management system (GMS), is a type of software for automating many parts of the human language translation process and maximizing translator efficiency. The idea of a translation management system is to automate all repeatable and non-essential work that can be done by software/systems, leaving only the creative work of translation and review to be done by human beings. A translation management system generally includes at least two types of technology: process management technology to automate the flow of work, and linguistic technology to aid the translator.
In a typical TMS, process management technology is used to monitor source language content for changes and route the content to various translators and reviewers. These translators and reviewers may be located across the globe and typically access the TMS via the Internet.
Translation management systems are most commonly used today for managing various aspects of the translation business.
Naming
Although translation management systems (TMS) seems to be the currently favoured term in the language localisation industry, these solutions are also known as globalization management systems (GMS) or global content management systems (GCMS). They work with content management systems (CMS) as separate, but linked programs or as simple add-ons that can answer specific multilingual requirements.
Overview
A TMS typically connects to a CMS to manage foreign language content. It tends to address the following categories in different degrees, depending on each offering:
Business administration: project management, resource management, financial management. This category is traditionally related to enterprise resource planning (ERP) tools.
Business process management: workflow, collaboration, content connectors. This category is traditionally the domain of specialised project management tools.
Language management: integrated translation memory, webtop translation tools, customer review and markup. This is traditionally performed with specialised translation tools.
CMS excels at process management while ignoring business management and translation tools, which are strongholds of TMS.
Features and benefits
The measurable benefits of using a TMS are similar to those found in a CMS, but with a multilingual twist: the localization workflow is automated, thus reducing management and overhead costs and time for everyone involved; localization costs are reduced, time to market is decreased and translation quality improves; with the evolution of cloud computing, projects can be launched from any corporate location instead of one centralized headquarters; finally, the cooperation between headquarters, third party vendors, and national branches increases thanks to more thorough reporting.
A typical TMS workflow goes through the following steps:
Change detection of updated or new materials, either with standard off-the-shelf CMSs or with custom-developed connectors in the case of proprietary systems.
Content is automatically extracted from the CMS and packaged for transmission to the TMS; in some cases, file manipulation may be needed for later analysis and translation.
Project managers customise workflows to match their business needs.
Every participant in the workflow receives a notification when there is new work to be done, and a unique number is assigned to every project and every task for traceability.
Translators and revisers work either online or offline, and their queries and comments are tracked through the system.
Translators or revisers receive comments from the customer's in-country reviewers to verify and implement any corrections.
After the documents are approved, the translation memory is automatically updated for later reuse.
Finally, the translated materials are returned to the CMS for publishing, and productivity and efficiency metrics are available through reports.
Linguistic technology generally includes at least translation memory and terminology database; some systems also integrate machine translation technology. Translation memory is a database of all previously translated sentences. While a translator performs translation, he or she is automatically prompted with similar sentences from the memory that were previously translated. A terminology database is a glossary that contains specific words and phrases and their context-appropriate translations.
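As an illustration of the translation-memory lookup described above, here is a minimal sketch using fuzzy string matching from Python's standard library; the memory contents, the similarity threshold and the function name are invented for illustration and do not correspond to any particular TMS product:

```python
from difflib import SequenceMatcher

# A tiny in-memory "translation memory": source sentence -> stored translation.
translation_memory = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "The file could not be opened.": "Le fichier n'a pas pu être ouvert.",
}

def suggest(source_sentence: str, threshold: float = 0.75):
    """Return (stored_source, stored_translation, score) matches above the threshold."""
    suggestions = []
    for stored_source, stored_target in translation_memory.items():
        score = SequenceMatcher(None, source_sentence.lower(), stored_source.lower()).ratio()
        if score >= threshold:
            suggestions.append((stored_source, stored_target, score))
    return sorted(suggestions, key=lambda item: item[2], reverse=True)

# A new sentence similar to one already translated triggers a "fuzzy match" prompt.
print(suggest("Click the Save button now."))
```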
A machine translation system is a program that uses natural language processing technology to automatically translate a text from one language to another.
Future
Future trends in TMSs include:
interoperation with more CMS offerings: content managers should be able to order translations within their own environment
tie in with text authoring environments: for existing multilingual content leverage against new writing
incorporation of business management functions: to preview the localization cost and timeframe
integration with enterprise systems: general ledger applications and sales force automation tools
Target markets and licensing
TMS vendors target two main buyers when marketing and selling their products. On the one hand, software developer-only companies attract content producers, and sell their offering with no strings attached. On the other hand, software developers can also be language service providers (LSPs), so they offer their language services over their custom-made technological offering for easier customer integration. The latter is commonly referred to as a captive solution, meaning that buyers must use the TMS developer's language services in order to take advantage of their platform.
Content producers with preferred or previous language service agreements with third-party LSPs may prefer to maintain their independence and purchase software licences only. However, a combined option of technology solution and language services in one package is bound to be more cost effective. Similarly, LSPs may prefer to contract technology vendors that are not also competitors offering language services. Many LSPs became nervous when SDL bought Trados in 2005, becoming the biggest translation technology provider while still offering language services as part of its activities. As a result, competitive cloud translation management systems that combine TMS functionality with CAT tools and online translation editors started making their way to the market.
See also
Computer-assisted translation
Internationalization and localization
References
Translation software
Content management systems
Internationalization and localization | Translation management system | [
"Technology"
] | 1,163 | [
"Natural language and computing",
"Internationalization and localization"
] |
1,541,115 | https://en.wikipedia.org/wiki/Pistonless%20rotary%20engine | A pistonless rotary engine is an internal combustion engine that does not use pistons in the way a reciprocating engine does. Designs vary widely but typically involve one or more rotors, sometimes called rotary pistons. Although many different designs have been constructed, only the Wankel engine has achieved widespread adoption.
The term rotary combustion engine has been used as a name for these engines to distinguish them from early (generally up to the early 1920s) aircraft engines and motorcycle engines also known as rotary engines. However, both continue to be called rotary engines and only the context determines which type is meant, whereas the "pistonless" prefix is less ambiguous.
Pistonless rotary engines
A pistonless rotary engine replaces the linear reciprocating motion of a piston with more complex compression/expansion motions, with the objective of improving some aspect of the engine's operation, such as higher-efficiency thermodynamic cycles, lower mechanical stress, lower vibration, higher compression, or less mechanical complexity. The Wankel engine is the only successful pistonless rotary engine, but many similar concepts have been proposed and are in various stages of development. Examples of rotary engines include:
Production stage
Wankel engine
LiquidPiston engine
Beauchamp Tower's nineteenth century spherical steam engine (in actual use as a steam engine, but theoretically adaptable to use internal combustion)
Development stage
Engineair engine
Hamilton Walker engines
Libralato rotary Atkinson cycle engine
Nutating disc engine
Quasiturbine
RKM engine
Sarich orbital engine
Swing-piston engine, Trochilic
Wave disk engine
Conceptual stage
Gerotor engine
Internally Radiating Impulse Structure: IRIS engine
See also
Range extender (vehicle)
Further reading
Jan P. Norbye: 'Rivals to the Wankel: A Roundup of Rotary Engines', Popular Science, Jan 1967, pp 80–85.
Article referencing the October 1964 issue of Mechanix Illustrated and the AMC/Rambler rotary
Proposed engines
Engine technology | Pistonless rotary engine | [
"Technology"
] | 401 | [
"Engine technology",
"Proposed engines",
"Engines",
"Pistonless rotary engine"
] |
1,541,301 | https://en.wikipedia.org/wiki/Lugol%27s%20iodine | Lugol's iodine, also known as aqueous iodine and strong iodine solution, is a solution of potassium iodide with iodine in water. It is a medication and disinfectant used for a number of purposes. Taken by mouth it is used to treat thyrotoxicosis until surgery can be carried out, protect the thyroid gland from radioactive iodine, and to treat iodine deficiency. When applied to the cervix it is used to help in screening for cervical cancer. As a disinfectant it may be applied to small wounds such as a needle stick injury. A small amount may also be used for emergency disinfection of drinking water.
Side effects may include allergic reactions, headache, vomiting, and conjunctivitis. Long term use may result in trouble sleeping and depression. It should not typically be used during pregnancy or breastfeeding. Lugol's iodine is a liquid made up of two parts potassium iodide for every one part elemental iodine in water.
Lugol's iodine was first made in 1829 by the French physician Jean Lugol. It is on the World Health Organization's List of Essential Medicines. Lugol's iodine is available as a generic medication and over the counter. Lugol's solution is available in different strengths of iodine. Large volumes of concentrations more than 2.2% may be subject to regulation.
Uses
Medical uses
Preoperative administration of Lugol's solution decreases intraoperative blood loss during thyroidectomy in patients with Graves' disease. However, it appears ineffective in patients who are already euthyroid on anti-thyroid drugs and levothyroxine.
During colposcopy, Lugol's iodine is applied to the vagina and cervix. Normal vaginal tissue stains brown due to its high glycogen content, while tissue suspicious for cancer does not stain, and thus appears pale compared to the surrounding tissue. Biopsy of suspicious tissue can then be performed. This is called a Schiller's test.
Patients at high risk of oesophageal squamous cell carcinoma are usually followed using a combination of Lugol's chromoendoscopy and narrow-band imaging. With Lugol's iodine, low-grade dysplasia appears as an unstained or weakly stained area; high-grade dysplasia is consistently unstained.
Lugol's iodine may also be used to better visualize the mucogingival junction in the mouth. Similar to the method of staining mentioned above regarding a colposcopy, alveolar mucosa has a high glycogen content that gives a positive iodine reaction vs. the keratinized gingiva.
Lugol's iodine may also be used as an oxidizing germicide; however, it is somewhat undesirable in that it may lead to scarring and temporarily discolors the skin. One way to avoid this problem is by using a solution of 70% ethanol to wash off the iodine later.
Lugol's iodine was distributed in the Polish People's Republic after the Chernobyl catastrophe, because the government had not been informed of how severe the event was and overestimated the radiation released, and because iodine tablets were unavailable.
Science
It is used as a mordant when performing a Gram stain: it is applied for one minute after staining with crystal violet, but before ethanol, to ensure that the peptidoglycan of Gram-positive organisms remains stained, easily identifying them as Gram-positive in microscopy.
This solution is used as an indicator test for the presence of starches in organic compounds, with which it reacts by turning a dark-blue/black. Elemental iodine solutions like Lugol's will stain starches due to iodine's interaction with the coil structure of the polysaccharide. Starches include the plant starches amylose and amylopectin and glycogen in animal cells. Lugol's solution will not detect simple sugars such as glucose or fructose. In the pathologic condition amyloidosis, amyloid deposits (i.e., deposits that stain like starch, but are not) can be so abundant that affected organs will also stain grossly positive for the Lugol reaction for starch.
It can be used as a cell stain, making the cell nuclei more visible and for preserving phytoplankton samples.
Lugol's solution can also be used in various experiments to observe how a cell membrane uses osmosis and diffusion.
Lugol's solution is also used in the marine aquarium industry. Lugol's solution provides a strong source of free iodine and iodide to reef inhabitants and macroalgae. Although the solution is thought to be effective when used with stony corals, systems containing xenia and soft corals are assumed to be particularly benefited by the use of Lugol's solution. Used as a dip for stony and soft or leather corals, Lugol's may help rid the animals of unwanted parasites and harmful bacteria. The solution is thought to foster improved coloration and possibly prevent bleaching of corals due to changes in light intensity, and to enhance coral polyp expansion. The blue colors of Acropora spp. are thought to be intensified by the use of potassium iodide. Specially packaged supplements of the product intended for aquarium use can be purchased at specialty stores and online.
Outdated uses
Up until the early 1970s, it was often recommended for use in victims of rape in order to avoid pregnancy. The idea stemmed from the fact that, in the laboratory, Lugol's iodine appeared to kill sperm cells even in dilutions as great as 1:32. Thus it was thought that an intrauterine application of Lugol's iodine, immediately after the event, would help avoid pregnancy.
Side effects
Because it contains free iodine, Lugol's solution at 2% or 5% concentration without dilution is irritating and destructive to mucosa, such as the lining of the esophagus and stomach. Doses of 10 mL of undiluted 5% solution have been reported to cause gastric lesions when used in endoscopy. The LD50 for 5% Iodine is 14,000 mg/kg (14 g/kg) in rats, and 22,000 mg/kg (22 g/kg) in mice.
The World Health Organization classifies substances taken orally with an LD50 of 5–50 mg/kg as the second highest toxicity class, Class Ib (Highly Hazardous). The Global Harmonized System of Classification and Labeling of Chemicals categorizes this as Category 2 with a hazard statement "Fatal if swallowed". Potassium iodide is not considered hazardous.
Mechanism of action
The above uses and effects are consequences of the fact that the solution is a source of effectively free elemental iodine, which is readily generated from the equilibrium between elemental iodine molecules and polyiodide ions in the solution.
History
It was historically used as a first-line treatment for hyperthyroidism, as the administration of pharmacologic amounts of iodine leads to temporary inhibition of iodine organification in the thyroid gland, caused by phenomena including the Wolff–Chaikoff effect and the Plummer effect. However, it is not used to treat certain autoimmune causes of thyroid disease, as iodine-induced blockade of iodine organification may result in hypothyroidism. Iodine solutions are not considered a first-line therapy because of the possible induction of resistant hyperthyroidism, but may be considered as an adjuvant therapy when used together with other hyperthyroidism medications.
Lugol's iodine has been used traditionally to replenish iodine deficiency. Because of its wide availability as a drinking-water decontaminant, and its high content of potassium iodide, emergency use of it was at first recommended to the Polish government in 1986, after the Chernobyl disaster, to replace and block any intake of radioactive iodine-131, even though it was known to be a non-optimal agent due to its somewhat toxic free-iodine content. Other sources state that pure potassium iodide solution in water (SSKI) was eventually used for most of the thyroid protection after this accident. There is "strong scientific evidence" for potassium iodide thyroid protection to help prevent thyroid cancer. Potassium iodide does not provide immediate protection but can be a component of a general strategy in a radiation emergency.
Historically, Lugol's iodine solution has been widely available and used for a number of health problems with some precautions. Lugol's is sometimes prescribed in a variety of alternative medical treatments. Only since the end of the Cold War has the compound become subject to national regulation in the English-speaking world.
Society and culture
Regulation
Until 2007, in the United States, Lugol's solution was unregulated and available over the counter as a general reagent, an antiseptic, a preservative, or as a medicament for human or veterinary application.
Since 1 August 2007, the DEA regulates all iodine solutions containing greater than 2.2% elemental iodine as a List I precursor because they may potentially be used in the illicit production of methamphetamine. Transactions of up to one fluid ounce (30 ml) of Lugol's solution are exempt from this regulation.
Formula and manufacture
Lugol's solution is commonly available in different potencies of (nominal) 1%, 2%, 5% or 10%. Iodine concentrations greater than 2.2% are subject to US regulations. If the US regulations are taken literally, their 2.2% maximum iodine concentration limits a Lugol's solution to maximum (nominal) 0.87%.
The most commonly used (nominal) 5% solution consists of 5% (wt/v) iodine (I2) and 10% (wt/v) potassium iodide (KI) mixed in distilled water, and has a total iodine content of 126.4 mg/mL. The (nominal) 5% solution thus has a total iodine content of 6.32 mg per drop of 0.05 mL; the (nominal) 2% solution has 2.53 mg total iodine content per drop.
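A quick arithmetic check of those figures (a back-of-the-envelope verification with rounded molar masses, not a preparation protocol):

```python
# Nominal 5% Lugol's solution: 5 g I2 and 10 g KI per 100 mL of water.
M_I = 126.90   # g/mol, iodine
M_K = 39.10    # g/mol, potassium
M_KI = M_I + M_K

iodine_from_I2 = 50.0                   # mg per mL from the 5% wt/v elemental iodine
iodine_from_KI = 100.0 * M_I / M_KI     # mg per mL contributed by the 10% wt/v KI

total_iodine_per_ml = iodine_from_I2 + iodine_from_KI
print(f"Total iodine: {total_iodine_per_ml:.1f} mg/mL")          # ~126.4 mg/mL
print(f"Per 0.05 mL drop: {total_iodine_per_ml * 0.05:.2f} mg")  # ~6.32 mg
```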
Potassium iodide renders the elementary iodine soluble in water through the formation of the triiodide (I3−) ion. It is not to be confused with tincture of iodine solutions, which consist of elemental iodine and iodide salts dissolved in water and alcohol. Lugol's solution contains no alcohol.
Other names for Lugol's solution are (iodine-potassium iodide); Markodine, Strong solution (Systemic); and Aqueous Iodine Solution BP.
Economics
In the United Kingdom, in 2015, the NHS paid £9.57 per 500 ml of solution.
References
Antiseptics
Chemical tests
Disinfectants
Iodine
Staining dyes
World Health Organization essential medicines | Lugol's iodine | [
"Chemistry"
] | 2,295 | [
"Chemical tests"
] |
1,542,238 | https://en.wikipedia.org/wiki/Smooth%20structure | In mathematics, a smooth structure on a manifold allows for an unambiguous notion of smooth function. In particular, a smooth structure allows mathematical analysis to be performed on the manifold.
Definition
A smooth structure on a manifold M is a collection of smoothly equivalent smooth atlases. Here, a smooth atlas for a topological manifold M is an atlas for M such that each transition function is a smooth map, and two smooth atlases for M are smoothly equivalent provided their union is again a smooth atlas for M. This gives a natural equivalence relation on the set of smooth atlases.
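In symbols (a standard textbook formulation, stated here for concreteness): if (U_α, φ_α) and (U_β, φ_β) are two charts of an atlas on M, the smooth-compatibility condition on the transition map is

```latex
% Charts (U_\alpha, \varphi_\alpha) and (U_\beta, \varphi_\beta) of an atlas on M are
% smoothly compatible when every transition map between them is infinitely differentiable:
\[
  \varphi_\beta \circ \varphi_\alpha^{-1} :
  \varphi_\alpha(U_\alpha \cap U_\beta) \longrightarrow \varphi_\beta(U_\alpha \cap U_\beta)
  \quad \text{is } C^\infty .
\]
```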
A smooth manifold is a topological manifold M together with a smooth structure on M.
Maximal smooth atlases
By taking the union of all atlases belonging to a smooth structure, we obtain a maximal smooth atlas. This atlas contains every chart that is compatible with the smooth structure. There is a natural one-to-one correspondence between smooth structures and maximal smooth atlases.
Thus, we may regard a smooth structure as a maximal smooth atlas and vice versa.
In general, computations with the maximal atlas of a manifold are rather unwieldy. For most applications, it suffices to choose a smaller atlas.
For example, if the manifold is compact, then one can find an atlas with only finitely many charts.
Equivalence of smooth structures
If A and B are two maximal atlases on M, the two smooth structures associated to A and B are said to be equivalent if there is a diffeomorphism Φ : M → M carrying one smooth structure to the other.
Exotic spheres
John Milnor showed in 1956 that the 7-dimensional sphere admits a smooth structure that is not equivalent to the standard smooth structure. A sphere equipped with a nonstandard smooth structure is called an exotic sphere.
E8 manifold
The E8 manifold is an example of a topological manifold that does not admit a smooth structure. This essentially demonstrates that Rokhlin's theorem holds only for smooth structures, and not topological manifolds in general.
Related structures
The smoothness requirements on the transition functions can be weakened, so that the transition maps are only required to be k-times continuously differentiable; or strengthened, so that the transition maps are required to be real-analytic. Accordingly, this gives a C^k or (real-)analytic structure on the manifold rather than a smooth one. Similarly, a complex structure can be defined by requiring the transition maps to be holomorphic.
See also
References
Differential topology
Structures on manifolds | Smooth structure | [
"Mathematics"
] | 474 | [
"Topology",
"Differential topology"
] |
1,542,906 | https://en.wikipedia.org/wiki/Apportionment%20in%20the%20European%20Parliament | The apportionment of seats within the European Parliament to each member state of the European Union is set out by the EU treaties. According to European Union treaties, the distribution of seats is "degressively proportional" to the population of the member states, with negotiations and agreements between member states playing a role. Thus the allocation of seats is not strictly proportional to the size of a state's population, nor does it reflect any other automatically triggered or fixed mathematical formula. The process can be compared to the composition of the electoral college used to elect the President of the United States of America in that, pro rata, the smaller state received more places in the electoral college than the more populous states.
After the withdrawal of the United Kingdom from the EU in 2020, the number of MEPs, including the president, dropped to 705 but since the 2024 election, it increased to 720. The maximum number allowed by the Lisbon Treaty is 751.
Background
When the Parliament was established in 1952 as the 78-member "Common Assembly of the European Coal and Steel Community", the then-three smaller states (Belgium, Luxembourg, and the Netherlands) were concerned about being under-represented, and hence they were granted more seats than their populations would have allowed. Membership increased to 142 when the Assembly was expanded to cover the Economic and Atomic Energy Communities. It then grew further with each enlargement, each time allowing smaller nations a greater proportion of seats relative to larger states.
Relative influence of voters from different EU member states (2024 - 2029)
Influence is proportional to the seats-to-votes ratio and inversely proportional to the inhabitants-to-MEPs ratio.
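A small illustration of what degressive proportionality means in practice, using the treaty's maximum and minimum seat allocations; the population figures below are rough, rounded assumptions for illustration only:

```python
# Approximate populations (illustrative assumptions) and seat allocations at the
# treaty extremes: Germany holds the 96-seat maximum, Malta the 6-seat minimum.
members = {
    "Germany": {"population": 83_000_000, "meps": 96},
    "Malta":   {"population": 520_000,    "meps": 6},
}

for name, data in members.items():
    inhabitants_per_mep = data["population"] / data["meps"]
    print(f"{name}: ~{inhabitants_per_mep:,.0f} inhabitants per MEP")

# Degressive proportionality: the more populous state still has far more seats,
# but each of its MEPs represents many more inhabitants (~865,000 vs ~87,000),
# so an individual voter in the smaller state carries more relative weight.
```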
Nice system (2003 – 2009)
The Nice Treaty provided for a maximum of 736 seats. In 2009, with about 500 million EU citizens, this meant that there were on average 670,000 citizens represented by each MEP. Some states divide the electorate for their allocated MEPs into sub-national constituencies. However, they may not be divided in such a way that the system would no longer be proportional.
The 2004 European Parliament election was the first conducted under the Nice Treaty, with 732 seats for the 25 member states.
The 2009 European Parliament election was conducted under the rules included in the Nice Treaty which provided for a maximum number of 736, although that figure had been breached on the accession of new members to the EU, these states being allowed parliamentary representation without a corresponding reduction in the number of MEPs allotted to other member states. This happened in 2007 on the accession of Romania and Bulgaria, when the number of seats temporarily increased to 785. It subsequently returned to 736 in the 2009 election.
Lisbon system (2009 – present)
Under the Lisbon Treaty, which first applied to the 2014 European Parliament election, the cap on the number of seats was raised to 750, with a maximum of 96 and a minimum of 6 seats per state. They continue to be distributed "degressively proportional" to the populations of the EU's member states.
There was controversy over the fact that the population figures are based on residents, not citizens, resulting in countries with larger disenfranchised immigrant populations gaining more under Lisbon than those with smaller ones. Italy would have been the greatest loser under the Lisbon system and sought the same number of MEPs as France and the United Kingdom. Italy raised the issue during treaty negotiations and succeeded in gaining one extra MEP (giving it the same as the UK) while the President of the European Parliament would not be counted as a lawmaker hence keeping the number of MEPs to the 750-seat limit.
2011 amendment
In 2011 an amendment, which came into force on 1 December, temporarily increased the Lisbon limit to 754. This transitional arrangement took account of the fact that the 2009 European Parliament elections had been held under the rules of the Nice Treaty rather than those of the Lisbon Treaty: member states that gained seats under Lisbon could take them before the 2014 election, while Germany, which lost seats under Lisbon, retained them until that election. As a result, Germany temporarily exceeded the maximum number of MEPs allocatable to a member state under the Lisbon Treaty, holding 99 MEPs, three above the intended limit.
2013 amendment
Following the accession of Croatia on 1 July 2013 with 12 extra seats, the apportionment was amended for the 2014 elections, when 12 countries lost one seat (including Croatia itself).
2014 election
From October 2008, MEP Andrew Duff (ALDE, UK) advocated within the European Parliament for a reform of EU electoral law for the 2014 elections, including the creation of a single constituency of 25 seats in which each European citizen would be entitled to vote on the basis of pan-European lists. He was nominated rapporteur, as the European Parliament has the right of initiative in this field, which is decided by unanimity in the Council. After the 2009 election, Duff proposed a new version of his report, which was adopted by the parliamentary Committee on Constitutional Affairs (AFCO) in April 2011. However, the plenary session of the Parliament referred the report back to the AFCO committee in July 2011. A third version of the report was published in September 2011 and adopted by the AFCO committee in January 2012, but was withdrawn before being discussed in plenary in March 2012 for fear that it would likely be turned down.
On 13 March 2013 the European Parliament voted a new proposal updating seat assignments per country for 2014, taking into account demographic changes and bringing the total number of seats back to the nominal 751 enshrined in the Lisbon Treaty. The same document suggests the creation of a formal process "based on objective criteria to be applied in a pragmatic manner" for apportioning seats in future elections.
2019 election
The EU needed to revise the apportionment of seats in time for the next European Parliament election, expected to be held in May 2019, when the United Kingdom's 73 MEPs were expected to vacate their seats following Brexit. In April 2017, a group of European lawmakers discussed what should be done about the vacated seats. One plan, supported by Enrico Letta, Gianni Pittella and Emmanuel Macron, was to replace the 73 seats with a pan-European constituency list. Other options which were considered included dropping the British seats without replacement and reassigning some or all of the existing seats from other countries to reduce inequality of representation. A plan to reduce the number of seats to 705 was approved by the Parliament in February 2018. It involved redistributing 27 seats to under-represented members and reserving the remaining 46 for future EU expansions. A proposal by the Constitutional Affairs Committee to create a pan-European constituency was rejected by the Parliament at the same time.
The proposed redistribution did not occur due to the Brexit extension until 31 October, and the allocation used was the same as in 2014. After Brexit took legal effect, the seat distribution was decided by the European Council. Those countries which were allocated additional seats elected MEPs who only took office after Brexit had taken effect.
2024 election
In February 2023, the AFCO committee of the European Parliament released a draft report (whose rapporteurs are Lóránt Vincze and Sandro Gozi) on the necessary changes to the composition of the European Parliament in order to respect the principle of degressive proportionality (enshrined in the TEU). The draft report suggested a new apportionment which aimed at respecting degressive proportionality while also resulting in no loss of seats for any Member State, thus leading to an expansion in the number of MEPs from 705 to 716. On 12 June 2023, the report was approved by the AFCO committee, with the apportionment being unchanged compared to the draft report. On 15 June 2023 the report was approved by the EP plenary.
In July 2023, the European Council put forward its own proposed apportionment for the tenth European Parliament, which would add 15 new MEPs and thus take the number of seats from 705 to 720. In this proposal, no Member State would lose any spots in the hemicycle and the countries gaining new seats would be as indicated in the table below under New allocation of seats (final decision for 2024).
On 15 September 2023, the European Parliament approved the apportionment proposed by the Council, with 515 votes in favor, 74 against and 44 abstentions.
Furthermore, this decision envisages the future (before the 2029-2034 parliamentary term) definition of "an objective, fair, durable and transparent seat distribution method implementing the principle of degressive proportionality, without prejudice to the institutions’ prerogatives under the Treaties".
Changes in membership
Source for MEP figures 1952–2004: European Navigator. Source for population figures and MEP figures for 2007 and 2009: European Parliament, full population figures. December 2011 figures reflect the members added to the European Parliament by the Protocol Amending the Protocol on Transitional Provisions (OJ 29.9.2010, C 263, p. 1) which came into force on 1 December 2011. Figures for 2019 follow the parliamentary decision of February 2018.
See also
United States congressional apportionment
References
European Parliament apportionment
Numbering in politics | Apportionment in the European Parliament | [
"Mathematics"
] | 1,968 | [
"Mathematical objects",
"Numbers",
"Numbering in politics"
] |
1,543,002 | https://en.wikipedia.org/wiki/Polybius%20%28urban%20legend%29 | Polybius is a purported 1981 arcade game that features in an urban legend. The legend describes the game as part of a government-run crowdsourced psychology experiment based in Portland, Oregon. Gameplay supposedly produced intense psychoactive and addictive effects in the player. These few publicly staged arcade machines were said to have been visited periodically by men in black for the purpose of data-mining the machines and analyzing these effects. Supposedly, all of these Polybius arcade machines then disappeared from the arcade market.
This urban legend has persisted in video game journalism and through continued interest, and it has inspired video games with the same name.
Legend
The urban legend says that in 1981, when new arcade games were uncommon, an unheard-of arcade game appeared in several suburbs of Portland, Oregon. The game was popular to the point of addiction, with lines forming around the machines and often resulting in fights over who would play next. The machines were visited by men in black, who collected unknown data from the machines, allegedly testing responses to the game's psychoactive effects. Players supposedly suffered from a series of unpleasant side effects, including seizures, amnesia, insomnia, night terrors, and hallucinations. Approximately one month after its supposed release in 1981, Polybius is said to have disappeared without a trace.
The company named in most accounts of the game is Sinneslöschen. The word is described by writer Brian Dunning as "not-quite-idiomatic German" (a word constructed outside the norms of German-language usage and grammar) meaning "sense delete" or "sensory deprivation". Its meaning derives from the German words Sinne ("senses") and löschen ("to extinguish" or "to delete"), though the way they are combined is not standard German; Sinnlöschen would be more correct.
The game has the same name as the classical Greek historian Polybius, born in Arcadia and known for his assertion that historians should never report what they cannot verify through interviews with eyewitnesses.
The first online mention of Polybius is a coinop.org article alleged to have been created in 1998, which extends the legend by claiming possession of a ROM image file from the 1981 arcade machine, claiming to have played it, and on May 16, 2009, promising to bring future updates pending an investigatory flight to Kyiv, Ukraine. The first known printed mention of Polybius, exposing the legend to a mass-market audience, is in the September 2003 issue of GamePro. The feature story "Secrets and Lies" declared the game's existence to be "inconclusive", helping to both spark curiosity and spread the story.
Reception
The alleged original Polybius arcade game is generally believed to have never existed, and the legend a hoax. Snopes.com, a fact-checking website, concludes the game is a modern-day version of 1980s rumors of "men in black". This led to the hypothesis that the government was hosting some sort of experiment and sending subliminal messages to the players. Magazines and mainstream news of the early 1980s do not mention Polybius. Aside from the mockup cabinets and games inspired by the myth, no authentic cabinets or ROM dumps have ever been documented.
Skeptics and researchers differ on when, how, and why the story of Polybius began. American producer and author Brian Dunning believes it is an urban legend that grew out of a mixture of influences in the 1980s. He notes real news reports that two players fell ill in Portland on the same day in 1981, one collapsing with a migraine headache after playing Tempest, and another suffering from stomach pain after playing Asteroids for 28 hours in a filmed attempt to break a world record at the same arcade. Dunning records that the Federal Bureau of Investigation raided several video arcades in the area just ten days later, where the owners were suspected of using the machines for gambling, and the lead-up to the raid involved FBI agents monitoring arcade cabinets for evidence of tampering and recording high scores. Dunning suggests that these two events were combined into an urban legend about government-monitored arcade machines making players ill. He believes that such a myth must have been established by 1984, and that it influenced the plot of the film The Last Starfighter, in which a teenager is recruited by aliens who monitor him playing a covertly-developed arcade game. Dunning considers "Sinneslöschen" to be the kind of name that a non-German speaker would generate if they tried to create a compound word using an English-to-German dictionary.
Internet writer Patrick Kellogg believes that players claiming to remember having played or seen Polybius since the 1980s may actually be recalling the video game Cube Quest. It was released in arcades in 1983 as a shooting game played from laserdisc. Kellogg describes its visuals as "revolutionary" and far ahead of typical games of the time. He states that frequent breakdowns are typical of laserdisc games, so this one was often removed from arcades.
Ben Silverman of Yahoo! Games remarked, "Unfortunately, there is no evidence that the game ever existed, no less turned its users into babbling lunatics ... Still, Polybius has enjoyed cult-like status as a throwback to a more technologically paranoid era." Ripley's Believe It or Not! called Polybius "the most dangerous video game to never exist". Offbeat Oregon History says "There remains a possibility — a tiny one, really too small to measure — that the legend is true." Portland Monthly calls it "one of Portland's craziest urban legends", comparing it to the CIA's MKUltra mind control program of the 1950s-1970s.
Legacy
Video games
In 2007, freeware developers and arcade constructors Rogue Synapse published a free downloadable game titled Polybius for Windows at sinnesloschen.com. Its design is partly based on a contested description of the Polybius arcade machine posted on a forum by an individual named Steven Roach who claimed to have worked on the original. To complete the illusion, Rogue Synapse's owner Dr. Estil Vance founded a Texas-based corporation bearing the name Sinnesloschen (without umlaut) in 2007. He transferred to it the "Rogue Synapse" trademark and a newly registered trademark on "Polybius". Its website says that it is an "attempt to recreate the Polybius game as it might have existed in 1981".
In 2016, Llamasoft announced Polybius for the PlayStation 4 with PlayStation VR support, released on the PlayStation store on Tuesday, May 9, 2017. In early marketing, its co-author Jeff Minter claimed to have been permitted to play the original Polybius arcade machine in a warehouse in Basingstoke, England. He later acknowledged that his game was inspired by the urban legend but does not attempt to reproduce its alleged gameplay.
Other media
It has a central cameo as the "main attraction" in the Nine Inch Nails music video "Less Than".
Polybius has cameos in many TV series, such as The Goldbergs (2013), The Simpsons (2006) and Dimension 404 (2017). The cameo in Loki (2021) gained its own acclaim on social media, with commentary noting that the game appears integral to the multiverse and citing it as a key example of the series interweaving conspiracy with reality. For Paper Girls (2022), CBR reported that the Polybius cameo lent the series 1980s science-fiction credentials and differentiated it from Stranger Things (2016).
The Polybius Conspiracy is a 7-part podcast published in 2017, adapted from a canceled feature film project.
See also
List of urban legends
Satanic panic
Toynbee tiles
Polybius square
References
External links
7 Greatest Video Game Legends
Polybius home page
Article about the game in Atlas Obscura
Eight minute documentary by the BBC
American urban legends
Arcade video games
Conspiracy theories in the United States
Creepypasta
Fictional video games
Science and technology-related conspiracy theories
Video game hoaxes | Polybius (urban legend) | [
"Technology"
] | 1,648 | [
"Science and technology-related conspiracy theories"
] |
1,543,032 | https://en.wikipedia.org/wiki/DIY%20audio | DIY audio ("do it yourself" audio) is the practice of building or modifying audio equipment oneself. Rather than buying a piece of possibly expensive audio equipment, such as a high-end audio amplifier or speaker, the person practicing DIY audio makes it themselves. Alternatively, a DIYer may take an existing manufactured item of vintage era and update or modify it. The benefits of doing so include the satisfaction of creating something enjoyable, the possibility that the equipment made or updated is of higher quality than commercially available products, and the pleasure of creating a custom-made device for which no exact equivalent is marketed. Other motivations for DIY audio can include getting audio components at a lower cost, the entertainment of using the item, and being able to ensure quality of workmanship.
History
Audio DIY came to prominence in the 1950s to 1960s, as audio reproduction was relatively new and the technology complex. Audio reproduction equipment, and in particular high performance equipment, was not generally offered at the retail level. Kits and designs were available for consumers to build their own equipment. Famous vacuum tube kits from Dynaco, Heathkit, and McIntosh, as well as solid state (transistor) kits from Hafler allowed for consumers to build their own hi fidelity systems. Books and magazines were published which explained new concepts regarding the design and operation of vacuum tube and (later) transistor circuits.
While audio equipment has become easily accessible in the current day and age, there still exists an interest in building and repairing one's own equipment including, but not limited to; pre-amplifiers, amplifiers, speakers, cables, CD players and turntables. Today, a network of companies, parts vendors, and on-line communities exist to foster this interest. DIY is especially active in loudspeaker and in tube amplification. Both are relatively simple to design and fabricate without access to sophisticated industrial equipment. Both enable the builder to pick and choose between various available parts, on matters of price as well as quality, allow for extensive experimentation, and offer the chance to use exotic or highly labor-intensive solutions, which would be expensive for a manufacturer to implement, but only require personal labor by the DIYer, which is a source of satisfaction to them.
Construction issues
Since the 1960s, integrated circuits make construction of DIY audio systems easier, but the proliferation of surface mount components (which are small and sometimes difficult to solder with a soldering iron) and fine pitch printed circuit boards (PCBs) can make the physical act of construction more difficult. Nevertheless, surface mounting is often used, as are conventional PCBs and electronic components, while some enthusiasts insist on using old-style perforated cardboard onto which individual components are hardwired and soldered. Test equipment is readily available for purchase and enables convenient testing of parts and systems. Specifications of parts and components are readily accessible through the Internet including data sheets and equipment designs.
It has become easier to make audio components from scratch rather than from kits due to the availability of CAD software for printed circuit board (PCB) layouts and electronic circuit simulation. Such software can be free, and a trial version may also be used. PCB vendors are more accessible than ever, and can manufacture PCBs in small quantities for the do-it-yourselfer. In fact, kits and chemicals for self-manufacturing one's own PCB can be obtained. Electronic parts and components are accessible online or in speciality shops, and various high-end parts vendors exist. On the other hand, a wide variety of kits, designs and premanufactured PCBs are available for almost any type of audio component.
Constructing a device takes more than knowledge of circuits; many would argue that the mechanical aspects of cabinets, cases and chassis are the most time-consuming parts of audio DIY. Drilling, metalworking and physical measurements are critical to constructing almost any DIY audio project, especially speakers. Measuring equipment such as a Vernier caliper is often essential. Woodworking skills are required to construct wooden enclosures (e.g. for speakers), with some enthusiasts going beyond traditional woodworking to CNC turning, and luxurious veneers and lacquers. Room acoustics solutions are also popular among DIYers, as they can be made with inexpensive and readily available insulating materials, and can be dimensioned to fit each particular room in a precise and aesthetically pleasing way.
DIY audio involves projects directed to audio. Many DIY audio people fancy themselves to be audiophiles. These people use rare and expensive parts and components in their projects. Examples are the use of silver wire, expensive capacitors, non-standard solders of various alloys, and use of parts that have been cryogenically cooled.
Vacuum tube or valve projects are common in audio DIY. While, for mass-market audio components, the vacuum tube has been replaced in modern times by the transistor and IC, the vacuum tube remains prominent in specialty high-end audio equipment. Thus, interest exists in building components using vacuum tubes, and vacuum tubes are still widely available. A wide variety of tubes is manufactured nowadays, and many tubes on the market are advertised as NOS (new old stock), though not all of them are genuinely NOS. Circuits utilizing tubes are often far less complicated than those utilizing transistors or op-amps. Tube enthusiasts often use transformers, sometimes custom-made ones, or even hand-wind their own transformers using cores and wire of their own choice. Note that vacuum tube projects almost always involve dangerously high voltages and should be undertaken with due care.
In case lead-containing solder is used instead of RoHS-compliant solder, appropriate environmental precautions with regard to lead and lead products should be taken.
Tweaking and tweakers
DIY audio can also involve tweaking of mass market components. It is thought that mass market audio components are compromised by the use of cheap or inferior internal parts that can be easily replaced with high quality substitutes. As a result, an audio component of improved characteristics may be obtained for relatively low cost. Some common changes include replacing opamps, replacing capacitors (recap), or even replacing resistors in order to increase signal-to-noise ratio. Changing an audio component in this way is similar to what a tweaker or modder does with a personal computer.
Circuit bending
Circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with bent instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit.
Cloning and cloners
Another common practice in the DIY audio community is to attempt to clone or copy a pre-existing design or component from a commercial manufacturer. This involves obtaining a lawful public version of, or lawfully reverse engineering, the circuit schematics for the design, and/or even the publicly available PCB layouts. Such a clone will not be a perfect copy since different brands and types of parts (often newer parts) will be used, and mechanical aspects of construction will likely differ. However, the circuit or other distinguishing features should be close to the original.
There are many reasons for wanting to recreate an existing design. The design might be historically important and/or out of production, so the only way to obtain the component is to build it. The design might be very simple so copying it is easily done. The commercial product might be very expensive but its design known, so it may be built for far less than it cost to be purchased. The original design may have some sentimental value to the person building the recreation, and the design built for the memories in one's past. The copy may be made to test or evaluate design concepts or principles in the original.
As an example, a well known clone includes amplifiers using high power integrated circuits, such as the National Semiconductor LM3875 and LM3886. The use of a high power IC as part of a quality audio amplifier was popularized by the 47 Labs Gaincard amplifier, and thus the DIY amplifiers using power ICs are often called chipamps or Gainclones.
Usually cloning additionally involves improving or tweaking (see above) the original design, potentially by using more modern components (in the case of discontinued designs,) higher quality parts, or more efficient board layout.
Operational amplifier swapping
Operational amplifier (op-amp) swapping is the process of replacing an operational amplifier in audio equipment with a different one, in an attempt to improve performance or change the perceived sound quality. Op-amps are used in most audio devices, and most op-amps have the same pinouts, making replacement fairly simple. However, if the new device's parameters do not match those of the original, problems such as high-frequency oscillation can result.
References
External links
DIY audio wiki
Audio electronics
Do it yourself | DIY audio | [
"Engineering"
] | 1,857 | [
"Audio electronics",
"Audio engineering"
] |
1,543,320 | https://en.wikipedia.org/wiki/Counter-battery%20radar | A counter-battery radar or weapon tracking radar is a radar system that detects artillery projectiles fired by one or more guns, howitzers, mortars or rocket launchers and, from their trajectories, locates the position on the ground of the weapon that fired it. Such radars are a subclass of the wider class of target acquisition radars.
Early counter-battery radars were generally used against mortars, whose lofted trajectories were highly symmetrical and allowed easy calculation of the launcher's location. Starting in the 1970s, digital computers with improved calculation capabilities allowed more complex trajectories of long-range artillery to also be determined. Normally, these radars would be attached to friendly artillery units or their support units, allowing them to quickly arrange counter-battery fire.
With the aid of modern communications systems, the information from a single radar can be rapidly disseminated over long distances. This allows the radar to notify multiple batteries as well as provide early warning to the friendly targets. Modern radar can locate hostile batteries at long range, depending on the radar's capabilities and the terrain and weather. Some counter-battery radars can also be used to track the fire of friendly artillery and calculate corrections to adjust its fire onto a particular place, but this is usually a secondary mission objective.
Radar is the most recently developed means of locating hostile artillery. The emergence of indirect fire in World War I saw the development of sound ranging, flash spotting and aerial reconnaissance, both visual and photographic. Radars, like sound ranging and flash spotting, require hostile guns, etc., to fire before they can be located.
History
The first radars were developed for anti-aircraft purposes just before World War II. These were soon followed by fire control radars for ships and coastal artillery batteries. The latter could observe the splashes of water from missing shots, enabling corrections to be plotted. Generally, the shells could not be seen directly by the radar, as they were too small and rounded to make a strong return, and travelled too quickly for the mechanical antennas of the era to follow.
Radar operators in light anti-aircraft batteries close to the front line found they were able to track mortar bombs. This was likely helped by the fins of the bomb producing a partial corner cube that strongly reflected the signal. These accidental intercepts led to their dedicated use in this role, with special secondary instruments if necessary, and development of radars designed for mortar locating. Dedicated mortar-locating radars were common starting in the 1960s and were used until around 2000.
Locating mortars was relatively easy because of their high, arcing, trajectory. At times, just after firing and just before impact, the trajectory is almost linear. If a radar observes the shell at two points in time just after launch, the line between those points can be extended to the ground and provides a highly accurate position of the mortar, more than enough for counter-battery artillery to hit it with ease. Better radars were also able to detect howitzers when firing at high angles, elevations greater than 45°, although such use was quite rare.
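A simplified sketch of that geometric idea (not the algorithm of any fielded system): given two radar fixes on a climbing mortar bomb, extend the line through them back down to ground level. The coordinates and the flat-ground assumption are invented for illustration:

```python
def extrapolate_to_ground(p1, p2, ground_z=0.0):
    """Extend the line through two track points (x, y, z) down to z = ground_z."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    if z2 == z1:
        raise ValueError("Track points must differ in altitude")
    # Parametrize the line as p1 + t * (p2 - p1) and solve for z = ground_z.
    t = (ground_z - z1) / (z2 - z1)
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1), ground_z)

# Two fixes on a climbing mortar bomb, a fraction of a second apart (illustrative values).
fix_early = (1200.0, 3400.0, 150.0)   # x, y, altitude in metres
fix_later = (1180.0, 3350.0, 260.0)
print(extrapolate_to_ground(fix_early, fix_later))  # estimated firing position
```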
Low angle trajectories normally used by guns, howitzers and rockets were more difficult. Purely ballistic low-angle trajectories are lopsided, being relatively parabolic for the start of the flight but becoming much more curved near the end. This is further modified by otherwise minor effects like wind, air pressure differences and aerodynamic effects, which have time to add up to a noticeable effect on long-range fire but can be ignored for short-range systems like mortars. These effects are minimized immediately after launch, but the low angle makes it difficult to see the rounds during this time, in contrast to a mortar which climbs above the horizon almost immediately. Adding to the problem is the fact that traditional artillery shells make for difficult radar targets.
By the early 1970s, radar systems capable of locating guns appeared possible, and many European members of NATO embarked on the joint Project Zenda. This was short-lived for unclear reasons, but the US pressed ahead with its Firefinder program, and Hughes Aircraft Company developed the necessary algorithms, although it took two or three years of difficult work.
The next step forward was European: in 1986 France, West Germany and the UK agreed the 'List of Military Requirements' for a new counter-battery radar. The distinguishing feature was that instead of just locating individual guns, etc., the radar would be able to locate many simultaneously and group them into batteries with a centre point, dimensions and attitude of the long axis of the battery. This radar eventually reached service as Euro-ART's COBRA (COunter Battery RAdar) AESA system. 29 COBRA systems were produced and delivered in a roll-out which was completed in August 2007 (12 to Germany – of which two were re-sold to Turkey, 10 to France and 7 to the UK).
Three additional systems were ordered in February 2009 by the United Arab Emirates Armed Forces. Simultaneously with the development of COBRA, Norway and Sweden developed a smaller, more mobile counter-battery radar known as ARTHUR. It was taken into service in 1999 and is today used by seven NATO countries and South Korea. New versions of ARTHUR have twice the accuracy of the original.
Operations in Iraq and Afghanistan led to a new need for a small counter-mortar radar for use in forward operating bases, providing 360° coverage and requiring a minimal crew. In another back-to-the-future step, it has also proved possible to add counter-battery software to battlefield airspace surveillance radars.
Description
The basic technique is to track a projectile for sufficient time to record a segment of the trajectory. This is usually done automatically, but some early and not so early radars required the operator to manually track the projectile. Once a trajectory segment is captured it can then be processed to determine its point of origin on the ground. Before digital terrain databases this involved manual iteration with a paper map to check the altitude at the coordinates, change the location altitude and recompute the coordinates until a satisfactory location was found.
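A hedged sketch of the iteration described above, with a hypothetical terrain_height() lookup standing in for the paper map or digital terrain database; the tolerance and function names are illustrative assumptions:

```python
def locate_origin(extrapolate_to_height, terrain_height, tol=1.0, max_iter=20):
    """Iterate trajectory extrapolation against terrain altitude.

    extrapolate_to_height(h) -> (x, y): where the back-extrapolated trajectory
        reaches altitude h (derived from the tracked trajectory segment).
    terrain_height(x, y) -> ground altitude at (x, y).
    """
    h = 0.0                                   # initial altitude guess
    for _ in range(max_iter):
        x, y = extrapolate_to_height(h)       # weapon position if ground were at h
        h_new = terrain_height(x, y)          # actual ground altitude there
        if abs(h_new - h) < tol:              # satisfactory location found
            return x, y, h_new
        h = h_new                             # recompute with the better altitude
    return x, y, h
```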
An additional problem was detecting the projectile in flight in the first place. The conical shaped beam of a traditional radar had to be pointing in the right direction, but in order to have sufficient power and accuracy for this the beam's angle was limited, typically to about 25°, which made finding a projectile quite difficult. One technique was to deploy listening posts that told the radar operator roughly where to point the beam; in some cases the radar was not switched on until this point to make it less vulnerable to electronic counter-measures (ECM). However, conventional radar beams were not notably effective.
Since a parabola is defined by just three points, tracking a long segment of the trajectory was not strictly necessary. The Royal Radar Establishment in the UK developed a different approach for its Green Archer system. Instead of a conical beam, the radar signal was produced in the form of a fan, about 40° wide and 1° high. A Foster scanner modified the signal so that it focused on a horizontal location that rapidly scanned back and forth. This allowed it to comprehensively scan a small slice of the sky.
The operator would watch for mortar bombs to pass through the slice, locating its range with pulse timing, its horizontal location by the location of the Foster scanner at that instant, and its vertical location from the known angle of the thin beam. The operator would then flick the antenna to a second angle facing higher into the air, and wait for the signal to appear there. This produced the necessary two points that could be processed by an analogue computer. A similar system was the US AN/MPQ-4, although this was a somewhat later design and somewhat more automated as a result.
Once phased array radars compact enough for field use and with reasonable digital computing power appeared, they offered a better solution. A phased array radar has many transmitter/receiver modules which use differential tuning to rapidly scan up to a 90° arc without moving the antenna. They can detect and track anything in their field of view, providing they have sufficient computing power. They can filter out the targets of no interest (e.g., aircraft) and depending on their capability track a useful proportion of the rest.
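The electronic beam steering that makes this possible comes down to feeding each array element with a phase offset so that the wavefronts reinforce in the chosen direction. The following sketch is illustrative only; the element count, spacing and wavelength are assumed example values, not the parameters of any specific radar.

```python
import numpy as np

def steering_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Per-element phase shifts (radians) for a uniform linear array."""
    k = 2 * np.pi / wavelength_m                      # wavenumber
    n = np.arange(n_elements)
    return -k * n * spacing_m * np.sin(np.radians(steer_deg))

# Example: 32 elements at half-wavelength spacing in X band (~3 cm wavelength),
# steered 30 degrees off boresight -- no mechanical movement of the antenna needed.
wavelength = 0.03
phases = steering_phases(32, wavelength / 2, wavelength, 30.0)
print(np.degrees(phases[:4]))
```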
Counter-battery radars used to be mostly X band because this offers the greatest accuracy for the small radar targets. However, in the radars produced today, C band and S band are common. The Ku band has also been used. Projectile detection ranges are governed by the radar cross section (RCS) of the projectiles, which differs considerably between mortar bombs, artillery shells and rockets.
The best modern radars can detect howitzer shells at around , and rockets/mortars at . The trajectory has to be high enough to be seen by the radar at these ranges, and since the best locating results for guns and rockets are achieved with a reasonable length of trajectory segment close to the gun, long range detection does not guarantee good locating results. The accuracy of location is typically given by a circular error probable (CEP), the circle around the target in which 50% of locations will fall, expressed as a percentage of range. Modern radars typically give CEPs around 0.3–0.4% of range. However, with these figures, long range accuracy may be insufficient to satisfy the rules of engagement for counter-battery fire in counter insurgency operations.
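To make the accuracy figure concrete (the ranges below are arbitrary example values, not claims about any particular radar):

```python
def cep_metres(range_m, cep_percent=0.35):
    """Circular error probable expressed as a fraction of detection range."""
    return range_m * cep_percent / 100.0

for rng in (10_000, 20_000, 40_000):
    print(f"{rng/1000:.0f} km -> CEP ~ {cep_metres(rng):.0f} m")
```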
Radars typically have a crew of 4–8 soldiers. Only one is needed to actually operate the radar. Older types were mostly trailer-mounted with a separate generator, so it took 15–30 minutes to bring into action and needed a larger crew. Self-propelled ones have been used since the 1960s. To produce accurate locations, radars have to know their own precise coordinates and be precisely oriented. Until about 1980 this relied upon conventional artillery survey, although gyroscopic orientation from the mid-1960s helped. Modern radars have an integral inertial navigation system, often aided by GPS.
Radars can detect projectiles at considerable distances. Larger projectiles give stronger reflected signals (RCS). Detection ranges depend upon capturing at least several seconds of a trajectory and can be limited by the radar horizon and the height of the trajectory. For non-parabolic trajectories, it is important to capture a trajectory as close as possible to its source in order to obtain the necessary accuracy.
Action on locating hostile artillery depends on policy and circumstances. In some armies, radars may have the authority to send target details to counter-battery fire units and order them to fire. In others they may merely report data to an HQ that then takes action. Modern radars usually record the target as well as the firing position of hostile artillery. This is usually for intelligence purposes because it is seldom possible to give the target sufficient warning time in a battlefield environment, even with data communications.
There are exceptions. The new lightweight counter-mortar radar (LCMR – AN/TPQ 48) is crewed by two soldiers and designed to be deployed inside forward positions. In these circumstances it can immediately alert adjacent troops as well as pass target data to mortars close by for counter-fire. Similarly the new GA10 (Ground Alerter 10) radar was qualified and successfully deployed by the French land forces in 2020 in several different FOBs worldwide.
Threats
Radars are vulnerable and high-value targets. They are easy to detect and locate if the enemy has the necessary ELINT/ESM capability. The consequences of this detection are likely to be attack by artillery fire or aircraft, including anti-radiation missiles, or electronic countermeasures. The usual measures against detection are using a radar horizon to screen from ground-based detection, minimising transmission time and using alerting arrangements to tell the radar when hostile artillery is active. Deploying radars singly and moving frequently reduces exposure to attack.
In low-threat environments, such as the Balkans in the 1990s, they may transmit continuously and deploy in clusters to provide all-around surveillance.
In other circumstances, particularly counter-insurgency, where ground attack with direct fire or short range indirect fire is the main threat, radars deploy in defended localities but do not need to move, unless they need to cover a different area.
Safety
Counter-battery radars operate at microwave frequencies with relatively high average power, up to tens of kilowatts. The area immediately forward of the radar array of such high-power radars is dangerous to human health. The intense radar waves of systems like the AN/TPQ-36 can detonate electrically fused ammunition at short ranges.
Counter-battery radar systems
1RL126 (NATO reporting name: Small Fred) counter-battery/surveillance radar, mounted onto a PRP-3 Val
1B75 Penicillin acoustic-thermal artillery-reconnaissance system
AN/MPQ-10 (mortar locating), Echo Band, before 1945. Modified in the 1980s to AN/MPQ-10S (Saunders Modification), which provided Echo Band tracking and C-Band surface-to-air missile guidance simulations for ECM training.
AN/KPQ-1 (mortar locating), before 1954
AN/MPQ-4 (mortar locating), 1958
AN/TPQ-36 Firefinder radar, 1982
AN/TPQ-37 Firefinder radar, 1980
AN/MPQ-64F1 Improved Sentinel (in Multi Mode Sentinel), 1997 initial Sentinel, ~2020 improved
AN/TPQ-48 Lightweight Counter Mortar Radar (LCMR)
AN/TPQ-49 LCMR Counterfire Radar
AN/TPQ-50 LCMR Counterfire Radar, 2011
AN/TPQ-53 Quick Reaction Capability Radar, 2009
AN/TPS-80 combined air surveillance and counter-battery radar, 2016
ARSOM 2P (NATO reporting name: Small Yawn)
Aistyonok, 2008
ASELSAN STR
ARTHUR counter-battery radar
Mod D variant is used by the British Army under the designation TAIPAN.
DTCBia combined air surveillance and counter-battery radar, 2024
Euro-Art COBRA (radar) Active electronically scanned array
EL/M-2084 combined air surveillance and counter-battery radar
Giraffe AMB combined air surveillance and counter-battery radar
SNAR 1, SNAR 2 (NATO reporting name: Pork Trough) mortar-projectile tracking radar
Swathi Weapon Locating Radar
SLC-2 Radar
Type 373 Radar
Radar FA No 15 (Cymbeline) (mortar locating)
Radar FA No 8 (Green Archer) (mortar locating)
RZRA Liwiec Artillery Reconnaissance Radar System
Red Color
Type 704 Radar
BL904 radar
1L219 Zoopark-1, 1989
1L220 Zoopark-2
See also
Counter-battery fire
Counter rocket, artillery and mortar (C-RAM)
Short range air defense (SHORAD)
Shoot-and-scoot
References
External links
Facts of Counter-Battery Radar on www.GlobalSecurity.org
Counter-battery radars
Ground radars
Warning systems | Counter-battery radar | [
"Technology",
"Engineering"
] | 3,051 | [
"Warning systems",
"Safety engineering",
"Measuring instruments",
"Weapon locating radar"
] |
1,543,358 | https://en.wikipedia.org/wiki/Darboux%27s%20theorem | In differential geometry, a field in mathematics, Darboux's theorem is a theorem providing a normal form for special classes of differential 1-forms, partially generalizing the Frobenius integration theorem. It is named after Jean Gaston Darboux who established it as the solution of the Pfaff problem.
It is a foundational result in several fields, the chief among them being symplectic geometry. Indeed, one of its many consequences is that any two symplectic manifolds of the same dimension are locally symplectomorphic to one another. That is, every -dimensional symplectic manifold can be made to look locally like the linear symplectic space with its canonical symplectic form.
There is also an analogous consequence of the theorem applied to contact geometry.
Statement
Suppose that $\theta$ is a differential 1-form on an $n$-dimensional manifold, such that $d\theta$ has constant rank $p$. Then
if $\theta \wedge (d\theta)^p = 0$ everywhere, then there is a local system of coordinates $x_1, \ldots, x_{n-p}, y_1, \ldots, y_p$ in which $\theta = x_1\,dy_1 + \cdots + x_p\,dy_p$;
if $\theta \wedge (d\theta)^p \neq 0$ everywhere, then there is a local system of coordinates $x_1, \ldots, x_{n-p}, y_1, \ldots, y_p$ in which $\theta = x_1\,dy_1 + \cdots + x_p\,dy_p + dx_{p+1}$.
Darboux's original proof used induction on $p$, and it can be equivalently presented in terms of distributions or of differential ideals.
Frobenius' theorem
Darboux's theorem for $p = 1$ ensures that any 1-form $\theta \neq 0$ such that $\theta \wedge d\theta = 0$ can be written as $\theta = x_1\,dy_1$ in some coordinate system $(x_1, \ldots, x_n)$.
This recovers one of the formulations of the Frobenius theorem in terms of differential forms: if $\mathcal{I}$ is the differential ideal generated by $\theta$, then $d\mathcal{I} \subset \mathcal{I}$ implies the existence of a coordinate system where $\mathcal{I}$ is actually generated by $dy_1$.
Darboux's theorem for symplectic manifolds
Suppose that $\omega$ is a symplectic 2-form on an $n = 2m$-dimensional manifold $M$. In a neighborhood of each point $p$ of $M$, by the Poincaré lemma, there is a 1-form $\theta$ with $d\theta = \omega$. Moreover, $\theta$ satisfies the first set of hypotheses in Darboux's theorem, and so locally there is a coordinate chart $U$ near $p$ in which $\theta = x_1\,dy_1 + \cdots + x_m\,dy_m$.
Taking an exterior derivative now shows $\omega = d\theta = dx_1 \wedge dy_1 + \cdots + dx_m \wedge dy_m$.
The chart $U$ is said to be a Darboux chart around $p$. The manifold $M$ can be covered by such charts.
To state this differently, identify $\mathbb{R}^{2m}$ with $\mathbb{C}^m$ by letting $z_j = x_j + i\,y_j$. If $\varphi : U \to \mathbb{C}^m$ is a Darboux chart, then $\omega$ can be written as the pullback of the standard symplectic form $\omega_0$ on $\mathbb{C}^m$: $\omega = \varphi^{*}\omega_0$.
A modern proof of this result, without employing Darboux's general statement on 1-forms, is done using Moser's trick.
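For readers unfamiliar with the argument, the following is a brief sketch of Moser's trick in this setting; the notation is chosen here for illustration and is not taken from the sources cited by the article.

```latex
% Sketch of Moser's trick (notation chosen for this illustration).
% Interpolate between a constant reference form and the given form near the point:
\omega_t = \omega_0 + t(\omega - \omega_0), \qquad t \in [0,1],
% and seek an isotopy \varphi_t, generated by a time-dependent vector field X_t, with
\varphi_t^{*}\,\omega_t = \omega_0 .
% Differentiating in t and using Cartan's formula (each \omega_t is closed) gives
\frac{d}{dt}\bigl(\varphi_t^{*}\omega_t\bigr)
  = \varphi_t^{*}\!\left( d\,\iota_{X_t}\omega_t + \omega - \omega_0 \right) = 0 .
% Writing \omega - \omega_0 = d\sigma locally (Poincaré lemma), it suffices to solve
\iota_{X_t}\omega_t = -\sigma ,
% which is possible because each \omega_t is non-degenerate in a small enough
% neighborhood of the point; the time-one map \varphi_1 provides the Darboux chart.
```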
Comparison with Riemannian geometry
Darboux's theorem for symplectic manifolds implies that there are no local invariants in symplectic geometry: a Darboux basis can always be taken, valid near any given point. This is in marked contrast to the situation in Riemannian geometry where the curvature is a local invariant, an obstruction to the metric being locally a sum of squares of coordinate differentials.
The difference is that Darboux's theorem states that can be made to take the standard form in an entire neighborhood around . In Riemannian geometry, the metric can always be made to take the standard form at any given point, but not always in a neighborhood around that point.
Darboux's theorem for contact manifolds
Another particular case is recovered when $n = 2p + 1$; if $\theta \wedge (d\theta)^p \neq 0$ everywhere, then $\theta$ is a contact form. A simpler proof can be given, as in the case of symplectic structures, by using Moser's trick.
The Darboux-Weinstein theorem
Alan Weinstein showed that Darboux's theorem for symplectic manifolds can be strengthened to hold on a neighborhood of a submanifold:
Let $M$ be a smooth manifold endowed with two symplectic forms $\omega_1$ and $\omega_2$, and let $N \subset M$ be a closed submanifold. If $\left.\omega_1\right|_N = \left.\omega_2\right|_N$, then there is a neighborhood $U$ of $N$ in $M$ and a diffeomorphism $f : U \to U$ such that $f^{*}\omega_2 = \omega_1$.
The standard Darboux theorem is recovered when $N$ is a point and $\omega_2$ is the standard symplectic structure on a coordinate chart.
This theorem also holds for infinite-dimensional Banach manifolds.
See also
Carathéodory–Jacobi–Lie theorem, a generalization of this theorem.
Moser's trick
Symplectic basis
References
External links
G. Darboux, "On the Pfaff Problem", transl. by D. H. Delphenich
G. Darboux, "On the Pfaff Problem (cont.)", transl. by D. H. Delphenich
Differential systems
Symplectic geometry
Coordinate systems in differential geometry
Theorems in differential geometry
Mathematical physics | Darboux's theorem | [
"Physics",
"Mathematics"
] | 903 | [
"Theorems in differential geometry",
"Applied mathematics",
"Theoretical physics",
"Coordinate systems in differential geometry",
"Theorems in geometry",
"Coordinate systems",
"Mathematical physics"
] |
1,543,423 | https://en.wikipedia.org/wiki/Eye%20tracking | Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, marketing, as an input device for human-computer interaction, and in product design.
In addition, eye trackers are increasingly being used for assistive and rehabilitative applications such as controlling wheelchairs, robotic arms, and prostheses. Recently, eye tracking has been examined as a tool for the early detection of autism spectrum disorder. There are several methods for measuring eye movement, with the most popular variant using video images to extract eye position. Other methods use search coils or are based on the electrooculogram.
History
In the 1800s, studies of eye movement were made using direct observations. For example, Louis Émile Javal observed in 1879 that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations) and quick saccades. This observation raised important questions about reading, questions which were explored during the 1900s: On which words do the eyes stop? For how long? When do they regress to already seen words?
Edmund Huey built an early eye tracker, using a sort of contact lens with a hole for the pupil. The lens was connected to an aluminum pointer that moved in response to the movement of the eye. Huey studied and quantified regressions (only a small proportion of saccades are regressions), and he showed that some words in a sentence are not fixated.
The first non-intrusive eye-trackers were built by Guy Thomas Buswell in Chicago, using beams of light that were reflected off the eye and then recorded on film. Buswell made systematic studies into reading and picture viewing.
In the 1950s, Alfred L. Yarbus performed eye tracking research, and his 1967 book is often quoted. He showed that the task given to a subject has a very large influence on the subject's eye movement. He also wrote about the relation between fixations and interest:
The cyclical pattern in the examination of pictures "is dependent on not only what is shown on the picture, but also the problem facing the observer and the information that he hopes to gain from the picture."
In the 1970s, eye-tracking research expanded rapidly, particularly reading research. A good overview of the research in this period is given by Rayner.
In 1980, Just and Carpenter formulated the influential Strong eye-mind hypothesis, that "there is no appreciable lag between what is fixated and what is processed". If this hypothesis is correct, then when a subject looks at a word or object, he or she also thinks about it (process cognitively), and for exactly as long as the recorded fixation. The hypothesis is often taken for granted by researchers using eye-tracking. However, gaze-contingent techniques offer an interesting option in order to disentangle overt and covert attentions, to differentiate what is fixated and what is processed.
During the 1980s, the eye-mind hypothesis was often questioned in light of covert attention, the attention to something that one is not looking at, which people often do. If covert attention is common during eye-tracking recordings, the resulting scan-path and fixation patterns would often show not where attention has been, but only where the eye has been looking, failing to indicate cognitive processing.
The 1980s also saw the birth of using eye-tracking to answer questions related to human-computer interaction. Specifically, researchers investigated how users search for commands in computer menus. Additionally, computers allowed researchers to use eye-tracking results in real time, primarily to help disabled users.
More recently, there has been growth in using eye tracking to study how users interact with different computer interfaces. Specific questions researchers ask are related to how easy different interfaces are for users. The results of the eye tracking research can lead to changes in design of the interface. Another recent area of research focuses on Web development. This can include how users react to drop-down menus or where they focus their attention on a website so the developer knows where to place an advertisement.
According to Hoffman, current consensus is that visual attention is always slightly (100 to 250 ms) ahead of the eye. But as soon as attention moves to a new position, the eyes will want to follow.
Specific cognitive processes still cannot be inferred directly from a fixation on a particular object in a scene. For instance, a fixation on a face in a picture may indicate recognition, liking, dislike, puzzlement etc. Therefore, eye tracking is often coupled with other methodologies, such as introspective verbal protocols.
Thanks to advancement in portable electronic devices, portable head-mounted eye trackers currently can achieve excellent performance and are being increasingly used in research and market applications targeting daily life settings. These same advances have led to increases in the study of small eye movements that occur during fixation, both in the lab and in applied settings.
In the 21st century, the use of artificial intelligence (AI) and artificial neural networks has become a viable way to complete eye-tracking tasks and analysis. In particular, the convolutional neural network lends itself to eye-tracking, as it is designed for image-centric tasks. With AI, eye-tracking tasks and studies can yield additional information that may not have been detected by human observers. The practice of deep learning also allows for a given neural network to improve at a given task when given enough sample data. This requires a relatively large supply of training data, however.
The potential use cases for AI in eye-tracking cover a wide range of topics from medical applications to driver safety to game theory and even education and training applications.
Tracker types
Eye-trackers measure rotations of the eye in one of several ways, but principally they fall into one of three categories:
measurement of the movement of an object (normally, a special contact lens) attached to the eye
optical tracking without direct contact to the eye
measurement of electric potentials using electrodes placed around the eyes.
Eye-attached tracking
The first type uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor, and the movement of the attachment is measured with the assumption that it does not slip significantly as the eye rotates. Measurements with tight-fitting contact lenses have provided extremely sensitive recordings of eye movement, and magnetic search coils are the method of choice for researchers studying the dynamics and underlying physiology of eye movement. This method allows the measurement of eye movement in horizontal, vertical and torsion directions.
Optical tracking
The second broad category uses some non-contact, optical method for measuring eye motion. Light, typically infrared, is reflected from the eye and sensed by a video camera or some other specially designed optical sensor. The information is then analyzed to extract eye rotation from changes in reflections. Video-based eye trackers typically use the corneal reflection (the first Purkinje image) and the center of the pupil as features to track over time. A more sensitive type of eye-tracker, the dual-Purkinje eye tracker, uses reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. A still more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates. Optical methods, particularly those based on video recording, are widely used for gaze-tracking and are favored for being non-invasive and inexpensive.
Electric potential measurement
The third category uses electric potentials measured with electrodes placed around the eyes. The eyes are the origin of a steady electric potential field which can also be detected in total darkness and when the eyes are closed. It can be modelled as being generated by a dipole with its positive pole at the cornea and its negative pole at the retina. The electric signal that can be derived using two pairs of contact electrodes placed on the skin around one eye is called the electrooculogram (EOG). If the eyes move from the centre position towards the periphery, the retina approaches one electrode while the cornea approaches the opposing one. This change in the orientation of the dipole and consequently the electric potential field results in a change in the measured EOG signal. Inversely, by analysing these changes, eye movement can be tracked. Due to the discretisation given by the common electrode setup, two separate movement components – a horizontal and a vertical – can be identified. A third EOG component is the radial EOG channel, which is the average of the EOG channels referenced to some posterior scalp electrode. This radial EOG channel is sensitive to the saccadic spike potentials stemming from the extra-ocular muscles at the onset of saccades, and allows reliable detection of even miniature saccades.
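As a rough illustration of how such EOG signals can be turned into eye-rotation estimates (a minimal sketch: the linear gain model, the calibration angles and the voltages are assumptions for the example, and real processing must also handle drift and blinks):

```python
import numpy as np

# Calibration: the subject fixates targets at known horizontal angles while the
# horizontal EOG channel is recorded; a linear model voltage = gain*angle + offset
# is then fitted (drift correction and blink removal are omitted here).
def fit_eog_model(known_angles_deg, measured_uV):
    gain, offset = np.polyfit(known_angles_deg, measured_uV, 1)
    return gain, offset

def eog_to_angle(measured_uV, gain, offset):
    """Approximate horizontal eye rotation (degrees) from the EOG voltage."""
    return (np.asarray(measured_uV) - offset) / gain

gain, offset = fit_eog_model([-20, -10, 0, 10, 20], [-310, -160, -5, 150, 305])
print(eog_to_angle([75, -120], gain, offset))   # rough angles in degrees
```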
Due to potential drifts and variable relations between the EOG signal amplitudes and the saccade sizes, it is challenging to use EOG for measuring slow eye movement and detecting gaze direction. EOG is, however, a very robust technique for measuring saccadic eye movement associated with gaze shifts and detecting blinks.
Contrary to video-based eye-trackers, EOG allows recording of eye movements even with eyes closed, and can thus be used in sleep research. It is a very light-weight approach that, in contrast to current video-based eye-trackers, requires low computational power, works under different lighting conditions and can be implemented as an embedded, self-contained wearable system. It is thus the method of choice for measuring eye movement in mobile daily-life situations and REM phases during sleep. The major disadvantage of EOG is its relatively poor gaze-direction accuracy compared to a video tracker. That is, it is difficult to determine with good accuracy exactly where a subject is looking, though the time of eye movements can be determined.
Technologies and techniques
The most widely used current designs are video-based eye-trackers. A camera focuses on one or both eyes and records eye movement as the viewer looks at some kind of stimulus. Most modern eye-trackers use the center of the pupil and infrared / near-infrared non-collimated light to create corneal reflections (CR). The vector between the pupil center and the corneal reflections can be used to compute the point of regard on a surface or the gaze direction. A simple calibration procedure for the individual is usually needed before using the eye tracker.
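A hedged sketch of how such a calibration might map the pupil–corneal-reflection vector to screen coordinates; the second-order polynomial model and least-squares fit are common choices but are assumptions here, not a description of any particular commercial tracker.

```python
import numpy as np

def design_matrix(v):
    """Second-order polynomial terms of the pupil-CR vectors v with columns (vx, vy)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_gaze_mapping(pupil_cr_vectors, screen_points):
    """Least-squares fit of the mapping for x and y screen coordinates.

    Needs at least six calibration targets for a well-determined fit.
    """
    A = design_matrix(np.asarray(pupil_cr_vectors, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, dtype=float), rcond=None)
    return coeffs                                # shape (6, 2)

def estimate_gaze(pupil_cr_vector, coeffs):
    """Map a single pupil-CR vector to an (x, y) point of regard on the screen."""
    A = design_matrix(np.asarray([pupil_cr_vector], dtype=float))
    return (A @ coeffs)[0]
```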
Two general types of infrared / near-infrared (also known as active light) eye-tracking techniques are used: bright-pupil and dark-pupil. Their difference is based on the location of the illumination source with respect to the optics. If the illumination is coaxial with the optical path, then the eye acts as a retroreflector as the light reflects off the retina creating a bright pupil effect similar to red eye. If the illumination source is offset from the optical path, then the pupil appears dark because the retroreflection from the retina is directed away from the camera.
Bright-pupil tracking creates greater iris/pupil contrast, allowing more robust eye-tracking with all iris pigmentation, and greatly reduces interference caused by eyelashes and other obscuring features. It also allows tracking in lighting conditions ranging from total darkness to very bright.
Another, less used, method is known as passive light. It uses visible light for illumination, which may cause some distraction to users. Another challenge with this method is that the contrast of the pupil is less than in the active light methods; therefore, the center of the iris is used for calculating the vector instead. This calculation needs to detect the boundary of the iris and the white sclera (limbus tracking). It presents another challenge for vertical eye movements due to obstruction by the eyelids.
Eye-tracking setups vary greatly. Some are head-mounted, some require the head to be stable (for example, with a chin rest), and some function remotely and automatically track the head during motion. Most use a sampling rate of at least 30 Hz. Although 50/60 Hz is more common, today many video-based eye trackers run at 240, 350 or even 1000/1250 Hz, speeds needed to capture fixational eye movements or correctly measure saccade dynamics.
Eye movements are typically divided into fixations and saccades – when the eye gaze pauses in a certain position, and when it moves to another position, respectively. The resulting series of fixations and saccades is called a scanpath. Smooth pursuit describes the eye following a moving object. Fixational eye movements include microsaccades: small, involuntary saccades that occur during attempted fixation. Most information from the eye is made available during a fixation or smooth pursuit, but not during a saccade.
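The separation into fixations and saccades is often automated with a simple velocity threshold. The sketch below assumes gaze samples already expressed in degrees of visual angle at a fixed sampling rate; the 30°/s threshold is a common but arbitrary example value.

```python
import numpy as np

def classify_ivt(x_deg, y_deg, sample_rate_hz, velocity_threshold=30.0):
    """Label each gaze sample as part of a fixation or a saccade.

    Returns an array of 'fixation'/'saccade' labels; the first sample reuses
    the first computed label, since velocity needs two samples.
    """
    dt = 1.0 / sample_rate_hz
    vel = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt      # deg/s between samples
    labels = np.where(vel > velocity_threshold, "saccade", "fixation")
    return np.concatenate([labels[:1], labels])
```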
Scanpaths are useful for analyzing cognitive intent, interest, and salience. Other biological factors (some as simple as gender) may affect the scanpath as well. Eye tracking in human–computer interaction (HCI) typically investigates the scanpath for usability purposes, or as a method of input in gaze-contingent displays, also known as gaze-based interfaces.
Data presentation
Interpretation of the data recorded by the various types of eye-trackers employs a variety of software that animates or visually represents it, so that the visual behavior of one or more users can be graphically summarized. The video is generally coded manually to identify the AOIs (areas of interest), or more recently with the help of artificial intelligence. Graphical presentation is rarely the basis of research results, since it is limited in terms of what can be analysed; research relying on eye-tracking, for example, usually requires quantitative measures of the eye movement events and their parameters. The following visualisations are the most commonly used:
Animated representations of a point on the interface
This method is used when the visual behavior is examined individually indicating where the user focused their gaze in each moment, complemented with a small path that indicates the previous saccade movements, as seen in the image.
Static representations of the saccade path
This is fairly similar to the one described above, with the difference that this is a static method. A higher level of expertise than with the animated ones is required to interpret this.
Heat maps
An alternative static representation, used mainly for the agglomerated analysis of the visual exploration patterns in a group of users. In these representations, the 'hot' zones or zones with higher density designate where the users focused their gaze (not their attention) with a higher frequency. Heat maps are the best known visualization technique for eyetracking studies.
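One illustrative way to build such a heat map from fixation data is to let each fixation contribute a Gaussian blob weighted by its duration; the image size and Gaussian width below are assumed example values.

```python
import numpy as np

def gaze_heatmap(fixations, width, height, sigma=40.0):
    """fixations: iterable of (x_px, y_px, duration_s); returns a 2-D intensity map."""
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xx - x)**2 + (yy - y)**2) / (2 * sigma**2))
    if heat.max() > 0:
        heat /= heat.max()            # normalise so the 'hottest' zone peaks at 1
    return heat

demo = gaze_heatmap([(320, 240, 0.4), (350, 260, 0.8), (900, 100, 0.2)], 1024, 768)
```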
Blind zones maps, or focus maps
This method is a simplified version of the heat maps where the visually less attended zones by the users are displayed clearly, thus allowing for an easier understanding of the most relevant information, that is to say, it provides more information about which zones were not seen by the users.
Saliency maps
Similar to heat maps, a saliency map illustrates areas of focus by brightly displaying the attention-grabbing objects over an initially black canvas. The more focus is given to a particular object, the brighter it will appear.
Eye-tracking vs. gaze-tracking
Eye-trackers necessarily measure the rotation of the eye with respect to some frame of reference. This is usually tied to the measuring system. Thus, if the measuring system is head-mounted, as with EOG or a video-based system mounted to a helmet, then eye-in-head angles are measured. To deduce the line of sight in world coordinates, the head must be kept in a constant position or its movements must be tracked as well. In these cases, head direction is added to eye-in-head direction to determine gaze direction.
However, if the motion of the head is minor, the eye remains in constant position.
If the measuring system is table-mounted, as with scleral search coils or table-mounted camera (remote) systems, then gaze angles are measured directly in world coordinates. Typically, in these situations head movements are prohibited. For example, the head position is fixed using a bite bar or a forehead support. Then a head-centered reference frame is identical to a world-centered reference frame. Or colloquially, the eye-in-head position directly determines the gaze direction.
Some results are available on human eye movements under natural conditions where head movements are allowed as well. The relative position of eye and head, even with constant gaze direction, influences neuronal activity in higher visual areas.
Practice
A great deal of research has gone into studies of the mechanisms and dynamics of eye rotation, but the goal of eye tracking is most often to estimate gaze direction. Users may be interested in what features of an image draw the eye, for example. The eye tracker does not provide absolute gaze direction, but rather can measure only changes in gaze direction. To determine precisely what a subject is looking at, some calibration procedure is required in which the subject looks at a point or series of points, while the eye tracker records the value that corresponds to each gaze position. (Even those techniques that track features of the retina cannot provide exact gaze direction because there is no specific anatomical feature that marks the exact point where the visual axis meets the retina, if indeed there is such a single, stable point.) An accurate and reliable calibration is essential for obtaining valid and repeatable eye movement data, and this can be a significant challenge for non-verbal subjects or those who have unstable gaze.
Each method of eye-tracking has advantages and disadvantages, and the choice of an eye-tracking system depends on considerations of cost and application. There are offline methods and online procedures like AttentionTracking. There is a trade-off between cost and sensitivity, with the most sensitive systems costing many tens of thousands of dollars and requiring considerable expertise to operate properly. Advances in computer and video technology have led to the development of relatively low-cost systems that are useful for many applications and fairly easy to use. Interpretation of the results still requires some level of expertise, however, because a misaligned or poorly calibrated system can produce wildly erroneous data.
Eye-tracking while driving a car in a difficult situation
The eye movements of two groups of drivers have been filmed with a special head camera by a team of the Swiss Federal Institute of Technology: novice and experienced drivers had their eye movements recorded while approaching a bend of a narrow road.
The series of images has been condensed from the original film frames to show 2 eye fixations per image for better comprehension.
Each of these stills corresponds to approximately 0.5 seconds in real time.
The series of images shows an example of eye fixations #9 to #14 of a typical novice and of an experienced driver.
Comparison of the top images shows that the experienced driver checks the curve and even has Fixation No. 9 left to look aside while the novice driver needs to check the road and estimate his distance to the parked car.
In the middle images, the experienced driver is now fully concentrating on the location where an oncoming car could be seen. The novice driver concentrates his view on the parked car.
In the bottom image the novice is busy estimating the distance between the left wall and the parked car, while the experienced driver can use their peripheral vision for that and still concentrate vision on the dangerous point of the curve: If a car appears there, the driver has to give way, i.e. stop to the right instead of passing the parked car.
More recent studies have also used head-mounted eye tracking to measure eye movements during real-world driving conditions.
Eye-tracking of younger and elderly people while walking
While walking, elderly subjects depend more on foveal vision than do younger subjects. Their walking speed is decreased by a limited visual field, probably caused by a deteriorated peripheral vision.
Younger subjects make use of both their central and peripheral vision while walking. Their peripheral vision allows faster control over the process of walking.
Applications
A wide variety of disciplines use eye-tracking techniques, including cognitive science; psychology (notably psycholinguistics; the visual world paradigm); human-computer interaction (HCI); human factors and ergonomics; marketing research and medical research (neurological diagnosis). Specific applications include the tracking eye movement in language reading, music reading, human activity recognition, the perception of advertising, playing of sports, distraction detection and cognitive load estimation of drivers and pilots and as a means of operating computers by people with severe motor impairment. In the field of virtual reality, eye tracking is used in head mounted displays for a variety of purposes including to reduce processing load by only rendering the graphical area within the user's gaze.
Commercial applications
In recent years, the increased sophistication and accessibility of eye-tracking technologies have generated a great deal of interest in the commercial sector. Applications include web usability, advertising, sponsorship, package design and automotive engineering. In general, commercial eye-tracking studies function by presenting a target stimulus to a sample of consumers while an eye tracker records eye activity. Examples of target stimuli may include websites, television programs, sporting events, films and commercials, magazines and newspapers, packages, shelf displays, consumer systems (ATMs, checkout systems, kiosks) and software. The resulting data can be statistically analyzed and graphically rendered to provide evidence of specific visual patterns. By examining fixations, saccades, pupil dilation, blinks and a variety of other behaviors, researchers can determine a great deal about the effectiveness of a given medium or product. While some companies complete this type of research internally, there are many private companies that offer eye-tracking services and analysis.
One field of commercial eye-tracking research is web usability. While traditional usability techniques are often quite powerful in providing information on clicking and scrolling patterns, eye-tracking offers the ability to analyze user interaction between the clicks and how much time a user spends between clicks, thereby providing valuable insight into which features are the most eye-catching, which features cause confusion and which are ignored altogether. Specifically, eye-tracking can be used to assess search efficiency, branding, online advertisements, navigation usability, overall design and many other site components. Analyses may target a prototype or competitor site in addition to the main client site.
Eye-tracking is commonly used in a variety of different advertising media. Commercials, print ads, online ads and sponsored programs are all conducive to analysis with current eye-tracking technology. One example is the analysis of eye movements over advertisements in the Yellow Pages. One study focused on what particular features caused people to notice an ad, whether they viewed ads in a particular order and how viewing times varied. The study revealed that ad size, graphics, color, and copy all influence attention to advertisements. Knowing this allows researchers to assess in great detail how often a sample of consumers fixates on the target logo, product or ad. Hence an advertiser can quantify the success of a given campaign in terms of actual visual attention. Another example of this is a study that found that in a search engine results page, authorship snippets received more attention than the paid ads or even the first organic result.
Yet another example of commercial eye-tracking research comes from the field of recruitment. A study analyzed how recruiters screen LinkedIn profiles and presented results as heat maps.
Safety applications
Scientists in 2017 constructed a Deep Integrated Neural Network (DINN) out of a Deep Neural Network and a convolutional neural network. The goal was to use deep learning to examine images of drivers and determine their level of drowsiness by "classify[ing] eye states." With enough images, the proposed DINN could ideally determine when drivers blink, how often they blink, and for how long. From there, it could judge how tired a given driver appears to be, effectively conducting an eye-tracking exercise. The DINN was trained on data from over 2,400 subjects and correctly diagnosed their states 96%-99.5% of the time. Most other artificial intelligence models performed at rates above 90%. This technology could ideally provide another avenue for driver drowsiness detection.
Game theory applications
In a 2019 study, a Convolutional Neural Network (CNN) was constructed with the ability to identify individual chess pieces the same way other CNNs can identify facial features. It was then fed eye-tracking input data from 30 chess players of various skill levels. With this data, the CNN used gaze estimation to determine parts of the chess board to which a player was paying close attention. It then generated a saliency map to illustrate those parts of the board. Ultimately, the CNN would combine its knowledge of the board and pieces with its saliency map to predict the players' next move. Regardless of the training dataset the neural network system was trained upon, it predicted the next move more accurately than if it had selected any possible move at random, and the saliency maps drawn for any given player and situation were more than 54% similar.
Assistive technology
People with severe motor impairment can use eye tracking for interacting with computers as it is faster than single switch scanning techniques and intuitive to operate. Motor impairment caused by Cerebral Palsy or Amyotrophic lateral sclerosis often affects speech, and users with Severe Speech and Motor Impairment (SSMI) use a type of software known as Augmentative and Alternative Communication (AAC) aid, that displays icons, words and letters on screen and uses text-to-speech software to generate spoken output. In recent times, researchers also explored eye tracking to control robotic arms and powered wheelchairs. Eye tracking is also helpful in analysing visual search patterns, detecting presence of Nystagmus and detecting early signs of learning disability by analysing eye gaze movement during reading.
Aviation applications
Eye tracking has already been studied for flight safety by comparing scan paths and fixation duration to evaluate the progress of pilot trainees, for estimating pilots' skills, for analyzing crew's joint attention and shared situational awareness. Eye tracking technology was also explored to interact with helmet mounted display systems and multi-functional displays in military aircraft. Studies were conducted to investigate the utility of eye tracker for Head-up target locking and Head-up target acquisition in Helmet mounted display systems (HMDS). Pilots' feedback suggested that even though the technology is promising, its hardware and software components are yet to be matured. Research on interacting with multi-functional displays in simulator environment showed that eye tracking can improve the response times and perceived cognitive load significantly over existing systems. Further, research also investigated utilizing measurements of fixation and pupillary responses to estimate pilot's cognitive load. Estimating cognitive load can help to design next generation adaptive cockpits with improved flight safety. Eye tracking is also useful for detecting pilot fatigue.
Automotive applications
In recent times, eye tracking technology has been investigated in the automotive domain in both passive and active ways. The National Highway Traffic Safety Administration measured glance duration for undertaking secondary tasks while driving and used it to promote safety by discouraging the introduction of excessively distracting devices in vehicles. In addition to distraction detection, eye tracking is also used to interact with IVIS. Though initial research investigated the efficacy of eye tracking systems for interaction with HDDs (head-down displays), they still required drivers to take their eyes off the road while performing a secondary task. Recent studies investigated eye-gaze-controlled interaction with HUDs (head-up displays), which eliminates eyes-off-road distraction. Eye tracking is also used to monitor the cognitive load of drivers to detect potential distraction. Though researchers have explored different methods to estimate the cognitive load of drivers from different physiological parameters, the use of ocular parameters offers a new way to employ existing eye trackers to monitor drivers' cognitive load in addition to interaction with IVIS.
Entertainment applications
The 2021 video game Before Your Eyes registers and reads the player's blinking, and uses it as the main way of interacting with the game.
Engineering applications
The widespread availability of eye-tracking technology has, in recent years, shed light on its use in empirical software engineering. Researchers use eye-tracking technology and data analysis techniques to investigate the understandability of software engineering concepts. These include the understandability of business process models and of diagrams used in software engineering, such as UML activity diagrams and EER diagrams. Eye-tracking metrics such as fixation, scan-path, scan-path precision, scan-path recall, and fixations on an area of interest/relevant region are computed, analyzed and interpreted in terms of model and diagram understandability. The findings are used to enhance the understandability of diagrams and models with appropriate model-related solutions, and by improving person-related factors such as working-memory capacity, cognitive load, learning style and strategy of the software engineers and modelers.
Cartographic applications
Cartographic research has widely adopted eye tracking techniques. Researchers have used them to see how individuals perceive and interpret maps. For example, eye tracking has been used to study differences in perception of 2D and 3D visualization, comparison of map reading strategies between novices and experts or students and their geography teachers, and evaluation of the cartographic quality of maps. Besides, cartographers have employed eye tracking to investigate various factors affecting map reading, including attributes such as color or symbol density. Numerous studies about the usability of map applications took advantage of eye tracking, too.
The cartographic community's daily engagement with visual and spatial data positioned it to contribute significantly to eye tracking data visualization methods and tools. For example, cartographers have developed methods for integrating eye tracking data with GIS, utilizing GIS software for further visualization and analysis. The community has also delivered tools for visualizing eye tracking data or a toolbox for the identification of eye fixations based on a spatial component of eye-tracking data.
Privacy concerns
With eye tracking projected to become a common feature in various consumer electronics, including smartphones, laptops and virtual reality headsets, concerns have been raised about the technology's impact on consumer privacy. With the aid of machine learning techniques, eye tracking data may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health condition. If such inferences are drawn without a user's awareness or approval, this can be classified as an inference attack. Eye activities are not always under volitional control, e.g., "stimulus-driven glances, pupil dilation, ocular tremor, and spontaneous blinks mostly occur without conscious effort, similar to digestion and breathing”. Therefore, it can be difficult for eye tracking users to estimate or control the amount of information they reveal about themselves.
See also
Eye tracking on the ISS
Foveated imaging
Mouse-Tracking
Screen reading
Notes
References
Romano Bergstrom, Jennifer (2014). Eye Tracking in User Experience Design. Morgan Kaufmann. .
Bojko, Aga (2013). Eye Tracking The User Experience (A Practical Guide to Research). Rosenfeld Media. .
Commercial eye tracking
Attention
Cognitive science
Articles containing video clips
Human eye
History of human–computer interaction
Market research
Multimodal interaction
Promotion and marketing communications
Usability
Vision
Web design
Applications of computer vision
Virtual reality
Augmented reality
Metaverse
Spatial computing | Eye tracking | [
"Technology",
"Engineering"
] | 6,414 | [
"History of human–computer interaction",
"Design",
"Web design",
"History of computing"
] |
1,543,618 | https://en.wikipedia.org/wiki/Egosyntonicity | In psychoanalysis, egosyntonic behaviors, values, and feelings are in harmony with or acceptable to the needs and goals of the ego, or consistent with one's ideal self-image. Egodystonic (or ego alien) behaviors are the opposite, referring to thoughts and behaviors (dreams, compulsions, desires, etc.) that are conflicting or dissonant with the needs and goals of the ego, or further, in conflict with a person's ideal self-image.
Applicability
Abnormal psychology has studied egosyntonic and egodystonic concepts in some detail. Many personality disorders are egosyntonic, which makes their treatment difficult as the patients may not perceive anything wrong and view their perceptions and behavior as reasonable and appropriate. For example, a person with narcissistic personality disorder has an excessively positive self-regard and rejects suggestions that challenge this viewpoint. This corresponds to the general concept in psychiatry of poor insight. Anorexia nervosa, a difficult-to-treat disorder (formerly considered an Axis I disorder before the release of the DSM-5) characterized by a distorted body image and fear of gaining weight, is also considered egosyntonic because many of its sufferers deny that they have a problem. Problem gambling, however, is only sometimes seen as egosyntonic, depending partly on the reactions of the individual involved and whether they know that their gambling is problematic.
An illustration of the differences between an egodystonic and egosyntonic mental disorder is in comparing obsessive–compulsive disorder (OCD) and obsessive–compulsive personality disorder. OCD is considered to be egodystonic as the thoughts and compulsions experienced or expressed are not consistent with the individual's self-perception, meaning the thoughts are unwanted, distressing, and reflect the opposite of their values, desires, and self-construct. In contrast, obsessive–compulsive personality disorder is egosyntonic, as the patient generally perceives their obsession with orderliness, perfectionism, and control, as reasonable and even desirable.
Freudian heritage
The words "egosyntonic" and "egodystonic" originated as early-1920s translations of the German words "ichgerecht" and "nicht ichgerecht", "ichfremd", or "ichwidrig", which were introduced in 1914 by Freud in his book On Narcissism and remained an important part of his conceptual inventory. Freud applied these words to the relationship between a person's "instincts" and their "ego." Freud saw psychic conflict arising when "the original lagging instincts ... come into conflict with the ego (or ego-syntonic instincts)". According to him, "ego-dystonic" sexual instincts were bound to be "repressed." Anna Freud stated that psychological "defences" which were "ego-syntonic" were harder to expose than ego-dystonic impulses, because the former are 'familiar' and taken for granted. Later psychoanalytic writers emphasised how direct expression of the repressed was ego-dystonic, and indirect expression more ego-syntonic.
Otto Fenichel distinguished between morbid impulses, which he saw as ego-syntonic, and compulsive symptoms which struck their possessors as ego-alien. Heinz Hartmann, and after him ego psychology, also made central use of the twin concepts.
See also
References
Ego psychology
Narcissism
Personality disorders | Egosyntonicity | [
"Biology"
] | 734 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
1,543,682 | https://en.wikipedia.org/wiki/DC%20One%20Million | "DC One Million" is a comic book crossover storyline which ran through an eponymous weekly miniseries and through special issues of almost all of the "DCU" titles published by DC Comics in November 1998. It featured a vision of the DC Universe in the 853rd century (85,201–85,300 AD), chosen because that is the century in which DC will have published issue #1,000,000 of Action Comics if it maintains a regular monthly publishing schedule. The miniseries was written by Grant Morrison and drawn by Val Semeiks.
Set-up
The core of the event was a four-issue miniseries, in which the 20th-century Justice League of America and the 853rd-century Justice Legion Alpha cooperate to defeat a plot by the supervillain Vandal Savage (who, as an immortal, lives to the far flung century) and future Superman nemesis Solaris the Living Sun. Thirty-four other series then being published by DC also put out a single issue numbered #1,000,000, which either showed its characters' involvement in the central plot or gave a glimpse of what its characters' descendants/successors would be doing in the 853rd century. Hitman #1,000,000 was essentially a parody of the entire storyline. A trade paperback collection was subsequently published consisting of the four-issue mini-series and the tie-in issues that were necessary to follow the main plot. The series was then followed by a one-shot issue titled DC One Million 80-Page Giant #1,000,000 (1999), which was a collection of further adventures in the life of the future heroes.
Plot
In the 853rd century, the original Superman ("Superman-Prime One Million") still lives, but has spent over 15,000 years in exile within his Fortress of Solitude, located at the heart of the Sun, to keep it alive. During this time of absence, everyone he knew and loved died one by one. One of his descendants is "Kal Kent", the Superman of the 853rd century.
The galaxy in this far future is protected by the Justice Legions, which were inspired by the 20th-century Justice League and the 31st-century Legion of Super-Heroes, among others. Justice Legion Alpha, which protects the solar system, includes Kal Kent and future analogues of Wonder Woman, Hourman, Starman, Aquaman, the Flash and Batman. Advanced terraforming processes have made all the Solar System's planets habitable, with the ones most distant from the Sun being warmed by Solaris, a "star computer" which was once a villain, but was reprogrammed by one of Superman's descendants.
Superman-Prime announces that he will soon return to humanity and, to celebrate, Justice Legion Alpha travels to the late 20th century to meet Superman's original teammates in the JLA and bring them and Superman to the future to participate in games and displays of power as part of the celebration.
Meanwhile, in Russia, Vandal Savage single-handedly defeats the Titans (Arsenal, Tempest, Jesse Quick and Supergirl) when they attempt to stop him from purchasing nuclear-powered Rocket Red suits. He then launches four Rocket Red suits (with a Titan trapped inside each of the four) in a nuclear strike on Washington D.C., Metropolis, Brussels and Singapore.
One member of the Justice Legion Alpha (the future Starman) has been bribed into betraying his teammates by Solaris, which has returned to its old habits. Before the original heroes can be returned to their own time, the future Hourman android collapses and releases a virus programmed by Solaris to attack machines and humans.
The virus affects the guidance systems of the Rocket Red suits and causes one of them to instead detonate over Montevideo, killing over 1 million people. Tempest (the Titan inside) had escaped long before the suit exploded by using the ice that formed on the suit at high altitude, although he subsequently blacked out and fell into the sea. The virus also drives humans insane, causing an increase in anger and paranoia worldwide. Believing that this was deliberately planned by the JLA to stop him, Savage launches an all-out war on superhumans using "blitz engines" he had created and hidden while allied with Adolf Hitler during World War II. The paranoia caused by the virus also leads the Justice Legion Alpha and the contemporary heroes to attack each other, although the Justice Legion Alpha manage to coordinate themselves enough to stop the other Rocket Red suits from hitting their targets.
The remnants of the JLA that stayed in the present and the Justice Legion Alpha overcome their paranoia when the future Superman and Steel realize the significance of the symbol they both wear; as the Huntress had pointed out to Steel earlier, wearing the 'S' means that he has to make the hard choices. The two JLAs are eventually able to stop the virus when it is discovered that it is a complex computer program looking for appropriate hardware. To provide this hardware, the heroes are forced to build the body of Solaris (including in it a DNA sample of Superman's wife Lois Lane) and the virus flees from Earth to this body, bringing Solaris to life. In a final act of repentance, the future Starman sacrifices himself to banish Solaris from the Solar System. The future Superman forces himself through time using confiscated time travel technology he finds in the Watchtower, almost dying in the process due to the drain on his powers.
Meanwhile, in the 853rd century, the original JLA are fighting an alliance between Solaris and Vandal Savage. Savage has found a sample of kryptonite on Mars (where it was left by the future Starman back in the 20th century), which he gives to Solaris. Savage has also hired Walker Gabriel to steal the time travel gauntlets of the 853rd century Flash (John Fox) to ensure the Justice Legion Alpha remains trapped in the past, but ultimately double-crosses Gabriel.
Solaris, in a final attack, slaughters thousands of superhumans so that it can fire the kryptonite into the sun and kill Superman-Prime before he emerges. The JLA's Green Lantern — a hero who uses a power that Solaris has never encountered before — causes Solaris to go supernova and he and the 853rd century Superman contain the resulting blast — but not before the kryptonite is released.
The future Vandal Savage teleports from Mars to Earth using the stolen Time-Gauntlets. It turns out, however, that Walker Gabriel and Mitch Shelley, the Resurrection Man (an immortal who had become Savage's greatest foe through the millennia), had sabotaged the Gauntlets so that Savage, instead of travelling only in space, also travels through time, arriving in Montevideo moments before the nuclear blast he caused centuries earlier, finally bringing his life to an end.
It is then revealed that a secret conspiracy — forewarned by the trouble in the 20th century, mainly in that the Huntress, inspired by the time capsules which students in her class were currently making, realized they had centuries to foil the plot — has spent the intervening centuries coming up with a foolproof plan for stopping Solaris. Their actions included replacing the hidden kryptonite with a disguised Green Lantern power ring, with which the original Superman emerges from the Sun and finishes off Solaris.
In the aftermath, the original Superman and the future Hourman use the DNA sample to recreate Lois Lane, complete with superpowers. Superman then also recreates Krypton, along with all its deceased inhabitants, in Earth's Solar system, and lives happily ever after with Lois.
Later, in the miniseries The Kingdom, it is established that this timeline is merely one of many possibilities and thus not definite due to the mutable effects of Hypertime.
Crossovers
Alongside the main DC One Million miniseries and the accompanying 80-Page Giant issue, the following ongoing DC Comics books also partook in the event:
Action Comics
Adventures of Superman
Aquaman
Azrael
Batman
Batman: Shadow of the Bat
Booster Gold
Catwoman
Chase
Chronos
Creeper
Detective Comics
Flash
Green Arrow
Green Lantern
Hitman
Hourman
Impulse
JLA
Legion of Super-Heroes (vol. 4)
Legionnaires
Lobo
Martian Manhunter
Nightwing
The Power of Shazam (vol. 2)
Resurrection Man
Robin
Starman (vol. 2)
Superboy
Supergirl
Superman (vol. 2)
Superman: The Man of Steel
Superman: The Man of Tomorrow
Wonder Woman
Young Heroes in Love
Young Justice
The Justice Legions
There are 24 Justice Legions, each based on 20th- and 30th-century superhero teams. Those featured include:
Justice Legion A is based on the Justice League.
Justice Legion B is based on the Titans. Members include Nightwing (a bat-like humanoid), Aqualad (a humanoid made from water), Troy (a younger version of the 853rd century Wonder Woman), Arsenal (a robot) and Joto (killed in a teleporter accident).
Justice Legion L is based on the Legion of Super-Heroes and protects an artificially created planetary system (all that remains of the United Planets). Members include Cosmicbot (a cyborg based on magnetism, modelled on Cosmic Boy), Titangirl (the combined psychic energy of all Titanians, based on Saturn Girl), Implicate Girl (who contains the abilities of all three trillion Carggites in her "third eye", loosely based on Triplicate Girl), Brainiac 417 (a disembodied intelligence, based on Brainiac 5 and Apparition), the M'onelves (who combine the powers of M'onel and Shrinking Violet) and humanoid versions of Umbra and Chameleon.
Justice Legion S consists of numerous Superboy clones, all with different powers. Members include Superboy 820 (with aquatic powers), Superboy 3541 (who can increase his size) and Superboy One Million (who can channel any of their powers through "the Eye"). They all (most notably One Million) resemble OMAC as much as Superboy. This was an intentional pun, as the title of the story was "One Million And Counting", which referred to the 1 million clones and formed the OMAC acronym.
Justice Legion T is based on Young Justice. Members include Superboy One Million (as referred to above), Robin the Toy Wonder (an optimistic robot sidekick to the 853rd century Batman) and Impulse (the living embodiment of random thoughts lost in the Speed Force).
Justice Legion Z (for Zoomorphs) is based on the Legion of Super-Pets. Members include Proty One Million and Master Mind. A version of Comet is also a member.
Other characters
Several other futuristic versions of DC characters appeared in the crossover, including:
Atom
Azrael
Booster Gold
Captain Marvel
Catwoman
Charade City
Gunfire
Lex Luthor
Supergirl
Later references
In 2008, 10 years after the crossover, an issue of Booster Gold (vol. 2) was published as Booster Gold #1,000,000 and was announced as an official DC One Million tie-in by DC Comics. This comic introduced Peter Platinum, the Booster Gold of the 853rd century.
Grant Morrison's All-Star Superman miniseries made several references to the DC One Million miniseries. The Superman from DC One Million makes an appearance and the series ends with Superman becoming an energy being who resides in the Sun after his body has been supercharged with yellow solar energy (similar in appearance to Superman-Prime) and Solaris makes an appearance as well.
Morrison's Batman #700 also briefly shows the One Million Batman and his sidekick—Robin, the Toy Wonder—alongside a number of future iterations of Batman.
The One Million Batman, Robin the Toy Wonder and One Million Superman play a significant role in Superman/Batman #79–80, in which Epoch battles Batmen and Supermen from various time periods.
By signing into a WBID account in the video game Batman: Arkham Origins, the player can unlock the costume of the One Million version of Batman for use.
Awards
The original miniseries was a top vote-getter for the Comics Buyer's Guide Fan Award for Favorite Limited Series for 1999. The storyline was a top vote-getter for the Comics Buyer's Guide Award for Favorite Story for 1999.
Collected editions
DC One Million, later reprinted with the title JLA: One Million (208 pages, DC Comics, June 1999; Titan Books, June 1999; DC Comics, June 2004) collects:
DC One Million (by Grant Morrison, with pencils by Val Semeiks and inks by Prentis Rollins/Jeff Albrecht/Del Barras, four-issue miniseries)
Green Lantern #1,000,000 (by Ron Marz, with pencils by Bryan Hitch and inks by Andy Lanning/Paul Neary)
Resurrection Man #1,000,000 (by Dan Abnett/Andy Lanning, with art by Jackson Guice)
Starman #1,000,000 (by James Robinson, with pencils by Peter Snejbjerg and inks by Wade Von Grawbadger)
JLA #1,000,000 (by Grant Morrison, with pencils by Howard Porter and inks by John Dell)
Superman: The Man of Tomorrow #1,000,000 (by Mark Schultz, with pencils by Georges Jeanty and inks by Dennis Janke/Denis Rodier)
Detective Comics #1,000,000 (by Chuck Dixon, with pencils by Greg Land and inks by Drew Geraci)
DC One Million Omnibus (1,080 pages, DC Comics, October 2013) collects:
DC One Million #1–4, plus the #1,000,000 issues of Action Comics, Adventures Of Superman, Aquaman, Azrael, Batman, Batman: Shadow Of The Bat, Catwoman, Chase, Chronos, The Creeper, Detective Comics, The Flash, Green Arrow, Green Lantern, Hitman, Impulse, JLA, Legion of Super-Heroes, Legionnaires, Lobo, Martian Manhunter, Nightwing, Power Of Shazam, Resurrection Man, Robin, Starman, Superboy, Supergirl, Superman (vol. 2), Superman: The Man of Steel, Superman: The Man of Tomorrow, Wonder Woman and Young Justice; as well as Booster Gold #1,000,000, DC One Million 80-Page Giant #1 and Superman/Batman #79–80 (the Omnibus did not include the #1,000,000 issue of Young Heroes in Love, as it was a creator-owned series).
References
External links
Comics Buyer's Guide Fan Awards
Sequart on DC One Million
DC Comics dimensions
DC Comics planets
Comics about time travel
Comics by Grant Morrison
Fiction set in the 7th millennium or beyond
Works set in the future
Fiction about malware
Fiction about nanotechnology
Comics about artificial intelligence | DC One Million | [
"Materials_science"
] | 3,058 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
1,543,684 | https://en.wikipedia.org/wiki/Free%20Software%20Magazine | Free Software Magazine (also known as FSM and originally titled The Open Voice) is a Web site that produces a (generally bi-monthly) mostly free-content online magazine about free software.
It was started in November 2004 by Australian Tony Mobily, the former editor of TUX Magazine, under the auspices of The Open Company Partners, Inc. (based in the United States), and carried the subtitle The free magazine for the free software world.
History
FSM was originally conceived by its creator as a magazine to be sold in both print and electronic formats, with a higher signal-to-noise ratio than mass-produced print Linux magazines. Under this model, the articles were freely licensed six weeks after the print edition's publication. As O'Reilly Media's onLAMP.com noted, "several excellent magazines cover Linux, but they’re directed at particular subsets of Linux users and don’t have the broad mandate of Free Software Magazine."
However, the high costs of printing and postage resulted in the magazine moving to exclusively electronic publication via PDF.
PDF version history
Initially a print-ready, hand-crafted PDF version was available for download. With Issue 16 (February 2007), this was withdrawn, with the publishers citing time and money constraints. As a result, the magazine is no longer available in print copy. This move sparked a harsh response from some members of the community. In response, from March 2008, PDF and printer-friendly versions of articles and PDF versions of entire issues were made available to all logged-in users. These PDF files are created automatically using TOXIC and omit the styling and presentation of the print-ready ones.
Content
FSM devotes most of its content to Linux, the GNU Project and free software in general, including articles about software freedom and how it can be protected. The issues had three main sections:
Power-up Non-technical articles about various subjects (interviews, opinions, book reviews, etc.)
User space Articles aimed at end users.
Hacker's code Technical articles about what can be achieved with free software.
Most of the articles are released under a free license (generally a Creative Commons License or GNU Free Documentation License). Some articles are released under a verbatim-copying-only license.
In keeping with the move to more on-line content, FSM moved to blog-style columns where regular authors write on more political, philosophical and ethical aspects of the free software world, and discuss free software advocacy and community in addition to tutorials and reviews of free software. There is also a community posts section which allows registered users to post similar blog-style pieces. The site also features a regular webcomic "the Bizarre Cathedral".
Free Software Daily
Free Software Daily (FS Daily) was a website originally created by the staff of FSM that posted summaries of articles about free software. At first, it was based on Slash and was similar in nature to Slashdot.org. However, the project died before it could gain momentum, mainly because of the huge hardware resources required by Slash and the time constraints of the FSM staff.
The FSM website's blogs somewhat filled the gap that Free Software Daily originally planned to fill. But later, FS Daily came back, first as a Pligg based site, and then as a Drigg site. Drigg was developed by Free Software Magazine's editor Tony Mobily specifically for FSDaily. However, Drigg is now available as a standard Drupal module.
Although Free Software Magazine and Free Software Daily share similar motives and a common root, they are no longer directly connected.
Free Software Magazine Press
In 2009 Free Software Magazine Press published their first book under the imprint of Free Software Magazine Press. The book, Achieving Impossible Things with Free Culture and Commons-Based Enterprise by Terry Hancock, was published both as a printed book and as a series of free articles released under an "Attribution Share-Alike" Creative Commons license.
See also
Linux Journal
Linux Weekly News
Linux Gazette
References
External links
Bimonthly magazines published in the United States
Computer magazines published in the United States
Downloadable magazines
Drupal
Free magazines
Free software websites
Linux magazines
Magazines established in 2004
Online computer magazines | Free Software Magazine | [
"Technology"
] | 861 | [
"Computing websites",
"Free software websites"
] |
1,543,711 | https://en.wikipedia.org/wiki/Compactification%20%28physics%29 | In theoretical physics, compactification means changing a theory with respect to one of its space-time dimensions. Instead of having a theory with this dimension being infinite, one changes the theory so that this dimension has a finite length, and may also be periodic.
Compactification plays an important part in thermal field theory where one compactifies time, in string theory where one compactifies the extra dimensions of the theory, and in two- or one-dimensional solid state physics, where one considers a system which is limited in one of the three usual spatial dimensions.
At the limit where the size of the compact dimension goes to zero, no fields depend on this extra dimension, and the theory is dimensionally reduced.
In string theory
In string theory, compactification is a generalization of Kaluza–Klein theory. It tries to reconcile the gap between the conception of our universe based on its four observable dimensions with the ten, eleven, or twenty-six dimensions which theoretical equations lead us to suppose the universe is made with.
For this purpose it is assumed the extra dimensions are "wrapped" up on themselves, or "curled" up on Calabi–Yau spaces, or on orbifolds. Models in which the compact directions support fluxes are known as flux compactifications. The coupling constant of string theory, which determines the probability of strings splitting and reconnecting, can be described by a field called a dilaton. This in turn can be described as the size of an extra (eleventh) dimension which is compact. In this way, the ten-dimensional type IIA string theory can be described as the compactification of M-theory in eleven dimensions. Furthermore, different versions of string theory are related by different compactifications in a procedure known as T-duality.
The formulation of more precise versions of the meaning of compactification in this context has been promoted by discoveries such as the mysterious duality.
Flux compactification
A flux compactification is a particular way to deal with additional dimensions required by string theory.
It assumes that the shape of the internal manifold is a Calabi–Yau manifold or generalized Calabi–Yau manifold which is equipped with non-zero values of fluxes, i.e. differential forms, that generalize the concept of an electromagnetic field (see p-form electrodynamics).
The hypothetical concept of the anthropic landscape in string theory follows from a large number of possibilities in which the integers that characterize the fluxes can be chosen without violating rules of string theory. The flux compactifications can be described as F-theory vacua or type IIB string theory vacua with or without D-branes.
See also
Dimensional reduction
References
Further reading
Chapter 16 of Michael Green, John H. Schwarz and Edward Witten (1987). Superstring theory. Cambridge University Press. Vol. 2: Loop amplitudes, anomalies and phenomenology.
Brian R. Greene, "String Theory on Calabi–Yau Manifolds".
Mariana Graña, "Flux compactifications in string theory: A comprehensive review", Physics Reports 423, 91–158 (2006).
Michael R. Douglas and Shamit Kachru, "Flux compactification", Rev. Mod. Phys. 79, 733 (2007).
Ralph Blumenhagen, Boris Körs, Dieter Lüst, Stephan Stieberger, "Four-dimensional string compactifications with D-branes, orientifolds and fluxes", Physics Reports 445, 1–193 (2007). .
String theory | Compactification (physics) | [
"Astronomy"
] | 737 | [
"String theory",
"Astronomical hypotheses"
] |
1,543,735 | https://en.wikipedia.org/wiki/Matrix%20norm | In the field of mathematics, norms are defined for elements within a vector space. Specifically, when the vector space comprises matrices, such norms are referred to as matrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication.
Preliminaries
Given a field $K$ of either real or complex numbers (or any complete subset thereof), let $K^{m \times n}$ be the $K$-vector space of matrices with $m$ rows and $n$ columns and entries in the field $K$. A matrix norm is a norm on $K^{m \times n}$.
Norms are often expressed with double vertical bars (like so: $\|A\|$). Thus, the matrix norm is a function $\|\cdot\| \colon K^{m \times n} \to \mathbb{R}$ that must satisfy the following properties:
For all scalars $\alpha \in K$ and matrices $A, B \in K^{m \times n}$,
$\|A\| \ge 0$ (positive-valued)
$\|A\| = 0 \iff A = 0$ (definite)
$\|\alpha A\| = |\alpha| \, \|A\|$ (absolutely homogeneous)
$\|A + B\| \le \|A\| + \|B\|$ (sub-additive or satisfying the triangle inequality)
The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative: $\|AB\| \le \|A\| \, \|B\|$ whenever the product $AB$ is defined.
Every norm on $K^{n \times n}$ can be rescaled to be sub-multiplicative; in some books, the terminology matrix norm is reserved for sub-multiplicative norms.
Matrix norms induced by vector norms
Suppose a vector norm $\|\cdot\|_\alpha$ on $K^n$ and a vector norm $\|\cdot\|_\beta$ on $K^m$ are given. Any matrix $A \in K^{m \times n}$ induces a linear operator from $K^n$ to $K^m$ with respect to the standard basis, and one defines the corresponding induced norm or operator norm or subordinate norm on the space $K^{m \times n}$ of all matrices as follows:
$\|A\|_{\alpha,\beta} = \sup \{ \|Ax\|_\beta : x \in K^n,\ \|x\|_\alpha = 1 \} = \sup \left\{ \frac{\|Ax\|_\beta}{\|x\|_\alpha} : x \in K^n,\ x \neq 0 \right\},$
where $\sup$ denotes the supremum. This norm measures how much the mapping induced by $A$ can stretch vectors.
Depending on the vector norms $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$ used, notation other than $\|\cdot\|_{\alpha,\beta}$ can be used for the operator norm.
Matrix norms induced by vector p-norms
If the p-norm for vectors ($1 \le p \le \infty$) is used for both spaces $K^n$ and $K^m$, then the corresponding operator norm is:
$\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.$
These induced norms are different from the "entry-wise" p-norms and the Schatten p-norms for matrices treated below, which are also usually denoted by $\|A\|_p$.
Geometrically speaking, one can imagine a p-norm unit ball in $K^n$, then apply the linear map $A$ to the ball. It would end up becoming a distorted convex shape, and $\|A\|_p$ measures the longest "radius" of the distorted convex shape. In other words, we must take a p-norm unit ball in $K^m$, then multiply it by at least $\|A\|_p$, in order for it to be large enough to contain the image of the unit ball under $A$.
p = 1 or ∞
When $p = 1$ or $p = \infty$, we have simple formulas.
$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|,$ which is simply the maximum absolute column sum of the matrix.
$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|,$ which is simply the maximum absolute row sum of the matrix.
For example, for
we have that
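As a quick numerical illustration of these two formulas (the matrix below is an arbitrary example chosen here, and NumPy is assumed to be available), the column-sum and row-sum computations can be checked against NumPy's built-in induced norms:

```python
import numpy as np

# Arbitrary example matrix, used only for illustration.
A = np.array([[-3.0, 5.0, 7.0],
              [ 2.0, 6.0, 4.0],
              [ 0.0, 2.0, 8.0]])

norm_1 = np.abs(A).sum(axis=0).max()    # maximum absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()  # maximum absolute row sum

# Cross-check against NumPy's induced matrix norms.
assert np.isclose(norm_1, np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
print(norm_1, norm_inf)  # 19.0 15.0
```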
Spectral norm (p = 2)
When $p = 2$ (the Euclidean norm or $\ell_2$-norm for vectors), the induced matrix norm is the spectral norm. The two values do not coincide in infinite dimensions — see Spectral radius for further discussion. The spectral radius should not be confused with the spectral norm. The spectral norm of a matrix $A$ is the largest singular value of $A$, i.e., the square root of the largest eigenvalue of the matrix $A^*A$, where $A^*$ denotes the conjugate transpose of $A$:
$\|A\|_2 = \sqrt{\lambda_{\max}(A^*A)} = \sigma_{\max}(A),$
where $\sigma_{\max}(A)$ represents the largest singular value of matrix $A$.
There are further properties:
$\|A\|_2 = \sup \{ |y^* A x| : \|x\|_2 = \|y\|_2 = 1 \}.$ Proved by the Cauchy–Schwarz inequality.
$\|A^* A\|_2 = \|A A^*\|_2 = \|A\|_2^2.$ Proven by singular value decomposition (SVD) on $A$.
$\|A\|_2 \le \|A\|_F$, where $\|A\|_F$ is the Frobenius norm. Equality holds if and only if the matrix $A$ is a rank-one matrix or a zero matrix.
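A brief sketch (random illustrative matrix assumed) confirming that the spectral norm coincides with the largest singular value and with the square root of the largest eigenvalue of $A^*A$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # illustrative real matrix

sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
lambda_max = np.linalg.eigvalsh(A.T @ A)[-1]       # largest eigenvalue of A*A

assert np.isclose(np.linalg.norm(A, 2), sigma_max)
assert np.isclose(sigma_max, np.sqrt(lambda_max))
```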
Matrix norms induced by vector α- and β-norms
We can generalize the above definition. Suppose we have vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ for spaces $K^n$ and $K^m$ respectively; the corresponding operator norm is
$\|A\|_{\alpha,\beta} = \sup_{x \neq 0} \frac{\|Ax\|_\beta}{\|x\|_\alpha}.$
In particular, the $\|A\|_p$ defined previously is the special case of $\alpha = \beta = p$.
In the special cases of $\alpha = 2$ and $\beta = \infty$, the induced matrix norms can be computed by $\|A\|_{2,\infty} = \max_{1 \le i \le m} \|A_{i:}\|_2,$ where $A_{i:}$ is the i-th row of matrix $A$.
In the special cases of $\alpha = 1$ and $\beta = 2$, the induced matrix norms can be computed by $\|A\|_{1,2} = \max_{1 \le j \le n} \|A_{:j}\|_2,$ where $A_{:j}$ is the j-th column of matrix $A$.
Hence, $\|A\|_{2,\infty}$ and $\|A\|_{1,2}$ are the maximum row and column 2-norms of the matrix, respectively.
Properties
Any operator norm is consistent with the vector norms that induce it, giving
$\|Ax\|_\beta \le \|A\|_{\alpha,\beta} \, \|x\|_\alpha.$
Suppose $\|\cdot\|_{\alpha,\beta}$; $\|\cdot\|_{\beta,\gamma}$; and $\|\cdot\|_{\alpha,\gamma}$ are operator norms induced by the respective pairs of vector norms $(\|\cdot\|_\alpha, \|\cdot\|_\beta)$; $(\|\cdot\|_\beta, \|\cdot\|_\gamma)$; and $(\|\cdot\|_\alpha, \|\cdot\|_\gamma)$. Then,
$\|BA\|_{\alpha,\gamma} \le \|B\|_{\beta,\gamma} \, \|A\|_{\alpha,\beta};$
this follows from
$\|BAx\|_\gamma \le \|B\|_{\beta,\gamma} \, \|Ax\|_\beta \le \|B\|_{\beta,\gamma} \, \|A\|_{\alpha,\beta} \, \|x\|_\alpha$
and taking the supremum over all $x$ with $\|x\|_\alpha = 1$.
Square matrices
Suppose $\|\cdot\|_{\alpha,\alpha}$ is an operator norm on the space of square matrices $K^{n \times n}$
induced by the vector norm $\|\cdot\|_\alpha$ on both the domain and the codomain.
Then, the operator norm is a sub-multiplicative matrix norm:
$\|AB\|_{\alpha,\alpha} \le \|A\|_{\alpha,\alpha} \, \|B\|_{\alpha,\alpha}.$
Moreover, any such norm satisfies the inequality
$\left( \|A^r\|_{\alpha,\alpha} \right)^{1/r} \ge \rho(A) \qquad (1)$
for all positive integers r, where $\rho(A)$ is the spectral radius of $A$. For symmetric or hermitian $A$, we have equality in (1) for the 2-norm, since in this case the 2-norm is precisely the spectral radius of $A$. For an arbitrary matrix, we may not have equality for any norm; a counterexample would be
$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$
which has vanishing spectral radius. In any case, for any matrix norm, we have the spectral radius formula:
$\lim_{r \to \infty} \|A^r\|^{1/r} = \rho(A).$
Energy norms
If the vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ are given in terms of energy norms based on symmetric positive definite matrices $P$ and $Q$ respectively, i.e. $\|x\|_\alpha = \sqrt{x^* P x}$ and $\|y\|_\beta = \sqrt{y^* Q y}$, the resulting operator norm is given as
$\|A\|_{\alpha,\beta} = \sup_{x \neq 0} \frac{\sqrt{(Ax)^* Q \, (Ax)}}{\sqrt{x^* P \, x}}.$
Using the symmetric matrix square roots of $P$ and $Q$ respectively, the operator norm can be expressed as the spectral norm of a modified matrix:
$\|A\|_{\alpha,\beta} = \| Q^{1/2} A P^{-1/2} \|_2.$
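A small numerical sketch of this identity, with randomly generated positive definite matrices standing in for the energy norms (all values illustrative; the brute-force maximum only approaches the true operator norm from below):

```python
import numpy as np

def spd_sqrt(M):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((m, n))                               # illustrative matrix
P = rng.standard_normal((n, n)); P = P @ P.T + n * np.eye(n)  # SPD matrix for the domain norm
Q = rng.standard_normal((m, m)); Q = Q @ Q.T + m * np.eye(m)  # SPD matrix for the codomain norm

# Closed form: spectral norm of the modified matrix Q^{1/2} A P^{-1/2}.
closed_form = np.linalg.norm(spd_sqrt(Q) @ A @ np.linalg.inv(spd_sqrt(P)), 2)

# Brute force: maximise ||Ax||_Q / ||x||_P over many random directions.
X = rng.standard_normal((n, 50_000))
num = np.sqrt(np.sum((A @ X) * (Q @ (A @ X)), axis=0))
den = np.sqrt(np.sum(X * (P @ X), axis=0))
print(closed_form, (num / den).max())
```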
Consistent and compatible norms
A matrix norm $\|\cdot\|$ on $K^{m \times n}$ is called consistent with a vector norm $\|\cdot\|_\alpha$ on $K^n$ and a vector norm $\|\cdot\|_\beta$ on $K^m$, if:
$\|Ax\|_\beta \le \|A\| \, \|x\|_\alpha$
for all $A \in K^{m \times n}$ and all $x \in K^n$. In the special case of $m = n$ and $\alpha = \beta$, $\|\cdot\|$ is also called compatible with $\|\cdot\|_\alpha$.
All induced norms are consistent by definition. Also, any sub-multiplicative matrix norm on $K^{n \times n}$ induces a compatible vector norm on $K^n$, for example by defining $\|v\| := \| [\, v \ v \ \cdots \ v \,] \|$, the norm of the matrix whose columns all equal $v$.
"Entry-wise" matrix norms
These norms treat an $m \times n$ matrix as a vector of size $m \cdot n$, and use one of the familiar vector norms. For example, using the p-norm for vectors, $p \ge 1$, we get:
$\|A\|_{p,p} = \|\operatorname{vec}(A)\|_p = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^p \right)^{1/p}.$
This is a different norm from the induced p-norm (see above) and the Schatten p-norm (see below), but the notation is the same.
The special case p = 2 is the Frobenius norm, and p = ∞ yields the maximum norm.
L2,1 and Lp,q norms
Let $(a_1, a_2, \ldots, a_n)$ be the $m$-dimensional columns of matrix $A$. From the original definition, the matrix presents $n$ data points in an $m$-dimensional space. The $L_{2,1}$ norm is the sum of the Euclidean norms of the columns of the matrix:
$\|A\|_{2,1} = \sum_{j=1}^{n} \|a_j\|_2 = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^2 \right)^{1/2}.$
The $L_{2,1}$ norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used in robust data analysis and sparse coding.
For $p, q \ge 1$, the $L_{2,1}$ norm can be generalized to the $L_{p,q}$ norm as follows:
$\|A\|_{p,q} = \left( \sum_{j=1}^{n} \left( \sum_{i=1}^{m} |a_{ij}|^p \right)^{q/p} \right)^{1/q}.$
Frobenius norm
When $p = q = 2$ for the $L_{p,q}$ norm, it is called the Frobenius norm or the Hilbert–Schmidt norm, though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional) Hilbert space. This norm can be defined in various ways:
$\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 } = \sqrt{ \operatorname{trace}(A^* A) } = \sqrt{ \sum_{i=1}^{\min\{m,n\}} \sigma_i^2(A) },$
where the trace is the sum of diagonal entries, and $\sigma_i(A)$ are the singular values of $A$. The second equality is proven by explicit computation of $\operatorname{trace}(A^* A)$. The third equality is proven by singular value decomposition of $A$, and the fact that the trace is invariant under circular shifts.
The Frobenius norm is an extension of the Euclidean norm to and comes from the Frobenius inner product on the space of all matrices.
The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra. The sub-multiplicativity of Frobenius norm can be proved using Cauchy–Schwarz inequality.
Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant under rotations (and unitary operations in general). That is, $\|AU\|_F = \|UA\|_F = \|A\|_F$ for any unitary matrix $U$. This property follows from the cyclic nature of the trace ($\operatorname{trace}(XYZ) = \operatorname{trace}(ZXY)$):
$\|AU\|_F^2 = \operatorname{trace}\left( (AU)^* AU \right) = \operatorname{trace}\left( U^* A^* A U \right) = \operatorname{trace}\left( A^* A U U^* \right) = \operatorname{trace}(A^* A) = \|A\|_F^2,$
and analogously:
$\|UA\|_F^2 = \operatorname{trace}\left( (UA)^* UA \right) = \operatorname{trace}\left( A^* U^* U A \right) = \operatorname{trace}(A^* A) = \|A\|_F^2,$
where we have used the unitary nature of $U$ (that is, $U^* U = U U^* = I$).
It also satisfies
$\|A\|_F^2 = \langle A, A \rangle_F$
and
$\|A + B\|_F^2 = \|A\|_F^2 + \|B\|_F^2 + 2 \operatorname{Re}\left( \langle A, B \rangle_F \right),$
where $\langle A, B \rangle_F$ is the Frobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices).
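A short sketch (illustrative random matrices assumed) checking the equivalent expressions for the Frobenius norm and its invariance under orthogonal transformations:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

entrywise = np.sqrt((A**2).sum())                                     # sqrt of the sum of squared entries
via_trace = np.sqrt(np.trace(A.T @ A))                                # sqrt(trace(A* A))
via_sigma = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())  # sqrt of the sum of squared singular values
assert np.allclose([entrywise, via_trace, via_sigma], np.linalg.norm(A, 'fro'))

# Invariance under unitary (here orthogonal) transformations: ||UA||_F = ||A||_F.
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
assert np.isclose(np.linalg.norm(U @ A, 'fro'), np.linalg.norm(A, 'fro'))
```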
Max norm
The max norm is the elementwise norm in the limit as $p = q$ goes to infinity:
$\|A\|_{\max} = \max_{i,j} |a_{ij}|.$
This norm is not sub-multiplicative; but modifying the right-hand side to $\sqrt{mn} \, \max_{i,j} |a_{ij}|$ makes it so.
Note that in some literature (such as Communication complexity), an alternative definition of max-norm, also called the -norm, refers to the factorization norm:
Schatten norms
The Schatten p-norms arise when applying the p-norm to the vector of singular values of a matrix. If the singular values of the matrix $A$ are denoted by σi, then the Schatten p-norm is defined by
$\|A\|_p = \left( \sum_{i=1}^{\min\{m,n\}} \sigma_i^p(A) \right)^{1/p}.$
These norms again share the notation with the induced and entry-wise p-norms, but they are different.
All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that for all matrices and all unitary matrices and .
The most familiar cases are p = 1, 2, ∞. The case p = 2 yields the Frobenius norm, introduced before. The case p = ∞ yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above). Finally, p = 1 yields the nuclear norm (also known as the trace norm, or the Ky Fan 'n'-norm), defined as:
$\|A\|_* = \operatorname{trace}\left( \sqrt{A^* A} \right) = \sum_{i=1}^{\min\{m,n\}} \sigma_i(A),$
where $\sqrt{A^* A}$ denotes a positive semidefinite matrix $B$ such that $B B = A^* A$. More precisely, since $A^* A$ is a positive semidefinite matrix, its square root is well defined. The nuclear norm is a convex envelope of the rank function $\operatorname{rank}(A)$, so it is often used in mathematical optimization to search for low-rank matrices.
Combining von Neumann's trace inequality with Hölder's inequality for Euclidean space yields a version of Hölder's inequality for Schatten norms for $1/p + 1/q = 1$:
$\left| \operatorname{trace}(A^* B) \right| \le \|A\|_p \, \|B\|_q.$
In particular, this implies the Schatten norm inequality
$\|A\|_F^2 \le \|A\|_p \, \|A\|_q.$
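Since the Schatten norms are just vector p-norms of the singular values, they are straightforward to evaluate; a minimal sketch (illustrative matrix assumed):

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: the vector p-norm applied to the singular values of A."""
    sigma = np.linalg.svd(A, compute_uv=False)
    return np.linalg.norm(sigma, p)

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

nuclear = schatten_norm(A, 1)         # trace norm: sum of singular values
frobenius = schatten_norm(A, 2)       # coincides with the Frobenius norm
spectral = schatten_norm(A, np.inf)   # coincides with the spectral norm

assert np.isclose(frobenius, np.linalg.norm(A, 'fro'))
assert np.isclose(spectral, np.linalg.norm(A, 2))
assert np.isclose(nuclear, np.linalg.norm(A, 'nuc'))
```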
Monotone norms
A matrix norm $\|\cdot\|$ is called monotone if it is monotonic with respect to the Loewner order. Thus, a matrix norm is increasing if
$0 \preceq A \preceq B \implies \|A\| \le \|B\|.$
The Frobenius norm and spectral norm are examples of monotone norms.
Cut norms
Another source of inspiration for matrix norms arises from considering a matrix as the adjacency matrix of a weighted, directed graph. The so-called "cut norm" measures how close the associated graph is to being bipartite:
where . Equivalent definitions (up to a constant factor) impose the conditions ; ; or .
The cut-norm is equivalent to the induced operator norm , which is itself equivalent to another norm, called the Grothendieck norm.
To define the Grothendieck norm, first note that a linear operator is just a scalar, and thus extends to a linear operator on any . Moreover, given any choice of basis for and , any linear operator extends to a linear operator , by letting each matrix element on elements of via scalar multiplication. The Grothendieck norm is the norm of that extended operator; in symbols:
The Grothendieck norm depends on choice of basis (usually taken to be the standard basis) and .
Equivalence of norms
For any two matrix norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$, we have that:
$r \, \|A\|_\alpha \le \|A\|_\beta \le s \, \|A\|_\alpha$
for some positive numbers r and s, for all matrices $A \in K^{m \times n}$. In other words, all norms on $K^{m \times n}$ are equivalent; they induce the same topology on $K^{m \times n}$. This is true because the vector space $K^{m \times n}$ has the finite dimension $m \times n$.
Moreover, for every matrix norm $\|\cdot\|$ on $K^{n \times n}$ there exists a unique positive real number $k$ such that $l \, \|\cdot\|$ is a sub-multiplicative matrix norm for every $l \ge k$; to wit,
$k = \sup \{ \|AB\| : \|A\| \le 1,\ \|B\| \le 1 \}.$
A sub-multiplicative matrix norm $\|\cdot\|_\alpha$ is said to be minimal, if there exists no other sub-multiplicative matrix norm $\|\cdot\|_\beta$ satisfying $\|\cdot\|_\beta \le \|\cdot\|_\alpha$ (other than $\|\cdot\|_\alpha$ itself).
Examples of norm equivalence
Let $\|A\|_p$ once again refer to the norm induced by the vector p-norm (as above in the Induced norm section).
For matrix $A \in \mathbb{R}^{m \times n}$ of rank r, the following inequalities hold:
$\|A\|_2 \le \|A\|_F \le \sqrt{r} \, \|A\|_2$
$\|A\|_F \le \|A\|_* \le \sqrt{r} \, \|A\|_F$
$\|A\|_{\max} \le \|A\|_2 \le \sqrt{mn} \, \|A\|_{\max}$
$\tfrac{1}{\sqrt{n}} \, \|A\|_\infty \le \|A\|_2 \le \sqrt{m} \, \|A\|_\infty$
$\tfrac{1}{\sqrt{m}} \, \|A\|_1 \le \|A\|_2 \le \sqrt{n} \, \|A\|_1$
See also
Dual norm
Logarithmic norm
Notes
References
Bibliography
James W. Demmel, Applied Numerical Linear Algebra, section 1.7, published by SIAM, 1997.
Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, published by SIAM, 2000.
John Watrous, Theory of Quantum Information, 2.3 Norms of operators, lecture notes, University of Waterloo, 2011.
Kendall Atkinson, An Introduction to Numerical Analysis, published by John Wiley & Sons, Inc 1989
Norms (mathematics)
Linear algebra | Matrix norm | [
"Mathematics"
] | 2,417 | [
"Linear algebra",
"Mathematical analysis",
"Norms (mathematics)",
"Algebra"
] |
1,543,775 | https://en.wikipedia.org/wiki/Code%20of%20conduct | A code of conduct is a set of rules outlining the norms, rules, and responsibilities or proper practices of an individual party or an organization.
Companies' codes of conduct
A company code of conduct is a set of rules which is commonly written for employees of a company, which protects the business and informs the employees of the company's expectations. It is appropriate for even the smallest of companies to create a document containing important information on expectations for employees. The document does not need to be complex or have elaborate policies.
Failure of an employee to follow a company's code of conduct can have negative consequences. In Morgan Stanley v. Skowron, 989 F. Supp. 2d 356 (S.D.N.Y. 2013), applying New York's faithless servant doctrine, the court held that a hedge fund's employee engaging in insider trading in violation of his company's code of conduct, which also required him to report his misconduct, must repay his employer the full $31 million his employer paid him as compensation during his period of faithlessness.
Accountants' code of conduct
In its 2007 International Good Practice Guidance, "Defining and Developing an Effective Code of Conduct for Organizations", the International Federation of Accountants provided the following working definition: "Principles, values, standards, or rules of behaviour that guide the decisions, procedures, and systems of an organization in a way that (a) contributes to the welfare of its key stakeholders, and (b) respects the rights of all constituents affected by its operations."
Codes of conduct in practice
A code of conduct can be an important part in establishing an inclusive culture, but it is not a comprehensive solution on its own. An ethical culture is created by the organization's leaders who manifest their ethics in their attitudes and behaviour. Studies of codes of conduct in the private sector show that their effective implementation must be part of a learning process that requires training, consistent enforcement, and continuous measurement/improvement: simply requiring members to read the code is not enough to ensure that they understand it and will remember its contents. Castellano et al. describe Tom Morris' book If Aristotle Ran General Motors as "compelling" and "persuasive" in arguing that in addition to codes of conduct and ethical guidelines, the creation of an ethical workplace climate requires "socially harmonious relationships" to be embedded in practice. The proof of effectiveness is when employees/members feel comfortable enough to voice concerns and believe that the organization will respond with appropriate action.
Examples
Banking Code
Bushido
Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief
Code of Conduct for Justices of the Supreme Court of the United States
Code of Hammurabi
Code of the United States Fighting Force
Code of Service Discipline
Declaration of Geneva
Declaration of Helsinki
Don't be evil
Eight precepts
Election Commission of India's Model Code of Conduct
Five Pillars of Islam
Golden Rule
Geneva Conventions
Hippocratic Oath
ICC Cricket Code of Conduct
International Code of Conduct against Ballistic Missile Proliferation (ICOC or Hague Code of Conduct)
Izzat
Journalist's Creed
Kapu
Moral Code of the Builder of Communism
Pāṭimokkha
Pirate code
Rule of Saint Benedict
Ten Commandments
Ten precepts (Taoism)
Uniform Code of Military Justice
Vienna Convention on Diplomatic Relations
See also
Programming ethics
References
External links
Applied ethics
Morality
Political charters
Rules | Code of conduct | [
"Biology"
] | 670 | [
"Behavior",
"Human behavior",
"Applied ethics"
] |
1,543,776 | https://en.wikipedia.org/wiki/EPDM%20rubber | EPDM rubber (ethylene propylene diene monomer rubber) is a type of synthetic rubber that is used in many applications.
EPDM is an M-Class rubber under ASTM standard D-1418; the M class comprises elastomers with a saturated polyethylene chain (the M deriving from the more correct term polymethylene). EPDM is made from ethylene, propylene, and a diene comonomer that enables crosslinking via sulfur vulcanization. Typically used dienes in the manufacture of EPDM rubbers are ethylidene norbornene (ENB), dicyclopentadiene (DCPD), and vinyl norbornene (VNB). Varying diene contents are reported in commercial products, which are generally in the range from 2 to 12%.
The earlier relative of EPDM is EPR, ethylene propylene rubber (useful for high-voltage electrical cables), which is not derived from any diene precursors and can be crosslinked only using radical methods such as peroxides.
As with most rubbers, EPDM as used is always compounded with fillers such as carbon black and calcium carbonate, with plasticisers such as paraffinic oils, and has functional rubbery properties only when crosslinked. Crosslinking mainly occurs via vulcanisation with sulfur but is also accomplished with peroxides (for better heat resistance) or phenolic resins. High-energy radiation, such as from electron beams, is sometimes used to produce foams, wire, and cable.
Properties
Typical properties of EPDM vulcanizates are given below. EPDM can be compounded to meet specific properties to a limit, depending first on the EPDM polymers available, then the processing and curing method(s) employed. EPDMs are available in various molecular weights (indicated in Mooney viscosity ML(1+4) at 125 °C), varying levels of ethylene, third monomer, and oil content.
Because of chemical interactions, EPDM degrades when in contact with bituminous material such as EPDM gaskets on asphalt shingles.
Uses
Relative to rubbers with unsaturated backbones (natural rubber, SBR, neoprene), rubbers with saturated polymer backbones, such as EPDM, exhibit superior resistance to heat, light, and ozone exposure. For this reason they are useful in external harsh environments. EPDM in particular exhibits outstanding resistance to heat, ozone, steam, and weather. As such, EPDM can be formulated to be resistant to temperatures as high as 150 °C, and, properly formulated, can be used outdoors for many years or decades without degradation. EPDM has good low-temperature properties, with elastic properties to temperatures as low as −40 °C depending on the grade and the formulation.
EPDM is stable towards fireproof hydraulic fluids, ketones, hot and cold water, and alkalis.
As a durable elastomer, EPDM is conformable, impermeable, and a good electrical insulator. Solid EPDM and expanded EPDM foam are often used for sealing and gasketing, as well as membranes and diaphragms. EPDM is often used when a component must prevent fluid flow while remaining flexible. It can also be used to provide cushioning or elasticity. While EPDM has decent tensile strength, its flexibility makes it inappropriate for rigid parts such as gears, shafts, and structural beams.
It is used to create weatherstripping, seals on doors for refrigerators and freezers (where it also acts as an insulator), face masks for industrial respirators, glass run channels, radiators, garden and appliance hose (where it is used as a hose material as well as for gaskets), tubing, washers, O-rings, electrical insulation, and geomembranes.
A common use is in vehicles, where EPDM is used for door seals, window seals, trunk seals, and sometimes hood seals. Other uses in vehicles include wiper blades, cooling system circuit hoses; water pumps, thermostats, EGR valves, EGR coolers, heaters, oil coolers, radiators, and degas bottles are connected with EPDM hoses. EPDM is also used as charge air tubing on turbocharged engines to connect the cold side of the charge air cooler (intercooler) to the intake manifold.
EPDM seals can be a source of squeaking noise due to the movement of the seal against the opposing surface (and its attendant friction). The noise can be alleviated using specialty coatings that are applied at the time of manufacture of the seal. Such coatings can also improve the chemical resistance of EPDM rubber. Some vehicle manufacturers also recommend a light application of silicone dielectric grease to weatherstrip to reduce noise.
This synthetic rubber membrane has also been used for pond liners and flat roofs because of its durability and low maintenance costs. As a roofing membrane it does not pollute the run-off rainwater (which is of vital importance for rainwater harvesting).
It is used for belts, electrical insulation, vibrators, solar panel heat collectors, and speaker cone surrounds. It is also used as a functional additive to modify and enhance the impact characteristics of thermoset plastics, thermoplastics, and many other materials.
EPDM is also used for components that provide elasticity; for example, it is used for bungee cords, elastic tie-downs, straps, and hangers that attach exhaust systems to the underfloor of vehicles (since a rigid connection would transfer vibration, noise, and heat to the body). It is also used for cushioned edge guards and bumpers on appliances, equipment, and machinery.
Colored EPDM granules are mixed with polyurethane binders and troweled or sprayed onto concrete, asphalt, screenings, interlocking brick, wood, etc., to create a non-slip, soft, porous safety surface for wet-deck areas such as pool decks. It is used as safety surfacing under playground play equipment (designed to help lessen fall injury). (see Playground surfacing.)
Production of synthetic rubber in the 2010s exceeded 10 million tonnes annually and was over 15 million tonnes in each of 2017, 2018, and 2019 and only slightly less in 2020.
Further reading
Thermoplastic vulcanizates (mixtures of vulcanized polymers such as EPDM and immiscible polymers such as polypropylene).
References
Elastomers
Polymers
Roofing materials | EPDM rubber | [
"Chemistry",
"Materials_science"
] | 1,383 | [
"Polymers",
"Synthetic materials",
"Polymer chemistry",
"Elastomers"
] |
1,543,837 | https://en.wikipedia.org/wiki/Phase%20response | In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter.
Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input, usually a function of the frequency. Similarly, phase response is the phase of the output with the input as reference. The input is defined as zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate without bound over successive cycles.
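As a concrete sketch (the filter and parameters below are illustrative assumptions, not taken from the text), the phase response of a simple two-tap moving-average filter can be read off from its complex frequency response using SciPy:

```python
import numpy as np
from scipy.signal import freqz

b = [0.5, 0.5]  # two-point moving average: y[n] = 0.5*x[n] + 0.5*x[n-1]

w, h = freqz(b, worN=512)                # complex frequency response H(e^{jw})
amplitude_response = np.abs(h)           # output/input amplitude ratio
phase_response = np.unwrap(np.angle(h))  # output phase relative to the input, in radians

# This filter has linear phase: the phase falls as -w/2 across the band.
assert np.allclose(phase_response, -w / 2)
```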
See also
Group delay and phase delay
References
Trigonometry
Wave mechanics
Signal processing | Phase response | [
"Physics",
"Technology",
"Engineering"
] | 148 | [
"Physical phenomena",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Classical mechanics",
"Waves",
"Wave mechanics"
] |
1,543,901 | https://en.wikipedia.org/wiki/AGARD | The Advisory Group for Aerospace Research and Development (AGARD) was an agency of NATO that existed from 1952 to 1996.
AGARD was founded as an Agency of the NATO Military Committee. It was set up in May 1952 with headquarters in Neuilly sur Seine, France.
In a mission statement in the 1982 History it published, the purpose involved "bringing together the leading personalities of the NATO nations in the fields of science and technology relating to aerospace".
The Advisory Group was organized by panels:
Aerospace medical, avionics, electromagnetic wave propagation, flight mechanics, fluid dynamics, guidance and control, propulsion and energetics, structures and materials, and technical information.
In 1958 Theodore von Kármán hired Moe Berg to accompany him to the AGARD conference in Paris. "AGARD's aim was to encourage European countries to develop weapons technology on their own instead of relying on the U.S. defense industry to do it for them."
Activities
There were annual meetings, frequently in Paris, but also in Delft, Turin, Cambridge, Washington DC.
The Advisory Group administered a consultant and exchange program including lecture series and technical panels.
The AGARD publishing program included a multilingual aeronautical dictionary, about ninety titles per year, with a normal run of 1200. An Agardograph is a work prepared by, or on behalf of, AGARD's panels. For example, an agardograph on the AGARD-B wind tunnel model was prepared.
Later examples of AGARD studies include such topics as non-lethal weapons, theatre ballistic missile defence, protection of large aircraft in peace support operations, and limiting collateral damage caused by air-delivered weapons. AGARD was also one of the first NATO organizations to cooperate with Russia in a mutual exchange of information dealing with flight safety.
AGARD merged with the NATO Defence Research Group (DRG) in 1996 to become the NATO Research and Technology Organisation (RTO).
See also
Aeronautics
Notes
Further reading
Theodore von Kármán with Lee Edson (1967) The Wind and Beyond: Theodore von Kármán Pioneer in Aviation and Pathfinder in Space: Little, Brown and Company
The AGARD History 1952-1987 (1999), Advisory Group for Aerospace Research & Development,
External links
Science and Technology Organization (STO)
Aeronautics organizations
Aerospace engineering organizations
Military technology
NATO agencies
Scientific organizations based in France | AGARD | [
"Engineering"
] | 472 | [
"Aerospace engineering",
"Aerospace engineering organizations",
"Aeronautics organizations"
] |
1,543,903 | https://en.wikipedia.org/wiki/Ribbon%20development | Ribbon development refers to the building of houses along the routes of communications radiating from a human settlement. The resulting linear settlements are clearly visible on land use maps and aerial photographs, giving cities and the countryside a particular character. Such development generated great concern in the United Kingdom during the 1920s and the 1930s as well as in numerous other countries during the decades since.
Normally the first ribbons are focused on roads. Following the Industrial Revolution, ribbon development became prevalent along railway lines, predominantly in Russia, the United Kingdom, and the United States. However, the investment required to build railway stations, the ensuing attractiveness of easy rail access, and need for accompanying roads often led to new small settlements outside of the center city. Ribbon developments yielded attractive home locations on isolated roads as increasing motor car ownership meant that houses could be sold easily even if they were remote from workplaces and urban centres. Developers were pleased to not have to construct additional roads, thereby saving money and plot space. Ribbon developments also filled spaces at the interstice between urban areas, and resultingly appealed to potential buyers needing to access one or more of these locations.
The extent of this development practice around roads led to several problems becoming more intense. Ribbon developments were ultimately recognized as an inefficient use of resources, requiring bypass roads to be built, and often served as a precursor to untrammelled urban sprawl. Thus a key aim for the United Kingdom's post-war planning system was to implement a presumption and convention that rendered new ribbon developments undesirable. Urban sprawl/suburbanization of large areas led to the introduction of green belt policies, new towns, planned suburbs and garden cities.
History
Following the Industrial Revolution, ribbon development became prevalent along railway lines, predominantly in Russia, the United Kingdom, and the United States. The deliberate promotion of Metro-land along London's Metropolitan Railway serves as a strong example of this form of development. Similar examples can be found from Long Island (where Frederick W Dunton bought much real estate to encourage New Yorkers to settle along the Long Island Rail Road lines), Boston and across the American Midwest.
Ribbon development is not restricted to construction along road or rail corridors, as it can also occur along ridge lines, canals and coastlines, the last of which occurs especially as people seeking seachange lifestyles build their houses for an optimal view.
The resulting towns and cities are often difficult to service efficiently due to their remoteness and lack of density. Often, the first problem noticed by residents is increased traffic congestion, as an increased number of people moves along the narrow urban corridor while development continues at the lengthening end of the corridor. Urban consolidation and smart city growth are often solutions that encourage growth towards a more compact urban form.
Ribbon development can also be compared with a linear village – a village that grows linearly along a transportation route as part of a city's expansion into the frontier. They also lead to dispersion of functions, as the need for pockets of dense development that rely on each other becomes less important.
Ribbon development has long been viewed as a special problem in the Republic of Ireland, where "one-off houses" proliferate on rural roads. This causes difficulties in the efficient supply of water, sewerage, broadband, electricity, telephones and public transport. In 1998, Frank McDonald contrasted development in the Republic with that in Northern Ireland: "Enniskillen [in Northern Ireland] is well defined with clear boundaries to the town and well-laid-out shopping streets. Letterkenny, [in the Republic] by contrast, appears as just one long street with bungalow development trailing off over all the surrounding hills." The houses (often disparaged as "McMansions") are also criticised for spoiling countryside scenery: Monaghan County Council in 2013 declared that "The Council will resist development that would create or extend ribbon development." Tipperary County Council and many other councils have adopted similar policies.
Recently, in places such as Flanders, Belgium, regional zoning policy has resulted in ribbon development patterns. Various spatial policies embedded in these plans help predict where ribbon developments may occur and at what rate.
Criticisms
Increased congestion
Due to the main road being flanked by homes or commercial establishments, stoppages in traffic may frequently occur as a result of deliveries or vehicles entering or exiting driveways. This can pose danger for other vehicles that may not see entering traffic, especially if the road is bordered by garages. Residents may also choose to walk alongside the road, an activity made more dangerous by fast-moving traffic.
Utility access
For as simple as linear construction emanating from a city is, the length of a ribbon corridor can pose financial concerns for utility companies as they serve buildings. Density is preferable for utility grids, thereby risking poor access for far-away buildings.
Disruptions during construction
Construction of a new home or building within a ribbon development may severely disrupt the flow of vehicles along the road because there are no feeder streets for construction vehicles to station on. Traffic may be forced into a single lane or subjected to an alternating pattern.
Obstruction of countryside
Because most ribbon developments exist in rural areas outside of cities, properties can disturb or obstruct the natural landscape along the road. A house may, for example, be constructed along an overlook, removing the public's ability to enjoy the landscape in favor of a single property owner.
Municipal boundaries
Elongated ribbon developments also pose challenges for municipal governments as they partition out rural areas for townships and schools. Rather than development concentrating in small towns where schools and other public amenities reside, certain locations within a ribbon development may be difficult for a government to serve and, in turn, cost more in public expenditures.
See also
Green belt
Linear village
One-off housing
Towards an Urban Renaissance
Urban Sprawl
References
Urban planning
Urban studies and planning terminology | Ribbon development | [
"Engineering"
] | 1,174 | [
"Urban planning",
"Architecture"
] |
1,543,932 | https://en.wikipedia.org/wiki/Mercury%20coulometer | In electrochemistry, a mercury coulometer is an analytical instrument which uses mercury to perform coulometry (determining the amount of matter transformed in a chemical reaction by measuring electric current) on the following reaction:
These oxidation/reduction processes have 100% efficiency with the wide range of the current densities. Measuring of the quantity of electricity (coulombs) is based on the changes of the mass of the mercury electrode. Mass of the electrode can be increased during cathodic deposition of the mercury ions or decreased during the anodic dissolution of the metal.
q = 2F·Δm / M,
where q is the quantity of electricity (in coulombs); Δm is the change in mass of the electrode; F is the Faraday constant; M is the molar mass of mercury; and the factor of 2 corresponds to the two electrons exchanged per mercury(II) ion.
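A minimal sketch of this relationship (the two-electron Hg²⁺/Hg couple is assumed, and the mass change below is purely illustrative):

```python
FARADAY = 96485.332  # C/mol, Faraday constant
M_HG = 200.59        # g/mol, molar mass of mercury
Z = 2                # electrons transferred per Hg2+ ion

def charge_from_mass_change(delta_m_grams):
    """Quantity of electricity (coulombs) corresponding to a given electrode mass change."""
    return Z * FARADAY * delta_m_grams / M_HG

print(charge_from_mass_change(0.010))  # a 10 mg deposit corresponds to roughly 9.6 C
```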
Use as Hour Meter
Before the development of solid-state electronics, coulometers were used as long-period (up to 25,000 hours) elapsed hour meters in electronic equipment and other devices, including on the Apollo Program space vehicles. By passing a constant, calibrated current through the coulometer, the movement of a gap between mercury droplets provides a visual indication of elapsed time. Brands included HP's Chronister and Curtis' Indachron.
Construction
This coulometer has different constructions but all of them are based on mass measurements. The device consists of two reservoirs connected by a thin graduated capillary tube containing a solution of the mercury(II)-ions. Each of the reservoirs has an electrode immersed in a drop of mercury. Another small drop of mercury is inserted into the capillary. When the current is turned on, it initiates dissolution of the metallic mercury on one side of the drop in the capillary and deposition on the other side of the same drop. This drop starts to move. Because of the high efficiency of the deposition/dissolution of the mercury under the current influence, the mass or volume of this small drop is constant and its movement is linearly correlated with the passed charge. If the direction of the current is changed, the drop moves in the opposite direction. The sensitivity of this type of coulometer depends on the diameter of the capillary.
See also
Copper coulometer
Notes
Mercury (element)
Electroanalytical chemistry devices
Coulometers | Mercury coulometer | [
"Chemistry"
] | 454 | [
"Electroanalytical chemistry devices",
"Electroanalytical chemistry"
] |
1,543,960 | https://en.wikipedia.org/wiki/Effective%20population%20size | The effective population size (Ne) is the size of an idealised population that would experience the same rate of genetic drift as the real population. Idealised populations are those following simple one-locus models that comply with assumptions of the neutral theory of molecular evolution. The effective population size is normally smaller than the census population size N, partly because chance events prevent some individuals from breeding, and partly due to background selection and genetic hitchhiking.
The same real population could have a different effective population size for different properties of interest, such as genetic drift (or more precisely, the speed of coalescence) over one generation vs. over many generations. Within a species, areas of the genome that have more genes and/or less genetic recombination tend to have lower effective population sizes, because of the effects of selection at linked sites. In a population with selection at many loci and abundant linkage disequilibrium, the coalescent effective population size may not reflect the census population size at all, or may reflect its logarithm.
The concept of effective population size was introduced in the field of population genetics in 1931 by the American geneticist Sewall Wright. Some versions of the effective population size are used in wildlife conservation.
Empirical measurements
In a rare experiment that directly measured genetic drift one generation at a time, in Drosophila populations of census size 16, the effective population size was 11.5. This measurement was achieved through studying changes in the frequency of a neutral allele from one generation to another in over 100 replicate populations.
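A toy version of this kind of measurement can be sketched in code (all parameter values illustrative): simulate one generation of idealised Wright–Fisher sampling in many replicate populations and recover an effective size from the variance of the resulting allele frequencies.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 16             # census size (diploid individuals), illustrative
p0 = 0.5           # starting frequency of a neutral allele
replicates = 100_000

# One generation: binomial sampling of 2N gene copies in each replicate population.
p1 = rng.binomial(2 * N, p0, size=replicates) / (2 * N)

# Variance effective size: Ne = p(1 - p) / (2 * var(p')).
ne_hat = p0 * (1 - p0) / (2 * p1.var())
print(ne_hat)  # close to 16 here, because the simulated population is ideal
```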
More commonly, effective population size is estimated indirectly by comparing data on current within-species genetic diversity to theoretical expectations. According to the neutral theory of molecular evolution, an idealised diploid population will have a pairwise nucleotide diversity equal to 4Neμ, where μ is the mutation rate. The effective population size can therefore be estimated empirically by dividing the nucleotide diversity by 4μ. This captures the cumulative effects of genetic drift, genetic hitchhiking, and background selection over longer timescales. More advanced methods, permitting a changing effective population size over time, have also been developed.
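In code, the estimate is a one-liner; the numbers below are made-up illustrative values, not measurements from the text:

```python
def ne_from_diversity(pi, mu):
    """Coalescent effective size of a diploid population, assuming pi = 4 * Ne * mu,
    where pi is pairwise nucleotide diversity and mu is the per-site mutation rate."""
    return pi / (4.0 * mu)

print(ne_from_diversity(0.001, 1e-8))  # 25000.0
```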
The effective size measured to reflect these longer timescales may have little relationship to the number of individuals physically present in a population. Measured effective population sizes vary between genes in the same population, being low in genome areas of low recombination and high in genome areas of high recombination. Sojourn times are proportional to N in neutral theory, but for alleles under selection, sojourn times are proportional to log(N). Genetic hitchhiking can cause neutral mutations to have sojourn times proportional to log(N): this may explain the relationship between measured effective population size and the local recombination rate.
If the recombination map of recombination frequencies along chromosomes is known, Ne can be inferred from rP² = 1 / (1 + 4Ne r), where rP is the Pearson correlation coefficient between loci and r is the recombination frequency between them. This expression can be interpreted as the probability that two lineages coalesce before one allele on either lineage recombines onto some third lineage.
A survey of publications on 102 mostly wildlife animal and plant species yielded 192 Ne/N ratios. Seven different estimation methods were used in the surveyed studies. Accordingly, the ratios ranged widely from 10⁻⁶ for Pacific oysters to 0.994 for humans, with an average of 0.34 across the examined species. Based on these data they subsequently estimated more comprehensive ratios, accounting for fluctuations in population size, variance in family size and unequal sex-ratio. These ratios average to only 0.10–0.11.
A genealogical analysis of human hunter-gatherers (Eskimos) determined the effective-to-census population size ratio for haploid (mitochondrial DNA, Y chromosomal DNA), and diploid (autosomal DNA) loci separately: the ratio of the effective to the census population size was estimated as 0.6–0.7 for autosomal and X-chromosomal DNA, 0.7–0.9 for mitochondrial DNA and 0.5 for Y-chromosomal DNA.
Selection effective size
In an idealised Wright-Fisher model, the fate of an allele, beginning at an intermediate frequency, is largely determined by selection if the selection coefficient s ≫ 1/N, and largely determined by neutral genetic drift if s ≪ 1/N. In real populations, the cutoff value of s may depend instead on local recombination rates. This limit to selection in a real population may be captured in a toy Wright-Fisher simulation through the appropriate choice of Ne. Populations with different selection effective population sizes are predicted to evolve profoundly different genome architectures.
History of theory
Ronald Fisher and Sewall Wright originally defined effective population size as "the number of breeding individuals in an idealised population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration". This implied two potentially different effective population sizes, based either on the one-generation increase in variance across replicate populations (variance effective population size), or on the one-generation change in the inbreeding coefficient (inbreeding effective population size). These two are closely linked, and derived from F-statistics, but they are not identical.
Today, the effective population size is usually estimated empirically with respect to the amount of within-species genetic diversity divided by the mutation rate, yielding a coalescent effective population size that reflects the cumulative effects of genetic drift, background selection, and genetic hitchhiking over longer time periods. Another important effective population size is the selection effective population size 1/scritical, where scritical is the critical value of the selection coefficient at which selection becomes more important than genetic drift.
Variance effective size
In the Wright-Fisher idealized population model, the conditional variance of the allele frequency p′ in the next generation, given the allele frequency p in the previous generation, is
Var(p′ | p) = p(1 − p) / (2N)
Let Var(p′) denote the same, typically larger, variance in the actual population under consideration. The variance effective population size is defined as the size of an idealized population with the same variance. This is found by substituting Ne for N and solving for Ne, which gives
Ne = p(1 − p) / (2 Var(p′))
In the following examples, one or more of the assumptions of a strictly idealised population are relaxed, while other assumptions are retained. The variance effective population size of the more relaxed population model is then calculated with respect to the strict model.
Variations in population size
Population size varies over time. Suppose there are t non-overlapping generations, then effective population size is given by the harmonic mean of the population sizes:
1/Ne = (1/t) × (1/N1 + 1/N2 + ... + 1/Nt)
For example, say the population size was N = 10, 100, 50, 80, 20, 500 for six generations (t = 6). Then the effective population size is the harmonic mean of these, giving:
Ne = 6 / (1/10 + 1/100 + 1/50 + 1/80 + 1/20 + 1/500)
Ne = 6 / 0.1945
Ne ≈ 30.8
Note this is less than the arithmetic mean of the population size, which in this example is 126.7. The harmonic mean tends to be dominated by the smallest bottleneck that the population goes through.
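The harmonic-mean calculation above can be checked directly (values taken from the example):
```python
from statistics import harmonic_mean

sizes = [10, 100, 50, 80, 20, 500]
print(harmonic_mean(sizes))        # ≈ 30.8, the effective size
print(sum(sizes) / len(sizes))     # ≈ 126.7, the arithmetic mean for comparison
```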
Dioeciousness
If a population is dioecious, i.e. there is no self-fertilisation, then
Ne = N + 1/2
or more generally,
Ne = N + D/2
where D represents dioeciousness and may take the value 0 (for not dioecious) or 1 for dioecious.
When N is large, Ne approximately equals N, so this is usually trivial and often ignored:
Ne = N + 1/2 ≈ N
Variance in reproductive success
If population size is to remain constant, each individual must contribute on average two gametes to the next generation. An idealized population assumes that this follows a Poisson distribution, so that the variance of the number of gametes contributed, k, is equal to the mean number contributed, i.e. 2:
Var(k) = 2
However, in natural populations the variance is often larger than this. The vast majority of individuals may have no offspring, and the next generation stems only from a small number of individuals, so
Var(k) > 2
The effective population size is then smaller, and given by:
Ne = (4N − 2) / (Var(k) + 2)
Note that if the variance of k is less than 2, Ne is greater than N. In the extreme case of a population experiencing no variation in family size, as in a laboratory population in which the number of offspring is artificially controlled, Var(k) = 0 and Ne ≈ 2N.
Non-Fisherian sex-ratios
When the sex ratio of a population varies from the Fisherian 1:1 ratio, effective population size is given by:
Ne = 4 Nm Nf / (Nm + Nf)
where Nm is the number of males and Nf the number of females. For example, with 80 males and 20 females (an absolute population size of 100):
Ne = 4 × 80 × 20 / (80 + 20)
Ne = 6400 / 100
Ne = 64
Again, this results in Ne being less than N.
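A short sketch of the sex-ratio correction above; the second call simply shows that an equal sex ratio recovers Ne = N:
```python
def sex_ratio_ne(n_males: int, n_females: int) -> float:
    """Effective size under an unequal sex ratio: Ne = 4*Nm*Nf / (Nm + Nf)."""
    return 4 * n_males * n_females / (n_males + n_females)

print(sex_ratio_ne(80, 20))   # 64.0, the example in the text
print(sex_ratio_ne(50, 50))   # 100.0
```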
Inbreeding effective size
Alternatively, the effective population size may be defined by noting how the average inbreeding coefficient changes from one generation to the next, and then defining Ne as the size of the idealized population that has the same change in average inbreeding coefficient as the population under consideration. The presentation follows Kempthorne (1957).
For the idealized population, the inbreeding coefficients follow the recurrence equation
F(t) = 1/(2N) + (1 − 1/(2N)) F(t−1)
Using the panmictic index P = 1 − F instead of the inbreeding coefficient, we get the approximate recurrence equation
P(t) = (1 − 1/(2N)) P(t−1)
The difference per generation is
P(t+1) / P(t) = 1 − 1/(2N)
The inbreeding effective size can be found by solving
P(t+1) / P(t) = 1 − 1/(2 Ne)
This is
Ne = 1 / (2 (1 − P(t+1)/P(t))).
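Assuming the relation above, the inbreeding effective size can be computed from the observed per-generation change in panmictic index; the values below are purely illustrative:
```python
def inbreeding_ne(p_now: float, p_next: float) -> float:
    """Ne = 1 / (2 * (1 - P(t+1)/P(t))), from the recurrence given above."""
    return 1.0 / (2.0 * (1.0 - p_next / p_now))

print(inbreeding_ne(0.90, 0.88))   # ≈ 22.5
```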
Theory of overlapping generations and age-structured populations
When organisms live longer than one breeding season, effective population sizes have to take into account the life tables for the species.
Haploid
Assume a haploid population with discrete age structure. An example might be an organism that can survive several discrete breeding seasons. Further, define the following age structure characteristics:
vx = Fisher's reproductive value for age x,
ℓx = the chance an individual will survive to age x, and
N0 = the number of newborn individuals per breeding season.
The generation time is calculated as the average age of a reproducing individual.
Then, the inbreeding effective population size is
Diploid
Similarly, the inbreeding effective number can be calculated for a diploid population with discrete age structure. This was first given by Johnson, but the notation more closely resembles Emigh and Pollak.
Assume the same basic parameters for the life table as given for the haploid case, but distinguishing between male and female, such as N0ƒ and N0m for the number of newborn females and males, respectively (notice lower case ƒ for females, compared to upper case F for inbreeding).
The inbreeding effective number is
See also
Minimum viable population
Small population size
References
External links
https://web.archive.org/web/20050524144622/http://www.kursus.kvl.dk/shares/vetgen/_Popgen/genetics/3/6.htm — on Københavns Universitet.
Population genetics
Population ecology
Ecological metrics
Quantitative genetics | Effective population size | [
"Mathematics"
] | 2,256 | [
"Ecological metrics",
"Quantity",
"Metrics"
] |
1,544,082 | https://en.wikipedia.org/wiki/Perkin%20reaction | The Perkin reaction is an organic reaction developed by English chemist William Henry Perkin in 1868 that is used to make cinnamic acids. It gives an α,β-unsaturated aromatic acid or α-substituted β-aryl acrylic acid by the aldol condensation of an aromatic aldehyde and an acid anhydride, in the presence of an alkali salt of the acid. The alkali salt acts as a base catalyst, and other bases can be used instead.
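For example, the classic preparation of cinnamic acid from benzaldehyde and acetic anhydride (with sodium acetate as the base) can be summarised as:
C6H5CHO + (CH3CO)2O -> C6H5CH=CHCOOH + CH3COOH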
Several reviews have been written.
Reaction mechanism
As is clear from the reaction mechanism, the anhydride of the aliphatic acid must contain at least two α-hydrogens for the reaction to occur. The above mechanism is not universally accepted, as several other versions exist, including decarboxylation without acetic group transfer.
Applications
Salicylaldehyde is converted to coumarin using acetic anhydride with acetate as base.
Cinnamic acid is prepared from benzaldehyde, again using acetic anhydride in the presence of sodium or potassium acetate.
Resveratrol (cf. fo-ti), a phytoestrogenic stilbene, is yet another product of this methodology.
See also
Erlenmeyer–Plöchl azlactone and amino-acid synthesis
Stobbe condensation
Pechmann condensation
References
Condensation reactions
Name reactions | Perkin reaction | [
"Chemistry"
] | 294 | [
"Name reactions",
"Condensation reactions",
"Organic reactions"
] |
1,544,141 | https://en.wikipedia.org/wiki/John%20Augustine%20Zahm | John Augustine Zahm (pseudonym H. J. Mozans), CSC (June 14, 1851 – November 10, 1921) was a Holy Cross priest, author, scientist, and explorer of South America. He was born at New Lexington, Ohio, and died in Munich, Germany.
Early life
Zahm was born on June 14, 1851 in a log home in Jackson Township, Perry County, Ohio to John and Mary (née Braddock) Zahm. His mother was born in Pennsylvania and was of English descent, having Edward Braddock as an ancestor. His father was an immigrant to the United States from Olsberg, Germany.
Zahm initially attended a one-room schoolhouse in Logan, with Januarius MacGahan being one of his classmates, before the family moved to Huntington, Indiana from where he learned of the University of Notre Dame.
Education and career
Zahm attended the University of Notre Dame in 1867 and graduated with honors in 1871 as a Novice of the Congregation of Holy Cross. He finished his theological studies and was ordained in 1875. Zahm was hired by the University of Notre Dame as a science teacher although he had interest in literature. His brother Albert attended Notre Dame as a student while John was on the faculty.
During his time teaching he wrote the text Sound and Music in 1892. He was appointed the Vice President of Notre Dame at 25 years of age and held the position for nine years. In 1895, he was recognized as Doctor of Philosophy by Pope Leo XIII. Zahm championed the view of Notre Dame becoming a research university dedicated to scholarship, which was at odds with Andrew Morrissey, who hoped to keep the institution a smaller boarding school.
Writing
Zahm is the author of many scholarly texts and published works against Darwinism. He also wrote Catholic scientific essays published in American Catholic Quarterly and Catholic World, among others.
Zahm fought through writing and used his detailed background in science to defend the ability of God and the Catholic faith to remain in the scientific sphere. Focusing on Catholic men of science in the past, Zahm published Catholic Science and Catholic Scientists. Between 1891 and 1896, he published multiple books and articles on the topic, culminating with Evolution and Dogma in 1896.
In this text, as in his others, Zahm argued that Roman Catholicism could fully accept an evolutionary view of biological systems, as long as this view was not centered around Darwin's theory of natural selection. After the Vatican decided to censure the book in 1898, Zahm fully accepted this rebuttal and pulled away from any writing concerning the relationship of theology and science.
Zahm's pseudonym was derived from the way he signed his name as a youth: Jno. S. (Stanislaus, an abandoned middle name) Zahm. His works have been translated into French, Italian and Spanish, and have been published and read in North and South America, as well as in Europe. These include Woman in Science, Great Inspirers and The Quest for El Dorado. The general title of his trilogy was "Following the Conquistadores", and its volumes, Up the Orinoco and Down the Magdalena (1910), Along the Andes and Down the Amazon (1912) and In South America's Southland (1916), all drew from his travels throughout South America. He was an enthusiastic Dante student and assembled at Notre Dame one of the three largest Dante libraries in the U.S.
Zahm befriended 26th President of the United States Theodore Roosevelt, who also loved and read Dante in Italian. It was Zahm who talked President Roosevelt into participating in what came to be known as the Roosevelt-Rondon Scientific Expedition to South America, and which would also include Theodore's son, Kermit, and Colonel Cândido Mariano da Silva Rondon, to go up the Rio da Dúvida (River of Doubt, now the Roosevelt River).
Death and legacy
Zahm planned a book on historical and archaeological study of the Holy Land, but died of bronchial pneumonia in a Munich hospital en route to the Middle East. The manuscripts of his working book From Berlin to Baghdad and Babylon were found and published posthumously.
Works authored
Zahm used a number of pseudonyms, mainly H. J. Mozans, but also A. H. Johns, Manso, and A. H. Solis.
Books
(Full Text)
(Full Text)
(Full Text). Introduction by Theodore Roosevelt.
(Full Text)
(Full Text)
Articles (selection)
J. A. Zahm as "A. H. Johns". "Woman's Work in Bible Study and Translation", in The Catholic World, New York, Vol. 95/June 1912
See also
Zahm Hall, a men's residence hall at Notre Dame named after Zahm
List of Roman Catholic scientist-clerics
References
Sources
The River of Doubt by Candice Millard (Doubleday 2005)
Further reading
Appleby, R. Scott. (1987). Between Americanism and Modernism: John Zahm and Theistic Evolution. Church History. Vol. 56, No. 4. pp. 474–490.
Sloan, Philip R. (2009). Bringing Evolution to Notre Dame: Father John Zahm, C.S.C. and Theistic Evolutionism. American Midland Naturalist. Vol. 161, No. 2. pp. 189–205.
The Catholic Historical Review wrote about John Augustine Zahm: "Dr. John H. Zahm, C. S. C.", The Catholic Historical Review, Vol. 7, No. 4 (Jan., 1922), p. 480, Published by: Catholic University of America Press
Weber, Ralph E. Notre Dame's John Zahm: Catholic Apologist and Educator. (Notre Dame, Indiana: University of Notre Dame Press, 1961)
External links
Biography of John A. Zahm, C.S.C.
1851 births
1921 deaths
American explorers
19th-century American Roman Catholic priests
Catholic clergy scientists
American science writers
People from New Lexington, Ohio
American Roman Catholic writers
Congregation of Holy Cross
University of Notre Dame faculty
University of Notre Dame alumni
Explorers of Amazonia
Theistic evolutionists
Catholics from Ohio
University of Notre Dame Trustees
Writers from Ohio | John Augustine Zahm | [
"Biology"
] | 1,265 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
1,544,170 | https://en.wikipedia.org/wiki/Anisogamy | Anisogamy is a form of sexual reproduction that involves the union or fusion of two gametes that differ in size and/or form. The smaller gamete is male, a sperm cell, whereas the larger gamete is female, typically an egg cell. Anisogamy is predominant among multicellular organisms. In both plants and animals, gamete size difference is the fundamental difference between females and males.
Anisogamy most likely evolved from isogamy. Since the biological definition of male and female is based on gamete size, the evolution of anisogamy is viewed as the evolutionary origin of male and female sexes. Anisogamy is an outcome of both natural selection and sexual selection, and it led the sexes to evolve different primary and secondary sex characteristics, including sex differences in behavior.
Geoff Parker, Robin Baker, and Vic Smith were the first to provide a mathematical model for the evolution of anisogamy that was consistent with modern evolutionary theory. Their theory was widely accepted but there are alternative hypotheses about the evolution of anisogamy.
Etymology
Anisogamy (the opposite of isogamy) comes from the ancient Greek negative prefix a(n)- (alpha privative), the Greek adjective isos (meaning equal) and the Greek verb gameo (meaning to have sex/to reproduce), together meaning "non-equal reproduction", referring to the enormous differences between male and female gametes in size and abilities. The first known use of the term anisogamy was in the year 1891.
Definition
Anisogamy is the form of sexual reproduction that involves the union or fusion of two gametes which differ in size and/or form. The smaller gamete is considered to be male (a sperm cell), whereas the larger gamete is regarded as female (typically an egg cell, if non-motile).
There are several types of anisogamy. Both gametes may be flagellated and therefore motile. Alternatively, as in flowering plants, conifers and gnetophytes, neither of the gametes are flagellated. In these groups, the male gametes are non-motile cells within pollen grains, and are delivered to the egg cells by means of pollen tubes. In the red alga Polysiphonia, non-motile eggs are fertilized by non-motile sperm.
The form of anisogamy that occurs in animals, including humans, is oogamy, where a large, non-motile egg (ovum) is fertilized by a small, motile sperm (spermatozoon). The egg is optimized for longevity, whereas the small sperm is optimized for motility and speed. The size and resources of the egg cell allow for the production of pheromones, which attract the swimming sperm cells.
Sexual dimorphism
Anisogamy is a core element of sexual dimorphism that helps to explain phenotypic differences between sexes. Researchers estimate that over 99.99% of eukaryotes reproduce sexually. Most do so by way of male and female sexes, both of which are optimized for reproductive potential. Due to their differently sized and shaped gametes, both males and females have developed physiological and behavioral differences that optimize the individual's fecundity. Since most egg-laying females must bear the offspring and have a more limited reproductive cycle, females are typically a limiting factor in the reproductive success rate of males in a species. The same reasoning applies to females selecting males: assuming that males and females select for different traits in partners, this results in phenotypic differences between the sexes over many generations. This hypothesis, known as Bateman's principle, is used to understand the evolutionary pressures put on males and females due to anisogamy. Although this assumption has drawn criticism, it is a generally accepted model for sexual selection within anisogamous species. The selection for different traits depending on sex within the same species is known as sex-specific selection, and accounts for the differing phenotypes found between the sexes of the same species. This sex-specific selection over time also leads to the development of secondary sex characteristics, which assist males and females in reproductive success.
In most species, both sexes choose mates based on the available phenotypes of potential mates. These phenotypes are species-specific, resulting in varying strategies for successful sexual reproduction. For example, large males are sexually selected for in elephant seals because their large size helps the male fight off other males, but small males are sexually selected for in spiders for they can mate with the female more quickly while avoiding sexual cannibalism. However, despite the large range of sexually selected phenotypes, most anisogamous species follow a set of predictable desirable traits and selective behaviors based on general reproductive success models.
Female phenotypes
For internal fertilizers, female investment is high in reproduction since they typically expend more energy throughout a single reproductive event. This can be seen as early as oogenesis, for the female sacrifices gamete number for gamete size to better increase the survival chances of the potential zygote; a process more energetically demanding than spermatogenesis in males. Oogenesis occurs in the ovary, a female-specific organ that also produces hormones to prepare other female-specific organs for the changes necessary in the reproductive organs to facilitate egg delivery in external fertilizers, and zygote development in internal fertilizers. The egg cell produced is not only large, but sometimes even immobile, requiring contact with the more mobile sperm to instigate fertilization.
Since this process is very energy-demanding and time-consuming for the female, mate choice is often integrated into the female's behavior. Females will often be very selective of the males they choose to reproduce with, for the phenotype of the male can be indicative of the male's physical health and heritable traits. Females employ mate choice to pressure males into displaying their desirable traits to females through courtship, and if successful, the male gets to reproduce. This encourages males and females of specific species to invest in courtship behaviors as well as traits that can display physical health to a potential mate. This process, known as sexual selection, results in the development of traits to ease reproductive success rather than individual survival, such as the inflated size of a termite queen. It is also important for females to select against potential mates that may have a sexually transmitted infection, for the disease could not only hurt the female's reproductive ability, but also damage the resulting offspring.
Although not uncommon in males, females are more associated with parental care. Since females are on a more limited reproductive schedule than males, a female often invests more in protecting the offspring to sexual maturity than the male. Like mate choice, the level of parental care varies greatly between species, and is often dependent on the number of offspring produced per sexual encounter.
In many species, including ones from all major vertebrate groups, females can utilize sperm storage, a process by which the female can store excess sperm from a mate and fertilize her eggs long after the reproductive event if mating opportunities drop or the quality of mates decreases. By being able to save sperm from more desirable mates, the female gains more control over her own reproductive success, allowing her to be more selective of males and potentially allowing fertilization to occur more often when males are scarce.
Male phenotypes
For males of all species, the sperm cells they produce are optimized for ensuring fertilization of the female egg. These sperm cells are created through spermatogenesis, a form of gametogenesis that focuses on producing as many gametes as possible per sexual encounter. Spermatogenesis occurs in the testis, a male-specific organ that also produces hormones that trigger the development of secondary sex characteristics. Since the male's gametes are energetically cheap and abundant in every ejaculation, a male can greatly increase his sexual success by mating far more frequently than the female. Sperm, unlike egg cells, are also mobile, allowing the sperm to swim towards the egg through the female's sexual organs. Sperm competition is also a major factor in the development of sperm cells. Only one sperm can fertilize an egg, and since females can potentially mate with more than one male before fertilization occurs, producing sperm cells that are faster, more abundant, and more viable than those produced by other males can give a male a reproductive advantage.
Since females are often the limiting factor in a species' reproductive success, males are often expected by the females to search and compete for the female, a situation known as intraspecific competition. This can be seen in organisms such as bean beetles, as the male that searches for females more frequently is often more successful at finding mates and reproducing. In species undergoing this form of selection, a fit male is one that is fast, has refined sensory organs, and has good spatial awareness.
Some secondary sex characteristics are not only meant for attracting mates, but also for competing with other males for copulation opportunities. Some structures, such as antlers in deer, can provide benefits to the male's reproductive success by providing a weapon to prevent rival males from achieving reproductive success. However, other structures such as the large colorful tail feathers found in male peacocks, are a result of Fisherian runaway as well as several more species specific factors. Due to females selecting for specific traits in males, over time, these traits are exaggerated to the point where they could hinder the male's survivability. However, since these traits greatly benefit sexual selection, their usefulness in providing more mating opportunities overrides the possibility that the trait could lead to a shortening of its lifespan through predation or starvation. These desirable traits extend beyond physical body parts, and often extend into courtship behavior and nuptial gifts as well.
Although some behaviors in males are meant to work within the parameters of female choice, some male traits work against it. Strong enough males, in some cases, can force themselves upon a female, forcing fertilization and overriding female choice. Since this can often be dangerous for the female, an evolutionary arms race between the sexes is often an outcome.
History
Charles Darwin wrote that anisogamy had an impact on the evolution of sexual dimorphism. He also argued that anisogamy had an impact on sexual behavior. Anisogamy first became a major topic in the biological sciences when Charles Darwin wrote about sexual selection.
Mathematical models seeking to account for the evolution of anisogamy were published as early as 1932, but the first model consistent with evolutionary theory was that published by Geoff Parker, Robin Baker and Vic Smith in 1972.
Evolution
Although its evolution has left no fossil records, it is generally accepted that anisogamy evolved from isogamy and that it has evolved independently in several groups of eukaryotes including protists, algae, plants and animals. According to John Avise anisogamy probably originated around the same time sexual reproduction and multicellularity occurred, over 1 billion years ago. Anisogamy first evolved in multicellular haploid species after different mating types had become established.
The three main theories for the evolution of anisogamy are gamete competition, gamete limitation, and intracellular conflicts, but the last of these three is not well supported by current evidence. Both gamete competition and gamete limitation assume that anisogamy originated through disruptive selection acting on an ancestral isogamous population with external fertilization, due to a trade-off between larger gamete number and gamete size (which in turn affects zygote survival), because the total resource one individual can invest in reproduction is assumed to be fixed.
The first formal, mathematical theory proposed to explain the evolution of anisogamy was based on gamete limitation: this model assumed that natural selection would lead to gamete sizes that result in the largest population-wide number of successful fertilizations. If it is assumed that a certain amount of resources provided by the gametes are needed for the survival of the resulting zygote, and that there is a trade-off between the size and number of gametes, then this optimum was shown to be one where both small (male) and large (female) gametes are produced. However, these early models assume that natural selection acts mainly at the population level, something that is today known to be a very problematic assumption.
The first mathematical model to explain the evolution of anisogamy via individual level selection, and one that became widely accepted was the theory of gamete or sperm competition. Here, selection happens at the individual level: those individuals that produce more (but smaller) gametes also gain a larger proportion of fertilizations simply because they produce a larger number of gametes that 'seek out' those of the larger type. However, because zygotes formed from larger gametes have better survival prospects, this process can again lead to the divergence of gametes sizes into large and small (female and male) gametes. The end result is one where it seems that the numerous, small gametes compete for the large gametes that are tasked with providing maximal resources for the offspring.
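The size-number trade-off that drives this disruptive selection can be made concrete with a toy calculation; the budget, gamete sizes and survival function below are arbitrary illustrative assumptions, not taken from the models cited above:
```python
R = 100.0   # fixed reproductive budget per parent (arbitrary units)

def surviving_zygotes(s_self: float, s_partner: float) -> float:
    """Expected surviving zygotes for a parent making gametes of size s_self,
    assuming every gamete fuses with a partner gamete of size s_partner
    (a deliberate simplification) and that zygote survival rises with the
    combined size of the two fusing gametes."""
    n = R / s_self                                          # size-number trade-off
    survival = max(0.0, 1.0 - 1.0 / (s_self + s_partner))   # assumed survival curve
    return n * survival

# Against a large-gamete partner, making many tiny gametes pays off...
print(surviving_zygotes(0.2, 5.0))   # 500 gametes * ~0.81 survival ≈ 403.8
print(surviving_zygotes(5.0, 5.0))   # 20 gametes * 0.90 survival = 18.0
# ...but two small-gamete producers leave no surviving zygotes at all.
print(surviving_zygotes(0.2, 0.2))   # 0.0
```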
Some recent theoretical work has challenged the gamete competition theory by showing that gamete limitation by itself can lead to the divergence of gamete sizes even under selection at the individual level. While this is possible, it has also been shown that gamete competition and gamete limitation are the ends of a continuum of selective pressures, and they can act separately or together depending on the conditions. These selection pressures also act in the same direction (to increase gamete numbers at the expense of size) and at the same level (individual selection). Theory also suggests that gamete limitation could only have been the dominant force of selection for the evolutionary origin of the sexes under quite limited circumstances, and the presence on average of just one competitor can make the 'selfish' evolutionary force of gamete competition stronger than the 'cooperative' force of gamete limitation even if gamete limitation is very acute (approaching 100% of eggs remaining unfertilized).
There is then a relatively sound theory base for understanding this fundamental transition from isogamy to anisogamy in the evolution of reproduction, which is predicted to be associated with the transition to multicellularity. In fact, Hanschen et al. (2018) demonstrate that anisogamy evolved from isogamous multicellular ancestors and that anisogamy would subsequently drive secondary sexual dimorphism. Some comparative empirical evidence for the gamete competition theories exists, although it is difficult to use this evidence to fully tease apart the competition and limitation theories because their testable predictions are similar. It has also been claimed that some of the organisms used in such comparative studies do not fit the theoretical assumptions well.
A valuable model system for the study of the evolution of anisogamy is the volvocine algae, a group of chlorophytes that is unusual in that its extant species exhibit a diversity of mating systems (isogamy and anisogamy) in addition to spanning the extremes of unicellularity and multicellularity, with a diversity of forms among species of intermediate sizes. Marine algae have been closely studied to understand the trajectories of such diversified reproductive systems, the evolution of sex and mating types, as well as the adaptiveness and stability of anisogamy.
See also
Bateman's principle
Evolution of sex
Meiosis
Sex
References
Reproduction
Germ cells
Asymmetry | Anisogamy | [
"Physics",
"Biology"
] | 3,222 | [
"Behavior",
"Reproduction",
"Biological interactions",
"Asymmetry",
"Symmetry"
] |
1,544,179 | https://en.wikipedia.org/wiki/Technogaianism | Technogaianism (a portmanteau word combining "techno-" for technology and "gaian" for Gaia philosophy) is a bright green environmentalist stance of active support for the research, development and use of emerging and future technologies to help restore Earth's environment. Technogaianists argue that developing safe, clean, alternative technology should be an important goal of environmentalists and environmentalism.
Philosophy
This point of view is different from the default position of radical environmentalists and a common opinion that all technology necessarily degrades the environment, and that environmental restoration can therefore occur only with reduced reliance on technology. Technogaianists argue that technology gets cleaner and more efficient with time. They would also point to such things as hydrogen fuel cells to demonstrate that developments do not have to come at the environment's expense. More directly, they argue that such things as nanotechnology and biotechnology can directly reverse environmental degradation. Molecular nanotechnology, for example, could convert garbage in landfills into useful materials and products, while biotechnology could lead to novel microbes that devour hazardous waste.
While many environmentalists still contend that most technology is detrimental to the environment, technogaianists point out that it has been in humanity's best interests to exploit the environment mercilessly until fairly recently. This sort of behavior accords with current understandings of evolutionary systems, in that when new factors (such as foreign species or mutant subspecies) are introduced into an ecosystem, they tend to maximize their own resource consumption until either (a) they reach an equilibrium beyond which they cannot continue unmitigated growth, or (b) they become extinct. In these models, it is completely impossible for such a factor to totally destroy its host environment, though it may precipitate major ecological transformation before its ultimate eradication. Technogaianists believe humanity has currently reached just such a threshold, and that the only way for human civilization to continue advancing is to accept the tenets of technogaianism, limit future exploitative exhaustion of natural resources and minimize further unsustainable development, or else face the widespread, ongoing mass extinction of species. The destructive effects of modern civilization can be mitigated by technological solutions, such as using nuclear power. Furthermore, technogaianists argue that only science and technology can help humanity be aware of, and possibly develop counter-measures for, risks to civilization, humans and planet Earth such as a possible impact event.
Sociologist James Hughes mentions Walter Truett Anderson, author of To Govern Evolution: Further Adventures of the Political Animal, as an example of a technogaian political philosopher; argues that technogaianism applied to environmental management is found in the reconciliation ecology writings such as Michael Rosenzweig's Win-Win Ecology: How The Earth's Species Can Survive In The Midst of Human Enterprise; and considers Bruce Sterling's Viridian design movement to be an exemplary technogaian initiative.
The theories of English writer Fraser Clark may be broadly categorized as technogaian. Clark advocated "balancing the hippie right brain with the techno left brain". The idea of combining technology and ecology was extrapolated at length by a South African eco-anarchist project in the 1990s. The Kagenna Magazine project aimed to combine technology, art, and ecology in an emerging movement that could restore the balance between humans and nature.
George Dvorsky suggests the sentiment of technogaianism is to heal the Earth, use sustainable technology, and create ecologically diverse environments. Dvorsky argues that defensive countermeasures could be designed to counter the harmful effects of asteroid impacts, earthquakes, and volcanic eruptions. Dvorsky also suggests that genetic engineering could be used to reduce the environmental impact humans have on the Earth.
Methods
Environmental monitoring
Technology facilitates the sampling, testing, and monitoring of various environments and ecosystems. NASA uses space-based observations to conduct research on solar activity, sea level rise, the temperature of the atmosphere and the oceans, the state of the ozone layer, air pollution, and changes in sea ice and land ice.
Geoengineering
Climate engineering is a technogaian method that uses two categories of technologies- carbon dioxide removal and solar radiation management. Carbon dioxide removal addresses a cause of climate change by removing one of the greenhouse gases from the atmosphere. Solar radiation management attempts to offset the effects of greenhouse gases by causing the Earth to absorb less solar radiation.
Earthquake engineering is a technogaian method concerned with protecting society and the natural and man-made environment from earthquakes by limiting the seismic risk to acceptable levels.
Another example of a technogaian practice is an artificial closed ecological system used to test if and how people could live and work in a closed biosphere, while carrying out scientific experiments. It is in some cases used to explore the possible use of closed biospheres in space colonization, and also allows the study and manipulation of a biosphere without harming Earth's. The most advanced technogaian proposal is the "terraforming" of a planet, moon, or other body by deliberately modifying its atmosphere, temperature, or ecology to be similar to those of Earth in order to make it habitable by humans.
Genetic engineering
S. Matthew Liao, professor of philosophy and bioethics at New York University, claims that the human impact on the environment could be reduced by genetically engineering humans to have, a smaller stature, an intolerance to eating meat, and an increased ability to see in the dark, thereby using less lighting. Liao argues that human engineering is less risky than geoengineering.
Genetically modified foods have reduced the amount of herbicide and insecticide needed for cultivation. The development of glyphosate-resistant (Roundup Ready) plants has changed the herbicide use profile away from more environmentally persistent herbicides with higher toxicity, such as atrazine, metribuzin and alachlor, and reduced the volume and danger of herbicide runoff.
An environmental benefit of Bt-cotton and maize is reduced use of chemical insecticides. A PG Economics study concluded that global pesticide use was reduced by 286,000 tons in 2006, decreasing the environmental impact of herbicides and pesticides by 15%. A survey of small Indian farms between 2002 and 2008 concluded that Bt cotton adoption had led to higher yields and lower pesticide use. Another study concluded that insecticide use on cotton and corn during the years 1996 to 2005 fell by a quantity of active ingredient roughly equal to the annual amount applied in the EU. A Bt cotton study in six northern Chinese provinces from 1990 to 2010 concluded that it halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders, and extended environmental benefits to neighbouring crops of maize, peanuts and soybeans.
Examples of implementation
Related environmental ethical schools and movements
See also
Appropriate technology
Digital public goods
Eco-innovation
Ecological modernization
Ecomodernism
Ecosia
Ecotechnology
Environmental ethics
Green development
Green nanotechnology
List of environmental issues
Open-source appropriate technology
Solarpunk
Ten Technologies to Fix Energy and Climate
References
External links
Green Progress
Viridian Design Movement
WorldChanging
Bright green environmentalism
Green politics
Environmentalism
Transhumanism | Technogaianism | [
"Technology",
"Engineering",
"Biology"
] | 1,449 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
1,544,327 | https://en.wikipedia.org/wiki/Limiting%20reagent | The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as excess reagents or excess reactants (sometimes abbreviated as "xs"), or to be in abundance.
The limiting reagent must be identified in order to calculate the percentage yield of a reaction since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents.
Method 1: Comparison of reactant amounts
This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent.
Example for two reactants
Consider the combustion of benzene, represented by the following chemical equation:
2 C6H6(l) + 15 O2(g) -> 12 CO2(g) + 6 H2O(l)
This means that 15 moles of molecular oxygen (O2) is required to react with 2 moles of benzene (C6H6)
The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example,
if 1.5 mol C6H6 is present, 11.25 mol O2 is required:
(1.5 mol C6H6) × (15 mol O2 / 2 mol C6H6) = 11.25 mol O2
If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent.
This conclusion can be verified by comparing the mole ratio of O2 and C6H6 required by the balanced equation with the mole ratio actually present:
required: O2 / C6H6 = 15 / 2 = 7.5
actual: O2 / C6H6 = 18 / 1.5 = 12
Since the actual ratio is larger than required, O2 is the reagent in excess, which confirms that benzene is the limiting reagent.
Method 2: Comparison of product amounts which can be formed from each reactant
In this method the chemical equation is used to calculate the amount of one product which can be formed from each reactant in the amount present. The limiting reactant is the one which can form the smallest amount of the product considered. This method can be extended to any number of reactants more easily than the first method.
Example
20.0 g of iron (III) oxide (Fe2O3) are reacted with 8.00 g aluminium (Al) in the following thermite reaction:
Fe2O3(s) + 2 Al(s) -> 2 Fe(l) + Al2O3(s)
Since the reactant amounts are given in grams, they must be first converted into moles for comparison with the chemical equation, in order to determine how many moles of Fe can be produced from either reactant.
Moles of Fe which can be produced from reactant Fe2O3 = (20.0 g Fe2O3) × (1 mol Fe2O3 / 159.7 g Fe2O3) × (2 mol Fe / 1 mol Fe2O3) = 0.250 mol Fe
Moles of Fe which can be produced from reactant Al = (8.00 g Al) × (1 mol Al / 26.98 g Al) × (2 mol Fe / 2 mol Al) = 0.297 mol Fe
There is enough Al to produce 0.297 mol Fe, but only enough Fe2O3 to produce 0.250 mol Fe. This means that the amount of Fe actually produced is limited by the Fe2O3 present, which is therefore the limiting reagent.
Shortcut
It can be seen from the example above that the amount of product (Fe) formed from each reagent X (Fe2O3 or Al) is proportional to the quantity
(moles of reagent X) / (stoichiometric coefficient of reagent X)
This suggests a shortcut which works for any number of reagents. Just calculate this formula for each reagent, and the reagent that has the lowest value of this formula is the limiting reagent. We can apply this shortcut in the above example: for Fe2O3 the value is 0.125 mol / 1 = 0.125, and for Al it is 0.297 mol / 2 = 0.148, so Fe2O3 has the lower value and is the limiting reagent.
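A small sketch of the same shortcut in code; the function and variable names are invented, and the data are taken from the thermite example above:
```python
def limiting_reagent(reagents):
    """reagents: {name: (mass_g, molar_mass_g_per_mol, stoichiometric_coefficient)}
    Returns the limiting reagent and the moles/coefficient ratio for each reagent."""
    ratios = {name: (mass / molar_mass) / coeff
              for name, (mass, molar_mass, coeff) in reagents.items()}
    return min(ratios, key=ratios.get), ratios

print(limiting_reagent({"Fe2O3": (20.0, 159.7, 1), "Al": (8.00, 26.98, 2)}))
# ('Fe2O3', {'Fe2O3': 0.125..., 'Al': 0.148...})
```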
See also
Limiting factor
References
Chemical reactions
Stoichiometry | Limiting reagent | [
"Chemistry"
] | 908 | [
"Stoichiometry",
"Chemical reaction engineering",
"nan"
] |
1,544,409 | https://en.wikipedia.org/wiki/Savepoint | A savepoint is a way of implementing subtransactions (also known as nested transactions) within a relational database management system by indicating a point within a transaction that can be "rolled back to" without affecting any work done in the transaction before the savepoint was created. Multiple savepoints can exist within a single transaction. Savepoints are useful for implementing complex error recovery in database applications. If an error occurs in the midst of a multiple-statement transaction, the application may be able to recover from the error (by rolling back to a savepoint) without needing to abort the entire transaction.
A savepoint can be declared by issuing a SAVEPOINT name statement. All changes made after a savepoint has been declared can be undone by issuing a ROLLBACK TO SAVEPOINT name command. Issuing RELEASE SAVEPOINT name will cause the named savepoint to be discarded, but will not otherwise affect anything. Issuing the commands ROLLBACK or COMMIT will also discard any savepoints created since the start of the main transaction.
Savepoints are defined in the SQL standard and are supported by all established SQL relational databases, including PostgreSQL, Oracle Database, Microsoft SQL Server, MySQL, IBM Db2, SQLite (since 3.6.8), Firebird, H2 Database Engine, and Informix (since version 11.50xC3).
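As an illustration, the sketch below drives savepoints from Python's sqlite3 module (SQLite being one of the databases listed above); the table and row values are invented:
```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; transactions handled explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

cur.execute("BEGIN")                                  # start the main transaction
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")
cur.execute("SAVEPOINT before_bob")                   # declare a savepoint
cur.execute("INSERT INTO accounts VALUES ('bob', -50)")
cur.execute("ROLLBACK TO SAVEPOINT before_bob")       # undo only the work done after the savepoint
cur.execute("RELEASE SAVEPOINT before_bob")           # discard the savepoint itself
cur.execute("COMMIT")                                 # the first insert survives

print(cur.execute("SELECT * FROM accounts").fetchall())   # [('alice', 100)]
```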
Data management
Transaction processing
Database management systems | Savepoint | [
"Technology"
] | 288 | [
"Data management",
"Data"
] |
1,544,719 | https://en.wikipedia.org/wiki/Ice%20pick | An ice pick is a pointed metal tool used to break, pick or chip at ice. The design consists of a sharp metal spike attached to a wooden handle. The tool's design has been relatively unchanged since its creation. The only notable differences in the design are the material used for the handle. The handle material is usually made out of wood but can also be made from plastic or rubber. These materials can be better in terms of safety and allow the user to better grip the pick during use.
History
During the 1800s, ice blocks were gathered from frozen water sources and distributed to nearby homes. Ice picks were used to easily cut the blocks into smaller pieces for use. In many cases these smaller blocks were used in iceboxes. Iceboxes are similar in use to refrigerators, with the major difference being that iceboxes could only stay cold for a limited time. They needed to be restocked with ice regularly to continue proper functioning. The ice pick slowly began to lose popularity beginning in the early to mid-1900s due to the creation of the modern refrigerator. Many refrigerators came with a built-in ice maker which allowed for easy access to small ice chunks at any time and eliminated the need for the ice pick.
Modern usage
Some bartenders will carve or chip blocks of ice into aesthetically pleasing shapes to be served with their drinks, using tools including an ice pick.
Because blocks of ice melt much slower than cubes, sailors, campers and others who will be away from civilization for periods of time may carry blocks of ice along with an ice pick to shape and serve the blocks.
Use as weapon
Because of its availability and ability to puncture the skin easily, the ice pick has sometimes been used as a weapon. Most notoriously, New York's organized crime groups known as Murder Incorporated made extensive use of the ice pick as a weapon during the 1930s and 1940s. There were up to 1,000 murders committed by this group. In 1932, the bodies of two young men were found who had been stabbed numerous times with an ice pick. A man named Jacob Drucker was found guilty of murdering a man in 1944. His victim had been stabbed with an ice pick over 20 times. The most feared hitman of his day, Abe Reles, used the ice pick as his weapon of choice, usually stabbing his victims in the ear.
According to New York City police, ice picks are still used today as street weapons. On August 21, 2012, a man was attacked with an ice pick in the Bronx. John Martinez, a man from the Bronx, was convicted of several robberies using an ice pick in 2011.
Murderer Richard Kuklinski, who claimed to have killed over 200 people, was reported to have used an ice pick among other weapons.
During the early morning of August 16, 1975, Utah Highway Patrol Trooper Robert Hayward arrested serial killer Ted Bundy. Among the items found during a search of Bundy's Volkswagen was a brown gym bag containing a red-handled ice pick along with handcuffs, rope, a ski mask, a panty hose mask, a flashlight, GLAD garbage bags, and other incriminating items.
Leon Trotsky is sometimes incorrectly said to have been killed with an ice pick. He was actually killed with an ice axe, a mountaineering tool.
An ice pick (later revealed as a screwdriver) and a kitchen knife were used by Luka Magnotta to murder Jun Lin in 2012.
In 2018, a 25-year-old man was killed at a New York City bus stop when he was stabbed with an ice pick.
In the Philippines, an ice pick is a common weapon, particularly in the slums of Manila.
Use in medicine
The lobotomy was a medical treatment that gained popularity during the mid-1930s. Lobotomist Walter Freeman performed thousands of lobotomies across the world. Reportedly, he used an ice pick from his family's kitchen. The pick would be inserted into the brain through the eye socket. The procedure would be done without the use of anesthetics. This "ice pick lobotomy" was believed to diminish mental issues; however, it often resulted in paralysis and early death. This treatment failed due to a lack of testing before being performed on thousands of people. Walter Freeman's medical license was revoked in 1967 after a woman died during a lobotomy. This method of lobotomy led to the deaths of around 500 people over the course of 50 years. By the 1970s, the procedure would be banned in many countries for being inhumane.
References
External links
Mechanical hand tools | Ice pick | [
"Physics"
] | 934 | [
"Mechanics",
"Mechanical hand tools"
] |
1,544,912 | https://en.wikipedia.org/wiki/Solid-state%20nuclear%20track%20detector | A solid-state nuclear track detector or SSNTD (also known as an etched track detector or a dielectric track detector, DTD) is a sample of a solid material (photographic emulsion, crystal, glass or plastic) exposed to nuclear radiation (neutrons or charged particles, occasionally also gamma rays), etched in a corrosive chemical, and examined microscopically. When the nuclear particles pass through the material they leave trails of molecular damage, and these damaged regions are etched faster than the bulk material, generating holes called tracks.
The size and shape of these tracks yield information about the mass, charge, energy and direction of motion of the particles. The main advantages over other radiation detectors are the detailed information available on individual particles, the persistence of the tracks allowing measurements to be made over long periods of time, and the simple, cheap and robust construction of the detector. For these reasons, SSNTDs are commonly used to study cosmic rays, long-lived radioactive elements, radon concentration in houses, and the age of geological samples.
The basis of SSNTDs is that charged particles damage the detector within nanometers along the track in such a way that the track can be etched many times faster than the undamaged material. Etching, typically for several hours, enlarges the damage to conical pits of micrometer dimensions, that can be observed with a microscope. For a given type of particle, the length of the track gives the energy of the particle. The charge can be determined from the etch rate of the track compared to that of the bulk. If the particles enter the surface at normal incidence, the pits are circular; otherwise the ellipticity and orientation of the elliptical pit mouth indicate the direction of incidence.
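One standard relation in track etching (not stated in the text above; the etch-rate numbers here are invented) is that a track is only revealed if the particle's dip angle relative to the detector surface exceeds a critical angle θc, with sin θc = VB/VT, the ratio of the bulk etch rate to the track etch rate:
```python
import math

def critical_angle_deg(v_bulk: float, v_track: float) -> float:
    """Critical registration angle from sin(theta_c) = v_bulk / v_track."""
    return math.degrees(math.asin(v_bulk / v_track))

print(critical_angle_deg(1.2, 6.0))   # ≈ 11.5 degrees
```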
A material commonly used in SSNTDs is polyallyl diglycol carbonate (also known as CR-39). It is a clear, colorless, rigid plastic with the chemical formula C12H18O7. Etching to expose radiation damage is typically performed using solutions of caustic alkalis such as sodium hydroxide, often at elevated temperatures for several hours.
See also
nuclear track detectors that are not solid state
cloud chamber
bubble chamber
solid-state (semiconductor) nuclear detectors that do not record tracks
surface-barrier detector
silicon drift detector
lithium-drifted silicon detector - Si(Li)
intrinsic detector
External links
Gregory Choppin, Jan-Olov Liljenzin, Jan Rydberg Radiochemistry and Nuclear Chemistry, Chapter 8, "Detection and Measurement Techniques"
Nuclear physics
Particle detectors | Solid-state nuclear track detector | [
"Physics",
"Technology",
"Engineering"
] | 516 | [
"Particle detectors",
"Measuring instruments",
"Nuclear physics"
] |
1,544,923 | https://en.wikipedia.org/wiki/Planet%20V | Planet V is a hypothetical fifth terrestrial planet posited by NASA scientists John Chambers and Jack J. Lissauer to have once existed between Mars and the asteroid belt. In their hypothesis the Late Heavy Bombardment of the Hadean era began after perturbations from the other terrestrial planets caused Planet V's orbit to cross into the asteroid belt. Chambers and Lissauer presented the results of initial tests of this hypothesis during the 33rd Lunar and Planetary Science Conference, held from March 11 through 15, 2002.
Hypothesis
In the Planet V hypothesis, five terrestrial planets were produced during the planetary formation era. The fifth terrestrial planet began on a low-eccentricity orbit between Mars and the asteroid belt with a semi-major axis between 1.8 and 1.9 AU. While long-lived, this orbit was unstable on a time-scale of 600 Myr. Eventually perturbations from the other inner planets drove Planet V onto a high-eccentricity orbit which crossed into the inner asteroid belt. Asteroids were scattered onto Mars-crossing and resonant orbits by close encounters with Planet V. Many of these asteroids then evolved onto Earth-crossing orbits temporarily enhancing the lunar impact rate. This process continued until Planet V was lost most likely by impacting the Sun after entering the ν6 secular resonance.
Tests and results
As an initial test of the Planet V hypothesis, Chambers and Lissauer conducted 36 computer simulations of the Solar System with an additional terrestrial planet. A variety of parameters were used to determine the impacts of Planet V's initial orbit and mass. The mean time at which Planet V was lost was found to increase from 100 Myr to 400 Myr as its initial semi-major axis was increased from 1.8 to 1.9 AU. Results consistent with the current Solar System were most common with a 0.25 Mars-mass Planet V. In cases with a larger-mass Planet V, collisions between planets were likely. Overall, a third of these simulations were deemed successful in that Planet V was removed without impacting another planet. To test whether Planet V could increase the lunar impact rate, they added test particles to one of the simulations. After an initial decline, the number of particles on Earth-crossing orbits increased after Planet V entered the inner asteroid belt, a pattern consistent with the LHB. These results were presented at the 33rd Lunar and Planetary Science Conference.
In a later article published in the journal Icarus in 2007, Chambers reported the results of 96 simulations examining the orbital dynamics of the Solar System with five terrestrial planets. In a quarter of the simulations Planet V was ejected or impacted the Sun without other terrestrial planets suffering collisions. This result was most frequent if Planet V's mass was less than 0.25 of Mars. The other simulations were not considered successful because Planet V either survived for the entire 1 billion year length of the simulations, or collisions occurred between planets.
The terrestrial Planet V hypothesis was examined by Ramon Brasser and Alessandro Morbidelli in 2011. Their work was the first to focus on the magnitude of the bombardment caused by Planet V. Brasser and Morbidelli calculated that to create the Late Heavy Bombardment, Planet V would have to remove 95% of the pre-LHB main asteroid belt or 98% of the inner asteroid belt (semi-major axis < 2.5 AU). Depleting the main asteroid belt by 95% with a 0.5 Mars-mass Planet V was found to require that it remain in an orbit crossing the entire asteroid belt for 300 million years. This orbital evolution was not observed in any simulations; Planet V typically evolved onto an Earth-crossing orbit, which gave it a short dynamical lifetime, well before spending that long crossing the belt. In a few percent of simulations Planet V remained in the inner belt long enough to produce the LHB. However, producing the LHB from the inner asteroid belt would require the inner asteroid belt to have begun with 4–13 times the mass, and 10–24 times the orbital density, of the rest of the asteroid belt.
Brasser and Morbidelli also examined the hypothesis that Planet V caused the LHB by disrupting putative asteroid belts between the terrestrial planets. The authors noted that the lack of present-day detection of the remnants of these belts places a significant constraint on this hypothesis, requiring that they be 99.99% depleted before Planet V was lost. While this occurred in 66% of the simulations compatible with the current Solar System for a Venus-Earth belt, it did not occur in any for the Earth-Mars belt due to its higher stability. Morbidelli and Brasser concluded from this result that an Earth-Mars belt could not have contained a significant population. Although Planet V could generate a Late Heavy Bombardment by disrupting a massive Venus-Earth belt alone, the authors observed that significant differences in these belts has not been produced in planetary formation models.
Alternate version
An impact of Planet V onto Mars, forming the Borealis Basin, has recently been proposed as an explanation for the Late Heavy Bombardment. Debris from this impact would have a different size distribution than the asteroid belt, with a smaller fraction of large bodies, and would result in a lower number of giant impact basins relative to craters.
See also
Disrupted planet
Fifth planet (hypothetical)
Nice model
Phaeton (hypothetical planet)
List of hypothetical solar system objects
References
Hypothetical bodies of the Solar System
Solar System dynamic theories
V
Solar System | Planet V | [
"Astronomy"
] | 1,076 | [
"Astronomical hypotheses",
"Outer space",
"Astronomical myths",
"Hypothetical astronomical objects",
"Astronomical objects",
"Solar System"
] |
1,544,931 | https://en.wikipedia.org/wiki/Bonnefantenmuseum | The Bonnefanten Museum is a museum of historic, modern and contemporary art in Maastricht, Netherlands.
History
The museum was founded in 1884 as the historical and archaeological museum for the Dutch province of Limburg. The name Bonnefanten Museum is derived from the French 'bons enfants' ('good children'), the popular name of a former convent that housed the museum from 1951 until 1978.
In 1995, the museum moved to its present location, a former industrial site named 'Céramique'. The new building was designed by Italian architect Aldo Rossi. With its rocket-shaped cupola overlooking the river Maas, it is one of Maastricht's most prominent modern buildings.
Since 1999, the museum has become exclusively an art museum. The historical and archaeological collections were housed elsewhere, partially at the Limburg Museum in Venlo. The museum is largely funded by the province of Limburg.
In 2009, the museum celebrated its 125th anniversary with the exhibition Exile on Main Street, celebrating modern and contemporary American art. Stijn Huijts has been director since 2012.
Collection
The combination of historic art and contemporary art under one roof gives the Bonnefanten Museum a distinctive character. The department of old masters is located on the first floor and displays highlights of early Italian, Flemish and Dutch painting. Exhibited on the same floor is the museum's extensive collection of medieval sculpture. The contemporary art collection is usually exhibited on the second floor and focuses on American Minimalism, Italian Arte Povera and Concept Art. The second and third floors are also used for temporary exhibitions.
Historic Art
The collection of historic paintings and sculptures of the Bonnefanten Museum consists of four main sections:
Wooden sculptures dating from the 13th to the 16th century, notably by Jan van Steffeswert (e.g. The Virgin and Child with St. Anne);
The Neutelings Collection of medieval art, consisting of artefacts made of wood, bronze, marble, alabaster and ivory from the Southern Netherlands, France, England and the German Lower Rhine region;
Italian paintings from the 14th and 15th centuries: Giovanni del Biondo, Domenico di Michelino, Jacopo del Casentino, Sano di Pietro, Pietro Nelli;
Flemish and Dutch paintings from the 16th and 17th centuries: Colijn de Coter, Jan Mandyn, Jan Provost, Roelandt Savery, Pieter Coecke van Aelst, Pieter Aertsen, Pieter Brueghel the Younger, David Teniers II, Peter Paul Rubens, Jacob Jordaens, Hendrik van Steenwijk II, Gérard de Lairesse, Wallerant Vaillant, Melchior d'Hondecoeter, Jan van Goyen and Cornelis de Bryer.
Contemporary art
Since Alexander van Grevenstein became director in 1986, the Bonnefanten Museum has focused mainly on contemporary art. The main focus of the permanent collection is on:
Conceptual art: Jan Dibbets, Marcel Broodthaers, Joseph Beuys, Bruce Nauman, Gilbert and George, Ai Weiwei;
Minimal Art: Sol LeWitt, Robert Ryman, Robert Mangold, Richard Serra;
Arte Povera: Luciano Fabro, Mario Merz, Jannis Kounellis;
Neo-expressionism: Neo Rauch, Peter Doig, Gary Hume, Grayson Perry, Luc Tuymans, Marlene Dumas.
The collection also features video art and room-size installations by younger artists: Atelier Van Lieshout, Francis Alÿs, David Claerbout, Patrick Van Caeckenbergh, Roman Signer, Franz West, Pawel Althamer.
In 2011, a deal was negotiated between the collectors Jo and Marlies Eyck and the province of Limburg. The result was that the Eyck collection of postwar art and the castle of Wijlre and its grounds, are now part of the museum.
Visitor numbers
All figures are from museum year reports.
Governance
The current director is Stijn Huijts. He replaced Alexander van Grevenstein, who became director in 1986. As of 2023, there are 53 permanent staff at the museum. The budget, in 2023, was around €9.9m, of which €6.5m was received in funding from the province of Limburg.
Gallery
See also
Google Arts & Culture
Bibliography, references and notes
Szénássy, I.L. (ed.), Bonnefantenmuseum. Het gebouw. Het museum. De verzamelingen. Maastricht, 1984
Szénássy, I.L. (ed.), Kunst in het Bonnefantenmuseum. Maastricht, 1984
Szénássy, I.L. (ed.), Oudheden in het Bonnefantenmuseum. Maastricht, 1984
Poel, P. te, Bonnefantenmuseum. Collectie Middeleeuws Houtsnijwerk. Maastricht, 2007
Poel, P. te, Bonnefantenmuseum. Collectie Neutelings. Maastricht, 2007
Quik, T., Bonnefantenmuseum. De geschiedenis. Maastricht, 2007
Timmers, J.J.M., Catalogus van schilderijen en beeldhouwwerken. Maastricht, 1958
Wegen, R. van, and T. Quik (ed.), Bonnefantenmuseum Maastricht. Maastricht, 1995
External links
Bonnefantenmuseum within Google Arts & Culture
Modern art museums
Art museums and galleries in the Netherlands
Postmodern architecture
Museums in Maastricht
Art museums and galleries established in 1884
1884 establishments in the Netherlands
19th-century architecture in the Netherlands | Bonnefantenmuseum | [
"Engineering"
] | 1,212 | [
"Postmodern architecture",
"Architecture"
] |
1,544,998 | https://en.wikipedia.org/wiki/Van%20Wijngaarden%20grammar | In computer science, a Van Wijngaarden grammar (also vW-grammar or W-grammar) is a formalism for defining formal languages. The name derives from the formalism invented by Adriaan van Wijngaarden
for the purpose of defining the ALGOL 68 programming language.
The resulting specification remains its most notable application.
Van Wijngaarden grammars address the problem that context-free grammars cannot express agreement or reference, where two different parts of the sentence must agree with each other in some way. For example, the sentence "The birds was eating" is not Standard English because it fails to agree on number. A context-free grammar would parse "The birds was eating" and "The birds were eating" and "The bird was eating" in the same way. However, context-free grammars have the benefit of simplicity whereas van Wijngaarden grammars are considered highly complex.
Two levels
W-grammars are two-level grammars: they are defined by a pair of grammars, that operate on different levels:
the hypergrammar is an attribute grammar, i.e. a set of context-free grammar rules in which the nonterminals may have attributes; and
the metagrammar is a context-free grammar defining possible values for these attributes.
The set of strings generated by a W-grammar is defined by a two-stage process:
within each hyperrule, for each attribute that occurs in it, pick a value for it generated by the metagrammar; the result is a normal context-free grammar rule; do this in every possible way;
use the resulting (possibly infinite) context-free grammar to generate strings in the normal way.
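As an informal illustration of the first stage, the following Python sketch (the toy hyperrule, metavariable names and values are hypothetical, chosen only for this example) instantiates a single hyperrule by consistent substitution, producing ordinary context-free rules.

from itertools import product

# Toy metagrammar: here each metavariable ranges over a finite set of values,
# although in a real W-grammar these sets may be infinite.
metagrammar = {"NUMBER": ["singular", "plural"], "PERSON": ["1st", "2nd", "3rd"]}

# One hyperrule: a left-hand side and a right-hand side whose symbols may
# mention metavariables written in angle brackets.
hyperrule = ("sentence <NUMBER> <PERSON>",
             ["subject <NUMBER> <PERSON>", "verb <NUMBER> <PERSON>"])

def instantiate(rule, meta):
    # Consistent substitution: the same metavariable receives the same value
    # everywhere it occurs within one copy of the rule.
    lhs, rhs = rule
    names = sorted(meta)
    for values in product(*(meta[name] for name in names)):
        def subst(symbol):
            for name, value in zip(names, values):
                symbol = symbol.replace("<" + name + ">", value)
            return symbol
        yield subst(lhs), [subst(symbol) for symbol in rhs]

for lhs, rhs in instantiate(hyperrule, metagrammar):
    print(lhs, "::=", ", ".join(rhs))

Running this prints the six context-free rules obtained from the single hyperrule; that set of rules is the grammar used in the second stage.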
The consistent substitution used in the first step is the same as substitution in predicate logic, and actually supports logic programming; it corresponds to unification in Prolog, as noted by Alain Colmerauer.
W-grammars are Turing complete;
hence, all decision problems regarding the languages they generate, such as
whether a W-grammar generates a given string
whether a W-grammar generates no strings at all
are undecidable.
Curtailed variants, known as affix grammars, were developed, and applied in compiler construction and to the description of natural languages.
Definite logic programs, that is, logic programs that make no use of negation, can be viewed as a subclass of W-grammars.
Motivation and history
In the 1950s, attempts started to apply computers to the recognition, interpretation and translation of natural languages, such as English and Russian. This requires a machine-readable description of the phrase structure of sentences, that can be used to parse and interpret them, and to generate them. Context-free grammars, a concept from structural linguistics, were adopted for this purpose; their rules can express how sentences are recursively built out of parts of speech, such as noun phrases and verb phrases, and ultimately, words, such as nouns, verbs, and pronouns.
This work influenced the design and implementation of programming languages, most notably, of ALGOL 60, which introduced a syntax description in Backus–Naur form.
However, context-free rules cannot express agreement or reference (anaphora), where two different parts of the sentence must agree with each other in some way.
These can be readily expressed in W-grammars. (See example below.)
Programming languages have the analogous notions of typing and scoping.
A compiler or interpreter for the language must recognize which uses of a variable belong together (refer to the same variable). This is typically subject to constraints such as:
A variable must be initialized before its value is used.
In strongly typed languages, each variable is assigned a type, and all uses of the variable must respect its type.
Often, its type must be declared explicitly, before use.
W-grammars are based on the idea of providing the nonterminal symbols of context-free grammars with attributes (or affixes) that pass information between the nodes of the parse tree, used to constrain the syntax and to specify the semantics.
This idea was well known at the time; e.g. Donald Knuth visited the ALGOL 68 design committee while developing his own version of it, attribute grammars.
By augmenting the syntax description with attributes, constraints like the above can be checked, ruling many invalid programs out at compile time.
Quite peculiar to W-grammars was their strict treatment of attributes as strings, defined by a context-free grammar, on which concatenation is the only possible operation; complex data structures and operations can be defined by pattern matching. (See example below.)
After their introduction in the 1968 ALGOL 68 "Final Report", W-grammars were widely considered as too powerful and unconstrained to be practical.
This was partly a consequence of the way in which they had been applied; the 1973 ALGOL 68 "Revised Report" contains a much more readable grammar, without modifying the W-grammar formalism itself.
Meanwhile, it became clear that W-grammars, when used in their full generality, are indeed too powerful for such practical purposes as serving as the input for a parser generator.
They describe precisely all recursively enumerable languages, which makes parsing impossible in general: it is an undecidable problem to decide whether a given string can be generated by a given W-grammar.
Hence, their use must be seriously constrained when used for automatic parsing or translation. Restricted and modified variants of W-grammars were developed to address this, e.g.
Extended Affix Grammars (EAGs), applied to describe the grammars of natural languages such as English and Spanish;
Q-systems, also applied to natural language processing;
the CDL series of languages, applied as compiler construction languages for programming languages.
After the 1970s, interest in the approach waned; occasionally, new studies are published.
Examples
Agreement in English grammar
In English, nouns, pronouns and verbs have attributes such as grammatical number, gender, and person, which must agree between subject, main verb, and pronouns referring to the subject:
I wash myself.
She washes herself.
We wash ourselves.
are valid sentences; invalid are, for instance:
*I washes ourselves.
*She wash himself.
*We wash herself.
Here, agreement serves to stress that both pronouns (e.g. I and myself) refer to the same person.
A context-free grammar to generate all such sentences:
<sentence> ::= <subject> <verb> <object>
<subject> ::= I | You | He | She | We | They
<verb> ::= wash | washes
<object> ::= myself | yourself | himself | herself | ourselves | yourselves | themselves
From <sentence>, we can generate all combinations:
I wash myself
I wash yourself
I wash himself
[...]
They wash yourselves
They wash themselves
A W-grammar to generate only the valid sentences:
<sentence <NUMBER> <GENDER> <PERSON>>
::= <subject <NUMBER> <GENDER> <PERSON>>
<verb <NUMBER> <PERSON>>
<object <NUMBER> <GENDER> <PERSON>>
<subject singular <GENDER> 1st> ::= I
<subject <NUMBER> <GENDER> 2nd> ::= You
<subject singular male 3rd> ::= He
<subject singular female 3rd> ::= She
<subject plural <GENDER> 1st> ::= We
<subject plural <GENDER> 3rd> ::= They
<verb singular 1st> ::= wash
<verb singular 2nd> ::= wash
<verb singular 3rd> ::= washes
<verb plural <PERSON>> ::= wash
<object singular <GENDER> 1st> ::= myself
<object singular <GENDER> 2nd> ::= yourself
<object singular male 3rd> ::= himself
<object singular female 3rd> ::= herself
<object plural <GENDER> 1st> ::= ourselves
<object plural <GENDER> 2nd> ::= yourselves
<object plural <GENDER> 3rd> ::= themselves
<NUMBER> ::== singular | plural
<GENDER> ::== male | female
<PERSON> ::== 1st | 2nd | 3rd
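For illustration only, the following Python sketch mirrors the effect of these hyperrules with simple lookup tables (the tables paraphrase the rules above and are not a faithful transcription; the plural second person is omitted for brevity): picking one value per attribute and using it for subject, verb and object at once can only produce agreeing sentences.

# Attribute triples (number, person, gender); gender is None where it does not matter.
subject = {
    ("singular", "1st", None): "I",
    ("singular", "2nd", None): "You",
    ("singular", "3rd", "male"): "He",
    ("singular", "3rd", "female"): "She",
    ("plural", "1st", None): "We",
    ("plural", "3rd", None): "They",
}
verb = {("singular", "3rd"): "washes"}          # every other combination is "wash"
obj = {
    ("singular", "1st", None): "myself",
    ("singular", "2nd", None): "yourself",
    ("singular", "3rd", "male"): "himself",
    ("singular", "3rd", "female"): "herself",
    ("plural", "1st", None): "ourselves",
    ("plural", "3rd", None): "themselves",
}
for (number, person, gender), subj in subject.items():
    # The same attribute values index all three parts, so agreement is automatic.
    print(subj, verb.get((number, person), "wash"), obj[(number, person, gender)])

This prints only sentences such as "She washes herself" and "They wash themselves", never the starred examples above.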
A standard non-context-free language
A well-known non-context-free language is { aⁿbⁿcⁿ | n ≥ 1 }.
A two-level grammar for this language is the metagrammar
N ::= 1 | N1
X ::= a | b | c
together with grammar schema
Start ::= <a^N> <b^N> <c^N>
<X^N1> ::= <X^N> X
<X^1> ::= X
Here <a^N> stands for a block of a's whose length is the value substituted for N, and in the metarule N1 denotes N followed by the symbol 1, so the possible values of N are the strings 1, 11, 111, and so on.
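A minimal Python sketch of how a schema of this form generates the language (illustrative only; the function names are not part of the formalism): the same value of N is substituted into all three blocks, which is what forces the equal letter counts.

def block(letter, n):
    # <letter^N> derives the letter repeated N times via the last two schema rules.
    return letter * n

def start(n):
    # Start ::= <a^N> <b^N> <c^N>, with one value of N substituted consistently.
    return block("a", n) + block("b", n) + block("c", n)

print([start(n) for n in range(1, 4)])   # ['abc', 'aabbcc', 'aaabbbccc']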
Requiring valid use of variables in ALGOL
The Revised Report on the Algorithmic Language Algol 60
defines a full context-free syntax for the language.
Assignments are defined as follows (section 4.2.1):
<left part>
::= <variable> :=
| <procedure identifier> :=
<left part list>
::= <left part>
| <left part list> <left part>
<assignment statement>
::= <left part list> <arithmetic expression>
| <left part list> <Boolean expression>
A <variable> can be (amongst other things) an <identifier>, which in turn is defined as:
<identifier> ::= <letter> | <identifier> <letter> | <identifier> <digit>
Examples (section 4.2.2):
s:=p[0]:=n:=n+1+s
n:=n+1
A:=B/C-v-q×S
S[v,k+2]:=3-arctan(sTIMESzeta)
V:=Q>Y^Z
Expressions and assignments must be type checked: for instance,
in n:=n+1, n must be a number (integer or real);
in A:=B/C-v-q×S, all variables must be numbers;
in V:=Q>Y^Z, all variables must be of type Boolean.
The rules above distinguish between <arithmetic expression> and <Boolean expression>, but they cannot verify that the same variable always has the same type.
This (non-context-free) requirement can be expressed in a W-grammar by annotating the rules with attributes that record, for each variable used or assigned to, its name and type.
This record can then be carried along to all places in the grammar where types need to be matched, and used to implement type checking.
Similarly, it can be used to check initialization of variables before use, etcetera.
One may wonder how to create and manipulate such a data structure without explicit support in the formalism for data structures and operations on them. It can be done by using the metagrammar to define a string representation for the data structure and using pattern matching to define operations:
<left part with <TYPED> <NAME>>
::= <variable with <TYPED> <NAME>> :=
| <procedure identifier with <TYPED> <NAME>> :=
<left part list <TYPEMAP1>>
::= <left part with <TYPED> <NAME>>
<where <TYPEMAP1> is <TYPED> <NAME> added to sorted <EMPTY>>
| <left part list <TYPEMAP2>>
<left part with <TYPED> <NAME>>
<where <TYPEMAP1> is <TYPED> <NAME> added to sorted <TYPEMAP2>>
<assignment statement <ASSIGNED TO> <USED>>
::= <left part list <ASSIGNED TO>> <arithmetic expression <USED>>
| <left part list <ASSIGNED TO>> <Boolean expression <USED>>
<where <TYPED> <NAME> is <TYPED> <NAME> added to sorted <EMPTY>>
::=
<where <TYPEMAP1> is <TYPED1> <NAME1> added to sorted <TYPEMAP2>>
::= <where <TYPEMAP2> is <TYPED2> <NAME2> added to sorted <TYPEMAP3>>
<where <NAME1> is lexicographically before <NAME2>>
<where <TYPEMAP1> is <TYPED1> <NAME1> added to sorted <TYPEMAP2>>
::= <where <TYPEMAP2> is <TYPED2> <NAME2> added to sorted <TYPEMAP3>>
<where <NAME2> is lexicographically before <NAME1>>
<where <TYPEMAP3> is <TYPED1> <NAME1> added to sorted <TYPEMAP4>>
<where <EMPTY> is lexicographically before <NAME1>>
::= <where <NAME1> is <LETTER OR DIGIT> followed by <NAME2>>
<where <NAME1> is lexicographically before <NAME2>>
::= <where <NAME1> is <LETTER OR DIGIT> followed by <NAME3>>
<where <NAME2> is <LETTER OR DIGIT> followed by <NAME4>>
<where <NAME3> is lexicographically before <NAME4>>
<where <NAME1> is lexicographically before <NAME2>>
::= <where <NAME1> is <LETTER OR DIGIT 1> followed by <NAME3>>
<where <NAME2> is <LETTER OR DIGIT 2> followed by <NAME4>>
<where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>>
<where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>>
::= <where <LETTER OR DIGIT 1> precedes <LETTER OR DIGIT 2>>
<where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>>
::= <where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 3>>
<where <LETTER OR DIGIT 3> precedes+ <LETTER OR DIGIT 2>>
<where a precedes b> ::=
<where b precedes c> ::=
[...]
<TYPED> ::== real | integer | Boolean
<NAME> ::== <LETTER> | <NAME> <LETTER> | <NAME> <DIGIT>
<LETTER OR DIGIT> ::== <LETTER> | <DIGIT>
<LETTER OR DIGIT 1> ::= <LETTER OR DIGIT>
<LETTER OR DIGIT 2> ::= <LETTER OR DIGIT>
<LETTER OR DIGIT 3> ::= <LETTER OR DIGIT>
<LETTER> ::== a | b | c | [...]
<DIGIT> ::== 0 | 1 | 2 | [...]
<NAMES1> ::== <NAMES>
<NAMES2> ::== <NAMES>
<ASSIGNED TO> ::== <NAMES>
<USED> ::== <NAMES>
<NAMES> ::== <NAME> | <NAME> <NAMES>
<EMPTY> ::==
<TYPEMAP> ::== (<TYPED> <NAME>) <TYPEMAP>
<TYPEMAP1> ::== <TYPEMAP>
<TYPEMAP2> ::== <TYPEMAP>
<TYPEMAP3> ::== <TYPEMAP>
When compared to the original grammar, three new elements have been added:
attributes to the nonterminals in what are now the hyperrules;
metarules to specify the allowable values for the attributes;
new hyperrules to specify operations on the attribute values.
The new hyperrules are -rules: they only generate the empty string.
ALGOL 68 examples
The ALGOL 68 reports use a slightly different notation without <angled brackets>.
ALGOL 68 as in the 1968 Final Report §2.1
a) program : open symbol, standard prelude,
library prelude option, particular program, exit,
library postlude option, standard postlude, close symbol.
b) standard prelude : declaration prelude sequence.
c) library prelude : declaration prelude sequence.
d) particular program :
label sequence option, strong CLOSED void clause.
e) exit : go on symbol, letter e letter x letter i letter t, label symbol.
f) library postlude : statement interlude.
g) standard postlude : strong void clause train
ALGOL 68 as in the 1973 Revised Report §2.2.1, §10.1.1
program : strong void new closed clause
A) EXTERNAL :: standard ; library ; system ; particular.
B) STOP :: label letter s letter t letter o letter p.
a) program text : STYLE begin token, new LAYER1 preludes,
parallel token, new LAYER1 tasks PACK,
STYLE end token.
b) NEST1 preludes : NEST1 standard prelude with DECS1,
NEST1 library prelude with DECSETY2,
NEST1 system prelude with DECSETY3, where (NEST1) is
(new EMPTY new DECS1 DECSETY2 DECSETY3).
c) NEST1 EXTERNAL prelude with DECSETY1 :
strong void NEST1 series with DECSETY1, go on token ;
where (DECSETY1) is (EMPTY), EMPTY.
d) NEST1 tasks : NEST1 system task list, and also token,
NEST1 user task PACK list.
e) NEST1 system task : strong void NEST1 unit.
f) NEST1 user task : NEST2 particular prelude with DECS,
NEST2 particular program PACK, go on token,
NEST2 particular postlude,
where (NEST2) is (NEST1 new DECS STOP).
g) NEST2 particular program :
NEST2 new LABSETY3 joined label definition
of LABSETY3, strong void NEST2 new LABSETY3
ENCLOSED clause.
h) NEST joined label definition of LABSETY :
where (LABSETY) is (EMPTY), EMPTY ;
where (LABSETY) is (LAB1 LABSETY1),
NEST label definition of LAB1,
NEST joined label definition of$ LABSETY1.
i) NEST2 particular postlude :
strong void NEST2 series with STOP.
A simple example of the power of W-grammars is clause
a) program text : STYLE begin token, new LAYER1 preludes,
parallel token, new LAYER1 tasks PACK,
STYLE end token.
This allows BEGIN ... END and { } as block delimiters, while ruling out BEGIN ... } and { ... END.
One may wish to compare the grammar in the report with the Yacc parser for a subset of ALGOL 68 by Marc van Leeuwen.
Implementations
Anthony Fisher wrote yo-yo,
a parser for a large class of W-grammars, with example grammars for expressions, eva, sal and Pascal (the actual ISO 7185 standard for Pascal uses extended Backus–Naur form).
Dick Grune created a C program that would generate all possible productions of a W-grammar.
Applications outside of ALGOL 68
The applications of Extended Affix Grammars (EAGs) mentioned above can effectively be regarded as applications of W-grammars, since EAGs are so close to W-grammars.
W-grammars have also been proposed for the description of complex human actions in ergonomics.
A W-Grammar Description has also been supplied for Ada.
See also
Affix grammar
Extended Affix Grammar
Attribute grammar
References
Further reading
Formal languages
Parsing
Compiler construction
Dutch inventions | Van Wijngaarden grammar | [
"Mathematics"
] | 4,133 | [
"Formal languages",
"Mathematical logic"
] |
1,545,079 | https://en.wikipedia.org/wiki/Barnett%20effect | The Barnett effect is the magnetization of an uncharged body when spun on its axis. It was discovered by American physicist Samuel Barnett in 1915.
An uncharged object rotating with angular velocity ω tends to spontaneously magnetize, with a magnetization given by M = χω/γ,
where γ is the gyromagnetic ratio for the material and χ is the magnetic susceptibility.
The magnetization occurs parallel to the axis of spin. Barnett was motivated by a prediction by Owen Richardson in 1908, later named the Einstein–de Haas effect, that magnetizing a ferromagnet can induce a mechanical rotation. He instead looked for the opposite effect, that is, that spinning a ferromagnet could change its magnetization. He established the effect with a long series of experiments between 1908 and 1915.
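As a rough numerical illustration of the relation M = χω/γ above (the susceptibility and spin rate below are made-up example values, and the result is only meaningful in whatever unit system χ and γ are expressed in), a short Python sketch:

import math

def barnett_magnetization(chi, omega, gamma):
    # M = chi * omega / gamma, the relation stated above.
    return chi * omega / gamma

chi = 1e-4                      # hypothetical paramagnetic susceptibility
omega = 2 * math.pi * 100       # 100 revolutions per second, in rad/s
gamma = 1.76e11                 # electron gyromagnetic ratio, rad s^-1 T^-1
print(barnett_magnetization(chi, omega, gamma))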
See also
London moment
References
Further reading
Magnetism | Barnett effect | [
"Physics",
"Materials_science"
] | 168 | [
"Materials science stubs",
"Condensed matter stubs",
"Condensed matter physics"
] |
1,545,126 | https://en.wikipedia.org/wiki/Aryne | In organic chemistry, arynes and benzynes are a class of highly reactive chemical species derived from an aromatic ring by removal of two substituents. Arynes are examples of didehydroarenes (1,2-didehydroarenes in this case), although 1,3- and 1,4-didehydroarenes are also known. Arynes are examples of alkynes under high strain.
Bonding in arynes
The alkyne representation of benzyne is the most widely encountered. Arynes are usually described as having a strained triple bond.
Geometric constraints on the triple bond in benzyne result in diminished overlap of in-plane p-orbitals, and thus a weaker triple bond. The vibrational frequency of the triple bond in benzyne was assigned by Radziszewski to be 1846 cm−1, indicating a weaker triple bond than in an unstrained alkyne, which has a vibrational frequency of approximately 2150 cm−1. Nevertheless, benzyne is more like a strained alkyne than a diradical, as seen from the large singlet–triplet gap and alkyne-like reactivity.
The LUMO of aryne lies much lower than the LUMO of unstrained alkynes, which makes it a better energy match for the HOMO of nucleophiles. Hence, benzyne possesses electrophilic character and undergoes reactions with nucleophiles. A detailed MO analysis of benzyne was presented in 1968.
Generation of arynes
Due to their extreme reactivity, arynes must be generated in situ. Typical of other reactive intermediates, benzyne must be trapped, otherwise it dimerises to biphenylene.
Early routes to benzyne involved dehydrohalogenation of aryl halides:
Such reactions require strong base and high temperatures. 1,2-Disubstituted arenes serve as precursors to benzynes under milder conditions.
Benzyne is generated by the dehalogenation of 1-bromo-2-fluorobenzene by magnesium. Anthranilic acid can be converted to 2-diazoniobenzene-1-carboxylate by diazotization and neutralization. Although explosive, this zwitterionic species is a convenient and inexpensive precursor to benzyne.
Another method is based on trimethylsilylaryl triflates. This method has seen wide applicability and was reviewed in 2021. Fluoride displacement of the trimethylsilyl group induces elimination of triflate and release of benzyne:
A hexadehydro Diels-Alder reaction (HDDA) involves cycloaddition of 1,3-diyne and alkyne.
N-amination of 1H-benzotriazole with hydroxylamine-O-sulfonic acid generates an intermediate which can be oxidised to benzyne in almost quantitative yield with lead(IV) acetate.
Reactions of arynes
Even at low temperatures arynes are extremely reactive. Their reactivity can be classified in three main classes: (1) nucleophilic additions, (2) pericyclic reactions, and (3) bond-insertion.
Nucleophilic additions to arynes
Upon treatment with basic nucleophiles, aryl halides deprotonate alpha to the leaving group, resulting in dehydrohalogenation. Isotope exchange studies indicate that for aryl fluorides and, sometimes, aryl chlorides, the elimination event proceeds in two steps, deprotonation, followed by expulsion of the nucleophile. Thus, the process is formally analogous to the E1cb mechanism of aliphatic compounds. Aryl bromides and iodides, on the other hand, generally appear to undergo elimination by a concerted syn-coplanar E2 mechanism. The resulting benzyne forms addition products, usually by nucleophilic addition and protonation. Generation of the benzyne intermediate is the slow step in the reaction.
"Aryne coupling" reactions allow for generation of biphenyl compounds which are valuable in pharmaceutical industry, agriculture and as ligands in many metal-catalyzed transformations.
The metal–arene product can also add to another aryne, leading to chain-growth polymerization. Using copper(I) cyanide as the initiator to add to the first aryne yielded polymers containing up to about 100 arene units.
When the leaving group (LG) and the substituent (Y) are ortho or para to each other, only one benzyne intermediate is possible. However, when LG is meta to Y, two regiochemical outcomes (A and B) are possible. If Y is electron-withdrawing, then HB is more acidic than HA, resulting in regioisomer B being generated. Analogously, if Y is electron-donating, regioisomer A is generated, since now HA is the more acidic proton.
There are two possible regioisomers of benzyne with substituent (Y): triple bond can be positioned between C2 and C3 or between C3 and C4. Substituents ortho to the leaving group will lead to the triple bond between C2 and C3. Para Y and LG will lead to regioisomer with triple bond between C3 and C4. Meta substituent can afford both regioisomers as described above.
Nucleophilic additions can occur with regioselectivity. Although classic explanations to explain regioselectivity refer to carbanion stability following attack by the nucleophile, this explanation has been replaced by the aryne distortion model by Houk and Garg. In this model, substituents cause geometric distortion of the ground state structure of the aryne, leading to regioselective reactions, consistent with reactions proceeding through early transition states.
Pericyclic reactions of arynes
Benzyne undergoes rapid dimerization to form biphenylene. Some routes to benzyne lead to especially rapid and high yield of this subsequent reaction. Trimerization gives triphenylene.
Benzynes can undergo [4+2] cyclization reactions. When generated in the presence of anthracene, triptycene results. In this method, the concerted mechanism of the Diels-Alder reaction between benzyne and furan is shown below. Other benzyne [4+2] cycloadditions are thought to proceed via a stepwise mechanism.
A classic example is the synthesis of 1,2,3,4-tetraphenylnaphthalene. Tetrabromobenzene can react with butyllithium and furan to form a tetrahydroanthracene.
[4+2] cycloadditions of arynes have been commonly applied to natural product total synthesis. The main limitation of such approach, however, is the need to use constrained dienes, such as furan and cyclopentadiene. In 2009 Buszek and co-workers synthesized herbindole A using aryne [4+2]-cycloaddition. 6,7-indolyne undergoes [4+2] cycloaddition with cyclopentadiene to afford complex tetracyclic product.
Benzynes undergo [2+2] cycloaddition with a wide range of alkenes. Due to electrophilic nature of benzyne, alkenes bearing electron-donating substituents work best for this reaction.
Due to significant byproduct formation, aryne [2+2] chemistry is rarely utilized in natural product total synthesis. Nevertheless, several examples do exist. In 1982, Stevens and co-workers reported a synthesis of taxodione that utilized [2+2] cycloaddition between an aryne and a ketene acetal.
Mori and co-workers performed a palladium-catalyzed [2+2+2]-cocyclization of aryne and diyne in their total synthesis of taiwanins C.
Bond-insertion reactions of arynes
The first example of aryne σ-bond insertion reaction is the synthesis of melleine in 1973.
Other dehydrobenzenes
If benzyne is 1,2-didehydrobenzene, two further isomers are possible: 1,3-didehydrobenzene and 1,4-didehydrobenzene. Their energies in silico are, respectively, 106, 122, and 138 kcal/mol (444, 510 and 577 kJ/mol). The 1,2- and 1,3- isomers have singlet ground states, whereas for 1,4-didehydrobenzene the gap is smaller.
The interconversion of the 1,2-, 1,3- and 1,4-didehydrobenzenes has been studied. A 1,2- to 1,3-didehydrobenzene conversion has been postulated to occur in the pyrolysis (900 °C) of the phenyl substituted aryne precursors as shown below. Extremely high temperatures are required for benzyne interconversion.
1,4-Didehydroarenes
In classical 1,4-didehydrobenzene experiments, heating to 300 °C, [1,6-D2]-A readily equilibrates with [3,2-D2]-B, but does not equilibrate with C or D. The simultaneous migration of deuterium atoms to form B, and the fact that none of C or D is formed can only be explained by a presence of a cyclic and symmetrical intermediate–1,4-didehydrobenzene.
Two states were proposed for 1,4-didehydrobenzene: singlet and triplet, with the singlet state lower in energy. Triplet state represents two noninteracting radical centers, and hence should abstract hydrogens at the same rate as phenyl radical. However, singlet state is more stabilized than the triplet, and therefore some of the stabilizing energy will be lost in order to form the transition state for hydrogen cleavage, leading to slower hydrogen abstraction. Chen proposed the use of 1,4-didehydrobenzene analogues that have large singlet-triplet energy gaps to enhance selectivity of enediyne drug candidates.
History
The first evidence for arynes came from the work of Stoermer and Kahlert. In 1902 they observed that upon treatment of 3-bromobenzofuran with base in ethanol 2-ethoxybenzofuran is formed. Based on this observation they postulated an aryne intermediate.
Wittig et al. invoked zwitterionic intermediate in the reaction of fluorobenzene and phenyllithium to give biphenyl. This hypothesis was later confirmed.
In 1953 14C labeling experiments provided strong support for the intermediacy of benzyne. John D. Roberts et al. showed that the reaction of chlorobenzene-1-14C and potassium amide gave equal amounts of aniline with 14C incorporation at C-1 and C-2.
Wittig and Pohmer found that benzyne participate in [4+2] cycloaddition reactions.
Additional evidence for the existence of benzyne came from spectroscopic studies. Benzyne has been observed in a "molecular container".
In 2015, a single aryne molecule was imaged by STM.
The first 1,3-didehydroarene was demonstrated in the 1990s, when it was generated from 1,3-disubstituted benzene derivatives, such as the peroxy ester 1,3-C6H4(O2C(O)CH3)2.
Breakthroughs on 1,4-didehydrobenzene came in the 1960s, followed from studies on the Bergman cyclization. This theme became topical with the discovery of enediyne "cytostatics", such as calicheamicin, which generates a 1,4-didehydrobenzene.
Examples of benzynes in total synthesis
A variety of natural products have been prepared using arynes as intermediates. Nucleophilic additions to arynes have been widely used in natural product total synthesis. Indeed, nucleophilic additions of arynes are some of the oldest known applications of aryne chemistry. Nucleophilic addition to aryne was used in the attempted synthesis of cryptaustoline (1) and cryptowoline (2).
The synthesis of the tetracyclic meroterpenoid (+)-liphagal involved an aryne intermediate. Their approach employed an aryne cyclization to close the final ring of the natural product.
Multicomponent reactions of arynes are powerful transformations that allow for rapid formation of 1,2-disubstituted arenes. Despite their potential utility, examples of multicomponent aryne reactions in natural product synthesis are scarce. A four-component aryne coupling reaction was employed in the synthesis of dehydroaltenuene B.
See also
More examples use of aryne chemistry: tricyclobutabenzene, in-methylcyclophane, Transition metal benzyne complex
The pyridine equivalent pyridyne
References
External links
Reactive intermediates
Cycloalkynes
Aromatic hydrocarbons | Aryne | [
"Chemistry"
] | 2,860 | [
"Organic compounds",
"Reactive intermediates",
"Physical organic chemistry"
] |
1,545,150 | https://en.wikipedia.org/wiki/Terfenol-D | Terfenol-D, an alloy of the formula (x ≈ 0.3), is a magnetostrictive material. It was initially developed in the 1970s by the Naval Ordnance Laboratory in the United States. The technology for manufacturing the material efficiently was developed in the 1980s at Ames Laboratory under a U.S. Navy-funded program. It is named after terbium, iron (Fe), Naval Ordnance Laboratory (NOL), and the D comes from dysprosium.
Physical properties
The alloy has the highest magnetostriction of any alloy, up to 0.002 m/m at saturation; it expands and contracts in a magnetic field. Terfenol-D has a large magnetostriction force, high energy density, low sound velocity, and a low Young's modulus. In its purest form, it also has low ductility and a low fracture resistance. Terfenol-D is a gray alloy that has different possible ratios of its elemental components that always follow the formula TbxDy1−xFe2. The addition of dysprosium made it easier to induce magnetostrictive responses by making the alloy require a lower level of magnetic fields. When the ratio of Tb and Dy is increased, the resulting alloy's magnetostrictive properties will operate at temperatures as low as −200 °C, and when decreased, it may operate at a maximum of 200 °C. The composition of Terfenol-D allows it to have a large magnetostriction and magnetic flux when a magnetic field is applied to it. This behavior exists over a large range of compressive stresses, with a trend of decreasing magnetostriction as the compressive stress increases. Crush strength has been shown (unpublished) to be quite high under certain conditions. There is also a relationship between the magnetic flux and compression in which when the compressive stress increases, the magnetic flux changes less drastically. Terfenol-D is mostly used for its magnetostrictive properties, in which it changes shape when exposed to magnetic fields in a process called magnetization. Magnetic heat treatment is shown to improve the magnetostrictive properties of Terfenol-D at low compressive stress for certain ratios of Tb and Dy.
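As a back-of-the-envelope illustration of the saturation magnetostriction quoted above (the rod length is an arbitrary example value, not a property of the material):

saturation_strain = 0.002        # m/m at saturation, from the text
rod_length_mm = 50.0             # hypothetical actuator rod length
elongation_mm = saturation_strain * rod_length_mm
print(elongation_mm)             # 0.1 mm of free extension at saturation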
Applications
Due to its material properties, Terfenol-D is excellent for use in the manufacturing of low frequency, high powered underwater acoustics. Its initial application was in naval sonar systems. It sees application in magnetomechanical sensors, actuators, and acoustic and ultrasonic transducers due to its high energy density and large bandwidth capabilities, e.g. in the SoundBug device (its first commercial application by FeONIC). Its strain is also larger than that of another normally used material (PZT8), which allows Terfenol-D transducers to reach greater depths for ocean explorations than past transducers. Its low Young's modulus brings some complications due to compression at large depths, which are overcome in transducer designs that may reach 1000 ft in depth and only lose a small amount of accuracy of around 1 dB. Due to its high temperature range, Terfenol-D is also useful in deep hole acoustic transducers where the environment may reach high pressure and temperatures like oil holes. Terfenol-D may also be used for hydraulic valve drivers due to its high strain and high force properties. Similarly, magnetostrictive actuators have also been considered for use in fuel injectors for diesel engines because of the high stresses that can be produced. Terfenol-D uniquely combines key characteristics that enable advanced diesel fuel injection. First, the quantum mechanical origin of magnetostriction means this effect does not degrade, giving it robustness and durability. Second, it makes good use of the compression available from diesel fuel pressure. Finally, its mechanical expansion tends to be proportional to the imposed magnetic field, making injector needle position continuously controllable. An injector needle directly operated by Terfenol-D can have lifetime durability on an engine cylinder head while enabling unprecedented control over each injection event throughout its entire duration. These properties can be used for in-cylinder treatment of efficiency, emissions, and noise while enabling fuel flexibility.
Manufacturing
The increase in use of Terfenol-D in transducers required new production techniques that increased production rates and quality because the original methods were unreliable and small scale. There are four methods that are used to produce Terfenol-D, which are free stand zone melting, modified Bridgman, sintered powder compact, and polymer matrix composites.
The first two methods, free stand zone melting (FSZM) and modified Bridgman (MB), are capable of producing Terfenol-D that has high magnetostrictive properties and energy densities. However, FSZM cannot produce a rod larger than 8 mm in diameter due to the surface tension of the Terfenol-D and how the FSZM process has no container to restrict the material. The MB process offers a minimum of 10 mm diameter size and is only restricted due to the wall interfering with the crystal growth. Both methods create solid crystals that require later manufacturing if a geometry other than a right-angle cylinder is needed. The solid crystals produced have a fine lamellar structure.
The other two techniques, sintered powder compact and polymer matrix composites, are powder based. These techniques allow for intricate geometry and detail. However, the size is limited to 10 mm in diameter and 100 mm in length due to the molds used. The resulting microstructures of these powder based methods differ from the solid crystal ones because they do not have a lamellar structure and have a lower density. However, all methods have similar magnetostrictive properties.
Due to size restriction, MB is the best process to produce Terfenol-D. However it is a labor-intensive method. A newer process like MB is Etrema crystal grower (ECG) that results in larger diameter Terfenol-D crystals and increased magnetostrictive performance. The reliability of magnetostrictive properties of the Terfenol-D throughout the life of the material is increased by using ECG.
Terfenol-D has some minor drawbacks which stem from its material properties. Terfenol-D has low ductility and low fracture resistance. To solve this, Terfenol-D has been added to polymers and other metals to create composites. When added to polymers, the stiffness of the resulting composite is low. When composites of Terfenol-D with ductile metal binders are created, the resulting material has increased stiffness and ductility with reduced magnetostrictive properties. These metal composites may be formed by explosion compaction. In a study done on processing Terfenol-D alloys, the resulting alloys created using copper and Terfenol-D had increased strength and hardness values, which supports the theory that the composites of ductile metal binders and Terfenol-D result in a stronger and more ductile material.
See also
Galfenol
References
External links
http://tdvib.com/terfenol-d/
https://www.qcwlc.us
Rare earth alloys
Terbium
Intermetallics | Terfenol-D | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,504 | [
"Rare earth alloys",
"Inorganic compounds",
"Metallurgy",
"Alloys",
"Intermetallics",
"Condensed matter physics"
] |
1,545,160 | https://en.wikipedia.org/wiki/Chemical%20trap | In chemistry, a chemical trap is a chemical compound that is used to detect unstable compounds. The method relies on efficiency of bimolecular reactions with reagents to produce a more easily characterize trapped product. In some cases, the trapping agent is used in large excess.
Case studies
Cyclobutadiene
A famous example is the detection of cyclobutadiene released upon oxidation of cyclobutadieneiron tricarbonyl. When this degradation is conducted in the presence of an alkyne, the cyclobutadiene is trapped as a bicyclohexadiene. The requirement for this trapping experiment is that the oxidant (ceric ammonium nitrate) and the trapping agent be mutually compatible.
Diphosphorus
Diphosphorus is an old target of chemists since it is the heavy analogue of N2. Its fleeting existence is inferred from the controlled degradation of certain niobium complexes in the presence of trapping agents. Again, a Diels-Alder strategy is employed in the trapping:
Silylene
Another classic but elusive family of targets are silylenes, analogues of carbenes. It was proposed that dechlorination of dimethyldichlorosilane generates dimethylsilylene:
SiCl2(CH3)2 + 2 K → Si(CH3)2 + 2 KCl
This inference is supported by conducting the dechlorination in the presence of trimethylsilane, the trapped product being pentamethyldisilane:
Si(CH3)2 + HSi(CH3)3 → (CH3)2Si(H)-Si(CH3)3
Note that the trapping agent does not react with dimethyldichlorosilane or potassium metal.
Related meanings
In some cases, chemical trap is used to detect or infer a compound when present at concentrations below its detection limit or is present in a mixture, where other components interfere with its detection. The trapping agent, for example a dye, reacts with the chemical to be detected, giving a product that is more easily detected.
References
Analytical chemistry | Chemical trap | [
"Chemistry"
] | 440 | [
"nan"
] |
1,545,193 | https://en.wikipedia.org/wiki/List%20of%20mergers%20and%20acquisitions%20by%20Microsoft | Microsoft is an American public multinational corporation headquartered in Redmond, Washington, USA that develops, manufactures, licenses, and supports a wide range of products and services predominantly related to computing through its various product divisions. Established on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800, Microsoft rose to dominate the home computer operating system market with MS-DOS in the mid-1980s, followed by the Microsoft Windows line of operating systems. Microsoft would also come to dominate the office suite market with Microsoft Office. The company has diversified in recent years into the video game industry with the Xbox, the Xbox 360, the Xbox One, and the Xbox Series X/S as well as into the consumer electronics and digital services market with Zune, MSN and the Windows Phone OS.
The company's initial public offering was held on March 14, 1986. The stock, which eventually closed at $27.75 a share, peaked at $29.25 a share shortly after the market opened for trading. After the offering, Microsoft had a market capitalization of $519.777 million. Microsoft has subsequently acquired over 225 companies, purchased stakes in 64 companies, and made 25 divestments. Of the companies that Microsoft has acquired, 107 were based in the United States. Microsoft has not released financial details for most of these mergers and acquisitions.
Since Microsoft's first acquisition in 1986, it has purchased an average of six companies a year. The company purchased more than ten companies a year between 2005 and 2008, and it acquired 18 firms in 2006, the most in a single year, including Onfolio, Lionhead Studios, Massive Incorporated, ProClarity, Winternals Software, and Colloquis. Microsoft has made fourteen acquisitions worth over one billion dollars: Skype Technologies (2011), aQuantive (2007), Fast Search & Transfer (2008), Navision (2002), Visio Corporation (2000), Yammer (2012), Nokia's mobile and devices division (2013), Mojang (2014), LinkedIn (2016), GitHub (2018), Affirmed Networks (2020), ZeniMax Media (2020), Nuance Communications (2021), and Activision Blizzard (2022).
Microsoft has also purchased several stakes valued at more than a billion dollars. It obtained an 11.5% stake in Comcast for $1 billion, a 22.98% stake in Telewest for $2.263 billion, and a 3% stake in AT&T for $5 billion. Among Microsoft's divestments, in which parts of the company are sold to another company, only Expedia Group was sold for more than a billion dollars; USA Networks purchased the company on February 5, 2002, for $1.372 billion.
Key acquisitions
One of Microsoft's first acquisitions was Forethought on July 30, 1987. Forethought was founded in 1983 and developed a presentation program that would later be known as Microsoft PowerPoint.
On December 31, 1997, Microsoft acquired Hotmail.com for $500 million, its largest acquisition at the time, and integrated Hotmail into its MSN group of services. Hotmail, a free webmail service founded in 1996 by Jack Smith and Sabeer Bhatia, had more than 8.5 million subscribers earlier that month.
In 1999, Microsoft reportedly discussed a buyout of Nintendo. However, execs failed to negotiate a deal, with Xbox co-inventor Kevin Bachus explaining "They just laughed their asses off."
Microsoft acquired Seattle-based Visio Corporation on January 7, 2000, for $1.375 billion. Visio, a software company, was founded in 1990 as Axon Corporation, and had its initial public offering in November 1995. The company developed the diagramming application software, Visio, which was integrated into Microsoft's product line as Microsoft Visio after its acquisition.
On July 12, 2002, Microsoft purchased Navision for $1.33 billion. The company, which developed the technology for the Microsoft Dynamics NAV enterprise resource planning software, was integrated into Microsoft as a new division named Microsoft Business Solutions, later renamed to Microsoft Dynamics.
Microsoft purchased aQuantive, an advertising company, on August 13, 2007, for $6.333 billion. Before the acquisition, aQuantive was ranked 14th in terms of revenue among advertising agencies worldwide. aQuantive had three subsidiaries at the time of the acquisition: Avenue A/Razorfish, one of the world's largest digital agencies, Atlas Solutions, and DRIVE Performance Solutions.
Microsoft acquired the Norwegian enterprise search company Fast Search & Transfer on April 25, 2008, for $1.191 billion to boost its search technology.
On May 10, 2011, Microsoft announced its acquisition of Skype Technologies, creator of the VoIP service Skype, for $8.5 billion. With a value 32 times larger than Skype's operating profits, the deal was Microsoft's largest acquisition at the time. Skype would become a division within Microsoft, with Skype's former CEO Tony Bates, then the division's first president, reporting to the CEO of Microsoft.
On September 2, 2013, Microsoft announced its intent to acquire the mobile hardware division of Nokia (which had established a long-term partnership with Microsoft to produce smartphones built off its Windows Phone platform) in a deal worth 3.79 billion euros, along with another 1.65 billion to license Nokia's portfolio of patents. Steve Ballmer considered the purchase to be a "bold step into the future" for both companies, primarily as a result of its recent collaborations. The acquisition, scheduled to close in early 2014 pending regulatory approval, did not include the Here mapping service or the infrastructure division Nokia Solutions and Networks, which will be retained by Nokia. While the deal went through, in May 2016 Microsoft abandoned its mobile business and sold the Nokia feature phone line.
In September 2014, Microsoft purchased Mojang for $2.5 billion.
On June 13, 2016, Microsoft announced it planned to acquire the professional networking site LinkedIn for $26.2 billion, to be completed by the end of 2016. The acquisition would keep LinkedIn as a distinct brand and retain its current CEO, Jeff Weiner, who will subsequently report to Microsoft CEO Satya Nadella. The acquisition was completed on December 8, 2016.
On June 4, 2018, Microsoft acquired the popular code repository site GitHub for $7.5 billion in Microsoft stock.
On September 21, 2020, Microsoft announced its intent to acquire ZeniMax Media and all its subsidiaries for $7.5 billion.
The acquisition was completed on March 9, 2021.
On January 18, 2022, Microsoft announced its intent to acquire Activision Blizzard, an American video game holding company, for $68.7 billion in cash. The deal was approved by both companies' boards of directors and was finalized on October 13, 2023, with the total cost of the acquisition amounting to $75.4 billion, following international government regulatory review of the action.
Acquisitions
Stakes
Divestitures
See also
List of largest mergers and acquisitions
Lists of corporate acquisitions and mergers
Notes
References
Sources
External links
Microsoft Investor Relations – Acquisitions
Infographic of Microsoft's vast legacy of acquisition | Techi.com
Dashboard and analysis of all Microsoft acquisitions
Microsoft
Mergers and acquisitions | List of mergers and acquisitions by Microsoft | [
"Technology"
] | 1,558 | [
"Computing-related lists",
"Microsoft lists"
] |
1,545,228 | https://en.wikipedia.org/wiki/Zeta%20Ophiuchi | Zeta Ophiuchi (ζ Oph, ζ Ophiuchi) is a single star located in the constellation of Ophiuchus. It has an apparent visual magnitude of 2.6, making it the third-brightest star in the constellation. Parallax measurements give an estimated distance of roughly from the Earth. It is surrounded by the Sh2-27 "Cobold" nebula, the star's bow shock as it ploughs through dense dust clouds near the Rho Ophiuchi cloud complex.
In April 2010, ζ Ophiuchi was occulted by asteroid 824 Anastasia.
Properties
ζ Ophiuchi is an enormous star with more than 20 times the Sun's mass and eight times its radius. The stellar classification of this star is O9.5 V, with the luminosity class of V indicating that it is generating energy in its core by the nuclear fusion of hydrogen. As seen from Earth, the apparent effective temperature of the star is 34,300 K, giving the star the blue hue of an O-type star. However, since the star is rapidly rotating, the exact surface temperature varies across the surface of the star from as high as 39,000 K at the poles to as low as 30,700 K at the equator. The projected rotational velocity may be as high as and it may be rotating at a rate of once per day, close to the velocity at which it would begin to break up.
This is a young star with an age of only three million years. Its luminosity is varying in a periodic manner similar to that of a Beta Cephei variable. This periodicity has a dozen or more frequencies ranging between 1–10 cycles per day. In 1979, examination of the spectrum of this star found "moving bumps" in its helium line profiles. This feature has since been found in other stars, which have come to be called ζ Oph stars. These spectral properties are likely the result of non-radial pulsations.
This star is roughly halfway through the initial phase of its stellar evolution and will, within the next few million years, expand into a red supergiant star wider than the orbit of Jupiter before ending its life in a supernova explosion, leaving behind a neutron star or pulsar. From the Earth, a significant fraction of the light from this star is absorbed by interstellar dust, particularly at the blue end of the spectrum. In fact, were it not for this dust, ζ Ophiuchi would shine several times brighter and be among the very brightest stars visible. If the star's luminosity were not obscured, it would shine at magnitude 1.54, becoming the twenty-third brightest star in the night sky.
X-ray emissions have been detected from Zeta Ophiuchi that vary periodically. The net X-ray flux is estimated at . In the energy range of 0.5–10 keV, this flux varies by about 20% over a period of 0.77 days. This behavior may be the result of a magnetic field in the star. The measured average strength of the longitudinal field is about .
Bow shock
ζ Ophiuchi is moving through space with a peculiar velocity of 30 km s−1. Based upon the age and direction of motion of this star, it is a member of the Upper Scorpius sub-group of the Scorpius–Centaurus association of stars that share a common origin and space velocity. Such runaway stars may be ejected by dynamic interactions between three or four stars. However, in this case the star may be a former component of a binary star system in which the more massive primary was destroyed in a type II supernova explosion. It is possible that ζ Ophiuchi accreted mass from its companion before it was ejected. The pulsar PSR B1929+10 may be the leftover remnant of this supernova, as it too was ejected from the association with a velocity vector that fits the scenario.
Due to the high space velocity of Zeta Ophiuchi, in combination with high intrinsic brightness and its current location in a dust-rich area of the galaxy, the star is creating a bow-shock in the direction of motion. This shock has been made visible via NASA's Wide-field Infrared Survey Explorer. The formation of this bow shock can be explained by a mass loss rate of about 1.1 × 10−7 times the mass of the Sun per year, which equals the mass of the Sun every nine million years.
Traditional names
ζ Ophiuchi was a member of indigenous Arabic asterism al-Nasaq al-Yamānī, "the Southern Line" of al-Nasaqān "the Two Lines", along with α Serpentis (Unukalhai), δ Ser (Qin, Tsin), ε Ser (Ba, Pa), δ Ophiuchi (Yed Prior), ε Oph (Yed Posterior) and γ Oph (Tsung Ching).
According to the catalogue of stars in the Technical Memorandum 33-507 – A Reduced Star Catalog Containing 537 Named Stars, al-Nasaq al-Yamānī or Nasak Yamani was the title for two stars: δ Serpentis as Nasak Yamani I and ε Ser as Nasak Yamani II (excluding this star, α Ser, δ Ophiuchi, ε Oph and γ Oph)
In Chinese, (), meaning Right Wall of Heavenly Market Enclosure, refers to an asterism representing eleven old states in China and marking the right borderline of the enclosure, consisting of ζ Ophiuchi, β Herculis, γ Herculis, κ Herculis, γ Serpentis, β Serpentis, α Serpentis, δ Serpentis, ε Serpentis, δ Ophiuchi and ε Ophiuchi. Consequently, the Chinese name for ζ Ophiuchi itself is (, ), representing the state Han (韓), together with 35 Capricorni in Twelve States (asterism).
Notes
References
Beta Cephei variables
Runaway stars
Upper Scorpius
O-type main-sequence stars
Ophiuchus
Ophiuchi, Zeta
Durchmusterung objects
Ophiuchi, 13
149757
081377
6175
J16370954-1034014
Gamma Cassiopeiae variable stars | Zeta Ophiuchi | [
"Astronomy"
] | 1,285 | [
"Ophiuchus",
"Constellations"
] |
1,545,323 | https://en.wikipedia.org/wiki/Rabbits%20%28film%29 | Rabbits is a 2002 series of eight short horror web films written and directed by David Lynch, although Lynch himself referred to it as a sitcom. It depicts three humanoid rabbits played by Scott Coffey, Laura Elena Harring and Naomi Watts in a room. Their disjointed conversations are interrupted by a laugh track. Rabbits is presented with the tagline "In a nameless city deluged by a continuous rain... three rabbits live with a fearful mystery".
Originally consisting of a series of eight short episodes shown exclusively on Lynch's website, Rabbits is no longer available there. The films are now only available on DVD in the "Lime Green Set" collection of Lynch's films, in a re-edited four-episode version. The set also does not contain episode three. As of 2020, Lynch has been occasionally uploading the original episodes to YouTube. The setting and some footage of the rabbits were reused in Lynch's Inland Empire.
Description
Rabbits takes place entirely within a single box set representing the living room of a house. Within the set, three humanoid rabbits enter, exit, and converse. One, Jack, is male and wears a smart suit. The other two, Suzie and Jane, are female, one of whom wears a dress, the other a dressing gown. The audience watches from about the position of a television set. In each episode, the rabbits converse in apparent non sequiturs. The lines evoke mystery, and include "Were you blonde?", "Something's wrong", "I wonder who I will be", "I only wish they would go somewhere", "It had something to do with the telling of time", and "no one must find out about this". The disordered but seemingly related lines the rabbits speak suggest that the dialogue could be pieced together into sensible conversations, but concrete interpretations are elusive.
Some of the rabbits' lines are punctuated by a seemingly random laugh track, as if being filmed before a live audience. In addition, whenever one of the rabbits enters the room, the unseen audience whoops and applauds at great length, much like in a sitcom. The rabbits themselves, however, remain serious throughout.
In some episodes, mysterious events take place, including the appearance of a burning hole in the wall and the intrusion of a strange, demonic voice coupled with sinister red lighting. Three episodes involve a solo performance by one rabbit, in which they recite strange poetry, as if performing a demonic ritual.
The rabbits receive a telephone call at one point, and later, at the climax of the series, a knock is heard at the door. When the door is opened, a loud scream is heard and the image is distorted. After the door closes, Jack says it was the man in the green coat. The last episode concludes with the rabbits huddled together on the couch and Jane saying "I wonder who I will be."
Production
Lynch filmed Rabbits in a set built in the garden of his house in the Hollywood Hills. Filming took place at night in order to control the lighting. Lynch says that filming Watts, Harring and Coffey with the set lit up by enormous lights was "a beautiful thing". However, the process generated a lot of noise that echoed from the surrounding hills and annoyed Lynch's neighbors. The unique use of lighting to create shadows and set an uneasy atmosphere has been praised by critics.
As with most of Lynch's films, the score was composed by Angelo Badalamenti.
Reception
Rabbits received positive reviews from viewers, who highly praised the sitcom for its lighting, sound design and scary atmosphere.
Possible influences
Dave Kehr noted in The New York Times that it was Alain Resnais who first put giant rodent heads on his actors in his 1980 film Mon oncle d'Amérique, and the rabbits' dialogue is reminiscent of Resnais' Last Year at Marienbad.
The dialogue has been compared to the writing of Samuel Beckett.
Use in Inland Empire
Lynch used some of the Rabbits footage as well as previously unseen footage featuring Rabbits characters in his film Inland Empire (2006). Lynch also used the Rabbits set to shoot several scenes involving human characters. In that film, excerpts from Rabbits appear but the rabbits are associated with three mysterious Polish characters who live in a house in the woods.
DVD release
Most of Rabbits can be found on the "Mystery DVD" in the 10-disc The Lime Green Set released by Absurda in 2008. This DVD features seven of the eight episodes, though several of the episodes have been edited together. "Episode 1" on the DVD contains "Episode 1", "Episode 2" and "Episode 4" from the website. "Episode 2" on the DVD contains "Episode 6" and "Episode 8" from the website. "Scott" and "Naomi" are the same as "Episode 5" and "Episode 7", respectively. "Episode 3" from the website does not appear on the disc. The DVD's running time is 43 minutes instead of 50 minutes like the original version. The other seven minutes consist of title and credit sequences for each individual episode that were edited out.
Use in psychological research
Rabbits was used as a stimulus in a psychological experiment on the effects of acetaminophen on existential crisis. The research, in a paper entitled "The Common Pain of Surrealism and Death", suggested that acetaminophen acted to suppress the compensatory desire to affirm systems of meaning that viewing surrealism has been shown to produce.
References
Notes
References
External links
David Lynch's Rabbits - random Rabbits episode generator in Flash
American avant-garde and experimental films
2002 films
Short films directed by David Lynch
Films about rabbits and hares
American mystery films
American independent films
Films with screenplays by David Lynch
Films scored by Angelo Badalamenti
2000s avant-garde and experimental films
2000s English-language films
2000s American films | Rabbits (film) | [
"Technology"
] | 1,198 | [
"Multimedia",
"Internet-based works"
] |
1,545,337 | https://en.wikipedia.org/wiki/Garshelis%20effect | The Garshelis effect causes springs made of magnetostrictive material to have their magnetization changed due to the compression of the spring. It is a correlation between magnetization and torsional stress. If the magnetization is due to direct current, it is the inverse of the Wiedemann effect.
It is named after Ivan Garshelis, who investigated the effect.
References
Electric and magnetic fields in matter
Magnetism | Garshelis effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 89 | [
"Materials science stubs",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Electromagnetism stubs"
] |
1,545,347 | https://en.wikipedia.org/wiki/Aliens%3A%20Colonial%20Marines | Aliens: Colonial Marines is a 2013 first-person shooter developed by Gearbox Software and published by Sega for PlayStation 3, Windows, and Xbox 360. Based on the Alien universe and set shortly after the 1986 film Aliens, the game follows a group of Colonial Marines, a fictional military unit, as they confront the Weyland-Yutani corporation in an effort to rescue survivors from the Sulaco spaceship. It features a campaign mode that supports both single-player and cooperative gameplay, and a multiplayer mode in which players compete in different scenarios.
Colonial Marines was developed over six years and suffered from a tumultuous development cycle, as Gearbox decided to focus on projects such as Duke Nukem Forever and Borderlands 2 and outsourced a significant part of Colonial Marines to other studios. The Aliens concept artist Syd Mead was hired to design locations. Four downloadable content packs added multiplayer maps, a new cooperative mode, and a campaign mode that takes place before the campaign of the base game.
Colonial Marines sold more than one million copies in the United States and Europe, but received unfavorable reviews from critics, who criticized its technical problems, low-quality graphics, short length, and weak artificial intelligence of enemies. It is considered to be one of the worst video games ever made. The competitive multiplayer mode was highlighted as its strongest aspect. Colonial Marines drew controversy for featuring a lower graphical quality than that of the press demos. This led two players to file a lawsuit for false advertising, but it lost class-action status by 2015.
Gameplay
Aliens: Colonial Marines is a first-person shooter based on the Alien science fiction horror film series. The campaign mode, which can be played by a single player or cooperatively by up to four players, features 11 missions that involve players moving from one checkpoint to another while fighting opponents controlled by the artificial intelligence. Opponents consist of either Alien creatures, also known as Xenomorphs, or hostile human mercenaries. Xenomorphs are fast and primarily attack with their claws or by spitting acid, while mercenaries are slower and use firearms.
As the fictional Colonial Marines military unit depicted in James Cameron's 1986 film Aliens, players have access to weapons such as pistols, shotguns, grenades, pulse rifles, flamethrowers, robotic sentry turrets, and smartguns, which automatically track and target opponents. They may also use welding torches to seal doors and motion trackers to detect unseen enemies. Ammunition can be found on defeated mercenaries or from certain locations in the mission area. The Colonial Marines are protected by a health bar that is divided into segments. If a segment is partially depleted, it will automatically regenerate over time. Med-Packs throughout the missions may be acquired to restore lost health segments. Players may also collect pieces of armor that protect the health bar with a secondary meter that does not automatically regenerate. Players have a limited time to revive a player whose health has been fully depleted. If they fail, the downed player cannot return until the surviving players reach the next checkpoint.
In addition to the campaign mode, Colonial Marines features a competitive multiplayer mode where two teams of up to five players face each other in four different scenarios. Each scenario involves one team playing as Colonial Marines and the other as Aliens. After a time limit has been reached, players switch roles and play once more on the same map. Scenarios include Team Deathmatch, where both teams must kill as many opposing players as possible; Extermination, where Colonial Marines must detonate bombs in egg-infested areas protected by Aliens; Escape, which involves Colonial Marines completing objectives to reach a destination while being assaulted by Aliens; and Survivor, where Colonial Marines must survive attacks from Aliens with limited health and ammunition resources for as long as possible. Unlike Colonial Marines, Alien players play from a third-person perspective and cannot use firearms, but have the ability to climb walls, run on ceilings, deliver attacks with their claws, and unleash streams of acid.
Players earn experience points by overcoming opponents, completing challenges, and finding collectibles—Audio Logs, Dog Tags, and Legendary Weapons, all of which are related to characters who appear in the film series. Challenges range from killing opponents in a particular way to winning multiplayer matches and completing campaign missions under a difficulty setting. Players have two ranks, one for their Colonial Marine character and another for their Alien character. When a sufficient amount of experience has been obtained, their characters rank up. Colonial Marine ranks unlock weapon upgrades for use in both the campaign and the competitive multiplayer modes. These include alternate fire attachments, telescopic sights, and larger capacity magazines. In contrast, Alien ranks unlock new combat abilities for Alien characters. Completing challenges also unlocks appearance options for both Colonial Marine and Alien characters, and new attributes that are exclusive to Colonial Marines in the competitive multiplayer mode.
Plot
17 weeks after the events of Aliens, the USS Sephora sends a full battalion of Colonial Marines to investigate the USS Sulaco, now in orbit around LV-426 moon. A massive Xenomorph infestation is discovered inside the Sulaco and several Marines are killed in the initial onslaught. Corporal Christopher Winter, Private Peter O'Neal, and Private Bella Clarison discover that hostile mercenaries working for the Weyland-Yutani corporation are in command of the Sulaco and have been breeding Xenomorphs on board for study. Shortly before both ships are destroyed in the ensuing confrontation, the Marines, along with commander Captain Cruz, Sephora android Bishop, and pilot lieutenant Lisa Reid, escape aboard her dropship and take shelter in the ruins of the Hadley's Hope colony complex on LV-426.
Although the Marines learn that Clarison has been attacked by a facehugger and needs medical treatment, Cruz orders Winter to travel to a nearby Weyland-Yutani research facility set up near a derelict Xenomorph spacecraft and recover a manifest that identifies an unknown prisoner from the Sulaco. In an attempt to save Clarison, Winter and O'Neal accept the mission and escort her to the facility, where they intend to convince surviving personnel to remove the Xenomorph embryo from her body. However, upon arrival, an interrogated Weyland-Yutani medical officer explains to them that Clarison's life cannot be saved because the creature's invasive placenta is cancerous and will eventually kill her even if the embryo is successfully extracted. Clarison dies when a chestburster hatches from her.
Winter and O'Neal recover the manifest they were sent to find and rescue the prisoner, who is revealed to be corporal Dwayne Hicks. Hicks explains that Weyland-Yutani intercepted and boarded the Sulaco before it arrived at the penal colony of Fiorina 161. A fire in the hypersleep bay subsequently caused the Sulaco survivors Ellen Ripley, Newt, and Bishop to be jettisoned from the ship, along with the body of an unidentified man who was mistaken for the corporal. Hicks himself was captured by Weyland-Yutani personnel and subjected to torture during interrogation, overseen by android Michael Weyland in an attempt to learn more about the Xenomorphs' origins and to gain control of the Sulaco's weapon systems. From Hicks, the Marines also learn that an FTL-capable ship is docked at the research facility, representing the last chance for the Marines to escape from the moon.
After gathering the remaining Sephora personnel on the colony, Cruz orders an all-out assault on the Weyland-Yutani complex in the hopes of capturing the FTL vessel. Winter and Hicks spearhead the advance, but the ship leaves shortly before they can reach it. In a last desperate attempt, Cruz pilots a dropship up to the escaping vessel and crashes into its hangar. Winter is confronted by a Xenomorph queen in the hangar bay, and attempts to eject her using a cargo launching system, but fails when she climbs back aboard. Cruz sacrifices himself when he launches the crippled dropship directly into the queen, propelling both out of the vessel. Winter, O'Neal, Reid, Bishop, and Hicks confront Weyland, who is ultimately executed by Hicks. In search of useful intelligence, Bishop connects to the destroyed android and states that he has "everything".
Development
Design
Colonial Marines was conceived by Gearbox Software after an encounter between the company's creative director, Bryan Martell, and the director of the original Alien film, Ridley Scott. When Brothers in Arms: Road to Hill 30 was released in 2005, Gearbox was interested in working with an existing intellectual property and had previously considered Scott's 1982 film Blade Runner and Michael Mann's 1995 film Heat as candidates. Martell's discussion with Scott on the Alien universe inspired him to approach 20th Century Fox about the licensing opportunities. Sega, who bought the rights to publish games based on the franchise in December 2006, gave Gearbox complete freedom to present them with an idea for a game. Because Gearbox had experience with first-person shooters and the development team was composed of people who were fans of Aliens, the company proposed a first-person shooter that would be a direct sequel to it.
Although the final script was written by Gearbox writer Mikey Neumann, Bradley Thompson and David Weddle, writers of the 2004 television series Battlestar Galactica, collaborated with Gearbox during the 2007–08 Writers Guild of America strike to develop the story and characters. The game takes place shortly after the 1992 Aliens sequel Alien 3, but addresses the events that lead to it. As a result, Colonial Marines is considered part of the series' canon. Several locations of Aliens like the Sulaco spaceship and the Hadley's Hope colony were recreated for the game. To keep the same level of authenticity, concept artist Syd Mead, who collaborated with Cameron on the film to design the Sulaco, was hired to recreate its "mechanical mood" and design areas of the spaceship that did not appear in the film. Entertainment designer Lorin Wood was hired by Gearbox in late 2007 to take over the principal conceptual design workload after Syd Mead completed his work. Due to his industrial design experience and film industry work, he helped the studio maintain a consistent aesthetic that Syd Mead and Ron Cobb established for the film. The development team also contacted Kodak to get color channel details about the film's film stock.
Originally, Colonial Marines was intended to feature squad-based gameplay, allowing the player to issue orders to Colonial Marines controlled by the artificial intelligence using context-sensitive commands. These would include hacking doors, sealing air vents, and setting up sentry turrets. In cooperative mode, players would then be able to directly control these Marines, who would have their own strengths and weaknesses. However, this idea was ultimately dropped to make the gameplay more accessible. Gearbox developed the game for Windows and the new PlayStation 3 and Xbox 360 consoles, stating that their technology would "do [the film] justice".
Production
Although Gearbox is credited as the primary developer of Colonial Marines, multiple development studios contributed to production. Initial work on Colonial Marines, internally codenamed Pecan, began in 2007 with the creation of a prototype by Demiurge Studios, who also helped Gearbox with the networking and multiplayer aspects. Between 2007 and 2010, Gearbox did not focus on the development of the game, instead preferring to work on other projects like Borderlands and Duke Nukem Forever, which took over a decade to develop. Colonial Marines was built using Epic Games' Unreal Engine 3, but Gearbox spent a considerable amount of preproduction time developing a custom real-time lighting and shadow renderer that is "plugged" into the engine to capture the feel of Aliens. Nerve Software, a company that handled the multiplayer of the 2001 first-person shooter Return to Castle Wolfenstein, built multiplayer maps.
Borderlands was released in 2009 and was a critical and commercial success. Gearbox immediately started work on Borderlands 2, and outsourced primary development on Colonial Marines to TimeGate Studios, who was developing Section 8: Prejudice at the time. In late 2010, when TimeGate started to focus their work on Colonial Marines, the company realized that very little progress had been made. According to one source, the game was simply a collection of unrelated assets that included a lighting and shadow renderer. Although TimeGate handled primary development on the game until Borderlands 2 was almost complete in mid-2012, their work had to constantly be approved by both Gearbox and Sega. Because narrative designers were still writing the script of the campaign mode, entire scenes and missions were discarded due to story changes. One of these involved the player escorting a scientist who would be a secret agent working for the Weyland-Yutani corporation.
There were disagreements on the design. Sega wanted Colonial Marines to be more similar to a Call of Duty game, with fewer Aliens and more Marines to shoot at, a view Gearbox and TimeGate disagreed with. Developers also struggled to optimize the game after spending a significant amount of time increasing its graphical fidelity for a press demo, which ran on high-end computers not normally meant for general use. The game's shader and particle fidelity were then decreased significantly before release, and textures had to be reduced in size to fit into the memory restraints of the PlayStation 3 and Xbox 360.
When Gearbox took the project back in mid-2012, the company was not satisfied with TimeGate's work, partially because the game could not run on the PlayStation 3. With a release date set for February 2013, asking Sega for an extension was not an option because the game had already been delayed several times. This resulted in Gearbox only having nine months to revise TimeGate's work and finish the game. How much of the game was actually made by Gearbox was questioned by TimeGate. According to Gearbox CEO Randy Pitchford, TimeGate "contributed 20-25 percent" of the development time. However, without considering Gearbox's preproduction time, Pitchford said that TimeGate's effort was equivalent to theirs. A moderator on the official TimeGate forum revealed that the studio worked on the weapons, characters, Aliens, story, and multiplayer component, while some TimeGate developers estimated that 50 percent of the campaign mode in the released game was made by them.
Several actors from the films were involved. Michael Biehn reprised his role as corporal Dwayne Hicks, while Lance Henriksen voiced the androids Bishop and Michael Weyland. Henriksen remarked that it was interesting for him to voice a character that he had not touched in more than 25 years. In contrast, Biehn commented negatively on his experience in voicing his character, stating that there was a lack of passion from the people who were in charge of the project. The soundtrack was composed by Kevin Riepl, who is best known for his work on numerous independent films and the Gears of War series. Because the story is canonical, Riepl's score was influenced by Jerry Goldsmith's work on Alien and James Horner's work on Aliens. The soundtrack was recorded at Ocean Way in Nashville, Tennessee.
Marketing and release
A first-person shooter based on the Alien universe was confirmed to be in pre-production shortly after Sega acquired the license in December 2006. Colonial Marines was announced by Game Informer in its March 2008 issue, where its premise and intended gameplay features were revealed. It shares a title with an unrelated 2002 PlayStation 2 project by Electronic Arts and Fox Interactive that would feature a similar setting and subject matter. Originally intended to be released in 2009, Colonial Marines was delayed after Gearbox laid off several employees in November 2008. This led some to ask whether the game had been canceled. In the following years, few other announcements were made, although Gearbox did show some screenshots at the 2010 Penny Arcade Expo.
At the 2011 Electronic Entertainment Expo in Los Angeles, after confirming that Colonial Marines would be released in spring 2012, Gearbox unveiled a teaser trailer and revealed that a Wii U version was in development. A live gameplay demo played by a Gearbox representative was also showcased at the event. In January 2012, Sega announced that the game had been delayed to a fall 2012 release, stating that the company did not want to "sacrifice the creative process just for the sake of following a [deadline]." In May 2012, it was delayed one last time, with Gearbox stating that Colonial Marines would launch for PlayStation 3, Windows, and Xbox 360 on February 12, 2013, while the Wii U version would follow later. In the months leading up to the game's release, more trailers and demos were released.
Prior to its release, Colonial Marines was criticized for not featuring any playable female character. When a petition was formed to change this, Gearbox included playable female characters in both the cooperative and multiplayer modes. In addition to the standard edition, a collector's edition was made available for purchase. The collector's edition included a Powerloader figurine inspired by the film, a Colonial Marines dossier, character customization options, exclusive multiplayer weapons, and a firing range game level. Players who pre-ordered could also receive some of the collector's edition content as a bonus. Shortly after the game's release, Gearbox released a patch that fixed numerous campaign and multiplayer bugs and offered various visual improvements. The Wii U version, which was being handled by Demiurge, was canceled in April 2013.
Downloadable content
Colonial Marines supports additional in-game content in the form of downloadable content packs. Between March and July 2013, four downloadable content packs were released. A season pass for these packs could be purchased before the game was released. The first, Bug Hunt, was released on March 19, 2013, and adds a new cooperative mode that involves up to four players fighting increasingly larger waves of Xenomorphs and hostile soldiers across three new maps. Players earn in-game money by killing opponents, which can then be spent on different options like buying ammunition or opening up new areas of the map to increase their chances of survival. The second pack, Reconnaissance Pack, was released on May 7, 2013, and extends the game's competitive multiplayer mode with four maps and more customization options for Xenomorph characters, while the third pack, Movie Map Pack, was released on June 11, 2013, and adds four maps set in locations from the first three Alien films.
The fourth and final pack, Stasis Interrupted, was released on July 23, 2013, and adds a new campaign mode that takes place before the campaign of the base game, exploring what happened to Hicks between Aliens and Alien 3. The campaign features four "interlocking" missions where players must play as three different characters. Stasis Interrupted also adds several new achievements for players to unlock, which were initially leaked via a list of PlayStation 3 Trophies. According to a report, both Demiurge and Nerve were in charge of developing the downloadable content packs, but it was not confirmed if they contributed to the development of Stasis Interrupted.
Reception
Critical response
Colonial Marines received unfavorable reviews from critics, who criticized its uninspiring gameplay, technical issues, low-quality graphics, and superficial thrills, especially when compared to Cameron's film. Writing for IGN, editor Tristan Ogilvie remarked that, although Colonial Marines looks and sounds like Aliens, it does not feel like it and does not bring anything new to the first-person shooter genre. Kevin VanOrd of GameSpot described it as "a shallow bit of science-fiction fluff with cheap production values and an indifferent attitude." Also in agreement, Jose Otero of 1Up.com stated: "The weapons and sounds of ACM feel authentic, but the bland look of the game will make you think it shipped as an unfinished product." Electronic Gaming Monthly, however, praised the game for its respect to the source material, describing Colonial Marines as "easily the best gaming representation of the franchise to date."
Colonial Marines was criticized for having low-resolution textures, low-quality lighting, poor character models and animations, and uncontrolled aliasing and screen tearing. Eurogamer said it reuses graphical assets often, resulting in many levels having "identical corridors and murky exteriors". However, the Aliens aesthetic was praised by some reviewers, with Edge noting that it was reproduced faithfully in the game and that it was still attractive years after the film was released. The game's numerous bugs frustrated critics. These included poor collision detection and glitchy artificial intelligence, causing enemies to freeze or fail to recognize each other. Technically, the PC version was considered more polished than the PlayStation 3 and Xbox 360 versions.
The story drew criticism for its lack of a consistent continuity with the Alien films. Edge remarked that the Colonial Marines are in an inappropriate context because in the film they are depicted as Weyland-Yutani's private army and tasked with fighting Alien creatures. However, in the game, the Colonial Marines fight Weyland-Yutani's other private military armies. Destructoid editor James Sterling criticized the story for its archetypal characters and immature dialogue, stating that the game fails to understand the essence of Aliens. Sterling explained that the film "dissected its posturing 'manly man' stereotypes, and showcased how utterly frail a cowboy mentality can be when everything falls apart", while Colonial Marines "revels in its own testosterone, submerged gleefully in a pool of dank ultramasculinity."
Journalists criticized the gameplay for the weak artificial intelligence of enemies. They remarked that Xenomorphs simply rush toward players, making the motion tracker useless. According to GameTrailers, "there's never really the sense that you're being stalked by an intelligent enemy, and you'll always get a warning ping anyway." The setting and level design were praised by Electronic Gaming Monthly, but GameSpot noted that the levels were clearly not designed for cooperative gameplay. VanOrd explained that additional players do not take the role of companions that are controlled by the artificial intelligence, but are simply added to the game, resulting in crowded matches with players fighting for space and trying to shoot enemies. The Survivor and Escape multiplayer scenarios were highlighted as the strongest aspects of the game. PC Gamer said that they encourage Colonial Marine players to coordinate their actions with motion trackers as Alien players try to hunt them intelligently. However, the longevity of the multiplayer mode was questioned due to the limited randomization it provides and the lack of computer-controlled bots.
Sales
In the United Kingdom, Colonial Marines topped the all formats charts in its first week of release. On both the Xbox 360 and PlayStation 3 individual charts, it also reached the top position. According to GfK Chart-Track, it was the biggest release of the year in the United Kingdom and held the second highest first-week sales for an Alien game since Sega's 2010 title Aliens vs. Predator. In the United States, Colonial Marines reached No. 6 on the all formats charts for February 2013. As of March 31, 2013, as stated in Sega's end-of-fiscal-year report, Colonial Marines had sold 1.31 million units in the United States and Europe.
Controversy and lawsuit
Upon release, Colonial Marines drew significant controversy. According to a report, Gearbox had been moving people and resources off Colonial Marines onto Borderlands and Duke Nukem Forever while still collecting full payments from Sega as if they were working on the game. When Sega discovered this misconduct, they temporarily canceled Colonial Marines, leading to the round of layoffs at Gearbox in late 2008. Gearbox outsourced a significant portion of the development to other developers to compensate for their mismanagement. While Sega initially denied such outsourcing, sources claimed otherwise, suggesting that the game was rushed through redesigns, certification, and shipping, despite being largely unfinished. It drew additional controversy when sequences from press demos were compared to the same sequences in the final game, revealing that the finished game was significantly lower in graphical quality.
In April 2013, two players filed a lawsuit, claiming that Gearbox and Sega had falsely advertised the game by showing demos at trade shows that did not resemble the final product. The demos, described as "actual gameplay" by Gearbox CEO Randy Pitchford, were said to feature graphical fidelity, artificial intelligence, and levels not featured in the game. Although Sega suggested settling the lawsuit on their part and agreed to pay US$1.25 million, they denied any illegal behavior. However, Gearbox filed a request to have claims against them dropped, stating that the company, as a software developer, did not have responsibility for marketing decisions. Gearbox officials added that the company supplemented Sega's development budget with its own money to help Sega finish the game and that they had not received any royalty from its sales. The lawsuit lost class-action status and Gearbox was dropped from the case in May 2015. Pitchford said that he lost between US$10 and US$15 million of his own money on Colonial Marines and refuted the accusations against the studio. In 2017, a modder discovered a small typographical error in the game's code; fixing this error notably improved the artificial intelligence of enemies.
Notes
References
External links
Aliens: Colonial Marines at Sega
Aliens: Colonial Marines at Gearbox Software
2013 video games
Alien (franchise) games
Asymmetrical multiplayer video games
Split-screen multiplayer games
Cancelled Wii U games
First-person shooters
Video game interquels
Multiplayer and single-player video games
PlayStation 3 games
Sega video games
Science fiction video games
Unreal Engine 3 games
Video games based on works by James Cameron
Video games developed in the United States
Video games set in the 22nd century
Video games set on fictional planets
Windows games
Xbox 360 games
Video game controversies
Gearbox Software games | Aliens: Colonial Marines | [
"Physics"
] | 5,234 | [
"Asymmetrical multiplayer video games",
"Symmetry",
"Asymmetry"
] |
1,545,350 | https://en.wikipedia.org/wiki/Genotype%20frequency | Genetic variation in populations can be analyzed and quantified by the frequency of alleles. Two fundamental calculations are central to population genetics: allele frequencies and genotype frequencies. Genotype frequency in a population is the number of individuals with a given genotype divided by the total number of individuals in the population.
In population genetics, the genotype frequency is the frequency or proportion (i.e., 0 < f < 1) of genotypes in a population.
Although allele and genotype frequencies are related, it is important to clearly distinguish them.
Genotype frequency may also be used in the future (for "genomic profiling") to predict someone's risk of having a disease or even a birth defect. It can also be used to determine ethnic diversity.
Genotype frequencies may be represented by a De Finetti diagram.
Numerical example
As an example, consider a population of 100 four-o'clock plants (Mirabilis jalapa) with the following genotypes:
49 red-flowered plants with the genotype AA
42 pink-flowered plants with genotype Aa
9 white-flowered plants with genotype aa
When calculating an allele frequency for a diploid species, remember that homozygous individuals have two copies of an allele, whereas heterozygotes have only one. In our example, each of the 42 pink-flowered heterozygotes has one copy of the a allele, and each of the 9 white-flowered homozygotes has two copies. Therefore, the allele frequency for a (the white color allele) equals
freq(a) = (2 × 9 + 42) / (2 × 100) = 60/200 = 0.3
This result tells us that the allele frequency of a is 0.3. In other words, 30% of the alleles for this gene in the population are the a allele.
Compare genotype frequency:
let's now calculate the genotype frequency of aa homozygotes (white-flowered plants):
freq(aa) = 9/100 = 0.09
Allele and genotype frequencies always sum to one (100%).
Equilibrium
The Hardy–Weinberg law describes the relationship between allele and genotype frequencies when a population is not evolving. Let's examine the Hardy–Weinberg equation using the population of four-o'clock plants that we considered above:
if the allele A frequency is denoted by the symbol p and the allele a frequency denoted by q, then p+q=1.
For example, if p=0.7, then q must be 0.3. In other words, if the allele frequency of A equals 70%, the remaining 30% of the alleles must be a, because together they equal 100%.
For a gene that exists in two alleles, the Hardy–Weinberg equation states that p² + 2pq + q² = 1.
If we apply this equation to our flower color gene, then
p² (genotype frequency of AA homozygotes)
2pq (genotype frequency of Aa heterozygotes)
q² (genotype frequency of aa homozygotes)
If p=0.7 and q=0.3, then
p² = (0.7)² = 0.49
2pq = 2 × (0.7) × (0.3) = 0.42
q² = (0.3)² = 0.09
This result tells us that, if the allele frequency of A is 70% and the allele frequency of a is 30%, the expected genotype frequency of AA is 49%, Aa is 42%, and aa is 9%.
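The arithmetic above can be reproduced with a short script. The following is a minimal illustrative sketch only: the function names are made up for this example, and the genotype counts are simply the 100 four-o'clock plants described earlier.

```python
# Minimal sketch: observed allele/genotype frequencies and Hardy-Weinberg expectations.
# Counts below are the example population of 100 four-o'clock plants from the text.

def frequencies(n_AA, n_Aa, n_aa):
    """Return observed genotype frequencies and allele frequencies (p for A, q for a)."""
    n = n_AA + n_Aa + n_aa
    geno = {"AA": n_AA / n, "Aa": n_Aa / n, "aa": n_aa / n}
    p = (2 * n_AA + n_Aa) / (2 * n)   # each AA plant carries two A alleles, each Aa carries one
    q = (2 * n_aa + n_Aa) / (2 * n)
    return geno, p, q

def hardy_weinberg(p, q):
    """Expected genotype frequencies if the population is at Hardy-Weinberg equilibrium."""
    return {"AA": p ** 2, "Aa": 2 * p * q, "aa": q ** 2}

geno, p, q = frequencies(n_AA=49, n_Aa=42, n_aa=9)
print(geno)                  # {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
print(p, q)                  # 0.7 0.3
print(hardy_weinberg(p, q))  # {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09} -- matches the observed values
```

For this particular population the expected and observed genotype frequencies coincide, which is what the Hardy–Weinberg law predicts for a population that is not evolving.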
References
Notes
Genetics concepts
Population genetics | Genotype frequency | [
"Biology"
] | 729 | [
"Genetics concepts"
] |
1,545,388 | https://en.wikipedia.org/wiki/Entropic%20explosion | An entropic explosion is an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat. The chemical decomposition of triacetone triperoxide (TATP) may be an example of an entropic explosion. It is not a thermochemically highly favored event because little energy is generated in chemical bond formation in reaction products, but rather involves an entropy burst, which is the result of formation of one ozone and three acetone gas phase molecules from every molecule of TATP in the solid state.
This hypothesis has been questioned because it conflicts with other theoretical investigations as well as with actual measurements of the detonation heat of TATP. Experiments have shown that the explosion heat of TATP is about 2800 kJ/kg (about 70% of TNT) and that it behaves as a conventional explosive, producing a mix of hydrocarbons, water and carbon oxides upon detonation.
The authors of the 2005 Dubnikova et al. study confirm that a final redox reaction (combustion) of ozone, oxygen and reactive species into water, various oxides and hydrocarbons takes place within about 180 ps after the initial reaction - within about a micron of the detonation wave. Crystals of TATP ultimately reach a temperature of 2300 K and pressure of 80 kbar.
References
Chemical reactions
Thermodynamic entropy
Explosions | Entropic explosion | [
"Physics",
"Chemistry"
] | 282 | [
"Physical quantities",
"Thermodynamic entropy",
"Entropy",
"Explosions",
"nan",
"Statistical mechanics"
] |
1,545,534 | https://en.wikipedia.org/wiki/Document%20Structure%20Description | Document Structure Description, or DSD, is a schema language for XML, that is, a language for describing valid XML documents. It's an alternative to DTD or the W3C XML Schema.
An example of DSD in its simplest form:
This says that an element named "foo" in the XML namespace "http://example.com" may have two attributes, named "first" and "second". A "foo" element may not have any character data. It must contain one subelement, named "bar", also in the "http://example.com" namespace. A "bar" element is not allowed any attributes, character data or subelements.
One XML document that would be valid according to the above DSD would be:
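The original example markup is not reproduced here; the fragment below is a reconstruction based solely on the description above. The namespace, element and attribute names are those given in the text, while the attribute values are placeholders.

```xml
<foo xmlns="http://example.com" first="value1" second="value2">
  <bar/>
</foo>
```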
Current Software store
Prototype Java Processor from BRICS
External links
DSD home page
Full DSD specification
Comparison of DTD, W3C XML Schema, and DSD
XML-based standards
XML
Data modeling languages | Document Structure Description | [
"Technology"
] | 208 | [
"Computer standards",
"XML-based standards"
] |
1,545,608 | https://en.wikipedia.org/wiki/Anaerobic%20digestion | Anaerobic digestion is a sequence of processes by which microorganisms break down biodegradable material in the absence of oxygen. The process is used for industrial or domestic purposes to manage waste or to produce fuels. Much of the fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
Anaerobic digestion occurs naturally in some soils and in lake and oceanic basin sediments, where it is usually referred to as "anaerobic activity". This is the source of marsh gas methane as discovered by Alessandro Volta in 1776.
Anaerobic digestion comprises four stages:
Hydrolysis
Acidogenesis
Acetogenesis
Methanogenesis
The digestion process begins with bacterial hydrolysis of the input materials. Insoluble organic polymers, such as carbohydrates, are broken down to soluble derivatives that become available for other bacteria. Acidogenic bacteria then convert the sugars and amino acids into carbon dioxide, hydrogen, ammonia, and organic acids. In acetogenesis, bacteria convert these resulting organic acids into acetic acid, along with additional ammonia, hydrogen, and carbon dioxide amongst other compounds. Finally, methanogens convert these products to methane and carbon dioxide. The methanogenic archaea populations play an indispensable role in anaerobic wastewater treatments.
Anaerobic digestion is used as part of the process to treat biodegradable waste and sewage sludge. As part of an integrated waste management system, anaerobic digestion reduces the emission of landfill gas into the atmosphere. Anaerobic digesters can also be fed with purpose-grown energy crops, such as maize.
Anaerobic digestion is widely used as a source of renewable energy. The process produces a biogas, consisting of methane, carbon dioxide, and traces of other 'contaminant' gases. This biogas can be used directly as fuel, in combined heat and power gas engines or upgraded to natural gas-quality biomethane. The nutrient-rich digestate also produced can be used as fertilizer.
With the re-use of waste as a resource and new technological approaches that have lowered capital costs, anaerobic digestion has in recent years received increased attention among governments in a number of countries, among these the United Kingdom (2011), Germany, Denmark (2011), and the United States.
Process
Many microorganisms affect anaerobic digestion, including acetic acid-forming bacteria (acetogens) and methane-forming archaea (methanogens). These organisms promote a number of chemical processes in converting the biomass to biogas.
Gaseous oxygen is excluded from the reactions by physical containment. Anaerobes utilize electron acceptors from sources other than oxygen gas. These acceptors can be the organic material itself or may be supplied by inorganic oxides from within the input material. When the oxygen source in an anaerobic system is derived from the organic material itself, the 'intermediate' end products are primarily alcohols, aldehydes, and organic acids, plus carbon dioxide. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products of methane, carbon dioxide, and trace levels of hydrogen sulfide. In an anaerobic system, the majority of the chemical energy contained within the starting material is released by methanogenic archaea as methane.
Populations of anaerobic microorganisms typically take a significant period of time to establish themselves to be fully effective. Therefore, common practice is to introduce anaerobic microorganisms from materials with existing populations, a process known as "seeding" the digesters, typically accomplished with the addition of sewage sludge or cattle slurry.
Process stages
The four key stages of anaerobic digestion involve hydrolysis, acidogenesis, acetogenesis and methanogenesis.
The overall process can be described by the chemical reaction, where organic material such as glucose is biochemically digested into carbon dioxide (CO2) and methane (CH4) by the anaerobic microorganisms.
C6H12O6 → 3CO2 + 3CH4
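As an illustration of what this stoichiometry implies, the short sketch below estimates the theoretical methane yield from glucose. The molar mass and ideal-gas molar volume are standard values; real digesters yield less, since part of the substrate is used for microbial growth.

```python
# Theoretical methane yield from glucose, based on C6H12O6 -> 3 CO2 + 3 CH4.
M_GLUCOSE = 180.16       # g/mol
MOLAR_VOLUME_STP = 22.4  # litres of ideal gas per mole at 0 degC, 1 atm

def methane_yield_per_kg_glucose():
    moles_glucose = 1000.0 / M_GLUCOSE   # moles of glucose in 1 kg
    moles_ch4 = 3 * moles_glucose        # 3 mol CH4 per mol glucose
    return moles_ch4 * MOLAR_VOLUME_STP  # litres of CH4

print(f"{methane_yield_per_kg_glucose():.0f} L CH4 per kg glucose")  # ~373 L
# The same stoichiometry gives 3 mol CO2, i.e. a raw biogas of roughly 50% methane by volume.
```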
Hydrolysis
In most cases, biomass is made up of large organic polymers. For the bacteria in anaerobic digesters to access the energy potential of the material, these chains must first be broken down into their smaller constituent parts. These constituent parts, or monomers, such as sugars, are readily available to other bacteria. The process of breaking these chains and dissolving the smaller molecules into solution is called hydrolysis. Therefore, hydrolysis of these high-molecular-weight polymeric components is the necessary first step in anaerobic digestion. Through hydrolysis the complex organic molecules are broken down into simple sugars, amino acids, and fatty acids.
Acetate and hydrogen produced in the first stages can be used directly by methanogens. Other molecules, such as volatile fatty acids (VFAs) with a chain length greater than that of acetate must first be catabolised into compounds that can be directly used by methanogens.
Acidogenesis
The biological process of acidogenesis results in further breakdown of the remaining components by acidogenic (fermentative) bacteria. Here, VFAs are created, along with ammonia, carbon dioxide, and hydrogen sulfide, as well as other byproducts. The process of acidogenesis is similar to the way milk sours.
Acetogenesis
The third stage of anaerobic digestion is acetogenesis. Here, simple molecules created through the acidogenesis phase are further digested by acetogens to produce largely acetic acid, as well as carbon dioxide and hydrogen.
Methanogenesis
The terminal stage of anaerobic digestion is the biological process of methanogenesis. Here, methanogens use the intermediate products of the preceding stages and convert them into methane, carbon dioxide, and water. These components make up the majority of the biogas emitted from the system. Methanogenesis is sensitive to both high and low pHs and occurs between pH 6.5 and pH 8. The remaining, indigestible material the microbes cannot use and any dead bacterial remains constitute the digestate.
Configuration
Anaerobic digesters can be designed and engineered to operate using a number of different configurations and can be categorized by process mode (batch vs. continuous), temperature conditions (mesophilic vs. thermophilic), solids content (high vs. low), and complexity (single-stage vs. multistage). A continuous process requires a more complex design, but may still be more economical than a batch process, because a batch process requires higher initial construction costs and a larger volume of digesters (spread across several batches) to handle the same amount of waste as a continuous process digester. A thermophilic system requires more heat energy than a mesophilic system, but it requires much less time and has a larger gas output capacity and higher methane gas content, so this trade-off has to be considered carefully. Regarding solids content, low-solids systems handle up to 15% solid content; above this level is considered high solids content, which is also known as dry digestion. In a single-stage process, one reactor houses the four anaerobic digestion steps, whereas a multistage process utilizes two or more reactors for digestion to separate the methanogenesis and hydrolysis phases.
Batch or continuous
Anaerobic digestion can be performed as a batch process or a continuous process. In a batch system, biomass is added to the reactor at the start of the process. The reactor is then sealed for the duration of the process. In its simplest form, batch processing needs inoculation with already processed material to start the anaerobic digestion. In a typical scenario, biogas production will follow a normal distribution pattern over time. Operators can use this fact to determine when they believe the process of digestion of the organic matter has completed. There can be severe odour issues if a batch reactor is opened and emptied before the process is well completed. A more advanced type of batch approach has limited the odour issues by integrating anaerobic digestion with in-vessel composting. In this approach inoculation takes place through the use of recirculated degasified percolate. After anaerobic digestion has completed, the biomass is kept in the reactor, which is then used for in-vessel composting before it is opened. As batch digestion is simple and requires less equipment and lower levels of design work, it is typically a cheaper form of digestion. Using more than one batch reactor at a plant can ensure constant production of biogas.
In continuous digestion processes, organic matter is constantly added (continuous complete mixed) or added in stages to the reactor (continuous plug flow; first in – first out). Here, the end products are constantly or periodically removed, resulting in constant production of biogas. A single or multiple digesters in sequence may be used. Examples of this form of anaerobic digestion include continuous stirred-tank reactors, upflow anaerobic sludge blankets, expanded granular sludge beds, and internal circulation reactors.
Temperature
The two conventional operational temperature levels for anaerobic digesters determine the species of methanogens in the digesters:
Mesophilic digestion takes place optimally around 30 to 38 °C, or at ambient temperatures between 20 and 45 °C, where mesophiles are the primary microorganisms present.
Thermophilic digestion takes place optimally around 49 to 57 °C, or at elevated temperatures up to 70 °C, where thermophiles are the primary microorganisms present.
A limit case has been reached in Bolivia, where anaerobic digestion has been carried out at working temperatures of less than 10 °C. The anaerobic process is very slow, taking more than three times the normal mesophilic process time. In experimental work at the University of Alaska Fairbanks, a 1,000-litre digester using psychrophiles harvested from "mud from a frozen lake in Alaska" has produced 200–300 litres of methane per day, about 20 to 30% of the output from digesters in warmer climates. Mesophilic species outnumber thermophiles, and they are also more tolerant to changes in environmental conditions than thermophiles. Mesophilic systems are, therefore, considered to be more stable than thermophilic digestion systems. In contrast, while thermophilic digestion systems are considered less stable, their energy input is higher, with more biogas being removed from the organic matter in an equal amount of time. The increased temperatures facilitate faster reaction rates, and thus faster gas yields. Operation at higher temperatures also facilitates greater pathogen reduction of the digestate. In countries where legislation, such as the Animal By-Products Regulations in the European Union, requires digestate to meet certain levels of pathogen reduction, there may be a benefit to using thermophilic temperatures instead of mesophilic.
Additional pre-treatment can be used to reduce the necessary retention time to produce biogas. For example, certain processes shred the substrates to increase the surface area or use a thermal pretreatment stage (such as pasteurisation) to significantly enhance the biogas output. The pasteurisation process can also be used to reduce the pathogenic concentration in the digestate, leaving the anaerobic digester. Pasteurisation may be achieved by heat treatment combined with maceration of the solids.
Solids content
In a typical scenario, three different operational parameters are associated with the solids content of the feedstock to the digesters:
High solids (dry—stackable substrate)
High solids (wet—pumpable substrate)
Low solids (wet—pumpable substrate)
High solids (dry) digesters are designed to process materials with a solids content between 25 and 40%. Unlike wet digesters that process pumpable slurries, high solids (dry – stackable substrate) digesters are designed to process solid substrates without the addition of water. The primary styles of dry digesters are continuous vertical plug flow and batch tunnel horizontal digesters. Continuous vertical plug flow digesters are upright, cylindrical tanks where feedstock is continuously fed into the top of the digester, and flows downward by gravity during digestion. In batch tunnel digesters, the feedstock is deposited in tunnel-like chambers with a gas-tight door. Neither approach has mixing inside the digester. The amount of pretreatment, such as contaminant removal, depends both upon the nature of the waste streams being processed and the desired quality of the digestate. Size reduction (grinding) is beneficial in continuous vertical systems, as it accelerates digestion, while batch systems avoid grinding and instead require structure (e.g. yard waste) to reduce compaction of the stacked pile. Continuous vertical dry digesters have a smaller footprint due to the shorter effective retention time and vertical design. Wet digesters can be designed to operate at either a high solids content, with a total suspended solids (TSS) concentration greater than ~20%, or a low solids concentration of less than ~15%.
High solids (wet) digesters process a thick slurry that requires more energy input to move and process the feedstock. The thickness of the material may also lead to associated problems with abrasion. High solids digesters will typically have a lower land requirement due to the lower volumes associated with the moisture. High solids digesters also require correction of conventional performance calculations (e.g. gas production, retention time, kinetics, etc.) originally based on very dilute sewage digestion concepts, since larger fractions of the feedstock mass are potentially convertible to biogas.
Low solids (wet) digesters can transport material through the system using standard pumps that require significantly lower energy input. Low solids digesters require a larger amount of land than high solids due to the increased volumes associated with the increased liquid-to-feedstock ratio of the digesters. There are benefits associated with operation in a liquid environment, as it enables more thorough circulation of materials and contact between the bacteria and their food. This enables the bacteria to more readily access the substances on which they are feeding, and increases the rate of gas production.
Complexity
Digestion systems can be configured with different levels of complexity. In a single-stage digestion system (one-stage), all of the biological reactions occur within a single, sealed reactor or holding tank. Using a single stage reduces construction costs, but results in less control of the reactions occurring within the system. Acidogenic bacteria, through the production of acids, reduce the pH of the tank. Methanogenic archaea, as outlined earlier, operate in a strictly defined pH range. Therefore, the biological reactions of the different species in a single-stage reactor can be in direct competition with each other. Another one-stage reaction system is an anaerobic lagoon. These lagoons are pond-like, earthen basins used for the treatment and long-term storage of manures. Here the anaerobic reactions are contained within the natural anaerobic sludge contained in the pool.
In a two-stage digestion system (multistage), different digestion vessels are optimised to bring maximum control over the bacterial communities living within the digesters. Acidogenic bacteria produce organic acids and more quickly grow and reproduce than methanogenic archaea. Methanogenic archaea require stable pH and temperature to optimise their performance.
Under typical circumstances, hydrolysis, acetogenesis, and acidogenesis occur within the first reaction vessel. The organic material is then heated to the required operational temperature (either mesophilic or thermophilic) prior to being pumped into a methanogenic reactor. The initial hydrolysis or acidogenesis tanks prior to the methanogenic reactor can provide a buffer to the rate at which feedstock is added. Some European countries require a degree of elevated heat treatment to kill harmful bacteria in the input waste. In this instance, there may be a pasteurisation or sterilisation stage prior to digestion or between the two digestion tanks. Notably, it is not possible to completely isolate the different reaction phases, and often some biogas is produced in the hydrolysis or acidogenesis tanks.
Residence time
The residence time in a digester varies with the amount and type of feed material, and with the configuration of the digestion system. In a typical two-stage mesophilic digestion, residence time varies between 15 and 40 days, while for a single-stage thermophilic digestion, the residence time is normally shorter, at around 14 days. The plug-flow nature of some of these systems means the full degradation of the material may not have been realised in this timescale. In this event, digestate exiting the system will be darker in colour and will typically have more odour.
In the case of an upflow anaerobic sludge blanket digestion (UASB), hydraulic residence times can be as short as 1 hour to 1 day, and solid retention times can be up to 90 days. In this manner, a UASB system is able to separate solids and hydraulic retention times with the use of a sludge blanket. Continuous digesters have mechanical or hydraulic devices, depending on the level of solids in the material, to mix the contents, enabling the bacteria and the food to be in contact. They also allow excess material to be continuously extracted to maintain a reasonably constant volume within the digestion tanks.
Pressure
A recent development in anaerobic reactor design is high-pressure anaerobic digestion (HPAD), also referred to as autogenerative high-pressure digestion (AHPD). This technique produces a biogas with an elevated methane content: under pressure, the carbon dioxide produced dissolves into the water phase to a greater extent than methane does, so the biogas that leaves the reactor is richer in methane. Research at the University of Groningen demonstrated that the bacterial community changes in composition under the influence of pressure. Individual bacterial species have optimum conditions in which they grow and replicate fastest; pH, temperature and salinity are commonly considered, but pressure is one of these conditions as well. Some species have adapted to life in the deep oceans, where pressure is much higher than at sea level. This makes it possible to influence the anaerobic digestion process through pressure, in a similar vein to other process parameters such as temperature, retention time and pH.
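A rough way to see why pressure enriches the gas phase in methane is to compare Henry's-law solubilities. The sketch below is a simplification that ignores pH-dependent carbonate chemistry and temperature effects, and the solubility constants are approximate generic textbook values at 25 °C rather than figures from the cited HPAD research.

```python
# Simplified Henry's-law comparison of CO2 and CH4 solubility in the digester liquid.
# Approximate constants at 25 degC in mol per litre of water per atm of partial pressure;
# these are generic literature values, not parameters from the studies mentioned in the text.
K_H = {"CO2": 3.4e-2, "CH4": 1.4e-3}

def dissolved_conc(gas, partial_pressure_atm):
    """Dissolved concentration (mol/L) of a gas at the given partial pressure (Henry's law)."""
    return K_H[gas] * partial_pressure_atm

for p in (1.0, 5.0, 10.0):  # total pressures in atm, assuming a 50/50 CO2/CH4 raw gas
    co2 = dissolved_conc("CO2", 0.5 * p)
    ch4 = dissolved_conc("CH4", 0.5 * p)
    print(f"{p:>4} atm: dissolved CO2 = {co2:.4f} mol/L, dissolved CH4 = {ch4:.4f} mol/L "
          f"(ratio ~{co2 / ch4:.0f}:1)")
# CO2 dissolves roughly 24 times more readily than CH4 at any given partial pressure, so
# raising the operating pressure strips proportionally more CO2 into the liquid and the
# remaining biogas becomes richer in methane.
```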
Inhibition
The anaerobic digestion process can be inhibited by several compounds, affecting one or more of the bacterial groups responsible for the different organic matter degradation steps. The degree of the inhibition depends, among other factors, on the concentration of the inhibitor in the digester. Potential inhibitors are ammonia, sulfide, light metal ions (Na, K, Mg, Ca, Al), heavy metals, some organics (chlorophenols, halogenated aliphatics, N-substituted aromatics, long chain fatty acids), etc.
Total ammonia nitrogen (TAN) has been shown to inhibit the production of methane. Furthermore, it destabilises the microbial community, impacting the synthesis of acetic acid, which is one of the driving forces in methane production. At an excess of 5000 mg/L TAN, pH adjustment is needed to keep the reaction stable. A TAN concentration above 1700–1800 mg/L inhibits methane production, and yield decreases at greater TAN concentrations. High TAN concentrations cause the reaction to turn acidic and lead to a domino effect of inhibition. Total ammonia nitrogen is the combination of free ammonia and ionized ammonia. TAN is produced through the degradation of material high in nitrogen, typically proteins, and will naturally build up during anaerobic digestion, depending on the organic feedstock fed to the system. In typical wastewater treatment practice, TAN reduction is achieved via nitrification, an aerobic process in which TAN is consumed by aerobic bacteria. These bacteria release nitrate and nitrite, which are later converted to nitrogen gas through the denitrification process. Hydrolysis and acidogenesis can also be impacted by TAN concentration: in mesophilic conditions, inhibition of hydrolysis was found to occur at 5500 mg/L TAN, while acidogenesis inhibition occurs at 6500 mg/L TAN.
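The thresholds quoted above can be summarised in a small helper that flags likely inhibition effects for a measured TAN value. The function and its interpretation bands are illustrative only and simply restate the mesophilic figures given in the text; real inhibition also depends on pH, temperature, and how well the microbial community is acclimatised.

```python
# Rough interpretation of a measured TAN value against the thresholds quoted above
# (mesophilic conditions). Purely illustrative.

def tan_flags(tan_mg_per_l):
    flags = []
    if tan_mg_per_l > 1800:
        flags.append("methane production inhibited (onset reported at ~1700-1800 mg/L)")
    if tan_mg_per_l > 5000:
        flags.append("pH adjustment needed to keep the reaction stable")
    if tan_mg_per_l > 5500:
        flags.append("hydrolysis inhibition reported")
    if tan_mg_per_l > 6500:
        flags.append("acidogenesis inhibition reported")
    return flags or ["below reported inhibition thresholds"]

print(tan_flags(1200))   # ['below reported inhibition thresholds']
print(tan_flags(6000))   # methane, pH-adjustment and hydrolysis flags
```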
Feedstocks
The most important initial issue when considering the application of anaerobic digestion systems is the feedstock to the process. Almost any organic material can be processed with anaerobic digestion; however, if biogas production is the aim, the level of putrescibility is the key factor in its successful application. The more putrescible (digestible) the material, the higher the gas yields possible from the system.
Feedstocks can include biodegradable waste materials, such as waste paper, grass clippings, leftover food, sewage, and animal waste. Woody wastes are the exception, because they are largely unaffected by digestion, as most anaerobes are unable to degrade lignin. Xylophagous anaerobes (lignin consumers) or high temperature pretreatment, such as pyrolysis, can be used to break lignin down. Anaerobic digesters can also be fed with specially grown energy crops, such as silage, for dedicated biogas production. In Germany and continental Europe, these facilities are referred to as "biogas" plants. A codigestion or cofermentation plant is typically an agricultural anaerobic digester that accepts two or more input materials for simultaneous digestion.
The length of time required for anaerobic digestion depends on the chemical complexity of the material. Material rich in easily digestible sugars breaks down quickly, whereas intact lignocellulosic material rich in cellulose and hemicellulose polymers can take much longer to break down. Anaerobic microorganisms are generally unable to break down lignin, the recalcitrant aromatic component of biomass.
Anaerobic digesters were originally designed for operation using sewage sludge and manures. Sewage and manure are not, however, the material with the most potential for anaerobic digestion, as the biodegradable material has already had much of the energy content taken out by the animals that produced it. Therefore, many digesters operate with codigestion of two or more types of feedstock. For example, in a farm-based digester that uses dairy manure as the primary feedstock, the gas production may be significantly increased by adding a second feedstock, e.g., grass and corn (typical on-farm feedstock), or various organic byproducts, such as slaughterhouse waste, fats, oils and grease from restaurants, organic household waste, etc. (typical off-site feedstock).
Digesters processing dedicated energy crops can achieve high levels of degradation and biogas production. Slurry-only systems are generally cheaper, but generate far less energy than those using crops, such as maize and grass silage; by using a modest amount of crop material (30%), an anaerobic digestion plant can increase energy output tenfold for only three times the capital cost, relative to a slurry-only system.
Moisture content
A second consideration related to the feedstock is moisture content. Drier, stackable substrates, such as food and yard waste, are suitable for digestion in tunnel-like chambers. Tunnel-style systems typically have near-zero wastewater discharge as well, so this style of system has advantages where the discharge of digester liquids is a liability. The wetter the material, the more suitable it will be for handling with standard pumps instead of energy-intensive concrete pumps and physical means of movement. Also, the wetter the material, the more volume and area it takes up relative to the levels of gas produced. The moisture content of the target feedstock will also affect what type of system is applied to its treatment. To use a high-solids anaerobic digester for dilute feedstocks, bulking agents, such as compost, should be applied to increase the solids content of the input material. Another key consideration is the carbon:nitrogen ratio of the input material. This ratio is the balance of food a microbe requires to grow; the optimal C:N ratio is 20–30:1. Excess N can lead to ammonia inhibition of digestion.
Contamination
The level of contamination of the feedstock material is a key consideration when using wet digestion or plug-flow digestion.
If the feedstock to the digesters has significant levels of physical contaminants, such as plastic, glass, or metals, then processing to remove the contaminants will be required for the material to be used. If it is not removed, then the digesters can be blocked and will not function efficiently. This contamination issue does not occur with dry digestion or solid-state anaerobic digestion (SSAD) plants, since SSAD handles dry, stackable biomass with a high percentage of solids (40-60%) in gas-tight chambers called fermenter boxes. It is with this understanding that mechanical biological treatment plants are designed. The higher the level of pretreatment a feedstock requires, the more processing machinery will be required, and, hence, the project will have higher capital costs.
After sorting or screening to remove any physical contaminants from the feedstock, the material is often shredded, minced, and mechanically or hydraulically pulped to increase the surface area available to microbes in the digesters and, hence, increase the speed of digestion. The maceration of solids can be achieved by using a chopper pump to transfer the feedstock material into the airtight digester, where anaerobic treatment takes place.
Substrate composition
Substrate composition is a major factor in determining the methane yield and methane production rates from the digestion of biomass. Techniques to determine the compositional characteristics of the feedstock are available, while parameters such as solids, elemental, and organic analyses are important for digester design and operation. Methane yield can be estimated from the elemental composition of substrate along with an estimate of its degradability (the fraction of the substrate that is converted to biogas in a reactor). In order to predict biogas composition (the relative fractions of methane and carbon dioxide) it is necessary to estimate carbon dioxide partitioning between the aqueous and gas phases, which requires additional information (reactor temperature, pH, and substrate composition) and a chemical speciation model. Direct measurements of biomethanation potential are also made using gas evolution or more recent gravimetric assays.
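As a concrete illustration of the elemental-composition approach mentioned above, the following is a minimal sketch assuming the commonly used Buswell–Boyle stoichiometry for a substrate of formula CcHhOoNn; the choice of stoichiometry, the function name, and the glucose test case are illustrative assumptions rather than details taken from this article, and degradability corrections, sulfur, and CO2 partitioning into the liquid are ignored.

```python
# Minimal sketch: theoretical methane yield from elemental composition,
# assuming the Buswell-Boyle stoichiometry for a substrate CcHhOoNn.
# Degradability, sulfur, and CO2 dissolution into the liquid are ignored.

MOLAR_VOLUME_STP = 22.414  # L of ideal gas per mole at 0 degC, 1 atm

def buswell_methane_yield(c, h, o, n=0.0):
    """Return (L CH4 per g of substrate, CH4 fraction of biogas) for CcHhOoNn."""
    ch4_mol = c / 2 + h / 8 - o / 4 - 3 * n / 8   # mol CH4 per mol substrate
    co2_mol = c / 2 - h / 8 + o / 4 + 3 * n / 8   # mol CO2 per mol substrate
    molar_mass = 12.011 * c + 1.008 * h + 15.999 * o + 14.007 * n
    yield_l_per_g = ch4_mol * MOLAR_VOLUME_STP / molar_mass
    ch4_fraction = ch4_mol / (ch4_mol + co2_mol)
    return yield_l_per_g, ch4_fraction

# Example: glucose (C6H12O6), a highly degradable sugar
y, frac = buswell_methane_yield(6, 12, 6)
print(f"~{y * 1000:.0f} mL CH4 per g, ~{frac:.0%} CH4 in biogas")  # ~373 mL/g, ~50% CH4
```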
Applications
Using anaerobic digestion technologies can help to reduce the emission of greenhouse gases in a number of key ways:
Replacement of fossil fuels
Reducing or eliminating the energy footprint of waste treatment plants
Reducing methane emission from landfills
Displacing industrially produced chemical fertilizers
Reducing vehicle movements
Reducing electrical grid transportation losses
Reducing usage of LP Gas for cooking
An important component of the Zero Waste initiatives.
Waste and wastewater treatment
Anaerobic digestion is particularly suited to organic material, and is commonly used for industrial effluent, wastewater and sewage sludge treatment. Anaerobic digestion, a simple process, can greatly reduce the amount of organic matter which might otherwise be destined to be dumped at sea, dumped in landfills, or burnt in incinerators.
Pressure from environmentally related legislation on solid waste disposal methods in developed countries has increased the application of anaerobic digestion as a process for reducing waste volumes and generating useful byproducts. It may either be used to process the source-separated fraction of municipal waste or alternatively combined with mechanical sorting systems, to process residual mixed municipal waste. These facilities are called mechanical biological treatment plants.
If the putrescible waste processed in anaerobic digesters were disposed of in a landfill, it would break down naturally and often anaerobically. In this case, the gas will eventually escape into the atmosphere. As methane is about 20 times more potent as a greenhouse gas than carbon dioxide, this has significant negative environmental effects.
In countries that collect household waste, the use of local anaerobic digestion facilities can help to reduce the amount of waste that requires transportation to centralized landfill sites or incineration facilities. This reduced burden on transportation reduces carbon emissions from the collection vehicles. If localized anaerobic digestion facilities are embedded within an electrical distribution network, they can help reduce the electrical losses associated with transporting electricity over a national grid.
Anaerobic digestion can be used for the remediation of sludge polluted with PFAS. A 2024 study has shown that anaerobic digestion, combined with adsorption on activated carbon and voltage application, can remove up to 61% of PFAS from sewage sludge.
Power generation
In developing countries, simple home and farm-based anaerobic digestion systems offer the potential for low-cost energy for cooking and lighting.
From 1975, China and India have both had large, government-backed schemes for adaptation of small biogas plants for use in the household for cooking and lighting. At present, projects for anaerobic digestion in the developing world can gain financial support through the United Nations Clean Development Mechanism if they are able to show they provide reduced carbon emissions.
Methane and power produced in anaerobic digestion facilities can be used to replace energy derived from fossil fuels, and hence reduce emissions of greenhouse gases, because the carbon in biodegradable material is part of a carbon cycle. The carbon released into the atmosphere from the combustion of biogas has been removed by plants for them to grow in the recent past, usually within the last decade, but more typically within the last growing season. If the plants are regrown, taking the carbon out of the atmosphere once more, the system will be carbon neutral. In contrast, carbon in fossil fuels has been sequestered in the earth for many millions of years, the combustion of which increases the overall levels of carbon dioxide in the atmosphere. Power generation through anaerobic digesters is best suited to large-scale operations, rather than small farms, as large operations have the volume of manure that is able to make the systems financially viable.
Biogas from sewage sludge treatment is sometimes used to run a gas engine to produce electrical power, some or all of which can be used to run the sewage works. Some waste heat from the engine is then used to heat the digester. The waste heat is, in general, enough to heat the digester to the required temperatures. The power potential from sewage works is limited – in the UK, there are about 80 MW total of such generation, with the potential to increase to 150 MW, which is insignificant compared to the average power demand in the UK of about 35,000 MW. The scope for biogas generation from nonsewage waste biological matter – energy crops, food waste, abattoir waste, etc. - is much higher, estimated to be capable of about 3,000 MW. Farm biogas plants using animal waste and energy crops are expected to contribute to reducing CO2 emissions and strengthen the grid, while providing UK farmers with additional revenues.
Some countries offer incentives in the form of, for example, feed-in tariffs for feeding electricity onto the power grid to subsidize green energy production.
In Oakland, California at the East Bay Municipal Utility District's main wastewater treatment plant (EBMUD), food waste is currently codigested with primary and secondary municipal wastewater solids and other high-strength wastes. Compared to municipal wastewater solids digestion alone, food waste codigestion has many benefits. Anaerobic digestion of food waste pulp from the EBMUD food waste process provides a higher normalized energy benefit, compared to municipal wastewater solids: 730 to 1,300 kWh per dry ton of food waste applied compared to 560 to 940 kWh per dry ton of municipal wastewater solids applied.
Grid injection
Biogas grid-injection is the injection of biogas into the natural gas grid. The raw biogas has to be previously upgraded to biomethane. This upgrading implies the removal of contaminants such as hydrogen sulphide or siloxanes, as well as the carbon dioxide. Several technologies are available for this purpose, the most widely implemented being pressure swing adsorption (PSA), water or amine scrubbing (absorption processes) and, in recent years, membrane separation. As an alternative, the electricity and the heat can be used for on-site generation, resulting in a reduction of losses in the transportation of energy. Typical energy losses in natural gas transmission systems range from 1–2%, whereas the current energy losses on a large electrical system range from 5–8%.
In October 2010, Didcot Sewage Works became the first in the UK to produce biomethane gas supplied to the national grid, for use in up to 200 homes in Oxfordshire. By 2017, UK electricity firm Ecotricity planned to have a digester fed by locally sourced grass fuelling 6,000 homes.
Vehicle fuel
After upgrading with the above-mentioned technologies, the biogas (transformed into biomethane) can be used as vehicle fuel in adapted vehicles. This use is very extensive in Sweden, where over 38,600 gas vehicles exist, and 60% of the vehicle gas is biomethane generated in anaerobic digestion plants.
Fertiliser and soil conditioner
The solid, fibrous component of the digested material can be used as a soil conditioner to increase the organic content of soils. Digester liquor can be used as a fertiliser to supply vital nutrients to soils instead of chemical fertilisers that require large amounts of energy to produce and transport. The use of manufactured fertilisers is, therefore, more carbon-intensive than the use of anaerobic digester liquor fertiliser. In countries such as Spain, where many soils are organically depleted, the markets for the digested solids can be equally as important as the biogas.
Cooking gas
By using a bio-digester, which harbours the bacteria required for decomposition, cooking gas is generated. Organic waste such as fallen leaves, kitchen waste, and food waste is fed into a crusher unit, where it is mixed with a small amount of water. The mixture is then fed into the bio-digester, where microorganisms decompose it to produce cooking gas. This gas is piped to the kitchen stove. A 2 cubic meter bio-digester can produce 2 cubic meters of cooking gas, equivalent to 1 kg of LPG. A notable additional advantage of using a bio-digester is the sludge, which is a rich organic manure.
Products
The three principal products of anaerobic digestion are biogas, digestate, and water.
Biogas
Biogas is the ultimate waste product of the bacteria feeding off the input biodegradable feedstock (the methanogenesis stage of anaerobic digestion is performed by archaea, micro-organisms on a distinctly different branch of the phylogenetic tree of life from bacteria), and is mostly methane and carbon dioxide, with a small amount of hydrogen and trace hydrogen sulfide. (As produced, biogas also contains water vapor, with the fractional water vapor volume a function of biogas temperature.) Most of the biogas is produced during the middle of the digestion, after the bacterial population has grown, and tapers off as the putrescible material is exhausted. The gas is normally stored on top of the digester in an inflatable gas bubble or extracted and stored next to the facility in a gas holder.
The methane in biogas can be burned to produce both heat and electricity, usually with a reciprocating engine or microturbine often in a cogeneration arrangement where the electricity and waste heat generated are used to warm the digesters or to heat buildings. Excess electricity can be sold to suppliers or put into the local grid. Electricity produced by anaerobic digesters is considered to be renewable energy and may attract subsidies. Biogas does not contribute to increasing atmospheric carbon dioxide concentrations because the gas is not released directly into the atmosphere and the carbon dioxide comes from an organic source with a short carbon cycle.
Biogas may require treatment or 'scrubbing' to refine it for use as a fuel. Hydrogen sulfide, a toxic product formed from sulfates in the feedstock, is released as a trace component of the biogas. National environmental enforcement agencies, such as the U.S. Environmental Protection Agency or the English and Welsh Environment Agency, put strict limits on the levels of gases containing hydrogen sulfide, and, if the levels of hydrogen sulfide in the gas are high, gas scrubbing and cleaning equipment (such as amine gas treating) will be needed to process the biogas to within regionally accepted levels. Alternatively, the addition of ferrous chloride FeCl2 to the digestion tanks inhibits hydrogen sulfide production.
Volatile siloxanes can also contaminate the biogas; such compounds are frequently found in household waste and wastewater. In digestion facilities accepting these materials as a component of the feedstock, low-molecular-weight siloxanes volatilise into biogas. When this gas is combusted in a gas engine, turbine, or boiler, siloxanes are converted into silicon dioxide (SiO2), which deposits internally in the machine, increasing wear and tear. Practical and cost-effective technologies to remove siloxanes and other biogas contaminants are available at the present time. In certain applications, in situ treatment can be used to increase the methane purity by reducing the offgas carbon dioxide content, purging the majority of it in a secondary reactor.
In countries such as Switzerland, Germany, and Sweden, the methane in the biogas may be compressed for use as a vehicle transportation fuel or input directly into the gas mains. In countries where the driver for the use of anaerobic digestion is renewable electricity subsidies, this route of treatment is less likely, as energy is required in this processing stage and reduces the overall levels available to sell.
Digestate
Digestate is the solid remnants of the original input material to the digesters that the microbes cannot use. It also consists of the mineralised remains of the dead bacteria from within the digesters. Digestate can come in three forms: fibrous, liquor, or a sludge-based combination of the two fractions. In two-stage systems, different forms of digestate come from different digestion tanks. In single-stage digestion systems, the two fractions will be combined and, if desired, separated by further processing.
The second byproduct (acidogenic digestate) is a stable, organic material consisting largely of lignin and cellulose, but also of a variety of mineral components in a matrix of dead bacterial cells; some plastic may be present. The material resembles domestic compost and can be used as such or to make low-grade building products, such as fibreboard.
The solid digestate can also be used as feedstock for ethanol production.
The third byproduct is a liquid (methanogenic digestate) rich in nutrients, which can be used as a fertiliser, depending on the quality of the material being digested. Levels of potentially toxic elements (PTEs) should be chemically assessed. This will depend upon the quality of the original feedstock. In the case of most clean and source-separated biodegradable waste streams, the levels of PTEs will be low. In the case of wastes originating from industry, the levels of PTEs may be higher and will need to be taken into consideration when determining a suitable end use for the material.
Digestate typically contains elements, such as lignin, that cannot be broken down by the anaerobic microorganisms. Also, the digestate may contain ammonia that is phytotoxic, and may hamper the growth of plants if it is used as a soil-improving material. For these two reasons, a maturation or composting stage may be employed after digestion. Lignin and other materials are available for degradation by aerobic microorganisms, such as fungi, helping reduce the overall volume of the material for transport. During this maturation, the ammonia will be oxidized into nitrates, improving the fertility of the material and making it more suitable as a soil improver. Large composting stages are typically used by dry anaerobic digestion technologies.
Wastewater
The final output from anaerobic digestion systems is water, which originates both from the moisture content of the original waste that was treated and water produced during the microbial reactions in the digestion systems. This water may be released from the dewatering of the digestate or may be implicitly separate from the digestate.
The wastewater exiting the anaerobic digestion facility will typically have elevated levels of biochemical oxygen demand (BOD) and chemical oxygen demand (COD). These measures of the reactivity of the effluent indicate an ability to pollute. Some of this material is termed 'hard COD', meaning it cannot be accessed by the anaerobic bacteria for conversion into biogas. If this effluent were put directly into watercourses, it would negatively affect them by causing eutrophication. As such, further treatment of the wastewater is often required. This treatment will typically be an oxidation stage wherein air is passed through the water in a sequencing batch reactor or reverse osmosis unit.
History
Reported scientific interest in the manufacturing of gas produced by the natural decomposition of organic matter dates from the 17th century, when Robert Boyle (1627-1691) and Stephen Hales (1677-1761) noted that disturbing the sediment of streams and lakes released flammable gas. In 1778, the Italian physicist Alessandro Volta (1745-1827), the father of electrochemistry, scientifically identified that gas as methane.
In 1808 Sir Humphry Davy proved the presence of methane in the gases produced by cattle manure. The first known anaerobic digester was built in 1859 at a leper colony in Bombay in India. In 1895, the technology was developed in Exeter, England, where a septic tank was used to generate gas for the sewer gas destructor lamp, a type of gas lighting. Also in England, in 1904, the first dual-purpose tank for both sedimentation and sludge treatment was installed in Hampton, London.
By the early 20th century, anaerobic digestion systems began to resemble the technology as it appears today. In 1906, Karl Imhoff created the Imhoff tank; an early form of anaerobic digester and model wastewater treatment system throughout the early 20th century. After 1920, closed tank systems began to replace the previously common use of anaerobic lagoons – covered earthen basins used to treat volatile solids. Research on anaerobic digestion began in earnest in the 1930s.
Around the time of World War I, production from biofuels slowed as petroleum production increased and its uses were identified. While fuel shortages during World War II re-popularized anaerobic digestion, interest in the technology decreased again after the war ended. Similarly, the 1970s energy crisis sparked interest in anaerobic digestion. In addition to high energy prices, factors affecting the adoption of anaerobic digestion systems include receptivity to innovation, pollution penalties, policy incentives, and the availability of subsidies and funding opportunities.
Modern geographical distribution
Today, anaerobic digesters are commonly found alongside farms to reduce nitrogen run-off from manure, or wastewater treatment facilities to reduce the costs of sludge disposal. Agricultural anaerobic digestion for energy production has become most popular in Germany, where there were 8,625 digesters in 2014. In the United Kingdom, there were 259 facilities by 2014, and 500 projects planned to become operational by 2019. In the United States, there were 191 operational plants across 34 states in 2012. Policy may explain why adoption rates are so different across these countries.
Feed-in tariffs in Germany were enacted in 1991, also known as FIT, providing long-term contracts compensating investments in renewable energy generation. Consequently, between 1991 and 1998 the number of anaerobic digester plants in Germany grew from 20 to 517. In the late 1990s, energy prices in Germany varied and investors became unsure of the market's potential. The German government responded by amending FIT four times between 2000 and 2011, increasing tariffs and improving the profitability of anaerobic digestion, and resulting in reliable returns for biogas production and continued high adoption rates across the country.
Incidents involving digesters
Anaerobic digesters have caused fish kills (e.g. on the River Mole, Devon; the River Teifi; and the Afon Llynfi) and loss of human life (e.g. the Avonmouth explosion).
There have been explosions of anaerobic digesters in the US (at Pixelle Specialty Solutions' Androscoggin Mill in Jay, Maine; a Kamyr digester explosion at Pensacola (Cantonment) on 22 January 2017; an EPDM failure at Aumsville, Oregon, in March 2013; an incident on February 6, 1987, in Pennsylvania, in which two workers at a wastewater treatment plant were re-draining a sewage digester when an explosion lifted the 30-ton floating cover, killing both workers instantly; and at the Southwest Wastewater Treatment Plant in Springfield, Missouri) and in the UK (for example at Avonmouth and at Harper Adams College, Newport, Shropshire). In Europe, there were about 800 accidents at biogas plants between 2005 and 2015, e.g. in France (Saint-Fargeau), though few of them were 'serious' with direct consequences for the human population; according to one source, 'less than a dozen of them had consequences on humans', for example the incident at Rhadereistedt, Germany (4 dead).
Safety analyses include a 2016 study that compiled a database of 169 accidents involving anaerobic digesters.
See also
Anaerobic digester types
Anaerobic organism
Avonmouth explosion
Bioconversion of biomass to mixed alcohol fuels
Carbon dioxide air capture
Comparison of anaerobic and aerobic digestion
Environmental issues with energy
Global Methane Initiative
Hypoxia (environmental)
Methane capture
Microbiology of decomposition
Pasteur point
Relative cost of electricity generated by different sources
Sanitation
Sewage treatment
Upflow anaerobic sludge blanket digestion (UASB)
References
External links
Biodegradable waste management
Biofuels
Environmental engineering
Hydrogen production
Mechanical biological treatment
Power station technology
Sewerage
Sustainable technologies
Water technology
Gas technologies
Renewable energy
Food waste | Anaerobic digestion | [
"Chemistry",
"Engineering"
] | 9,652 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
1,545,771 | https://en.wikipedia.org/wiki/Joshua%20Schachter | Joshua Schachter (; born January 1, 1974) is an American entrepreneur and the creator of Delicious and GeoURL, and co-creator of Memepool. He holds a B.S. in electrical and computer engineering from Carnegie Mellon University in Pittsburgh.
Schachter released his first version of Delicious (then called del.icio.us) in September 2003. The service coined the term social bookmarking and featured tagging, a system he developed for organizing links suggested to Memepool; he published some of them on his personal linkblog, Muxway. On March 29, 2005, Schachter announced he would work full-time on Delicious. On December 9, 2005, Yahoo! acquired Delicious for an undisclosed sum. According to Business 2.0, the acquisition price was $30 million, with Schachter's share being worth approximately $15 million.
Prior to working full-time on Delicious, Schachter was an analyst with Morgan Stanley's Equity Trading Lab. In 2002, he developed GeoURL and ran it until 2004.
In 2006, Schachter was named to the MIT Technology Review TR35 as one of the top 35 innovators in the world under the age of 35. In June 2008, Schachter announced his decision to leave Yahoo!. TechCrunch reported that rumors about mass resignations of Yahoo! senior staff prompted his decision to leave.
Schachter worked for Google from January 2009 to June 2010.
References
External links
Interview with Joshua about del.icio.us
Businessweek article on tagging
1974 births
Living people
People from Long Island
Carnegie Mellon University College of Engineering alumni
20th-century American Jews
People in information technology
21st-century American Jews | Joshua Schachter | [
"Technology"
] | 362 | [
"People in information technology",
"Information technology",
"Computer specialist stubs",
"Computing stubs"
] |
1,545,816 | https://en.wikipedia.org/wiki/Leslie%20matrix | The Leslie matrix is a discrete, age-structured model of population growth that is very popular in population ecology named after Patrick H. Leslie. The Leslie matrix (also called the Leslie model) is one of the most well-known ways to describe the growth of populations (and their projected age distribution), in which a population is closed to migration, growing in an unlimited environment, and where only one sex, usually the female, is considered.
The Leslie matrix is used in ecology to model the changes in a population of organisms over a period of time. In a Leslie model, the population is divided into groups based on age classes. A similar model which replaces age classes with ontogenetic stages is called a Lefkovitch matrix, whereby individuals can both remain in the same stage class or move on to the next one. At each time step, the population is represented by a vector with an element for each age class where each element indicates the number of individuals currently in that class.
The Leslie matrix is a square matrix with the same number of rows and columns as the population vector has elements. The (i,j)th cell in the matrix indicates how many individuals will be in the age class i at the next time step for each individual in stage j. At each time step, the population vector is multiplied by the Leslie matrix to generate the population vector for the subsequent time step.
To build a matrix, the following information must be known from the population:
n_x, the count of individuals (n) of each age class x
s_x, the fraction of individuals that survives from age class x to age class x+1,
f_x, fecundity, the per capita average number of female offspring reaching the first age class, born to a mother of age class x. More precisely, it can be viewed as the number of offspring produced at the next age class, b_{x+1}, weighted by the probability of reaching that age class. Therefore, f_x = s_x b_{x+1}.
From the observations that n_0 at time t+1 is simply the sum of all offspring born during the previous time step, and that the organisms surviving to time t+1 are the organisms at time t surviving at probability s_x, one gets n_{x+1}(t+1) = s_x n_x(t). This implies the matrix representation n(t+1) = L n(t), in which the first row of the matrix L holds the fecundities f_0, f_1, ..., f_{ω-1}, its subdiagonal holds the survival fractions s_0, s_1, ..., s_{ω-2}, and every other entry is zero,
where ω is the maximum age attainable in the population.
This can be written as:
n(t+1) = L n(t)
or:
n(t) = L^t n(0)
where n(t) is the population vector at time t and L is the Leslie matrix. The dominant eigenvalue of L, denoted λ, gives the population's asymptotic growth rate (the growth rate at the stable age distribution). The corresponding eigenvector provides the stable age distribution, the proportion of individuals of each age within the population, which remains constant at this point of asymptotic growth barring changes to vital rates. Once the stable age distribution has been reached, a population undergoes exponential growth at rate λ.
The characteristic polynomial of the matrix is given by the Euler–Lotka equation.
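As a concrete illustration of the projection n(t+1) = L n(t) and of extracting the dominant eigenvalue and stable age distribution, here is a minimal sketch in Python with NumPy; the three-age-class vital rates are invented for illustration only.

```python
import numpy as np

# Minimal sketch: a 3-age-class Leslie model with invented vital rates.
fecundity = [0.0, 1.5, 1.0]   # f_0, f_1, f_2
survival = [0.6, 0.4]         # s_0, s_1 (fraction surviving to the next class)

L = np.zeros((3, 3))
L[0, :] = fecundity           # first row: fecundities
L[1, 0] = survival[0]         # subdiagonal: survival fractions
L[2, 1] = survival[1]

n = np.array([100.0, 50.0, 20.0])   # population vector n(t)
for _ in range(5):                  # project five time steps: n(t+1) = L n(t)
    n = L @ n

eigvals, eigvecs = np.linalg.eig(L)
i = np.argmax(np.abs(eigvals))          # dominant (largest-modulus) eigenvalue
lam = eigvals[i].real                   # asymptotic growth rate, ~1.06 here
stable_age = np.abs(eigvecs[:, i].real)
stable_age /= stable_age.sum()          # stable age distribution (proportions)
print(lam, stable_age)
```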
The Leslie model is very similar to a discrete-time Markov chain. The main difference
is that in a Markov model, one would have f_x + s_x = 1 for each x,
while the Leslie model may have these sums greater or less than 1.
Stable age structure
This age-structured growth model suggests a steady-state, or stable, age-structure and growth rate. Regardless of the initial population size or age distribution, the population tends asymptotically to this age-structure and growth rate. It also returns to this state following perturbation. The Euler–Lotka equation provides a means of identifying the intrinsic growth rate. The stable age-structure is determined both by the growth rate and the survival function (i.e. the Leslie matrix). For example, a population with a large intrinsic growth rate will have a disproportionately “young” age-structure. A population with high mortality rates at all ages (i.e. low survival) will have a similar age-structure.
Random Leslie model
There is a generalization of the population growth rate to the case when a Leslie matrix has random elements which may be correlated. When characterizing the disorder, or uncertainties, in vital parameters, a perturbative formalism has to be used to deal with linear non-negative random matrix difference equations. Then the non-trivial, effective eigenvalue which defines the long-term asymptotic dynamics of the mean-value population state vector can be presented as the effective growth rate. This eigenvalue and the associated mean-value invariant state vector can be calculated from the smallest positive root of a secular polynomial and the residue of the mean-valued Green function. Exact and perturbative results can thus be analyzed for several models of disorder.
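A common numerical complement to the analytic treatment described above is straightforward Monte Carlo simulation of the long-run growth rate under random vital rates. The sketch below is illustrative only: it does not implement the perturbative Green-function formalism, and the lognormal noise model and mean rates are assumptions invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_leslie(mean_f=(0.0, 1.5, 1.0), mean_s=(0.6, 0.4), cv=0.2):
    """Draw one 3x3 Leslie matrix with lognormal noise on the mean vital rates."""
    L = np.zeros((3, 3))
    L[0, :] = [f * rng.lognormal(0, cv) for f in mean_f]
    L[1, 0] = min(1.0, mean_s[0] * rng.lognormal(0, cv))  # survival capped at 1
    L[2, 1] = min(1.0, mean_s[1] * rng.lognormal(0, cv))
    return L

# Long-run (stochastic) growth rate: time-averaged log of total population growth.
n = np.ones(3)
log_growth = 0.0
steps = 20000
for _ in range(steps):
    n = random_leslie() @ n
    total = n.sum()          # growth factor this step, since n was normalised
    log_growth += np.log(total)
    n /= total               # renormalise to avoid numerical overflow
print("stochastic growth rate ~", np.exp(log_growth / steps))
```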
References
Further reading
Population
Population ecology
Matrices | Leslie matrix | [
"Mathematics"
] | 944 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
1,545,842 | https://en.wikipedia.org/wiki/Oxazepam | Oxazepam is a short-to-intermediate-acting benzodiazepine. Oxazepam is used for the treatment of anxiety, insomnia, and to control symptoms of alcohol withdrawal syndrome.
It is a metabolite of diazepam, prazepam, and temazepam, and has moderate amnesic, anxiolytic, anticonvulsant, hypnotic, sedative, and skeletal muscle relaxant properties compared to other benzodiazepines.
It was patented in 1962 and approved for medical use in 1964.
Medical uses
Oxazepam is an intermediate-acting benzodiazepine with a slow onset of action, so it is usually prescribed to individuals who have trouble staying asleep, rather than falling asleep. It is commonly prescribed for anxiety disorders with associated tension, irritability, and agitation. It is also prescribed for drug and alcohol withdrawal, and for anxiety associated with depression. Oxazepam is sometimes prescribed off-label to treat social phobia, post-traumatic stress disorder, insomnia, premenstrual syndrome, and other conditions.
Side effects
The side effects of oxazepam are similar to those of other benzodiazepines, and may include dizziness, drowsiness, headache, memory impairment, paradoxical excitement, and anterograde amnesia, but does not affect transient global amnesia. Withdrawal effects due to rapid decreases in dosage or abrupt discontinuation of oxazepam may include abdominal and muscle cramps, seizures, depression, insomnia, sweating, tremors, or nausea and vomiting.
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Tolerance, dependence and withdrawal
Oxazepam, as with other benzodiazepine drugs, can cause tolerance, physical dependence, addiction, and benzodiazepine withdrawal syndrome. Withdrawal from oxazepam or other benzodiazepines often leads to withdrawal symptoms which are similar to those seen during alcohol and barbiturate withdrawal. The higher the dose and the longer the drug is taken, the greater the risk of experiencing unpleasant withdrawal symptoms. Withdrawal symptoms can occur, though, at standard dosages and also after short-term use. Benzodiazepine treatment should be discontinued as soon as possible by a slow and gradual dose reduction regimen.
Contraindications
Oxazepam is contraindicated in myasthenia gravis, chronic obstructive pulmonary disease, and limited pulmonary reserve, as well as severe hepatic disease.
Special precautions
Benzodiazepines require special precautions if used in the elderly, during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders. Benzodiazepines including oxazepam are lipophilic drugs that rapidly penetrate membranes and so rapidly cross the placenta, with significant uptake of the drug. Use of benzodiazepines in late pregnancy, especially high doses, may result in floppy infant syndrome.
Pregnancy
Oxazepam, when taken during the third trimester, causes a definite risk to the neonate, including a severe benzodiazepine withdrawal syndrome including hypotonia, reluctance to suck, apnoeic spells, cyanosis, and impaired metabolic responses to cold stress. Floppy infant syndrome and sedation in the newborn may also occur. Symptoms of floppy infant syndrome and the neonatal benzodiazepine withdrawal syndrome have been reported to persist from hours to months after birth.
Interactions
As oxazepam is an active metabolite of diazepam, an overlap in possible interactions with other drugs or food is likely, with the exception of pharmacokinetic CYP450 interactions (e.g. with cimetidine). Precautions and adherence to the prescription are required when taking oxazepam (or other benzodiazepines) in combination with antidepressants or opioids, as these medications can interact in ways that are difficult to predict. Drinking alcohol when taking oxazepam is not recommended; concomitant use of oxazepam and alcohol can lead to increased sedation, memory impairment, ataxia, decreased muscle tone, and, in severe cases or in predisposed patients, respiratory depression and coma.
Overdose
Oxazepam is generally less toxic in overdose than other benzodiazepines. Important factors which affect the severity of a benzodiazepine overdose include the dose ingested, the age of the patient, and health status prior to overdose. Benzodiazepine overdoses can be much more dangerous if a coingestion of other CNS depressants such as opiates or alcohol has occurred. Symptoms of an oxazepam overdose include:
Respiratory depression
Excessive somnolence
Altered consciousness
Central nervous system depression
Occasionally cardiovascular and pulmonary toxicity
Rarely, deep coma
Pharmacology
Oxazepam is an intermediate-acting benzodiazepine of the 3-hydroxy family; it acts on benzodiazepine receptors, resulting in increased effect of GABA to the GABAA receptor which results in inhibitory effects on the central nervous system. The half-life of oxazepam is between 6 and 9 hours. It has been shown to suppress cortisol levels. Oxazepam is the most slowly absorbed and has the slowest onset of action of all the common benzodiazepines according to one British study.
Oxazepam is an active metabolite formed during the breakdown of diazepam, nordazepam, and certain similar drugs. It may be safer than many other benzodiazepines in patients with impaired liver function because it does not require hepatic oxidation, but rather, it is simply metabolized by glucuronidation, so oxazepam is less likely to accumulate and cause adverse reactions in the elderly or people with liver disease. Oxazepam is similar to lorazepam in this respect.
Preferential storage of oxazepam occurs in some organs, including the heart of the neonate. Absorption by any administered route and the risk of accumulation is significantly increased in the neonate, and withdrawal of oxazepam during pregnancy and breast feeding is recommended, as oxazepam is excreted in breast milk.
Two milligrams of oxazepam equates to 1 mg of diazepam according to the benzodiazepine equivalency converter, therefore 20 mg of oxazepam according to BZD equivalency equates to 10 mg of diazepam and 15 mg oxazepam to 7.5 mg diazepam (rounded up to 8 mg of diazepam). Some BZD equivalency converters use 3 to 1 (oxazepam to diazepam), 1 to 3 (diazepam to oxazepam) as the ratio (3:1 and 1:3), so 15 mg of oxazepam would equate to 5 mg of diazepam.
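The arithmetic in the preceding paragraph is a simple ratio conversion, and the snippet below merely restates it for the two conventions mentioned (2:1 and 3:1 oxazepam to diazepam); it is an illustration of the arithmetic only, not clinical guidance.

```python
# Illustration of the dose-equivalence arithmetic only (not clinical guidance).
def oxazepam_to_diazepam(oxazepam_mg, ratio=2.0):
    """Nominal diazepam equivalent of an oxazepam dose.
    ratio = mg of oxazepam treated as equivalent to 1 mg of diazepam
    (2.0 or 3.0 in the conventions cited above)."""
    return oxazepam_mg / ratio

for ox in (15, 20):
    print(ox, "mg oxazepam ~", oxazepam_to_diazepam(ox, 2.0), "mg diazepam (2:1)",
          "or", round(oxazepam_to_diazepam(ox, 3.0), 1), "mg diazepam (3:1)")
```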
Chemistry
Oxazepam exists as a racemic mixture. Early attempts to isolate enantiomers were unsuccessful; the corresponding acetate has been isolated as a single enantiomer. Given the different rates of epimerization that occur at different pH levels, it was determined that there would be no therapeutic benefit to the administration of a single enantiomer over the racemic mixture.
Frequency of use
Oxazepam, along with diazepam, nitrazepam, and temazepam, were the four benzodiazepines listed on the pharmaceutical benefits scheme and represented 82% of the benzodiazepine prescriptions in Australia in 1990–1991.
It is in several countries the benzodiazepine of choice for novice users, due to a low chance of accumulation and a relatively slow absorption speed.
Society and culture
Misuse
Oxazepam has the potential for misuse, defined as taking the drug to achieve a high, or continuing to take the drug in the long term against medical advice. Benzodiazepines, including diazepam, oxazepam, nitrazepam, and flunitrazepam, accounted for the largest volume of forged drug prescriptions in Sweden from 1982 to 1986. During this time, a total of 52% of drug forgeries were for benzodiazepines, suggesting they were a major prescription drug class of abuse.
However, due to its slow rate of absorption and its slow onset of action, oxazepam has a relatively low potential for abuse compared to some other benzodiazepines, such as temazepam, flunitrazepam, or triazolam. This is similar to the varied potential for abuse between different drugs of the barbiturate class.
Legal status
Oxazepam is a Schedule IV drug under the Convention on Psychotropic Substances.
Brand names
Oxazepam is marketed under many brand names worldwide, including: Alepam, Alepan, Anoxa, Anxiolit, Comedormir, durazepam, Murelax, Nozepam, Oksazepam, Opamox, Ox-Pam, Oxa-CT, Oxabenz, Oxamin, Oxapam, Oxapax, Oxascand, Oxaze, Oxazepam, Oxazépam, Oxazin, Oxepam, Praxiten, Purata, Selars, Serax, Serepax, Seresta, Séresta, Serpax, Sobril, Tazepam, Vaben, and Youfei.
It is also marketed in combination with hyoscine as Novalona and in combination with alanine as Pausafrent T.
Environmental concerns
In 2013, a laboratory study which exposed European perch to oxazepam concentrations equivalent to those present in European rivers (1.8 μg/L) found that they exhibited increased activity, reduced sociality, and higher feeding rate. In 2016, a follow-up study which exposed salmon smolt to oxazepam for seven days before letting them migrate observed increased intensity of migratory behaviour compared to controls. A 2019 study associated this faster, bolder behaviour in exposed smolt to increased mortality due to a higher likelihood of being predated on.
On the other hand, a 2018 study from the same authors, which kept 480 European perch and 12 northern pike in 12 ponds over 70 days, half of them controls and half spiked with oxazepam, found no significant difference in either perch growth or mortality. However, it suggested that the latter could be explained by the exposed perch and pike being equally hampered by oxazepam, rather than by the lack of an overall effect. Lastly, a 2021 study built on these results by comparing two whole lakes stocked with perch and pike: one served as a control, while the other was exposed to oxazepam 11 days into the experiment, at concentrations between 11 and 24 μg/L, which is 200 times greater than the reported concentrations in European rivers. Even so, there were no measurable effects on pike behaviour after the addition of oxazepam, while the effects on perch behaviour were found to be negligible. The authors concluded that the effects previously attributed to oxazepam were instead likely caused by a combination of fish being stressed by human handling and small aquaria, followed by exposure to a novel environment.
References
External links
Inchem - Oxazepam
Benzodiazepines
Chloroarenes
IARC Group 2B carcinogens
Lactams
Human drug metabolites
Lactims | Oxazepam | [
"Chemistry"
] | 2,478 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
1,545,858 | https://en.wikipedia.org/wiki/Nanoruler | A nanoruler is a tool or a method used within the subfield of "nanometrology" to achieve precise control and measurements at the nanoscale (i.e. nanometer, a billion times smaller than a meter). Measurements of extremely tiny proportions require more complicated procedures, such as manipulating the properties of light (plasmonic) or DNA to determine distances. At the nanoscale, materials and devices exhibit unique properties that can significantly influence their behavior. In fields like electronics, medicine, and biotechnology, where advancements come from manipulating matter at the atomic and molecular levels, nanoscale measurements become essential.
The nanoruler is also a tool developed by the Massachusetts Institute of Technology with extreme precision, achieved through the technique of scanning beam interference lithography (SBIL). The director of the project, Mark L. Schattenburg, began it with the intention of helping the semiconductor industry, whose devices, such as computer chips, have components nanometers in size, hence the importance of having a tool capable of nanoscale precision. The Nanoruler was developed in the Space Nanotechnology Laboratory of the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology.
Types of Nanoruler
Plasmonic Nanoruler
Localized Surface Plasmon Resonance
Surface plasmon resonance (SPR) is a phenomenon where free electrons in metals oscillate when illuminated by light at a specific wavelength (color) at a particular angle. This oscillation is similar to the ripples created when a rock is thrown into a pond. Localized surface plasmon resonance (LSPR) refers to a concentrated region of SPR found on metallic nanoparticles, or metals at the nanoscale, enabling more precise analysis. Each nanoparticle has its own LSPR depending on the particle's size and geometry. When multiple nanoparticles are brought together within nanometer distances, their LSPRs interact, leading to optical changes. LSPRs are highly sensitive and can be influenced by various factors, including plasmonic coupling, the surrounding dielectric medium, or distances. Scientists observe these effects and analyze the data, typically in the form of refractive index and shifts in wavelength, to determine distances.
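Distance readout from such coupled-particle spectra is often described with an empirical "plasmon ruler" relation, in which the fractional LSPR shift decays roughly exponentially with the gap-to-diameter ratio. The sketch below assumes that functional form, with amplitude and decay-constant values (about 0.18 and 0.23) typically quoted for gold nanosphere pairs; these parameters are assumptions taken from the plasmon-ruler literature, not values given in this article.

```python
import math

# Empirical "plasmon ruler" model (assumed): fractional LSPR shift vs. gap.
#   dlambda / lambda0 ~ A * exp(-(s / D) / TAU)
# where s is the surface-to-surface gap and D the particle diameter.
A = 0.18     # amplitude, typical literature value for Au nanosphere pairs
TAU = 0.23   # decay constant, in units of gap/diameter

def fractional_shift(gap_nm, diameter_nm):
    return A * math.exp(-(gap_nm / diameter_nm) / TAU)

def gap_from_shift(frac_shift, diameter_nm):
    """Invert the model: estimate the gap from a measured fractional shift."""
    return -TAU * diameter_nm * math.log(frac_shift / A)

d = 40.0                               # nm, particle diameter (illustrative)
measured = fractional_shift(8.0, d)    # pretend this was measured; true gap = 8 nm
print(f"shift = {measured:.3f}, recovered gap = {gap_from_shift(measured, d):.1f} nm")
```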
Fano resonances
Some nanorulers utilize Fano resonances, which are asymmetric spectral line shapes resulting from the interference of multiple electromagnetic waves. These resonances are typically observed at specific separation distances, such as those between gold (Au) nanostructures. Analogous to LSPRs, Fano resonances are used for measurement because of their strong sensitivity to change, such as changes in distance. This allows for precise measurements of very small separations. In certain applications, second harmonic generation (SHG), a nonlinear optical process where two photons of the same frequency combine to generate a single photon at twice the frequency, is used in conjunction with Fano resonances for nonlinear measurements. Certain nanostructures (e.g. a gold nanodolmen made of three gold nanorods) can exhibit strong SHG responses and lead to specific emission patterns. This method has been used to accurately characterize complex 3-dimensional macromolecular entities.
DNA Origami Nanoruler
In 2006, Paul Rothemund made a breakthrough in DNA nanotechnology by developing DNA origami. His DNA origami took a long, single-stranded DNA molecule (referred to as the "scaffold") and folded it into shape using short, single-stranded DNA oligonucleotides (referred to as "staples"). This advance allowed for the creation of nanostructures with highly controlled dimensions, designed by choosing the DNA scaffold strand and selecting the appropriate staple strands, which can serve as nanorulers by themselves. DNA origami structures can be designed with specific attachment sites for other nanoscale components, such as nanoparticles, fluorophores, or proteins. By measuring the distances between these components on the origami structures, researchers can perform precise distance measurements at the nanoscale through atomic force microscopy (AFM) and the use of RNA. In the design process, DNA origami structures are equipped with predetermined binding sites for RNA molecules, strategically positioned to facilitate hybridization. Upon introducing RNA molecules, these hybridization events are measured using AFM, providing both visualization and precise nanoscale measurements.
MIT's Nanoruler
In 2004, MIT developed the Nanoruler, a machine that produced gratings with greater precision and speed than any other method at the time. To achieve this, MIT combined two conventional methods of grating fabrication, mechanical ruling and interference lithography, into a new technique: scanning beam interference lithography (SBIL). In "traditional" interference lithography, two beams of light interfere with each other producing fringes, similar to how two ripples in the water will create a standing wave where the two ripples meet. The standing wave is the fringe, which gets recorded onto a photoresist on top of a substrate, eventually becoming grating lines. The difference between the "traditional" interference lithography method and the SBIL method lies in the final gratings they produce. Interference lithography struggles to produce linear gratings, and the end results are often nonlinear; this is not ideal, since gratings should be linear and uniform. With SBIL, the approach is similar to that of interference lithography but utilizes two narrow UV laser beams that interfere and create fringes. The narrowness creates smaller distortions, and thereby more linear fringes, than interference lithography. The substrate is also attached to a stage, enabling the photoresist to move in a "scanning" motion. This motion creates the grating pattern with near 1 nm precision across a 300 mm substrate in around 20 minutes.
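The grating period written by any two-beam interference scheme, SBIL included, follows from the interference geometry: the fringe period equals the wavelength divided by twice the sine of the half-angle between the beams. The sketch below uses an illustrative UV wavelength (the 351.1 nm argon-ion line) and half-angles chosen for illustration; these numbers are assumptions, not the MIT design values.

```python
import math

def grating_period_nm(wavelength_nm, half_angle_deg):
    """Period of the interference fringes (and hence the grating) for two
    beams of the given wavelength crossing at 2 * half_angle."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

# Illustrative numbers (assumed, not taken from the MIT design):
wl = 351.1           # nm, a UV argon-ion laser line
for angle in (25.0, 45.0, 61.4):
    print(f"half-angle {angle:5.1f} deg -> period {grating_period_nm(wl, angle):6.1f} nm")
```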
Applications
Nanotechnology is a modern field that has yet to be fully understood. Nanorulers allow scientists to investigate the fundamental building blocks of matter, including atoms and molecules, which is essential for advancing our knowledge of the physical and chemical properties of materials. As research on nanomaterials advances, it is important for the manufacturing of these nanomaterials to be scalable and efficient.
Life sciences have particularly benefited from nanotechnology. Nanoscale measurements are used for characterizing nanoparticles for drug delivery, studying biological molecules, and exploring cell structures at the nanoscale. Additionally, SPR has been well-established and widely used for biosensing.
In MIT's case, they developed the Nanoruler machine in order to manufacture semiconductors with higher precision and at faster speeds. Reducing size is important in this industry because the smaller each component is, the more components can be placed within a device such as a microchip, making the microchip more powerful.
References
Nanotechnology | Nanoruler | [
"Materials_science",
"Engineering"
] | 1,426 | [
"Nanotechnology",
"Materials science"
] |
1,545,928 | https://en.wikipedia.org/wiki/Aggregate%20%28composite%29 | Aggregate is the component of a composite material that resists compressive stress and provides bulk to the material. For efficient filling, aggregate should be much smaller than the finished item, but have a wide variety of sizes. Aggregates are generally added to lower the amount of binders needed and to increase the strength of composite materials.
Sand and gravel are used as construction aggregate with cement to make concrete and increase its mechanical strength. Aggregates make up 60-80% of the volume of concrete and 70-85% of the mass of concrete.
Comparison to fiber composites
Aggregate composites are easier to fabricate, and more predictable in their finished properties, than fiber composites. Fiber orientation and continuity can have a large effect, but can be difficult to control and assess. Aggregate materials are generally less expensive. Mineral aggregates are found in nature and can often be used with minimal processing.
Not all composite materials include aggregate. Aggregate particles tend to have about the same dimensions in every direction (that is, an aspect ratio of about one), so that aggregate composites do not display the level of synergy that fiber composites often do. A strong aggregate held together by a weak matrix will be weak in tension, whereas fibers can be less sensitive to matrix properties, especially if they are properly oriented and run the entire length of the part (i.e., a continuous filament).
Most composites are filled with particles whose aspect ratio lies somewhere between oriented filaments and spherical aggregates. A good compromise is chopped fiber, where the performance of filament or cloth is traded off in favor of more aggregate-like processing techniques. Ellipsoid and plate-shaped aggregates are also sometimes used.
Properties
In most cases, the ideal finished piece would be 100% aggregate. A given application's most desirable quality (be it high strength, low cost, high dielectric constant, or low density) is usually most prominent in the aggregate itself. However, the aggregate lacks the ability of a liquid to flow and fill up a volume, and to form attachments between particles.
Aggregate size
Experiments and mathematical models show that more of a given volume can be filled with hard spheres if it is first filled with large spheres, then the spaces between (interstices) are filled with smaller spheres, and the new interstices filled with still smaller spheres as many times as possible. For this reason, control of particle size distribution can be quite important in the choice of aggregate; appropriate simulations or experiments are necessary to determine the optimal proportions of different-sized particles.
The upper limit to particle size depends on the amount of flow required before the composite sets (the gravel in paving concrete can be fairly coarse, but fine sand must be used for tile mortar), whereas the lower limit is due to the thickness of matrix material at which its properties change (clay is not included in concrete because it would "absorb" the matrix, preventing a strong bond to other aggregate particles). Particle size distribution is also the subject of much study in the fields of ceramics and powder metallurgy.
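One common way of specifying such a multi-size distribution in practice is a power-law (Fuller–Thompson type) gradation curve giving the cumulative fraction of aggregate finer than each sieve size. The exponent of roughly 0.45 and the sieve sizes in the sketch below are conventional illustrative values, assumed here rather than taken from this article.

```python
# Fuller-Thompson style gradation: cumulative mass fraction finer than size d.
#   P(d) = (d / d_max) ** n, with n ~ 0.45-0.5 for dense packing of aggregate.
D_MAX = 20.0   # mm, largest aggregate size (illustrative)
N = 0.45       # packing exponent (conventional value, assumed)

def fraction_passing(d_mm, d_max=D_MAX, n=N):
    return (d_mm / d_max) ** n

for sieve in (20.0, 10.0, 5.0, 2.0, 0.5):
    print(f"{sieve:5.1f} mm sieve: {fraction_passing(sieve):.0%} passing")
```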
Toughened composites
Toughness is a compromise between the (often contradictory) requirements of strength and plasticity. In many cases, the aggregate will have one of these properties, and will benefit if the matrix can add what it lacks. Perhaps the most accessible examples of this are composites with an organic matrix and ceramic aggregate, such as asphalt concrete ("tarmac") and filled plastic (i.e., Nylon mixed with powdered glass), although most metal matrix composites also benefit from this effect. In this case, the correct balance of hard and soft components is necessary or the material will become either too weak or too brittle.
Nanocomposites
Many materials properties change radically at small length scales (see nanotechnology). In the case where this change is desirable, a certain range of aggregate size is necessary to ensure good performance. This naturally sets a lower limit to the amount of matrix material used.
Unless some practical method is implemented to orient the particles in micro- or nano-composites, their small size and (usually) high strength relative to the particle-matrix bond allows any macroscopic object made from them to be treated as an aggregate composite in many respects.
While bulk synthesis of such nanoparticles as carbon nanotubes is currently too expensive for widespread use, some less extreme nanostructured materials can be synthesized by traditional methods, including electrospinning and spray pyrolysis. One important aggregate made by spray pyrolysis is glass microspheres. Often called microballoons, they consist of a hollow shell several tens of nanometers thick and approximately one micrometer in diameter. Casting them in a polymer matrix yields syntactic foam, with extremely high compressive strength for its low density.
Many traditional nanocomposites escape the problem of aggregate synthesis in one of two ways:
Natural aggregates: By far the most widely used aggregates for nano-composites are naturally occurring. Usually these are ceramic materials whose crystalline structure is extremely directional, allowing it to be easily separated into flakes or fibers. The nanotechnology touted by General Motors for automotive use is in the former category: a fine-grained clay with a laminar structure suspended in a thermoplastic olefin (a class which includes many common plastics like polyethylene and polypropylene). The latter category includes fibrous asbestos composites (popular in the mid-20th century), often with matrix materials such as linoleum and Portland cement.
In-situ aggregate formation: Many micro-composites form their aggregate particles by a process of self-assembly. For example, in high impact polystyrene, two immiscible phases of polymer (including brittle polystyrene and rubbery polybutadiene) are mixed together. Special molecules (graft copolymers) include separate portions which are soluble in each phase, and so are only stable at the interface between them, in the manner of a detergent. Since the number of this type of molecule determines the interfacial area, and since spheres naturally form to minimize surface tension, synthetic chemists can control the size of polybutadiene droplets in the molten mix, which harden to form rubbery aggregates in a hard matrix. Dispersion strengthening is a similar example from the field of metallurgy. In glass-ceramics, the aggregate is often chosen to have a negative coefficient of thermal expansion, and the proportion of aggregate to matrix adjusted so that the overall expansion is very near zero. Aggregate size can be reduced so that the material is transparent to infrared light.
See also
Construction aggregate
Aggregate (geology)
Interfacial Transition Zone (ITZ)
Saturated-surface-dry
References
Aggregate (composite)
Concrete
Composite materials
Granularity of materials | Aggregate (composite) | [
"Physics",
"Chemistry",
"Engineering"
] | 1,396 | [
"Structural engineering",
"Composite materials",
"Materials",
"Concrete",
"Particle technology",
"Granularity of materials",
"Matter"
] |
1,546,092 | https://en.wikipedia.org/wiki/Electrical%20mobility | Electrical mobility is the ability of charged particles (such as electrons or protons) to move through a medium in response to an electric field that is pulling them. The separation of ions according to their mobility in gas phase is called ion mobility spectrometry, in liquid phase it is called electrophoresis.
Theory
When a charged particle in a gas or liquid is acted upon by a uniform electric field, it will be accelerated until it reaches a constant drift velocity according to the formula
vd = μE,
where
vd is the drift velocity (SI units: m/s),
E is the magnitude of the applied electric field (V/m),
μ is the mobility (m2/(V·s)).
In other words, the electrical mobility of the particle is defined as the ratio of the drift velocity to the magnitude of the electric field:
μ = vd / E.
For example, the mobility of the sodium ion (Na+) in water at 25 °C is . This means that a sodium ion in an electric field of 1 V/m would have an average drift velocity of . Such values can be obtained from measurements of ionic conductivity in solution.
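A minimal numeric sketch of the relation above, in Python; the mobility value used here is an assumed, typical literature-style figure rather than one quoted in this article:

mu = 5.2e-8              # assumed electrical mobility of Na+ in water, m^2/(V*s)
E = 1.0                  # magnitude of the applied electric field, V/m
v_drift = mu * E         # drift velocity from the definition v_d = mu * E
print(f"drift velocity = {v_drift:.1e} m/s")   # about 5.2e-08 m/s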
Electrical mobility is proportional to the net charge of the particle. This was the basis for Robert Millikan's demonstration that electrical charges occur in discrete units, whose magnitude is the charge of the electron.
Electrical mobility is also inversely proportional to the Stokes radius of the ion, which is the effective radius of the moving ion including any molecules of water or other solvent that move with it. This is true because the solvated ion moving at a constant drift velocity is subject to two equal and opposite forces: an electrical force qE and a frictional force f·vd (equal to 6πηa·vd by Stokes' law), where f is the frictional coefficient, η is the solution viscosity and a is the Stokes radius. For different ions with the same charge such as Li+, Na+ and K+ the electrical forces are equal, so that the drift speed and the mobility are inversely proportional to the radius a. In fact, conductivity measurements show that ionic mobility increases from Li+ to Cs+, and therefore that Stokes radius decreases from Li+ to Cs+. This is the opposite of the order of ionic radii for crystals and shows that in solution the smaller ions (Li+) are more extensively hydrated than the larger (Cs+).
Mobility in gas phase
Mobility is defined for any species in the gas phase, encountered mostly in plasma physics, and is defined as
μ = q / (mν),
where
q is the charge of the species,
ν is the momentum-transfer collision frequency,
m is the mass.
Mobility is related to the species' diffusion coefficient through an exact (thermodynamically required) equation known as the Einstein relation:
μ = qD / (kBT),
where
kB is the Boltzmann constant,
T is the gas temperature,
D is the diffusion coefficient.
If one defines the mean free path in terms of momentum transfer, then one gets for the diffusion coefficient
But both the momentum-transfer mean free path and the momentum-transfer collision frequency are difficult to calculate. Many other mean free paths can be defined. In the gas phase, is often defined as the diffusional mean free path, by assuming that a simple approximate relation is exact:
where is the root mean square speed of the gas molecules:
where is the mass of the diffusing species. This approximate equation becomes exact when used to define the diffusional mean free path.
Applications
Electrical mobility is the basis for electrostatic precipitation, used to remove particles from exhaust gases on an industrial scale. The particles are given a charge by exposing them to ions from an electrical discharge in the presence of a strong field. The particles acquire an electrical mobility and are driven by the field to a collecting electrode.
Instruments exist which select particles with a narrow range of electrical mobility, or particles with electrical mobility larger than a predefined value. The former are generally referred to as "differential mobility analyzers". The selected mobility is often identified with the diameter of a singly charged spherical particle, thus the "electrical-mobility diameter" becomes a characteristic of the particle, regardless of whether it is actually spherical.
Passing particles of the selected mobility to a detector such as a condensation particle counter allows the number concentration of particles with the currently selected mobility to be measured. By varying the selected mobility over time, mobility vs concentration data may be obtained. This technique is applied in scanning mobility particle sizers.
References
Physical quantities
Electrophoresis
Mass spectrometry | Electrical mobility | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 878 | [
"Physical phenomena",
"Physical quantities",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantity",
"Mass",
"Biochemical separation processes",
"Molecular biology techniques",
"Mass spectrometry",
"Electrophoresis",
"Physical properties",
"Matter"
] |
1,546,202 | https://en.wikipedia.org/wiki/Vienna%20Standard%20Mean%20Ocean%20Water | Vienna Standard Mean Ocean Water (VSMOW) is an isotopic standard for water, that is, a particular sample of water whose proportions of different isotopes of hydrogen and oxygen are accurately known. VSMOW is distilled from ocean water and does not contain salt or other impurities. Published and distributed by the Vienna-based International Atomic Energy Agency in 1968, the standard and its essentially identical successor, VSMOW2, continue to be used as a reference material.
Water samples made up of different isotopes of hydrogen and oxygen have slightly different physical properties. As an extreme example, heavy water, which contains two deuterium (2H) atoms instead of the usual, lighter hydrogen-1 (1H), has a melting point of and boiling point of . Different rates of evaporation cause water samples from different places in the water cycle to contain slightly different ratios of isotopes. Ocean water (richer in heavy isotopes) and rain water (poorer in heavy isotopes) roughly represent the two extremes found on Earth. With VSMOW, the IAEA simultaneously published an analogous standard for rain water, Standard Light Antarctic Precipitation (SLAP), and eventually its successor SLAP2. SLAP contains about 5% less oxygen-18 and 42.8% less deuterium than VSMOW.
A scale based on VSMOW and SLAP is used to report oxygen-18 and deuterium concentrations. From 2005 until its redefinition in 2019, the kelvin was specified to be 1/273.16 of the temperature of specifically VSMOW at its triple point.
History and background
Abundances of a particular isotope in a substance are usually given relative to some reference material, as a delta in parts per thousand (‰) from the reference. For example, the ratio of deuterium (2H) to hydrogen-1 in a substance x may be given as
δ2Hx = [ (2H/1H)x / (2H/1H)reference − 1 ] × 1000‰,
where (2H/1H)x denotes the ratio of the absolute concentrations of the two isotopes in x.
In 1961, pursuing a standard for measuring and reporting deuterium and oxygen-18 concentrations, Harmon Craig of the Scripps Institution of Oceanography in San Diego, California, proposed an abstract water standard. He based the proportions on measurements of samples of ocean waters collected around the world. Approximating an average of these measurements, Craig defined his "standard mean ocean water" (SMOW) relative to a water sample held in the United States' National Bureau of Standards called NBS-1 (sampled from the Potomac River). In particular, SMOW had the following parameters relative to NBS-1:
δ 2H SMOW/NBS-1 = 50‰, i.e., an enrichment of 5%;
δ 18O SMOW/NBS-1 = 8‰, i.e., an enrichment of 0.8%.
Later, researchers at the California Institute of Technology defined another abstract reference, also called "SMOW", for oxygen-18 concentrations, such that a sample of Potsdam Sandstone in their possession satisfied .
To resolve the confusion, a November 1966 meeting of the Vienna-based International Atomic Energy Agency (IAEA) recommended the preparation of two water isotopic standards: Vienna SMOW (VSMOW; initially just "SMOW" but later disambiguated) and Standard Light Antarctic Precipitation (SLAP). Craig prepared VSMOW by mixing distilled Pacific Ocean water with small amounts of other waters. VSMOW was intended to match the SMOW standard as closely as possible. Craig's measurements found an identical 18O concentration and a 0.2‰ lower 2H concentration. The SLAP standard was created from a melted firn sample from Plateau Station in Antarctica. A standard with oxygen-18 and deuterium concentrations between that of VSMOW and SLAP, called Greenland Ice Sheet Precipitation (GISP), was also prepared. The IAEA began distributing samples in 1968, and compiled analyses of VSMOW and SLAP from 45 laboratories around the world. The VSMOW sample was stored in a stainless-steel container under nitrogen and was transferred to glass ampoules in 1977.
The deuterium and oxygen-18 concentrations in VSMOW are close to the upper end of naturally occurring materials, and the concentrations in SLAP are close to the lower end. Due to confusion over multiple water standards, the Commission on Isotopic Abundances and Atomic Weights recommended in 1994 that all future isotopic measurements of oxygen-18 (18O) and deuterium (2H) be reported relative to VSMOW, on a scale such that the δ18O of SLAP is −55.5‰ and the δ2H of SLAP is −428‰, relative to VSMOW. Therefore, SLAP is defined to contain 94.45% the oxygen-18 concentration and 57.2% the deuterium concentration of VSMOW. Using a scale with two defined samples improves comparison of results between laboratories.
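The use of two anchor points can be illustrated with a short Python sketch of two-point normalization; the "raw" instrument deltas below are invented for illustration, and only the assigned anchor values (0‰ for VSMOW, −428‰ for SLAP) come from the text above:

DELTA_SLAP_ASSIGNED = -428.0   # assigned delta-2H of SLAP relative to VSMOW, per mil

def normalize_to_vsmow_slap(raw_sample, raw_vsmow, raw_slap):
    """Map a raw measured delta onto the VSMOW-SLAP scale (two-point normalization)."""
    stretch = DELTA_SLAP_ASSIGNED / (raw_slap - raw_vsmow)
    return (raw_sample - raw_vsmow) * stretch

# Raw deltas measured against an arbitrary laboratory working gas (illustrative numbers).
print(normalize_to_vsmow_slap(raw_sample=-100.0, raw_vsmow=5.0, raw_slap=-420.0))  # about -105.7 per mil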
In December 1996, because of a dwindling supply of VSMOW, the IAEA decided to create a replacement standard, VSMOW2. Published in 1999, it contains a nearly identical isotopic mixture. About 300 liters was prepared from a mixture of distilled waters, from Lake Bracciano in Italy, the Sea of Galilee in Israel, and a well in Egypt, in proportions chosen to reach VSMOW isotopic ratios. The IAEA also published a successor to SLAP, called SLAP2, derived from melted water from four Antarctic drilling sites. Deviations of 17O and 18O in the new standards from the old standards are zero within the error of measurement. There is a small but measurable deviation of the 2H concentration in SLAP2 from SLAP (its δ2H is defined to be −427.5‰ instead of −428‰), but not in VSMOW2 from VSMOW. The IAEA recommends that measurements still be reported on the VSMOW–SLAP scale.
The older two standards are now kept at the IAEA and no longer sold.
Measurements
All measurements are reported with their standard uncertainty. Measurements of particular combinations of oxygen and hydrogen isotopes are unnecessary because water molecules constantly exchange atoms with each other.
VSMOW
Except for tritium, which was determined by the helium gas emitted by radioactive decay, these measurements were taken using mass spectroscopy.
Deuterium (2H / 1H) – , about 1 in 6420 hydrogen atoms
Tritium (3H / 1H) – = , measured on 16 September 1976, about 1 in hydrogen atoms
Oxygen-18 (18O / 16O) – , about 1 in 499 oxygen atoms
Oxygen-17 (17O / 16O) – , about 1 in 2640 oxygen atoms
SLAP
Based on the results of , the IAEA defined the delta scale with SLAP at −55.5‰ for 18O and −428‰ for 2H. That is, SLAP was measured to contain approximately 5.55% less oxygen-18 and 42.8% less deuterium than does VSMOW, and these figures were used to anchor the scale at two points. Experimental figures are given below.
2H / 1H – , , about 1 in 11230 atoms
3H / 1H – = , measured on 16 September 1976, about 1 in 2.67 × 10^15 atoms
18O / 16O – , , about 1 in 528 atoms
17O / 16O – , about 1 in 3700 atoms
VSMOW2 and SLAP2
The concentrations of 17O, and 18O are indistinguishable between VSMOW and VSMOW2, and between SLAP and SLAP2. The specification sheet gives the standard errors in these measurements. The concentration of 2H is unchanged in VSMOW2 as well, but is slightly increased in SLAP2. The IAEA reports:
δ2H of SLAP2 = −427.5‰ (compare −428‰ for SLAP).
On 6 July 2007, the tritium concentration was in VSMOW2, and in SLAP2.
GISP
δ 2H GISP −189.5 ± 1.2‰
δ 18O GISP −24.66 ± 0.09‰
δ 17O GISP −12.71 ± 0.1‰
Applications
Reporting isotopic ratios
The VSMOW–SLAP scale is recommended by the USGS, IUPAC, and IAEA for measurement of deuterium and 18O concentrations in any substance. For 18O, a scale based on Vienna Pee Dee Belemnite can also be used. The physical samples, which are distributed by the IAEA and U.S. National Institute of Standards and Technology, are used to calibrate isotope-measuring equipment.
Variations in isotopic content are useful in hydrology, meteorology, and oceanography. Different parts of the ocean do have slightly different isotopic concentrations: δ 18O values range from –11.35‰ in water off the coast of Greenland to +1.32‰ in the north Atlantic, and δ 2H concentrations in deep ocean water range from roughly –1.7‰ near Antarctica to +2.2‰ in the Arctic. Variations are much larger in surface water than in deep water.
Temperature measurements
In 1954, the International Committee for Weights and Measures (CIPM) established the definition of the Kelvin as 1/273.16 of the absolute temperature of the triple point of water. Waters with different isotopic compositions had slightly different triple points. Thus, the International Committee for Weights and Measures specified in 2005 that the definition of the kelvin temperature scale would refer to water with a composition of the nominal specification of VSMOW. The decision was welcomed in 2007 by Resolution 10 of the 23rd CGPM. The triple point is measured in triple-point cells, where the water is held at its triple point and allowed to reach equilibrium with its surroundings. Using ordinary waters, the range of inter-laboratory measurements of the triple point can be about . With VSMOW, the inter-laboratory range of measurements of the triple point is about .
After the 2019 revision of the SI, the kelvin is defined in terms of the Boltzmann constant, which makes its definition completely independent of the properties of water. The defined value for the Boltzmann constant was selected so that the measured value of the VSMOW triple point is identical to the prior defined value, within measurable accuracy. Triple-point cells remain a practical method of calibrating thermometers.
See also
International Temperature Scale of 1990
Properties of water
Notes
Sources
Citations
External links
International Atomic Energy Agency – IAEA
ITS-90 – Swedish National Testing and Research Institute
ITS-90 – Omega Engineering
Scripps Institution of Oceanography
Temperature Sensors – information repository
Scientific data of water – London South Bank University
Vsmow
Standards | Vienna Standard Mean Ocean Water | [
"Physics",
"Chemistry"
] | 2,184 | [
"Environmental isotopes",
"Isotopes",
"Nuclear technology",
"Chemical oceanography",
"Nuclear physics"
] |
1,546,216 | https://en.wikipedia.org/wiki/Spirit%20of%20place | Spirit of place (or soul) refers to the unique, distinctive and cherished aspects of a place; often those celebrated by artists and writers, but also those cherished in folk tales, festivals and celebrations. It is thus as much in the invisible weave of culture (stories, art, memories, beliefs, histories, etc.) as it is the tangible physical aspects of a place (monuments, boundaries, rivers, woods, architectural style, rural crafts styles, pathways, views, and so on) or its interpersonal aspects (the presence of relatives, friends and kindred spirits, and the like).
Often the term is applied to a rural or a relatively unspoiled or regenerated place — whereas the very similar term sense of place would tend to be more domestic, urban, or suburban in tone. For instance, one could logically apply 'sense of place' to an urban high street; noting the architecture, the width of the roads and pavements, the plantings, the style of the shop-fronts, the street furniture, and so on, but one could not really talk about the 'spirit of place' of such an essentially urban and commercial environment. However, an urban area that looks faceless or neglected to an adult may have deep meaning in children's street culture.
The Roman term for spirit of place was Genius loci, by which it is sometimes still referred. This has often been historically envisaged as a guardian animal or a small supernatural being (puck, fairy, elf, and the like) or a ghost. In the developed world these beliefs have been, for the most part, discarded. A new layer of less-embodied superstition on the subject, however, has arisen around ley lines, feng shui and similar concepts, on the one hand, and urban leftover spaces, such as back alleys or gaps between buildings in some North-American downtown areas, on the other hand.
The western cultural movements of Romanticism and Neo-romanticism are often deeply concerned with creating cultural forms that 're-enchant the land', in order to establish or re-establish a spirit of place.
Modern earth art (sometimes called environment art) artists such as Andy Goldsworthy have explored the contribution of natural/ephemeral sculpture to spirit of place.
Many indigenous and tribal cultures around the world are deeply concerned with spirits of place in their landscape. Spirits of place are explicitly recognized by some of the world's main religions: Shinto has its Kami which may incorporate spirits of place; and the Dvarapalas and Lokapalas in Hinduism, Vajrayana and Bonpo traditions.
See also
Bioregionalism
Common Ground (United Kingdom)
Cultural landscape
Cultural region
Deep map
Genius loci
Landvættir
Nature writing
Parochialism
Psychogeography
Topophilia
References
External links
Common Ground (UK)
The arts
Cultural geography
Psychogeography
Landscape design history
Gardening
Landscape architecture
Environmental design | Spirit of place | [
"Engineering"
] | 604 | [
"Environmental design",
"Design",
"Landscape architecture",
"Architecture"
] |
1,546,265 | https://en.wikipedia.org/wiki/Constantin%20Sotiropoulos | Constantin Sotiropoulos is the co-creator (with François Lionet) of AMOS BASIC, a popular video game and multimedia programming language for the Amiga computer, and STOS BASIC on the Atari ST.
He has also created copy protection software for some French companies.
Before joining the 16-bit scene, he also developed Speedy Wonder (BASIC compiler) on the Amstrad and Thomson 8-bit micro-computers, released by French company Minipuce, and ML1, a macro-assembler for the same machines (released by Micro-Application). Both programs were co-developed with Youri Beltchenko.
References
French computer programmers
Living people
Year of birth missing (living people)
Place of birth missing (living people) | Constantin Sotiropoulos | [
"Technology"
] | 152 | [
"Computing stubs",
"Computer specialist stubs"
] |
13,353,024 | https://en.wikipedia.org/wiki/1-Aminopropan-2-ol | 1-Aminopropan-2-ol is the organic compound with the formula . It is an amino alcohol. The term isopropanolamine may also refer more generally to the additional homologs diisopropanolamine (DIPA) and triisopropanolamine (TIPA).
1-Aminopropan-2-ol is chiral. It can be prepared by the addition of aqueous ammonia to propylene oxide.
Biosynthesis
(R)-1-Aminopropan-2-ol is one of the components incorporated in the biosynthesis of cobalamin. The O-phosphate ester is produced from threonine by the enzyme Threonine-phosphate decarboxylase.
Applications
The isopropanolamines are used as buffers. They are good solubilizers of oil and fat, so they are used to neutralize fatty acids and sulfonic acid-based surfactants.
Racemic 1-aminopropan-2-ol is typically used in metalworking fluid, waterborne coatings, personal care products, and in the production of titanium dioxide and polyurethanes. It is an intermediate in the synthesis of a variety of pharmaceutical drugs.
(R)-1-aminopropan-2-ol is metabolised to aminoacetone by the enzyme (R)-aminopropanol dehydrogenase.
The synthesis of hexylcaine is one such application.
References
Amines
Secondary alcohols
Amino alcohols | 1-Aminopropan-2-ol | [
"Chemistry"
] | 315 | [
"Functional groups",
"Organic compounds",
"Amines",
"Bases (chemistry)",
"Amino alcohols"
] |
13,353,499 | https://en.wikipedia.org/wiki/Receptacle%20%28botany%29 | In botany, the receptacle refers to vegetative tissues near the end of reproductive stems that are situated below or encase the reproductive organs.
Angiosperms
In angiosperms, the receptacle or torus (an older term is thalamus, as in Thalamiflorae) is the thickened part of a stem (pedicel) from which the flower organs grow. In some accessory fruits, for example the pome and strawberry, the receptacle gives rise to the edible part of the fruit. The fruit of Rubus species is a cluster of drupelets on top of a conical receptacle. When a raspberry is picked, the receptacle separates from the fruit, but in blackberries, it remains attached to the fruit.
In the daisy family (Compositae or Asteraceae), small individual flowers are arranged on a round or dome-like structure that is also called receptacle.
Algae and bryophyta
In phycology, receptacles occur at the ends of branches of algae mainly in the brown algae or Heterokontophyta in the order Fucales. They are specialised structures which contain the reproductive organs called conceptacles. Receptacles also function as a structure that captures food.
References
Plant morphology
Algal anatomy | Receptacle (botany) | [
"Biology"
] | 276 | [
"Plant morphology",
"Plants"
] |
13,353,520 | https://en.wikipedia.org/wiki/Urea%20nitrate | Urea nitrate is a fertilizer-based high explosive that has been used in improvised explosive devices in Afghanistan, Pakistan, Iraq, and various terrorist acts elsewhere in the world such as in the 1993 World Trade Center bombings. It has a destructive power similar to better-known ammonium nitrate explosives, with a velocity of detonation between and . It has chemical formula of or .
Urea nitrate is produced in one step by reaction of urea with nitric acid. This is an exothermic reaction, so steps must be taken to control the temperature.
It was discovered in 1797 by William Cruickshank, inventor of the Chloralkali process.
Urea nitrate explosions may be initiated using a blasting cap.
Chemistry
Urea contains a carbonyl group. The more electronegative oxygen atom pulls electrons away from the carbon atom, forming a polar bond with greater electron density around the oxygen atom, giving it a partial negative charge. In a simplistic sense, nitric acid dissociates in aqueous solution into protons (hydrogen cations) and nitrate anions. The electrophilic proton contributed by the acid is attracted to the negatively charged oxygen atom on the urea molecule and the two form a covalent bond. The formed O-H bond is stabilized into a hydroxyl group when the oxygen abstracts an electron pair away from the central carbon atom, which leads to bond resonance between it and the two amino groups. As such, the urea cation can be thought of as an amidinium species. Paired with the spectator nitrate counteranion, it forms urea nitrate.
The compound is favored by many amateur explosive enthusiasts as a principal explosive for use in larger charges. In this role it acts as a substitute for ammonium nitrate based explosives. This is due to the ease of acquiring the materials necessary to synthesize it, and its greater sensitivity to initiation compared to ammonium nitrate based explosives.
References
Further reading
External links
Explosive chemicals
Nitrates
Ureas | Urea nitrate | [
"Chemistry"
] | 408 | [
"Nitrates",
"Salts",
"Organic compounds",
"Oxidizing agents",
"Explosive chemicals",
"Ureas"
] |
13,353,724 | https://en.wikipedia.org/wiki/QH-II-66 | QH-II-66 (QH-ii-066) is a sedative drug which is a benzodiazepine derivative. It produces some of the same effects as other benzodiazepines, but is much more selective than most other drugs of this class and so produces somewhat less sedation and ataxia than other related drugs such as diazepam and triazolam, although it still retains anticonvulsant effects.
QH-ii-066 is a highly subtype-selective GABAA agonist which was designed to bind selectively to the α5 subtype of GABAA receptors.
The α5 subtype (and to a lesser extent the α1 subtype) of GABAA are two of the most important targets in the brain that produce the effects of alcohol, and so one of the purposes for which QH-ii-066 was developed was to reproduce the GABAergic effects of alcohol separately from its other actions.
QH-ii-066 replicates some of the effects of alcohol, such as sedation and ataxia, but does not increase appetite, as this effect seems to be produced by the α1 subtype of GABAA rather than α5. The inverse agonist Ro15-4513, which blocks the α5 subtype of GABAA, reverses the effects of alcohol, suggesting that this subtype is also important in producing the subjective effects of alcohol intoxication.
See also
GL-II-73
Pyeazolam
SH-I-048A
SH-053-R-CH3-2′F
References
Benzodiazepines
Sedatives
Hypnotics
Ethynyl compounds
Lactams | QH-II-66 | [
"Biology"
] | 361 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
13,353,871 | https://en.wikipedia.org/wiki/Dawson%E2%80%93G%C3%A4rtner%20theorem | In mathematics, the Dawson–Gärtner theorem is a result in large deviations theory. Heuristically speaking, the Dawson–Gärtner theorem allows one to transport a large deviation principle on a “smaller” topological space to a “larger” one.
Statement of the theorem
Let (Yj)j∈J be a projective system of Hausdorff topological spaces with maps pij : Yj → Yi. Let X be the projective limit (also known as the inverse limit) of the system (Yj, pij)i,j∈J, i.e.
X = { (yj)j∈J ∈ ∏j∈J Yj : yi = pij(yj) whenever i ≤ j },
equipped with the induced topology and the canonical projection maps pj : X → Yj.
Let (με)ε>0 be a family of probability measures on X. Assume that, for each j ∈ J, the push-forward measures (pj∗με)ε>0 on Yj satisfy the large deviation principle with good rate function Ij : Yj → R ∪ {+∞}. Then the family (με)ε>0 satisfies the large deviation principle on X with good rate function I : X → R ∪ {+∞} given by
I(x) = supj∈J Ij(pj(x)), for x ∈ X.
References
(See theorem 4.6.1)
Asymptotic analysis
Large deviations theory
Probability theorems | Dawson–Gärtner theorem | [
"Mathematics"
] | 245 | [
"Mathematical analysis",
"Theorems in probability theory",
"Asymptotic analysis",
"Mathematical problems",
"Mathematical theorems"
] |
13,354,070 | https://en.wikipedia.org/wiki/Varadhan%27s%20lemma | In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
Statement of the lemma
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
limM→∞ limsupε→0 ε log E[ exp(ϕ(Zε)/ε) 1(ϕ(Zε) ≥ M) ] = −∞,
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
limsupε→0 ε log E[ exp(γϕ(Zε)/ε) ] < +∞.
Then
limε→0 ε log E[ exp(ϕ(Zε)/ε) ] = supx∈X ( ϕ(x) − I(x) ).
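For example, if X = R and Zε is a centered Gaussian random variable with variance ε (so that the large deviation principle holds with good rate function I(x) = x²/2), then for ϕ(x) = cx one has E[exp(ϕ(Zε)/ε)] = exp(c²/(2ε)), and both sides of the conclusion equal c²/2, since supx∈R (cx − x²/2) = c²/2.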
See also
Laplace principle (large deviations theory)
References
(See theorem 4.3.1)
Asymptotic analysis
Lemmas
Probability theorems
Theorems in statistics
Large deviations theory | Varadhan's lemma | [
"Mathematics"
] | 248 | [
"Mathematical analysis",
"Theorems in statistics",
"Theorems in probability theory",
"Asymptotic analysis",
"Mathematical problems",
"Mathematical theorems",
"Lemmas"
] |
13,354,396 | https://en.wikipedia.org/wiki/Sanno%20Park%20Tower | is a 44-story skyscraper in Nagatachō, Chiyoda ward, Tokyo, Japan. It is the 8th highest building of the ward, after the Shin-Marunouchi Building, JP Tower, GranTokyo, etc. The building stands over Tameike-Sannō Station, which is served by the Tokyo Metro Ginza Line (G06) and the Tokyo Metro Namboku Line (N06), and linked to Kokkai-gijidō-mae Station (M14) on the Tokyo Metro Marunouchi Line.
History
The site of the building was historically occupied by the Sanno Hotel, which was one of the leading luxury hotels in Tokyo at the time of its opening in 1932. The hotel served as the headquarters for dissident military units during the February 26 Incident in 1936, and as a United States Armed Forces family housing, billeting and lodging facility from 1946 to 1983. In October 1983, the site was returned to its original owner, Anzen Motor Car Co., and the hotel was closed, being replaced by the New Sanno Hotel in the Minami-Azabu area of Tokyo. The site was then vacant until 1996 as various re-development plans led by Mitsubishi Estate failed to materialize; at one time the building was designed to have over fifty floors. Construction on the site began in 1996 and was completed in January 2000.
Just at the foot of the Hie Shrine, the skyscraper overlooks both the Prime Minister of Japan's official residence (Sōri Kantei), across the street to the northeast, and the Diet Building. As a consequence, windows of the tower in the direction of the Prime Minister's residence are all locked, and the residence is designed with a barrier wall and no windows facing the tower. To achieve the building's height while obeying floor area ratio limitations under local zoning laws, air rights were bought from the neighboring Hie Shrine.
Tenants
The authorized Japanese dealer of Pagani, an Italian sports car manufacturer, is located in the Sanno Park Tower Annex.
The headquarters of the largest mobile carrier in Japan, NTT docomo, are located on the 7th to 9th and 27th to 44th floors.
The lower floors house Tokyo offices of several multinational corporations, such as Lazard, Deutsche Bank, DuPont, Cushman & Wakefield, Philip Morris, Standard Chartered, Munich Re, Estée Lauder and Canonical.
The headquarters of the Consumer Affairs Agency of Japan was located on the 4th to 6th floors until March 2016.
The "Skylobby" at the 27th floor has restaurants and a closed observation deck.
References
External links
Sanno Park Tower - Mitsubishi Estate
Skyscrapers in Chiyoda, Tokyo
Skyscraper office buildings in Tokyo
Office buildings completed in 2000
Mitsubishi Estate
2000 establishments in Japan
NTT Docomo | Sanno Park Tower | [
"Technology"
] | 568 | [
"Members of the Conexus Mobile Alliance",
"NTT Docomo"
] |
13,354,642 | https://en.wikipedia.org/wiki/Naphthenic%20acid | Naphthenic acids (NAs) are mixtures of several cyclopentyl and cyclohexyl carboxylic acids with molecular weights of 120 to well over 700 atomic mass units. The main fractions are carboxylic acids with a carbon backbone of 9 to 20 carbons. McKee et al. claim that "naphthenic acids (NAs) are primarily cycloaliphatic carboxylic acids with 10 to 16 carbons", although acids containing up to 50 carbons have been identified in heavy petroleum.
Nomenclature
Naphthenic acid can refer to derivatives and isomers of naphthalene carboxylic acids. In the petrochemical industry, NA's refer to alkyl carboxylic acids found in petroleum. The term naphthenic acid has roots in the somewhat archaic term "naphthene" (cycloaliphatic but non-aromatic) used to classify hydrocarbons. It was originally used to describe the complex mixture of petroleum-based acids when the analytical methods available in the early 1900s could identify only a few naphthene-type components with accuracy. Today "naphthenic" acid is used in a more generic sense to refer to all of the carboxylic acids present in petroleum, whether cyclic, acyclic, or aromatic compounds, and carboxylic acids containing heteroatoms such as N and S. Although commercial naphthenic acids often contain a majority of cycloaliphatic acids, multiple studies have shown they also contain straight chain and branched aliphatic acids and aromatic acids; some naphthenic acids contain >50% combined aliphatic and aromatic acids.
Salts of naphthenic acids, called naphthenates, are widely used as hydrophobic sources of metal ions in diverse applications.
Classification
Naphthenic acids are represented by a general formula CnH2n-zO2, where n indicates the carbon number and z specifies a homologous series. The z is equal to 0 for saturated, acyclic acids and increases to 2 in monocyclic naphthenic acids, to 4 in bicyclic naphthenic acids, to 6 in tricyclic acids, and to 8 in tetracyclic acids. Crude oils with a total acid number (TAN) as low as 0.5 mg KOH/g, or petroleum fractions with a TAN greater than about 1.0 mg KOH/g, usually qualify as high acid crudes or oils. At the 1.0 mg/g TAN level, acidic crude oils begin to be heavily discounted in value and so are referred to as opportunity crudes. Commercial grades of naphthenic acid are most often recovered from kerosene/jet fuel and diesel fractions, where their corrosivity and negative impact on burning qualities require their removal. Naphthenic acids are also a major contaminant in water produced during the extraction of oil from Athabasca oil sands.
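The classification formula lends itself to a simple worked example; the following Python sketch and the compound chosen are illustrative only:

def z_value(n_carbon, n_hydrogen):
    """Return z for an acid with formula CnH(2n-z)O2."""
    return 2 * n_carbon - n_hydrogen

# Cyclohexanecarboxylic acid, C7H12O2: z = 2, i.e. one saturated ring.
z = z_value(7, 12)
print("z =", z, "-> rings:", z // 2)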
Sources and occurrence
Naphthenic acids are recovered from petroleum distillates by extraction with aqueous base. Acidification of this extract returns the acids, free from hydrocarbons. Naphthenic acid is removed from petroleum fractions not only to minimize corrosion but also to recover commercially useful products. Some crude oils are high in acidic compounds (up to 4%).
Naphthenic acid corrosion
The composition varies with the crude oil composition and the conditions during refining and oxidation. Fractions that are rich in naphthenic acids can cause corrosion damage to oil refinery equipment, a phenomenon known as naphthenic acid corrosion (NAC). Crude oils with a high content of naphthenic acids are often referred to as high total acid number (TAN) crude oils or high acid crude oils (HAC).
Metal naphthenates
As their greatest current and historical usage, naphthenic acids are used to produce metal naphthenates. Metal naphthenates are often referred to as "salts" of naphthenic acids, but metal naphthenates are not ionic. They are covalent, hydrophobic coordination complexes. More specifically, they are metal carboxylate complexes with the formula M(naphthenate)2, or M3O(naphthenate)6 for basic oxides. Metal naphthenates are not well defined in the conventional chemical sense because they are a complex mixture rather than a specific single component, structure or formula. They have diverse applications.
The complex mixture and hydrophobic nature of naphthenic acids allow metal naphthenates to be highly soluble in organic media such as petroleum-based hydrocarbons, often much more so than single-isomer carboxylates such as metal acetates and stearates, and their industrial applications exploit this property. The naphthenates are used as oil-borne detergents, lubricants, corrosion inhibitors, fuel and lubricating oil additives, wood preservatives, insecticides, fungicides, acaricides, wetting agents, thickening agents for napalm, and oil drying agents (driers) used in oil-based paint, varnish and wood surface treatment. Industrially useful metal naphthenates include those of aluminium, barium, calcium, cobalt, copper, iron, lead, magnesium, manganese, nickel, potassium, vanadium, zinc, and zirconium. Illustrative is the use of cobalt naphthenate for the oxidation of tetrahydronaphthalene to the hydroperoxide.
Environmental impact
Naphthenic acids are the major contaminant in water produced from the extraction of oil from Athabasca oil sands (AOS).
It has been stated that "naphthenic acids are the most significant environmental contaminants resulting from petroleum extraction from oil sands deposits." Nonetheless, the same authors suggest that "under worst-case exposure conditions, acute toxicity is unlikely in wild mammals exposed to naphthenic acids in AOS tailings pond water, but repeated exposure may have adverse health effects." Naphthenic acids are present in Athabasca oil sands and tailings pond water at an estimated concentration of 81 mg/L
Using Organisation for Economic Co-operation and Development [OECD] protocols for testing toxicity, refined NAs are not acutely genotoxic to mammals. Damage, however, induced by NAs while transient in acute or discontinuous exposure, may be cumulative in repeated exposure.
Naphthenic acids have both acute and chronic toxicity to fish and other organisms.
See also
Carboxylic acid
Organic acid
Resin acid
Paraffinic
References
External links
Article concerning refining crude oil with a high content of naphthenic acids
Crude oils with a high content of naphthenic acids in China's refineries
Crude oils containing naphthenic acids in the Grangemouth refinery
Overview of naphthenic acid corrosion
Literature survey of naphthenic acid corrosion
Removing naphthenic acids from the crude oil
Presentation by Nalco on naphthenic acid corrosion
Presentation by Baker Petrolite on naphthenic acid corrosion
Presentation by ChevronTexaco on crude oils with a high content of naphthenic acids
Information by Seth Laboratories on Naphthenic acid corrosion
Details regarding Kuwaitian heavy crudes and naphthenic acid corrosion
Article regarding naphthenic acid removal
Article regarding naphthenic acid species
Article abstract regarding molecular origins of heavy crude oil interfacial activity mainly caused by Naphthenic acids
Article about processes to remove Naphthenic acids
Article about stabilisation of water-in-oil emulsions by naphthenic acids
Spectrometric Identification of Naphthenic Acids Isolated from Crude Oil
Hydrogen flux and naphthenic acid corrosion
Petroleum products
Carboxylic acids
Cyclopentanes | Naphthenic acid | [
"Chemistry"
] | 1,743 | [
"Petroleum",
"Carboxylic acids",
"Functional groups",
"Petroleum products"
] |
13,355,399 | https://en.wikipedia.org/wiki/Syntax%20diagram | Syntax diagrams (or railroad diagrams) are a way to represent a context-free grammar. They represent a graphical alternative to Backus–Naur form, EBNF, Augmented Backus–Naur form, and other text-based grammars as metalanguages. Early books using syntax diagrams include the "Pascal User Manual" written by Niklaus Wirth (diagrams start at page 47) and the Burroughs CANDE Manual. In the compilation field, textual representations like BNF or its variants are usually preferred. BNF is text-based, and used by compiler writers and parser generators. Railroad diagrams are visual, and may be more readily understood by laypeople, sometimes incorporated into graphic design. The canonical source defining the JSON data interchange format provides yet another example of a popular modern usage of these diagrams.
Principle of syntax diagrams
The representation of a grammar is a set of syntax diagrams. Each diagram defines a "nonterminal" stage in a process. There is a main diagram which defines the language in the following way: to belong to the language, a word must describe a path in the main diagram.
Each diagram has an entry point and an end point. The diagram describes possible paths between these two points by going through other nonterminals and terminals. Historically, terminals have been represented by round boxes and nonterminals by rectangular boxes but there is no official standard.
Example
We use arithmetic expressions as an example, in various grammar formats.
BNF:
<expression> ::= <term> | <term> "+" <expression>
<term> ::= <factor> | <factor> "*" <term>
<factor> ::= <constant> | <variable> | "(" <expression> ")"
<variable> ::= "x" | "y" | "z"
<constant> ::= <digit> | <digit> <constant>
<digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
EBNF:
expression = term , [ "+" , expression ];
term = factor , [ "*" , term ];
factor = constant | variable | "(" , expression , ")";
variable = "x" | "y" | "z";
constant = digit , { digit };
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";
ABNF:
expression = term ["+" expression]
term = factor ["*" term]
factor = constant / variable / "(" expression ")"
variable = "x" / "y" / "z"
constant = 1*digit
DIGIT = "0" / "1" / "2" / "3" / "4" / "5" / "6" / "7" / "8" / "9"
ABNF also supports ranges, but they are not used here for consistency with the other examples.
Red (programming language) Parse Dialect:
Red [Title: "Parse Dialect"]
expression: [term opt ["+" expression]]
term: [factor opt ["*" term]]
factor: [constant | variable | "(" expression ")"]
variable: ["x" | "y" | "z"]
constant: [some digit]
digit: ["0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"]
This format also supports ranges, but they are not used here for consistency with the other examples.
One possible syntax diagram for the example grammars is below. While the syntax for the text-based grammars differs, the syntax diagram for all of them can be the same because it is a metalanguage.
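The same example grammar can also be recognized programmatically; the following recursive-descent sketch in Python is illustrative only, and its function names and structure are not taken from any of the cited formats:

def parse(text):
    pos = 0
    def peek():
        return text[pos] if pos < len(text) else ""
    def expression():          # expression = term ["+" expression]
        nonlocal pos
        if not term():
            return False
        if peek() == "+":
            pos += 1
            return expression()
        return True
    def term():                 # term = factor ["*" term]
        nonlocal pos
        if not factor():
            return False
        if peek() == "*":
            pos += 1
            return term()
        return True
    def factor():               # factor = constant | variable | "(" expression ")"
        nonlocal pos
        if peek() in ("x", "y", "z"):
            pos += 1
            return True
        if peek().isdigit():
            while peek().isdigit():
                pos += 1
            return True
        if peek() == "(":
            pos += 1
            if expression() and peek() == ")":
                pos += 1
                return True
        return False
    return expression() and pos == len(text)

print(parse("2*(x+31)"))   # True
print(parse("x+*y"))       # False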
See also
Recursive transition network
Extended Backus–Naur form (EBNF)
References
Note: the first link is sometimes blocked by the server outside of its domain, but it is available on archive.org. The file was also mirrored at standardpascal.org.
External links
JSON website including syntax diagrams
Generator from EBNF
From EBNF to a postscript file with the diagrams
EBNF Parser & Renderer
SQLite syntax diagram generator for SQL
Online Railroad Diagram Generator
Augmented Syntax Diagram (ASD) grammars
(ASD) Augmented Syntax Diagram Application Demo Site
SRFB Syntax Diagram representation by Function Basis + svg generation
Formal languages
Diagrams | Syntax diagram | [
"Mathematics"
] | 994 | [
"Formal languages",
"Mathematical logic"
] |
13,355,943 | https://en.wikipedia.org/wiki/Heterotrophic%20nutrition | Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. They can't make their own food like Green plants. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds that can be absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms exhibit only four types of nutrition.
Footnotes
References
Trophic ecology
Biological interactions | Heterotrophic nutrition | [
"Biology"
] | 282 | [
"Biological interactions",
"Ethology",
"Behavior",
"nan"
] |
13,356,100 | https://en.wikipedia.org/wiki/Convention%20over%20configuration | Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make without necessarily losing flexibility and don't repeat yourself (DRY) principles.
The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but is related to earlier ideas like the concept of "sensible defaults" and the principle of least astonishment in user interface design.
The phrase essentially means a developer only needs to specify unconventional aspects of the application. For example, if there is a class Sales in the model, the corresponding table in the database is called "sales" by default. It is only if one deviates from this convention, such as the table "product sales", that one needs to write code regarding these names.
When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
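A minimal sketch of the idea in Python; the mini-ORM below is hypothetical and written only to illustrate the principle:

class Model:
    table_name = None                      # explicit configuration, only when needed

    @classmethod
    def table(cls):
        # Convention: the table is the lowercased class name, e.g. Sales -> "sales".
        return cls.table_name or cls.__name__.lower()

class Sales(Model):
    pass                                   # follows the convention, no configuration

class ProductSales(Model):
    table_name = "product sales"           # deviates from the convention, so it is configured

print(Sales.table())                       # sales
print(ProductSales.table())                # product sales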
Ruby on Rails' use of the phrase is particularly focused on its default project file and directory structure, which prevent developers from having to write XML configuration files to specify which modules the framework should load, which was common in many earlier frameworks.
Disadvantages
Disadvantages of the convention over configuration approach can occur due to conflicts with other software design principles, such as the Zen of Python's "explicit is better than implicit." A software framework based on convention over configuration often involves a domain-specific language with a limited set of constructs, or an inversion of control in which the developer can only affect behavior through a limited set of hooks. Both can make behaviors that are not easily expressed by the provided conventions more difficult to implement than they would be with a software library that does not try to reduce the number of decisions developers make or require inversion of control.
Other methods of decreasing the number of decisions a developer needs to make include programming idioms and configuration libraries with a multilayered architecture.
Motivation
Some frameworks need multiple configuration files, each with many settings. These provide information specific to each project, ranging from URLs to mappings between classes and database tables. Many configuration files with many parameters are often difficult to maintain.
For example, early versions of the Java persistence mapper Hibernate mapped entities and their fields to the database by describing these relationships in XML files. Most of this information could have been revealed by conventionally mapping class names to the identically named database tables and the fields to their columns, respectively. Later versions did away with the XML configuration file and instead employed these very conventions, deviations from which can be indicated through the use of Java annotations (see JavaBeans specification, linked below).
Usage
Many modern frameworks use a convention over configuration approach.
The concept is older, however, dating back to the concept of a default, and can be spotted more recently in the roots of Java libraries. For example, the JavaBeans specification relies on it heavily. To quote the JavaBeans specification 1.01: "As a general rule we don't want to invent an enormous java.beans.everything class that people have to inherit from. Instead we'd like the JavaBeans runtimes to provide default behaviour for 'normal' objects, but to allow objects to override a given piece of default behaviour by inheriting from some specific java.beans.something interface."
See also
Comparison of web frameworks
Convention over Code
Markedness
Rapid application development
References
External links
Object-oriented programming
Software design | Convention over configuration | [
"Engineering"
] | 729 | [
"Design",
"Software design"
] |
13,356,113 | https://en.wikipedia.org/wiki/Mabey%20Logistic%20Support%20Bridge | The Mabey Logistic Support Bridge (in the United States, the Mabey-Johnson Bridge) is a portable pre-fabricated truss bridge, designed for use by military engineering units to upgrade routes for heavier traffic, replace civilian bridges damaged by enemy action or floods etc., replace assault and general support bridges and to provide a long span floating bridge capability. The bridge is a variant of the Mabey Compact 200 bridge, with alterations made to suit the military user as well as a ramp system to provide ground clearance to civilian and military vehicles.
Description
The Logistic Support Bridge is a non-assault bridge for the movement of supplies and the re-opening of communications. It is a low-cost system that can be used widely throughout the support area, as well as for a range of defined applications. All types of vehicles including civilian vehicles with low ground clearances are accommodated.
The Mabey Logistic Support Bridge originated from the Bailey bridge concept. Compared with World War II material in use throughout the world, LSB is manufactured with chosen modern steel grades, with a strong steel deck system. With strong deep transoms, there are only two per bay instead of the four previously needed on Bailey bridges.
Beyond the need for the re-opening of communications, Logistic Support Bridge-based equipment (Compact 200) can be used as a rescue bridge for relief in natural disaster situations or as a civilian bridge for semi-permanent bridging to open up communications in some of the most remote regions of the world.
Users
The bridge is manufactured by Mabey Group at its Mabey Bridge factory in Lydney, Gloucestershire (Mabey Group's original factory, now closed, was in Chepstow and manufactured large bridge girders; in May 2019, the Group sold Mabey Bridge to the US-based Acrow Bridge).
The name LSB was given by the British Army (Royal Engineers) to supply bridging to satisfy their specific requirements for a logistic or line of communication bridging. The LSB went into service with the British Army on 21 December 2001. The system is proved and approved by a number of NATO forces.
Armies from a number of countries around the world own equipment or have trained and deployed on the system, notably during the crisis in the Balkans. These armies include Argentina, Austria, Belgium, Brazil, Bulgaria, Cambodia, Canada, Chile, Denmark, Ecuador, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Malaysia, Nepal, Netherlands, Romania, Sri Lanka, Slovakia, Slovenia, Spain, Sweden, Switzerland, Tanzania, Turkey, Venezuela, United Kingdom, United States.
The bridge has been built in many locations across Iraq and Afghanistan by the U.S. Naval Mobile Construction Battalions (Seabees) and the United States Army Corps of Engineers.
During the Canadian Operation Athena, members of 1 CER on Task Force 3-09 constructed a Mabey Logistic Support Bridge over a pre-existing bridge after a vehicle-borne suicide bomb detonated on the bridge close to Kandahar Airfield.
The Swedish Transport Administration also uses the bridge where it is used during road renovation and construction or as a stop gap after road damage.
Features
The bridge takes military load class 80 Tracked, 110 Wheeled
The bridge can span up to 61m
LSB has a lane width of 4.2m
Multi-span equipment enables the bridge to be built to any length on fixed or floating supports
Built on a greenfield site using grillages, ground beams and ramps
The bridge is normally built using an atc244 22 ton capacity crane or a hydraulic excavator using a bucket with a lifting eye.
System description
The LSB combines standard off the shelf equipment with a range of purpose designed special equipment to meet the expectations of modern military loads and traffic expectations.
Panels —These are the main structural components of the bridge trusses. They are welded items comprising top and bottom chords interconnected by vertical and diagonal bracing. At the end of each panel, chords terminate in male lugs or eyes and at the other end in female lugs or eyes. This allows panels to be pinned together to form the bridge span. There are two different panels; a Super Panel and a High Shear Super Panel. The High Shear Super Panel is used at each end of the bridge span depending upon the loading criteria.
Chord reinforcement —These are constructed in the same way as the chords of the bridge panels and are bolted to the panels to increase the bending capacity of the bridge. For the LSB a heavy chord reinforcement is used.
Transoms —These are fabricated from universal beams and form the cross girders of the bridge, spanning between the panels and carrying the bridge deck. The transom is designed for the appropriate loading criteria and for LSB is designed to accommodate MLC80T/110W.
Decks —Unlike wooden Bailey decks, the steel LSB decks are 1.05m x 3.05m and are manufactured using robotic welding technology. The decks are manufactured to have a long fatigue life and with durbar/checkered plate finish. The decks withstand both wheeled and tracked vehicles.
Bracing —A variety of bracing members are used to connect panels to form the bridge trusses and to brace adjacent transoms to the bridge.
Grillages and Ground Beams —On greenfield sites and when being used as an over bridge, ground beams are available that form an assembly which transmits all dead and live forces from the bridge into the ground. For a 40m (MLC80T/110W) bridge the ground bearing pressure is 200 kN/m2. The grillages are located on the top of the ground beams and accommodate the bridge bearings as well as the head of the ramp transom.
Ramps —The slope or profile of the ramps can be adjusted to allow for the passage of a range of civilian and military traffic. The length of a standard ramp at each end of the bridge is 13.5m. The ramps are bolted to the grillages and use standard deck units supported on special transoms. These transoms can be positioned at a variety of heights depending upon the set adopted with a special ramp post. The interface between the ramp and ground is a special toe ramp unit (1.5m)
Construction
The bridge can be constructed by the cantilever launch method without the need for any temporary intermediate support. This is achieved by erecting a temporary launching nose at the front of the bridge and pushing the bridge over the gap on rollers.
After pushing the bridge over the gap, the launching nose is dismantled and the bridge is jacked down onto its bearings. The launching nose is largely constructed from standard bridge components.
Floating variants
There are a number of floating versions of the Mabey LSB in use across Iraq: Floating Piers which consist of steel Flexifloat pontoon units, Landing Piers consisting of 16 pontoon units, and Intermediate Piers which consist of 8 pontoons each. Hand winches are mounted on steel trays which are bolted to the pontoons. The anchors are connected to the hand winches and pontoons via steel chain and polypropylene ropes. Special span junction decks allow for the rotation of the floating spans as the spans deflect under live load.
If the bridge is relatively short in terms of the number of spans, it may be possible to launch the complete bridge from one bank. On a long span bridge, launching intermediate spans and floating them into position on intermediate piers is more practical.
See also
Bailey bridge
Callender-Hamilton bridge
Mabey Group
Medium Girder Bridge
References
External links
www.mabey.com
www.mabeybridge.co.uk
Animated build of a Mabey Compact 200 Bridge
Portable bridges
Military bridging equipment | Mabey Logistic Support Bridge | [
"Engineering"
] | 1,553 | [
"Military bridging equipment",
"Military engineering"
] |
13,356,339 | https://en.wikipedia.org/wiki/Glossary%20of%20firearms%20terms | The following are terms related to firearms and ammunition topics.
A
Accurize, accurizing: The process of altering a stock firearm to improve its accuracy.
Ackley Improved: A type of firearm cartridge that underwent a process of fireforming to contain more propellant to improve the performance of the round. The term may also refer to cutting down the cartridge to contain a different caliber of projectile.
Action: The physical mechanism that manipulates cartridges and/or seals the breech. The term refers to the method in which cartridges are loaded, locked, and extracted from the mechanism. Actions are generally categorized by the type of mechanism used. A firearm action is technically not present on muzzleloaders as all loading is done by hand. The mechanism that fires a muzzleloader is called the lock.
Adjustable sight: Any aiming mechanism, usually iron sights, that allows the user to move the reticle up or down (elevation), and left or right (windage), in order to compensate for wind and distance.
Ammunition or ammo: Can be described as anything that can be launched or thrown. In the case of modern firearms, usually refers to the assembly that is made up of a brass, steel, aluminum, or (rarely) a polymer case. The case contains the priming compound, usually in its own removable assembly called a primer. The case will also contain the charge of smokeless gunpowder, or sometimes black powder, and will be topped off by the projectile.
Anti-glare: A grooved/textured surface detail found above the barrel that prevents reflected light (glare) from interfering with target acquisition.
Assault rifle: A select-fire service rifle that fires intermediate cartridges.
Assault weapon: A term used in some jurisdictions within the United States, usually used to describe semi-automatic rifles that fire from a detachable magazine.
Automatic fire: A weapon capable of automatic fire is one that will continually expend ammunition for as long as the trigger is held.
Automatic pistol: A pistol that is capable of automatic fire; a machine pistol.
Automatic rifle: A self-loading rifle that is capable of automatic fire.
B
Back bore, backbored barrel: A shotgun barrel whose internal diameter is greater than nominal for the gauge, but less than the SAAMI maximum. Done in an attempt to reduce felt recoil, improve patterning, or change the balance of the shotgun.
Bandolier or bandoleer: A pocketed belt for holding ammunition and cartridges, usually slung across the chest. Bandoliers are now rare because most military arms use magazines, which are not well-suited to being stored in a bandolier. They are, however, still commonly used with shotguns, as a traditional bandolier conveniently stores individual shells.
Barrel: A tube, usually metal, through which a controlled explosion or rapid expansion of gases are released to propel a projectile out of the end at high velocity.
Barrel nut: A firearm component used on barrels. On handguards, a barrel nut may refer to the component that holds the handguards to the barrel. On machine guns, a barrel nut is a screw on component at the rear of the barrel that has locking lugs and a notch for quick barrel change and helps install it in the trunnion.
Ballistic coefficient (BC): A measure of a projectile's ability to overcome air resistance in flight. It is inversely proportional to the deceleration – a high number indicates a low deceleration. BC is a function of mass, diameter, and drag coefficient.
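As an illustration of how those quantities combine, the sketch below uses one common convention, expressing BC as sectional density (mass over diameter squared, in pounds and inches) divided by a form factor relative to a reference projectile shape; the 168 grain bullet and 0.95 form factor are assumed example values, not data for any particular load.

```python
# Illustrative sketch only: BC expressed as sectional density / form factor.
# Bullet weight and form factor below are assumed example values.

def sectional_density(mass_lb: float, diameter_in: float) -> float:
    """Sectional density in lb/in^2: bullet mass divided by diameter squared."""
    return mass_lb / diameter_in ** 2

def ballistic_coefficient(mass_lb: float, diameter_in: float, form_factor: float) -> float:
    """Higher BC means the projectile sheds velocity more slowly in flight."""
    return sectional_density(mass_lb, diameter_in) / form_factor

mass_lb = 168 / 7000.0  # 168 grains, using 7000 grains per avoirdupois pound
print(round(ballistic_coefficient(mass_lb, 0.308, 0.95), 3))  # ~0.266
```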
Ballistics: a field of mechanics concerned with the launching, flight behavior and impact effects of projectiles. Often broken down into internal ballistics, transitional ballistics, external ballistics and terminal ballistics.
Battle rifle: A service rifle capable of semi-automatic or fully automatic fire of a full-power rifle cartridge.
Bayonet lug: An attachment point at the muzzle end of a long gun for a bayonet.
Belt: An ammunition belt is a device used to retain and feed cartridges into some machine guns in place of a magazine.
Belted magnum or belt: Any caliber cartridge, generally rifles, using a shell casing with a pronounced "belt" around its base that continues 2 to 4 mm past the extractor groove. This design originated with the British gunmaker Holland & Holland for the purpose of headspacing certain of their more powerful cartridges. Especially the non-shouldered (non-"bottlenecked") magnum rifle cartridges could be pushed too far into the chamber and thus cause catastrophic failure of the gun when fired with excessive headspace; the addition of the belt to the casing prevented this over-insertion.
Bipod: A support device that is similar to a tripod or monopod, but with two legs. On firearms, bipods are commonly used on rifles and machine guns to provide a forward rest and reduce motion. The bipod permits the operator to rest the weapon on the ground, a low wall, or other object, reducing fatigue and permitting increased accuracy.
Black powder also called gunpowder: A mixture of sulfur, charcoal, and potassium nitrate. It burns rapidly, producing a volume of hot gas made up of carbon dioxide, water, and nitrogen, and a solid residue of potassium sulfide. Because of its burning properties and the amount of heat and gas volume that it generates, gunpowder has been widely used as a propellant in firearms and as a pyrotechnic composition in fireworks. Since 1886, most firearms use smokeless powder.
Black powder substitute: A firearm propellant that is designed to reproduce the burning rate and propellant properties of black powder (making it safe for use in black-powder firearms), while providing advantages in one or more areas such as reduced smoke, reduced corrosion, reduced cost, or decreased sensitivity to unintentional ignition.
Blank: A type of cartridge for a firearm that contains gunpowder but no bullet or shot. When fired, the blank makes a flash and an explosive sound (report). Blanks are often used for simulation (such as in historical reenactments, theatre and movie special effects), training, and for signaling (see starting pistol). Blank cartridges differ from dummy cartridges, which are used for training or function testing firearms; these contain no primer or gunpowder, and are inert.
Blank-firing adapter: Some weapons use an adapter fitted to the muzzle when firing blanks.
Blowback: A system of operation for self-loading firearms that obtains power from the motion of the cartridge case as it is pushed to the rear by expanding gases created by the ignition of the powder charge.
Blow-forward: A system of operation that pushes the weapon's bolt forwards to eject the bullet and cycle the action.
Bluing or blueing: A passivation process in which steel is partially protected against rust, and is named after the blue-black appearance of the resulting protective finish. True gun bluing is an electrochemical conversion coating resulting from an oxidizing chemical reaction with iron on the surface selectively forming magnetite (Fe3O4), the black oxide of iron, which occupies the same volume as metallic iron. Bluing is most commonly used by gun manufacturers, gunsmiths and gun owners to improve the cosmetic appearance of, and provide a measure of corrosion resistance to, their firearms.
Bolt action: A type of firearm action in which the firearm's bolt is operated manually by the opening and closing of the breech (barrel) with a small handle. As the handle is operated, the bolt is unlocked, the breech is opened, the spent shell casing is withdrawn and ejected, the firing pin is cocked, and a new round/shell (if available) is placed into the breech and the bolt closed.
Bolt thrust or breech pressure: The amount of rearward force exerted by the propellant gases on the bolt or breech of a firearm action or breech when a projectile is fired. The applied force has both magnitude and direction, making it a vector quantity.
Bolt: The part of a repeating, breech-loading firearm that blocks the rear opening (breech) of the barrel chamber while the propellant burns, and moves back and forward to facilitate loading/unloading of cartridges from the magazine. The extractor and firing pin are often integral parts of the bolt.
Bore rope: A tool used to clean the barrel of a gun.
Boresight: Crude adjustments made to an optical firearm sight, or iron sights, to align the firearm barrel and sights. This method is usually used to pre-align the sights, which makes zeroing (zero drop at XX distance) much faster.
Box magazine: A standard magazine, that is generally rectangular in shape, and used for loading ammunition.
Brass: The empty cartridge case.
Break-action: A firearm whose barrels are hinged, and rotate perpendicular to the bore axis to expose the breech and allow the loading and/or unloading of ammunition.
Breech: The part of a breechloader that is opened for the insertion of ammunition.
Breech pressure or bolt thrust: The amount of rearward force exerted by the propellant gases on the bolt or breech of a firearm action or breech when a projectile is fired. The applied force has both magnitude and direction, making it a vector quantity.
Buffer: A component that reduces the velocity of recoiling parts (such as the bolt).
Bullpup: A firearm configuration in which both the action and magazine are located behind the trigger.
Burst mode: A firing mode enabling the shooter to fire a predetermined number of rounds, with a single pull of the trigger.
Browning: John Moses Browning, an American firearms designer. The name is also used to refer to his designs, some of which include the M2 Browning, Browning Auto-5, and Browning Hi-Power.
Bullet: the small metal projectile that is part of a cartridge and is fired through the barrel. Sometimes, but incorrectly, used to refer to a cartridge.
Button rifling: Rifling that is formed by pulling a die made with reverse image of the rifling (the 'button') down the pre-drilled bore of a firearm barrel. See also cut rifling and hammer forging.
C
Caliber/calibre: In small arms, the internal diameter of a firearm's barrel or a cartridge's bullet, usually expressed in millimeters or hundredths of an inch; in measuring rifled barrels this may be measured across the lands (.303 British) or grooves (.308 Winchester) or; a specific cartridge for which a firearm is chambered, such as .45 ACP or .357 Magnum. In artillery, the length of the barrel expressed in terms of the internal bore diameter.
Caplock: An obsolete mechanism for discharging a firearm.
Carbine: A shortened version of a service rifle, often chambered in a less potent cartridge or; a shortened version of the infantryman's musket or rifle suited for use by cavalry.
Cartridge: The assembly consisting of a bullet, gunpowder, shell casing, and primer. When counting, it is referred to as a "round".
Caseless ammunition: A type of small arms ammunition that eliminates the cartridge case that typically holds the primer, propellant, and projectile together as a unit.
Casket magazine: A quad stack box magazine.
Centerfire: A cartridge in which the primer is located in the center of the cartridge case head. Unlike rimfire cartridges, the primer is a separate and replaceable component. The centerfire cartridge has replaced the rimfire in all but the smallest cartridge sizes. Except for low-powered .22 and .17 caliber cartridges, and a handful of antiques, all modern pistol, rifle, and shotgun ammunition are centerfire.
Chain gun: A type of single barrelled machine gun or autocannon that uses an external source of power to cycle the weapon.
Chamber: The portion of the barrel or firing cylinder in which the cartridge is inserted prior to being fired. Rifles and pistols generally have a single chamber in their barrels, while revolvers have multiple chambers in their cylinders and no chamber in their barrel.
Chambering: Inserting a round into the chamber, either manually or through the action of the weapon.
Charger: Commonwealth parlance for a stripper clip, a speedloader that holds several cartridges together in a single unit for easier loading of a firearm's magazine.
Charging handle: Device on a firearm which, when operated, results in the hammer or striker being cocked or moved to the ready position.
Choke: A tapered constriction of a shotgun barrel's bore at the muzzle end. Chokes are almost always used with modern hunting and target shotguns, to improve performance.
Clip: A device that is used to store multiple rounds of ammunition together as a unit, ready for insertion into the magazine of a repeating firearm. This speeds up the process of loading and reloading the firearm as several rounds can be loaded at once, rather than one round being loaded at a time.
COL (cartridge overall length): Factory ammunition is loaded to a standard, SAAMI-specified cartridge overall length so that the ammunition will reliably function in all firearms and action types. This specified O.A.L. has nothing to do with optimizing accuracy, and is typically much shorter than the O.A.L. used by handloaders for the same cartridge. For decades, the rule of thumb was that the closer the bullet is seated to the lands, the better the accuracy; it is now understood that this is not always true. Some bullets and some rifles perform best when bullets are seated out long enough to touch the lands, but other bullets perform best when they have a certain amount of "jump" to the lands. The only rule is: there is no rule.
Collateral damage: Damage that is unintended or incidental to the intended outcome. The term originated in the United States military, but it has since expanded into broader use.
Collimator sight: A type of optical "blind" sight that allows the user looking into it to see an illuminated aiming point aligned with the device the sight is attached to regardless of eye position (parallax free). The user can not see through the sight so it is used with both eyes open while one looks into the sight, with one eye open and moving the head to alternately see the sight and then at the target, or using one eye to partially see the sight and target at the same time. (variant names/types: "collimating sight","occluded eye gunsight" (OEG).)
Combination gun: A shoulder-held firearm that has two or more barrels, with at least one rifle barrel and one shotgun barrel. Most combination guns are of an over-under design (O/U), in which the two barrels are stacked vertically on top of each other, but side-by-side (SxS) versions, in which the two barrels are parallel to one another, are also made.
Cooking off: The premature explosion of ammunition, for example when a gun is hot from sustained firing the heat can ignite the propellant and make the weapon fire.
Cordite: A family of smokeless propellants developed and produced in the United Kingdom from 1889 to replace gunpowder as a military propellant. Like gunpowder, cordite is classified as a low explosive because of its slow burning rates and consequently low brisance. The hot gases produced by burning gunpowder or cordite generate sufficient pressure to propel a bullet or shell to its target, but not enough to destroy the barrel of the firearm, or gun.
CQB: close-quarters combat (CQC) or close quarters battle (CQB) is a type of fighting in which small units engage the enemy with personal weapons at very short range, potentially to the point of hand-to-hand combat or fighting with hand weapons such as swords or knives.
Cylindro-conoidal bullet: A hollow base bullet, shaped so that, when fired, the bullet expands and seals the bore. It was invented by Captain John Norton of the British 34th Regiment in 1832, after he examined the blow pipe arrows used by the natives in India and found that their base was formed of elastic lotus pith, which by its expansion against the inner surface of the blow pipe prevented the escape of air past it.
D
Damascus barrel or damascus twist: An obsolete method of manufacturing a firearm barrel made by twisting strips of metal around a mandrel and forge welding it into shape. See also Damascus steel.
Delayed blowback: A type of blowback operation when fired uses an operation to delay the opening until the gas pressure drops to a safe level to extract.
Derringer: A breechloading handgun, that typically has one to four barrels. Because of their construction, derringers are much smaller and more concealable than many other types of handguns.
Direct impingement: A type of gas operation for a firearm that directs gas from a fired cartridge directly to the bolt carrier or slide assembly to cycle the action.
Disassembly: The removal of parts of a firearm, usually as part of a field strip.
Discharge: Firing a weapon.
Doglock: The lock that preceded the 'true' flintlock in both rifles and pistols in the 17th century. Commonly used throughout Europe in the 1600s, it gained popular favor in the British and Dutch military. A doglock carbine was the principal weapon of the harquebusier, the most numerous type of cavalry in the armies of Thirty Years War and the English Civil War era.
Double-barreled shotgun: A shotgun with two barrels that are usually of the same gauge or bore. The two types of double-barreled shotguns are over/under (O/U), in which the two barrels are stacked on top of each other, and side-by-side (SxS), in which the two barrels sit parallel to each other. For double-barreled guns that use one shotgun barrel and one rifle barrel, see combination gun.
Double action revolver: A revolver whose trigger performs two actions, firing the round, and cocking the hammer.
Double rifle: A rifle that has two barrels, usually of the same caliber. Like shotguns, they are configured either in over-and-under or side-by-side.
Drilling: A firearm with three barrels (from the German word drei for three). Typically it has two shotgun barrels in a side-by-side configuration on the top, with a single rifle barrel underneath.
Drum magazine: A type of firearms magazine that is cylindrical in shape, similar to a drum.
Dry fire: the practice of "firing" a firearm without ammunition. That is, to pull the trigger and allow the hammer or striker to drop on an empty chamber.
Dum-dum: A bullet designed to expand on impact, increasing in diameter to limit penetration and/or produce a larger diameter wound. The two typical designs are the hollow-point bullet and the soft-point bullet.
Dummy: A round of ammunition that is completely inert, i.e., contains no primer, propellant, or explosive charge. It is used to check weapon function, and for crew training. Unlike a blank, it contains no charge at all.
Dust cover: a seal for the ejection port (which allows spent brass to exit the upper receiver after firing) from allowing contaminants such as sand, dirt, or other debris from entering the mechanism.
E
Ear protection: Devices used to help reduce the sound of a firearm, to prevent hearing damage. Most commonly earplugs or ear defenders.
Effective range: The maximum range at which a particular firearm can accurately hit a target.
Electronic firing: The use of an electric current to ignite the propellant of a cartridge, instead of a mechanical strike on a percussion primer; the cartridge fires as soon as the trigger is pulled and the firing circuit closes.
Eye relief: For optics such as binoculars or a rifle scope, eye relief is the distance from the eyepiece to the viewer's eye that matches the eyepiece exit pupil to the eye's entrance pupil. Short eye relief requires the observer to press their eye close to the eyepiece in order to see an un-vignetted image. For a shooter, eye relief is an important safety consideration. An optic with too short an eye relief can cut skin at the contact point between the optic and the shooter's eyebrow due to recoil.
Expanding bullet: An expanding bullet is a bullet designed to expand on impact, increasing in diameter to limit penetration and/or produce a larger diameter wound. The two typical designs are the hollow-point bullet and the soft-point bullet.
Extractor: A part in a firearm that serves to remove brass cases of fired ammunition after the ammunition has been fired. When the gun's action cycles, the extractor lifts or removes the spent brass casing from the firing chamber.
F
Fail-to-fire: A firearm malfunction in which a firearm is incapable of discharging a round.
Falling block action (also known as a sliding-block action): A single-shot firearm action in which a solid metal breechblock slides vertically in grooves cut into the breech of the rifle and actuated by a lever. In the top position, it locks and resists recoil while sealing the chamber. In the lower position, it leaves the chamber open so the shooter can load a cartridge from the rear.
Ferritic nitrocarburizing: A case hardening processes that diffuse nitrogen and carbon into ferrous metals at sub-critical temperatures to improve scuffing resistance, fatigue properties and corrosion resistance of metal surfaces. Also called nitriding.
Feed ramp: A ramped surface that guides the cartridge from the magazine into the chamber.
Field strip: Disassembling a firearm for the purpose of repair or cleaning, without tools. When using tools, this is called a detail strip.
Firearm: A weapon that fires bullets, and of such a size that is designed for usage by one individual.
Fire forming: The process of reshaping a metallic cartridge case to fit a new chamber by firing it within that chamber.
Firing pin: The part of a firearm that strikes the primer, discharging the round.
Flash suppressor or flash hider: A device that is attached to the muzzle of a firearm, that lowers the temperature at which gases disperse upon firing.
Flintlock: An obsolete mechanism for discharging a firearm.
Fluted barrel: Removal of material from a cylindrical surface, usually creating grooves. This is most often the barrel of a rifle, though it may also refer to the cylinder of a revolver or the bolt of a bolt action rifle. In contrast to rifle barrels and revolver cylinders, rifle bolts are normally helically fluted, though helical fluting is sometimes also applied to rifle barrels.
Fluted chamber: A barrel chamber that allows gas to leak around the cartridge during extraction. Fluted chambers are often found in Delayed Blowback firearms.
Fouling shot: A fouling shot is a shot fired through a clean bore, intended to leave some residue of firing and prepare the bore for more consistent performance in subsequent shots. The first shot through a clean bore behaves differently from subsequent shots through a bore with traces of powder residue, resulting in a different point of impact. Also, the Fouling Shot Journal, a publication of the Cast Bullet Association
Forcing cone: The tapered section at the rear of the barrel of a revolver that eases the entry of the bullet into the bore, similar to that of a feed ramp.
Forward assist: A button, found on firearms firing from closed bolt only and with non-reciprocating cocking handles, commonly on AR-10/AR-15-styled rifles, usually located near the bolt closure, that when hit, pushes the bolt carrier forward, ensuring that the bolt is locked in-battery position.
Fouling: The accumulation of unwanted material on solid surfaces. The fouling material can consist of either powder, lubrication residue, or bullet material such as lead or copper.
Frangible: A bullet that is designed to disintegrate into tiny particles upon impact to minimize their penetration for reasons of range safety, to limit environmental impact, or to limit the danger behind the intended target. Examples are the Glaser Safety Slug and the breaching round.
Free gun: A term for a general purpose machine gun used by door gunners that is not installed on a weapon mount but suspended from a bungee or sling, allowing freer movement.
Frizzen: An L-shaped piece of steel hinged at the rear used in flintlock firearms. The flint scraping the steel throws a shower of sparks into the flash pan.
G
Gas bleed: A device used on a firearm for various purposes. One example is found on bolt-action rifles to vent gas in the event of a ruptured cartridge. Another is used on gas-operated firearms: usually a small hole in the barrel or gas block that bleeds propellant gas to push back a gas piston and unlock the bolt.
Gas check: A device used in some types of firearms ammunition when non-jacketed bullets are used in high pressure cartridges, to prevent the buildup of lead in the barrel and aid in accuracy.
Gas-operated reloading: A system of operation used to provide energy to operate autoloading firearms.
Gatling gun: A hand-crank operated cannon named after its inventor, Richard Gatling. In modern usage, a Gatling often refers to a rotary machine gun.
Gauge: The gauge of a firearm is a unit of measurement used to express the diameter of the barrel.
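A short sketch of the traditional relationship between gauge number and bore diameter (an n-gauge bore nominally fits a pure lead ball weighing 1/n of a pound); the lead density used and the printed diameters are approximate illustrations, not exact bore specifications.

```python
# Illustrative sketch: derive nominal bore diameter from gauge number using
# the traditional lead-ball definition. Values are approximate.
import math

LEAD_DENSITY_G_CM3 = 11.34      # approximate density of lead
GRAMS_PER_POUND = 453.59237

def gauge_to_diameter_inches(gauge: float) -> float:
    ball_mass_g = GRAMS_PER_POUND / gauge          # 1/n pound of lead
    volume_cm3 = ball_mass_g / LEAD_DENSITY_G_CM3  # sphere volume
    diameter_cm = (6 * volume_cm3 / math.pi) ** (1 / 3)
    return diameter_cm / 2.54

print(round(gauge_to_diameter_inches(12), 3))  # ~0.73  (12 gauge)
print(round(gauge_to_diameter_inches(20), 3))  # ~0.615 (20 gauge)
```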
General purpose machine gun: A machine gun intended to fill the role of either a light machine gun or medium machine gun, while at the same time being man-portable.
Grain is a unit of measurement of mass that is based upon the mass of a single seed of a typical cereal. Used in firearms to denote the amount of powder in a cartridge or the weight of a bullet. Traditionally it was based on the weight of a grain of wheat or barley, but since 1958, the grain (gr) measure has been redefined using the International System of Units as precisely 64.79891 milligrams. There are 7,000 grains per avoirdupois pound in the Imperial and U.S. customary units.
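The conversions below simply apply the figures in this entry (7,000 grains per pound and 64.79891 mg per grain); the 115 grain bullet weight is an arbitrary example.

```python
# Grain conversions, using the definitions quoted in the entry above.
GRAINS_PER_POUND = 7000
MG_PER_GRAIN = 64.79891  # exact by definition since 1958

def grains_to_grams(grains: float) -> float:
    return grains * MG_PER_GRAIN / 1000.0

print(round(grains_to_grams(115), 2))       # ~7.45 g for an example 115 gr bullet
print(grains_to_grams(GRAINS_PER_POUND))    # ~453.59 g, i.e. one avoirdupois pound
```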
Grip safety: A safety mechanism, usually a lever on the rear of a pistol grip, that automatically unlocks the trigger mechanism of a firearm as pressure is applied by the shooter's hand.
Gunpowder, also called black powder, is a mixture of sulfur, charcoal, and potassium nitrate. It burns rapidly, producing a volume of hot gas made up of carbon dioxide, water, and nitrogen, and a solid residue of potassium sulfide. Because of its burning properties and the amount of heat and gas volume that it generates, gunpowder has been widely used as a propellant in firearms and as a pyrotechnic composition in fireworks. The term gunpowder also refers broadly to any propellant powder. Modern firearms do not use the traditional gunpowder (black powder) described here, but instead use smokeless powder. Guncotton replaced black powder as a propellant, and was in turn replaced by smokeless powder.
Gun serial number: A unique identifier given to a specific firearm.
H
Hammer bite: The action of an external hammer pinching or poking the web of the operator's shooting hand between the thumb and fore-finger when the gun is fired. Some handguns prone to this are the M1911 pistol and the Browning Hi-Power.
Hang fire: An unexpected delay between the triggering of a firearm and the ignition of the propellant. This failure was common in firearm actions that relied on open primer pans, due to the poor or inconsistent quality of the powder. Modern weapons are susceptible, particularly if the ammunition has been stored in an environment outside of the design specifications.
Half-cock: The position of the hammer where the hammer is partially but not completely cocked. Many firearms, particularly older firearms, had a notch cut into the hammer allowing half-cock, as this position would neither allow the gun to fire nor permit the hammer-mounted firing pin to rest on a live percussion cap or cartridge. The purpose of the half-cock position has variously been used both for loading a firearm, and as a safety-mechanism.
Hammer: The function of the hammer is to strike the firing pin in a firearm, which in turn detonates the impact-sensitive cartridge primer. The hammer of a firearm was given its name for both resemblance and functional similarity to the common tool.
Handgun: A type of firearm that is compact enough that it can be held and used with only a single hand.
Headspace: The distance measured from the part of the chamber that stops forward motion of the cartridge (the datum reference) to the face of the bolt. Used as a verb, headspace refers to the interference created between this part of the chamber and the feature of the cartridge that achieves the correct positioning.
Headstamp: A headstamp is the markings on the bottom of a cartridge case designed for a firearm. It usually tells who manufactured the case. If it is a civilian case it often also tells the caliber, if it is military, the year of manufacture is often added.
Heavy machine gun: A machine gun firing large diameter rifle cartridges, considerably larger than a medium or light machine gun. Most heavy machine guns fire larger rounds, such as the .50 BMG or 12.7×108mm.
High brass: A shotgun shell for more powerful loads with the brass extended up further along the sides of the shell, while light loads use "low brass" shells. The brass does not provide significantly more strength, but the difference in appearance helps shooters quickly differentiate between higher and lower powered ammunition.
Holographic weapon sight: a non-magnifying gun sight that allows the user to look through a glass optical window and see a cross hair reticle image superimposed at a distance on the field of view. The hologram of the reticle is built into the window and is illuminated by a laser diode.
I
Improved cartridge: A wildcat cartridge that is created by straightening out the sides of an existing case and making a sharper shoulder to maximize powder space. Frequently the neck length and shoulder position are altered as well. The caliber is NOT changed in the process.
IMR powder or Improved Military Rifle: A series of tubular nitrocellulose smokeless powders evolved from World War I through World War II for loading military and commercial ammunition and sold to private citizens for reloading rifle ammunition for hunting and target shooting.
Improvised firearm: A firearm manufactured by someone who is not a regular maker of firearms, often as part of an insurgency.
Internal ballistics: A subfield of ballistics, that is the study of a projectile's behavior from the time its propellant's igniter is initiated until it exits the gun barrel. The study of internal ballistics is important to designers and users of firearms of all types, from small-bore Olympic rifles and pistols, to high-tech artillery.
Iron sights are a system of aligned markers used to assist in the aiming of a device such as a firearm, crossbow, or telescope, and exclude the use of optics as in a scope. Iron sights are typically composed of two component sights, formed by metal blades: a rear sight mounted perpendicular to the line of sight and consisting of some form of notch (open sight) or aperture (closed sight); and a front sight that is a post, bead, or ring.
J
Jacket: A metal, usually copper, wrapped around a lead core to form a bullet.
Jam: A type of firearm malfunction, in which a cartridge does not load correctly and needs to be resolved by the user to maintain proper functioning.
Jeweling: A cosmetic process to enhance the looks of firearm parts, such as the bolt. The look is created with an abrasive brush and compound that roughs the surface of the metal in a circular pattern. Aside from aesthetics, it can serve as an anti-glare finish on the barrel and help hold lubricants on components.
K
Keyhole or keyholing: Refers to the end-over-end tumbling of the bullet which will often leave an elongated or keyhole shaped hole in a paper target. This occurs when the bullet is insufficiently stabilised by the firearm's rifling, either because the rifling twist is too slow (too long) for a given bullet, meaning the bullet is too long or tail-heavy for that rifling, or because an undersize bullet fits the gun barrel poorly. In these cases the bullet has a natural tendency to wobble, and may start to tumble end-over-end from air resistance alone. Keyholing can also occur in wounding (human or animal), when the bullet is sufficiently stabilised for penetrating the air only, but not for penetrating denser media such as bone or flesh. In these cases tumbling starts at some point inside the victim's body, subsequently causing massive wounding. With a bullet/rifling combination that is only just sufficiently stabilised for normal flight through free air, keyholing may also occur in flight if any obstacle is encountered, be it a twig, a leaf, a blade of grass or a large raindrop.
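As a rough illustration of the stability relationship described above, the classic Greenhill rule of thumb estimates the twist rate needed to stabilise a conventional lead-core bullet of a given diameter and length; the constant 150 is the commonly quoted value for moderate velocities, and the bullet dimensions below are assumed example figures, so this is only an approximation rather than a full stability calculation.

```python
# Greenhill rule of thumb (approximate): required twist in inches per turn
# is about 150 * d^2 / l for bullet diameter d and length l, both in inches.
# Example dimensions are assumed, for illustration only.

def greenhill_twist_inches(diameter_in: float, length_in: float, c: float = 150.0) -> float:
    """Approximate twist rate (inches per turn) needed to stabilise the bullet."""
    return c * diameter_in ** 2 / length_in

# A .308 inch bullet about 1.2 inches long:
print(round(greenhill_twist_inches(0.308, 1.2), 1))  # ~11.9 inches per turn
```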
Khyber Pass copy: A firearm manufactured by cottage gunsmiths in the Khyber Pass region between Pakistan and Afghanistan.
Kick: The recoil or backward momentum of a firearm when it is discharged.
L
Laser sight: an attachment that projects a laser beam onto the target, providing a rough point of impact.
Leading: The act of aiming a firearm in front of a moving target, to compensate for the bullet's travel time.
Length of pull: The distance between the trigger and the butt end of the shoulder stock of a rifle or shotgun.
Lever-action: A type of firearm action with a lever that encircles the trigger guard area, (often including the trigger guard itself) to load fresh cartridges into the chamber of the barrel when the lever is worked.
Light machine gun: a class of machine gun often defined as being designed for carry and use by a single operator and firing the same intermediate-power cartridge as other soldiers in a unit.
Live fire exercise or LFX: Any exercise that simulates a realistic scenario for the use of specific equipment. In the popular lexicon this applies primarily to tests of weapons or weapon systems associated with a branch of a nation's armed forces, though the term can also apply to civilian activity.
Lock: the mechanism of a firearm that is used to initiate the ignition and propel the projectile down the barrel.
Lug: any piece that projects from a firearm for the purpose of attaching something to it. For example, barrel lugs are used to attach a break-action shotgun barrel to the action itself. If the firearm is a revolver, the term may also refer to a protrusion under the barrel that adds weight, thereby stabilizing the gun during aiming, mitigating recoil, and reducing muzzle flip. A full lug extends all the way to the muzzle, while a half lug extends only partially down the barrel. On a swing-out-cylinder revolver, the lug is slotted to accommodate the ejector rod.
M
Machine gun: A fully automatic weapon capable of sustained fire over a long period of time.
Machine pistol: A pistol capable of automatic fire. Also used interchangeably with submachine gun.
Magazine: A magazine is an ammunition storage and feeding device within or attached to a repeating firearm. Magazines may be integral to the firearm (fixed) or removable (detachable). The magazine functions by moving the cartridges stored in the magazine into a position where they may be loaded into the chamber by the action of the firearm.
Match grade: Firearm parts and ammunition that are suitable for a competitive match. This refers to parts that are designed and manufactured such that they have a relatively tight-tolerances and high level of accuracy.
Matchlock: An obsolete mechanism for discharging a firearm.
Medium machine gun: A class of machine gun often defined as being designed for carry and use by multiple operators, firing a full-power rifle cartridge.
Mine shell: A high explosive round used for armour piercing etc.
Muzzle: The part of a firearm at the end of the barrel from which the projectile exits.
Muzzle brakes and recoil compensators: Devices that are fitted to the muzzle of a firearm to redirect propellant gases with the effect of countering both recoil of the gun and unwanted rising of the barrel during rapid fire.
Muzzle energy: the kinetic energy of a bullet as it is expelled from the muzzle of a firearm. It is often used as a rough indication of the destructive potential of a given firearm or load. The heavier the bullet and the faster it moves, the higher its muzzle energy and the more damage it does.
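Since muzzle energy is simply the projectile's kinetic energy at the muzzle, E = ½mv², it can be worked out directly from bullet mass and muzzle velocity; the 150 grain bullet at 850 m/s below is an assumed example load.

```python
# Muzzle energy as ordinary kinetic energy, E = 0.5 * m * v^2, evaluated at
# the muzzle. The example cartridge figures are assumed, for illustration.
MG_PER_GRAIN = 64.79891

def muzzle_energy_joules(bullet_grains: float, velocity_mps: float) -> float:
    mass_kg = bullet_grains * MG_PER_GRAIN / 1_000_000  # grains -> kilograms
    return 0.5 * mass_kg * velocity_mps ** 2

print(round(muzzle_energy_joules(150, 850)))  # ~3511 J
```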
Muzzle velocity: The speed at which a projectile leaves the muzzle of the gun. Muzzle velocities range from relatively low in some pistols and older cartridges to very high in modern cartridges such as the .220 Swift and .204 Ruger. In conventional guns, muzzle velocity is determined by the quality (burn speed, expansion) and quantity of the propellant, the mass of the projectile, and the length of the barrel.
N
Necking down or necking up: Shrinking or expanding the neck of an existing cartridge to make it use a bullet of a different caliber. A typical process used in the creation of wildcat cartridges.
NRA: National Rifle Association (disambiguation). Most commonly referring to National Rifle Association of America: American organization that lists its goals as the protection of the Second Amendment of the United States Bill of Rights and the promotion of firearm ownership rights as well as marksmanship, firearm safety, and the protection of hunting and self-defense in the United States. The NRA is also the sanctioning body for most marksmanship competition in the United States, from the local to international level (particularly bullseye style events).
O
Out-of-battery: The status of a weapon before the action has returned to the normal firing position. The term originates from artillery, referring to a gun that fires before it has been pulled back into its firing position in a gun battery. In firearms with an automatic loading mechanism, an out-of-battery condition can occur in which a live round is at least partially in the firing chamber and capable of being fired, but is not properly secured by the usual locking mechanism of that particular weapon.
Over and Under (O/U): A configuration for double-barreled shotguns, in which the barrels are arranged vertically.
Over-bore: Small caliber bullets being used in very large cases. It is the relationship between the volume of powder that can fit in a case and the diameter of the inside of the barrel or bore.
Obturate: An ordnance word; to close (a hole or cavity) so as to prevent a flow of gas through it, especially the escape of explosive gas from a gun tube during firing. The process of obturation is where a recess in the base of a bullet allows for expanding gases to press against the base and inside skirt of the bullet creating a gas tight seal to the bore. See also swage.
Offset mount: A mount that places a telescopic sight to one side of the bore, used where it is not practical to mount the sight directly above the receiver and barrel of a firearm. This was the case with many military and service arms where new ammunition was fed from above, along a similar path (in reverse) to the spent cartridge cases being ejected clear. Not often seen or used today, although complete or partial sets of offset mounts attract keen interest from restorers and collectors.
Open bolt: Open-bolt weapons have the bolt to the rear of the receiver when ready to fire. This means that when the trigger is pulled the bolt moves forward, feeds a cartridge into the chamber and fires that cartridge in one movement.
Open sight: A type of iron sight that has an open notch.
Open Tip Match: A type of bullet. The open tip design employs a precision deep-drawn jacket with lead inserted from the front and the ogive formed from the open tip mouth; the design originated strictly for competitive match shooting.
P
Paramilitary ammunition: Firearm ammunition not used by the armed forces but which retains combat capabilities, sold commercially to civilians or used by various law enforcement/government organisations.
Paramilitary firearm: Firearms not used by the armed forces but which retain military characteristics (i.e. design layout, ergonomics, ease of field stripping, modularity, etc.). The term may refer to semi-automatic-only variants of military firearms sold to civilians/law enforcement agencies/government paramilitary organisations or privately-owned military firearms (semi- or full-auto) chambered in civilian rounds.
Parkerizing: A method of protecting a steel surface from corrosion and increasing its resistance to wear through the application of an electrochemical phosphate conversion coating. Also called phosphating and phosphatizing.
Parts kit: A kit of firearm parts minus the receiver. Used to build a complete firearm with the purchase or manufacture of a receiver (regulated in the US).
Percussion cap: a small cylinder of copper or brass that was the crucial invention that enabled muzzle-loading firearms to fire reliably in any weather. The cap has one closed end. Inside the closed end is a small amount of a shock-sensitive explosive material such as fulminate of mercury. The percussion cap is placed over a hollow metal "nipple" at the rear end of the gun barrel. Pulling the trigger releases a hammer, which strikes the percussion cap and ignites the explosive primer. The flame travels through the hollow nipple to ignite the main powder charge.
Picatinny rail: A bracket used on some firearms to provide a standardized mounting platform.
Pinfire: An obsolete type of brass cartridge in which the priming compound is ignited by striking a small pin that protrudes radially from just above the base of the cartridge.
Plinking: Informal target shooting done at non-traditional targets such as tin cans, glass bottles, and balloons filled with water.
POA: point of aim.
Pocket mortar: A flare pistol modified into an ad-hoc grenade launcher, or one capable of firing high-explosive or armor-piercing rounds, in particular as an anti-tank weapon.
Point of impact: The exact place at which a bullet hits its target.
Ported chamber: A barrel chamber with pressure relief ports that allows gas to leak around the cartridge during extraction. Essentially the opposite of a fluted chamber, as it is intended for the cartridge to grip the chamber wall, slightly delaying extraction. This requires a welded-on sleeve with an annular groove to contain the pressure.
Pistol: A type of firearm that can be held and fired with one hand. The word pistol is usually used to refer specifically to a semi-automatic pistol.
Pistol grip: A feature on some firearms that gives the user a slightly curved area to grip, just rear of the trigger.
Powerhead or bang stick: A specialized firearm used underwater that is fired when in direct contact with the target.
Propellant: The substance in a cartridge that burns to create pressure that propels the projectile. Examples are cordite and gunpowder.
Pump-action: A rifle or shotgun in which the handgrip can be pumped back and forth to eject a spent round of ammunition and to chamber a fresh one. It is much faster than a bolt-action and somewhat faster than a lever-action, as it does not require that the shooter remove their trigger hand during reloading. In rifles, this action is also commonly called a slide action.
R
Ramrod: A device used with early firearms to push the projectile up against the propellant (mainly gunpowder).
Rate of fire: The frequency at which a firearm can fire its projectiles. Usually measured in RPM (rounds per minute).
Receiver: the part of a firearm that houses the operating parts.
Recoil: The backward momentum of a gun when it is discharged. In technical terms, the recoil caused by the gun exactly balances the forward momentum of the projectile, according to Newton's third law. (often called kickback or simply kick).
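A minimal sketch of that momentum balance, ignoring the contribution of the propellant gases (which add to recoil in practice); the bullet and rifle masses used are assumed example values.

```python
# Free-recoil momentum balance (simplified): the gun's rearward momentum
# equals the projectile's forward momentum. Propellant gases are ignored here.

def recoil_velocity_mps(bullet_kg: float, bullet_mps: float, gun_kg: float) -> float:
    """Rearward gun velocity from conservation of momentum."""
    return bullet_kg * bullet_mps / gun_kg

# A 0.01 kg bullet at 800 m/s fired from a 4 kg rifle:
print(recoil_velocity_mps(0.01, 800, 4.0))  # 2.0 m/s rearward
```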
Recoil operation: An operating mechanism used in locked-breech, autoloading firearms. As the name implies, these actions use the force of recoil to provide energy to cycle the action.
Red dot magnifier: An optical telescope that can be paired with a non-magnifying optical sight, turning the combination into a telescopic sight.
Red dot sight: A type of reflector (reflex) sight for firearms that gives the user a red light-emitting diode as a reticle to create an aim point.
Reflector (reflex) sight: A generally non-magnifying optical device that has an optically collimated reticle, allowing the user to look through a partially reflecting glass element and see a parallax free cross hair or other projected aiming point superimposed on the field of view. Invented in 1900 but not generally used on firearms until reliably illuminated versions were invented in the late 1970s (usually referred to by the abbreviation "reflex sight").
Reversed bullet: A bullet placed in the cartridge backwards as an ad-hoc way of armour piercing.
Revolver: A repeating firearm that has a cylinder containing multiple chambers and at least one barrel for firing.
Rib: A grooved/textured surface found above the receiver/barrel to improve target acquisition.
Ricochet: A rebound, bounce or skip off a surface, particularly in the case of a projectile.
Rifle bedding: A process of filling gaps between the action and the stock of a rifle with an epoxy based material.
Rifling: Helical grooves in the barrel of a gun or firearm, which imparts a spin to a projectile around its long axis. This spin serves to gyroscopically stabilize the projectile, improving its aerodynamic stability and accuracy.
Rimfire: A type of firearm cartridge that used a firing pin to strike the base's rim, instead of striking the primer cap at the center of the base of the cartridge to ignite it (as in a centerfire cartridge). The rim of the rimfire cartridge is essentially an extended and widened percussion cap that contains the priming compound, while the cartridge case itself contains the propellant powder and the projectile (bullet).
Riot gun: A gun loaded with rubber bullets, smoke grenades, or any other projectile that is not designed to kill its target.
Rolling block: A form of firearm action where the sealing of the breech is done with a circular shaped breechblock able to rotate on a pin. The breechblock is locked into place by the hammer, thus preventing the cartridge from moving backwards at the moment of firing. By cocking the hammer, the breechblock can be rotated freely to reload the weapon.
Rotary cannon: A type of autocannon that contains multiple rotating barrels. If in a machine gun caliber it is referred to as a rotary machine gun.
Round: a single cartridge.
RPM: Rounds per minute
S
Sabot: A device used in a firearm to fire a projectile, such as a bullet, that is smaller than the bore diameter.
Safety: A mechanism used to help prevent the accidental discharge of a firearm in case of unsafe handling. Safeties can generally be divided into sub-types such as internal safeties (which typically do not receive input from the user) and external safeties (which typically allow the user to give input, for example, toggling a lever from "on" to "off" or something similar). Sometimes these are called "passive" and "active" safeties (or "automatic" and "manual"), respectively.
Sawed-off shotgun/Sawn off shotgun/Short-barreled shotgun (SBS): A type of shotgun with a shorter gun barrel and often a shorter or deleted stock.
Selective fire: A firearm that can fire semi-automatically and in at least one automatic mode, chosen by means of a selector, depending on the weapon's design. Some selective fire weapons utilize burst fire mechanisms to limit the maximum or total number of shots fired automatically in this mode. The most common limits are two or three rounds per pull of the trigger.
Selector: The part of a selective fire weapon that allows the user to choose their desired mode of firing.
Semi-automatic: Firing a single round of ammunition each time the trigger is pulled.
Semi-automatic pistol: A pistol that has a single chamber, and is capable of semi-automatic fire.
Semi-wadcutter (SWC): A type of all-purpose bullet commonly used in revolvers that combines features of the wadcutter target bullet and traditional round nosed revolver bullets, and is used in both revolver and pistol cartridges for hunting, target shooting, and plinking. The basic SWC design consists of a roughly conical nose, truncated with a flat point, sitting on a cylinder. The flat nose punches a clean hole in the target, rather than tearing it like a round nose bullet would, and the sharp shoulder enlarges the hole neatly, allowing easy and accurate scoring of the target. The SWC design offers better external ballistics than the wadcutter, as its conical nose produces less drag than the flat cylinder.
Shooting range: Specialized facility designed for firearms practice.
Shooting sticks: Portable weapon mounts.
Short-barreled rifle (SBR): A legal designation in the United States, referring to a shoulder-fired, rifled firearm with a barrel length of less than 16" (40.6 cm) or overall length of less than 26" (66.0 cm).
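A trivial helper that applies the two thresholds quoted in this entry; it is an illustration of the definition only, not legal advice, and assumes measurements taken in inches.

```python
# Applies the barrel-length and overall-length thresholds from the entry above.

def is_short_barreled_rifle(barrel_in: float, overall_in: float) -> bool:
    """True if either dimension falls below the quoted US legal minimums."""
    return barrel_in < 16.0 or overall_in < 26.0

print(is_short_barreled_rifle(14.5, 32.0))  # True  (barrel under 16 inches)
print(is_short_barreled_rifle(16.0, 26.0))  # False (meets both minimums)
```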
Shotgun: A type of firearm designed to fire shotshell, which releases a large number of small projectiles (shot) or a single large projectile (slug) upon firing.
Side by side (SxS): A configuration for double-barreled shotguns, in which the barrels are arranged horizontally.
Sighting in or sighting: The act of setting up a telescopic or other sighting system so that the point of impact of a bullet matches the sights at a specified distance.
Silencer, suppressor, sound suppressor, sound moderator, or "hush puppy": A device attached to or part of the barrel of a firearm to reduce the amount of noise and flash generated by firing the weapon.
Single-action: Usually referring to a pistol or revolver, single-action is when the hammer is pulled back manually by the shooter (cocking it), after which the trigger is operated to fire the shot. See also double-action.
Single-shot: A firearm that holds only a single round of ammunition and must be reloaded after each shot.
Slamfire: A premature, unintended discharge of a firearm that occurs as a round is being loaded into the chamber.
Sleeving: A method of using new tubes to replace a worn-out gun barrel.
Slide bite or Snake bite: A phenomenon often grouped with hammer bite, in this case the web of the shooting hand is cut or abraded by the rearward motion of the semi-automatic pistol's slide, not by the gun's hammer. This most often occurs with small pistols like the Walther PPK and Walther TPH that have an abbreviated grip tang. This problem is exacerbated by the sharp machining found on many firearms.
Sling: A type of strap or harness designed to allow an operator carry a firearm (usually a long gun such as a rifle, carbine, shotgun, or submachine gun) on his/her person and/or aid in greater hit probability with that firearm.
Snubnosed revolver: A revolver with a very short barrel.
Sound moderator: A muzzle device used to dampen sound; although similar, it differs from a suppressor.
Speedloader: A device used for loading a firearm or firearm magazine with loose ammunition very quickly. Generally, speedloaders are used for loading all chambers of a revolver simultaneously, although speedloaders of different designs are also used for the loading of fixed tubular magazines of shotguns and rifles, or the loading of box or drum magazines. Revolver speedloaders are used for revolvers having either swing-out cylinders or top-break cylinders.
Spitzer bullet: An aerodynamic bullet design.
Sporterising, sporterisation, or sporterization: The practice of modifying military-type firearms either to make them suitable for civilian sporting use or to make them legal under the law.
Squib load, also known as squib round, pop and no kick, or just squib: A firearms malfunction in which a fired projectile does not have enough force behind it to exit the barrel, and thus becomes stuck. Squib loads make the firearm unsafe to shoot, unless the projectile can be removed.
Stock: The part of a rifle or other firearm, to which the barrel and firing mechanism are attached, that is held against one's shoulder when firing the gun. The stock provides a means for the shooter to firmly support the device and easily aim it.
Stopping power: The ability of a firearm or other weapon to cause a penetrating ballistic injury to a target, human or animal, sufficient to incapacitate the target where it stands.
Stripper clip: A speedloader that holds several cartridges together in a single unit for easier loading of a firearm's magazine.
Submachine gun: A type of automatic, magazine-fed weapon that fires pistol cartridges.
Swage: To reduce an item in size by forcing through a die. In internal ballistics, swaging refers to the process where bullets are swaged into the rifling of the barrel by the force of the expanding powder gases.
Swaged bullet: A bullet that is formed by forcing the bullet into a die to assume its final form.
Swaged choke: A constriction or choke in a shotgun barrel formed by a swaging process that compresses the outside of the barrel.
Swaged rifling: Rifling in a firearm barrel formed by a swaging process, such as button rifling.
Synchronization gear: A device usually used on aircraft for the weapon to shoot through the propeller without damaging the rotating blades. The term can be used to describe a rate of fire moderator.
T
Tack driver: A colloquial term used in the firearms industry for a firearm, regardless of form, that is (or is promoted as being) exceptionally accurate.
Tapering: Firearm components that narrow in a conical fashion (hence the name taper), notably barrels and cartridges.
Taylor KO Factor: Mathematical approach for evaluating the stopping power of hunting cartridges, which favors cartridges with a high momentum and a large bullet diameter.
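The commonly quoted form of the factor multiplies bullet mass in grains, muzzle velocity in feet per second and bullet diameter in inches, then divides by 7,000 (grains per pound); the cartridge figures in the sketch below are rough example values, not measurements of any specific load.

```python
# Taylor KO Factor in its commonly quoted form. Example figures are assumed.

def taylor_ko(mass_grains: float, velocity_fps: float, diameter_in: float) -> float:
    return mass_grains * velocity_fps * diameter_in / 7000.0

# A heavy .458 inch, 500 grain bullet at roughly 2100 ft/s:
print(round(taylor_ko(500, 2100, 0.458), 1))  # ~68.7
```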
Telescoping stock or collapsing stock: A stock on a firearm that can telescope or fold in on itself to become more compact. Telescoping stocks are useful for storing a rifle or weapon in a space that it would not normally fit in.
Terminal ballistics: A sub-field of ballistics, the study of the behavior of a projectile when it hits its target.
Throat Erosion (firearms): The wearing of the portion of the barrel where the gas pressure and heat are highest as the projectile leaves the chamber. The greater the chamber pressure, the more rapidly throat erosion occurs. This is compounded by rapid firing, which heats and weakens the steel.
Trigger: A mechanism that actuates the firing sequence of a firearm. Triggers almost universally consist of levers or buttons actuated by the index finger.
Trigger pull restrictor (TPR): A quasi-selector device intended for automatic firearms using a staged trigger.
Trunnion: a cylindrical protrusion used as a mounting and/or pivoting point. On firearms, the barrel is sometimes mounted in a trunnion, which in turn is mounted to the receiver.
Turn bolt: A firearm action in which the whole bolt, without a separate bolt carrier, turns to lock and unlock. The term mostly describes manually operated bolt-action firearms, but is also used for some automatic firearms.
U
Upset forging: A process that increases the diameter of a workpiece by compressing its length.
Underlug: 1. The locking lugs on a break-action firearm that extend from the bottom of the barrels under the chamber(s) and connect into the receiver bottom. 2. The metal shroud underneath the barrel of a revolver that surrounds and protects the extractor rod. The two types of underlugs include half-lug, meaning the shroud does not run the entire length of the barrel but instead is only as long as the extractor rod, and full-lug, meaning the shroud runs the full length of the barrel.
Underwater firearm: A firearm specially designed for use underwater.
V
Varmint rifle: A small-caliber firearm or high-powered air gun primarily used for varmint hunting—killing non-native or non-game animals such as rats, house sparrows, starling, crows, ground squirrels, gophers, jackrabbits, marmots, groundhogs, porcupine, opossum, coyote, skunks, weasels, or feral cats, dogs, goats, pigs, and other animals considered a nuisance vermin destructive to native or domestic plants and animals.
Velocity: The speed at which a projectile travels.
W
Wadcutter: A special-purpose bullet specially designed for shooting paper targets, usually at close range and at subsonic velocities. They are often used in handgun and airgun competitions. A wadcutter has a flat or nearly flat front that cuts a very clean hole through the paper target, making it easier to score and ideally reducing errors in scoring the target to the favor of the shooter.
WCF: An acronym for a family of cartridges designed by Winchester Repeating Arms Company, called Winchester Center Fire, as in the .30–30 WCF or .32-20 WCF.
Wheellock: An obsolete mechanism for discharging a firearm.
Wildcat cartridge or wildcat: A custom cartridge for which ammunition and firearms are not mass-produced. These cartridges are often created to optimize a certain performance characteristic (such as the power, size or efficiency) of an existing commercial cartridge. See improved cartridge.
Windage: The side-to-side adjustment of a sight, used to change the horizontal component of the aiming point. See also Kentucky windage.
X
X-ring: A circle in the middle of a shooting target bullseye used to determine winners in event of a tie.
Y
Yaw: In external ballistics, the deviation of a bullet's long axis from its direction of travel; yaw interacts with the bullet's spin (for example through the Magnus effect) and can cause the bullet to move out of a straight line.
Z
Zero-in or zeroing: The act of setting up a telescopic or other sighting system so that the point of impact of a bullet matches the sights at a specified distance.
Zero stop: A stopping mechanism found on some scope sights letting the user easily dial back their sight to the zeroing distance after having adjusted their sight to shoot at other distances.
See also
Firearm components
Firearm terminology
Glossary of military abbreviations
List of established military terms
List of military tactics
References
Further reading
Firearm terminology
Firearm components
Firearms
Wikipedia glossaries using unordered lists | Glossary of firearms terms | [
"Technology"
] | 12,169 | [
"Firearm components",
"Components"
] |
13,357,973 | https://en.wikipedia.org/wiki/Ofeq-7 | Ofeq-7 (also known as Ofek 7 or Offek-7) is part of the Ofeq family of Earth observation satellites designed and built by Israel Aerospace Industries (IAI) for the Israel Ministry of Defense.
Launch
The Ofeq-7 was launched by a Shavit 2 space launch vehicle on 10 June 2007 at 23:40 UTC. Equipped with advanced technology and a series of new enhancements to provide improved imagery, it is placed into an elliptical orbit of .
Mission
Three days after its launch, on 13 June 2007, IAI MBT Space Division received the first images taken by the satellite. The Ofeq-7 is a follow-on spacecraft to Ofeq-5 that was placed into orbit in 2002.
References
Reconnaissance satellites of Israel
Spacecraft launched in 2007
Spacecraft launched by Shavit rockets
2007 in Israel
Israel Aerospace Industries satellites | Ofeq-7 | [
"Astronomy"
] | 179 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
13,358,411 | https://en.wikipedia.org/wiki/Personal%20health%20application | Personal health applications (PHA) are tools and services in medical informatics which utilizes information technologies to aid individuals to create their own personal health information. These next generation consumer-centric information systems help improve health care delivery, self-management and wellness by providing clear and complete information, which increases understanding, competence and awareness. Personal health applications are part of the Medicine 2.0 movement.
Definition
A Personal Health Application is an electronic tool for storing, managing and sharing health information, in illness and wellness, by an individual in a secure and confidential environment.
Benefits
Most people do not carry medical records when they leave home. They do not realize that in an emergency these medical records can make a big difference; additionally, it is hard to predict when an emergency might occur. In fact, they could save a life. Previous medications, history of allergy to medications, and other significant medical or surgical history can help a health professional through PHA tools to optimize treatment.
A Personal Health Application (PHA) tool contains a patient's personal data (name, date of birth and other demographic details). It also includes a patient's diagnosis or health condition and details about the various treatment/assessments delivered by health professionals during an episode of care from a health care provider. It contains an individual's health-related information accumulated during an entire lifetime.
See also
eHealth
mHealth
Personal health record
References
Health informatics | Personal health application | [
"Biology"
] | 287 | [
"Health informatics",
"Medical technology"
] |
13,358,716 | https://en.wikipedia.org/wiki/Bioamplifier | A Bioamplifier is an electrophysiological device, a variation of the instrumentation amplifier, used to gather and increase the signal integrity of physiologic electrical activity for output to various sources. It may be an independent unit, or integrated into the electrodes.
History
Efforts to amplify biosignals started with the development of electrocardiography. In 1887, Augustus Waller, a British physiologist, successfully measured the electrocardiogram of his dog using two buckets of saline, in which he submerged each of the front and the hind paws. A few months later, Waller successfully recorded the first human electrocardiogram using the capillary electrometer. However, at the time of invention, Waller did not envision that electrocardiography would be used extensively in healthcare. The electrocardiograph was impractical to use until Willem Einthoven, a Dutch physiologist, innovated the use of the string galvanometer for cardiac signal amplification. Significant improvements in amplifier technologies led to the usage of smaller electrodes that were more easily attached to body parts. In the 1920s, a way to electrically amplify the cardiac signals using vacuum tubes was introduced, which quickly replaced the string galvanometer that amplified the signal mechanically. Vacuum tubes have a larger impedance, so the amplification was more robust. Also, their relatively small size compared to the string galvanometer contributed to the widespread use of vacuum tubes. Furthermore, the large metal buckets were no longer needed, as much smaller metal-plate electrodes were introduced. By the 1930s, electrocardiograph devices could be carried to the patient's home for the purpose of bedside monitoring. With the emergence of electronic amplification, it was quickly discovered that many features of the electrocardiogram were revealed with various electrode placements.
Variations
Electrocardiography
Electrocardiography (ECG or EKG) records the electrical activity of the heart, across the surface of the thorax skin. The signals are detected by electrodes attached to the surface of the skin and recorded by a device external to the body.
The amplitude of ECG ranges from 0.3 to 2 mV for the QRS complex, which is used to determine the interbeat interval from which the frequency is derived. The typical requirements for the amplifiers to be used in ECG include:
Low internal noise (<2 mV)
High Input Impedance (Zin > 10 MΩ)
Bandwidth ranging from 0.16–250 Hz
Bandwidth cutoffs (>18 dB/octave).
Notch filter (50 or 60 Hz, depending on country/region)
Common mode rejection ratio (CMRR > 107 dB)
Common mode input range (CMR ± 200 mV)
Static electricity shock protection (>2000 V).
Electromyography
Electromyography (EMG) records the electrical activity produced by skeletal muscles. It captures muscle signals ranging from simple relaxation, measured by placing electrodes on the subject's forehead, to complex neuromuscular feedback during stroke rehabilitation. The EMG signals are acquired from electrodes applied over or near the muscles to be monitored. The electrodes deliver the signals to the amplifier unit, which usually consists of high-performance differential amplifiers. The signals of interest typically have amplitudes in the range of 0.1–2000 mV, over a bandwidth of about 25–500 Hz.
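The 25–500 Hz band quoted above is often enforced digitally after acquisition. Below is a minimal Python sketch of such a band-pass stage with an optional mains notch, assuming NumPy/SciPy are available; the sampling rate, filter order and notch frequency are illustrative assumptions rather than values from the article.

```python
import numpy as np
from scipy import signal

FS = 2000            # assumed sampling rate in Hz (illustrative)
LOW, HIGH = 25, 500  # EMG band of interest from the text, in Hz

def emg_bandpass(x, fs=FS, low=LOW, high=HIGH, order=4):
    """Zero-phase band-pass filter for a raw EMG trace."""
    b, a = signal.butter(order, [low, high], btype="bandpass", fs=fs)
    return signal.filtfilt(b, a, x)  # filtfilt avoids phase distortion

def mains_notch(x, fs=FS, freq=50.0, q=30.0):
    """Optional mains-interference notch (50 Hz here; 60 Hz in some regions)."""
    b, a = signal.iirnotch(freq, q, fs=fs)
    return signal.filtfilt(b, a, x)

if __name__ == "__main__":
    t = np.arange(0, 1.0, 1.0 / FS)
    # Synthetic trace: a 100 Hz "muscle" component plus 50 Hz mains pickup.
    raw = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 50 * t)
    clean = mains_notch(emg_bandpass(raw))
    print(clean[:5])
```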
Although many electrodes still connect to an amplifier using wires, some amplifiers are small enough to mount directly on the electrode. Some minimal specifications for a modern EMG amplifier include:
Low internal noise (<0.5 mV)
High input impedance (>100 MΩ)
Flat bandwidth and sharp high and low frequency cutoffs (>18 dB/octave).
High common mode rejection ratio (CMRR > 107 dB)
Common mode input range (CMR > ±200 mV)
Static electricity shock protection (>2000 V)
Gain stability > ±1%
Electroencephalography
Electroencephalography (EEG) instrumentation is similar to EMG instrumentation in that it involves the placement of many surface electrodes on the patient's skin, specifically on the scalp. While EMG acquires signals from muscles below the skin, EEG acquires signals at the patient's scalp that are generated by brain cells, recording the summed activity of tens of thousands to millions of neurons. As amplifiers became small enough to integrate with the electrodes, EEG gained potential for long-term use as a brain–computer interface, because the electrodes can be kept on the scalp indefinitely. The temporal and spatial resolutions and signal-to-noise ratios of EEG have always lagged behind those of comparable intracortical devices, but it has the advantage of not requiring surgery.
High-performance differential amplifiers are used for amplification. Signals of interest are in the range of 10 μV to 100 μV, over the frequency range of 1–50 Hz. Like EMG amplifiers, EEG amplifiers benefit from the use of integrated circuits. The challenges of EEG also stem mainly from the asymmetrical placement of electrodes, which leads to increased noise or offset. Some minimal specifications for a modern EEG amplifier include:
Low internal voltage and current noise(<1 mV, 100 pA)
High input impedance (>108 MΩ)
Bandwidth (1–50 Hz)
Frequency cutoffs (>18 dB/octave)
High common mode rejection ratio (>107)
Common mode input range (greater than ±200 mV).
Static electricity shock protection (>2000 V)
Gain stability > ±1%
Galvanic skin response
Galvanic skin response is a measurement of the electrical conductance of the skin, which is directly influenced by how moist the skin is. Since the sweat glands are controlled by the sympathetic nervous system, skin conductance is useful for measuring psychological or physiological arousal. Arousal and eccrine sweat gland activity have been clinically found to be directly related. High skin conductance due to sweating can therefore be used to infer that the subject is in a highly aroused state, either psychologically or physiologically, or both.
Galvanic skin response can be measured either as resistance, called skin resistance activity (SRA) or skin conductance activity (SCA), which is a reciprocal of SRA. Both SRA and SCA include two types of responses: the average level and the short-term phasic response. Most modern instruments measure conductance, although they both can be displayed with the conversion made in circuitry or software.
Other
Electrocorticography (ECoG) records the cumulative activity of hundreds to thousands of neurons with a sheet of electrodes placed directly on the surface of the brain. In addition to requiring surgery and having low resolution, the ECoG device is wired, meaning the scalp cannot be completely closed, increasing the risk of infection. However, researchers investigating ECoG claim that the grid "possesses characteristics suitable for long term implantation".
The neurotrophic electrode is a wireless device that transmits its signals transcutaneously. In addition, it has demonstrated longevity of over four years in a human patient, because every component is completely biocompatible. It is limited in the amount of information it can provide, however, because the electronics it uses to transmit its signal (based around differential amplifiers) require so much space on the scalp that only four can fit on a human skull.
In one experiment, Dr. Kennedy adapted the neurotrophic electrode to read local field potentials (LFPs). He demonstrated that they are capable of controlling assistive technology devices, suggesting that less invasive techniques can be used to restore functionality to locked-in patients. However, the study did not address the degree of control possible with LFPs or make a formal comparison between LFPs and single unit activity.
Alternatively, the Utah array is currently a wired device, but transmits more information. It has been implanted in a human for over two years and consists of 100 conductive silicon needle-like electrodes, so it has high resolution and can record from many individual neurons.
Design
Acquiring signals
Nowadays, mostly digital amplifiers are used to record biosignals. The amplification process depends not only on the performance and specifications of the amplifier device, but also on the type of electrodes attached to the subject's body. The electrode material and the mounting position of the electrodes both affect signal acquisition. Multielectrode arrays, in which multiple electrodes are arranged in an array, are also used.
Electrodes made with certain materials tend to perform better because they provide more surface area. For instance, indium tin oxide (ITO) electrodes have less surface area than those made with other materials, such as titanium nitride. Greater surface area reduces the electrode impedance, making neuronal signals easier to acquire. ITO electrodes tend to be flat with a relatively small surface area, and are often electroplated with platinum to increase surface area and improve the signal-to-area ratio.
Digital amplifiers and filters are now produced small enough to be combined with electrodes, serving as preamplifiers. The need for preamplifiers is clear: the signals that neurons (or other organs) produce are weak, so preamplifiers are preferably placed near the source of the signals, adjacent to the electrodes. Another advantage of having preamplifiers close to the signal source is that long wires pick up significant interference or noise, so it is best to keep the wires as short as possible.
However, when wider bands are needed, for instance very high frequencies (action potentials) or low frequencies (local field potentials), the signals can be filtered digitally, perhaps with a second-stage analog amplifier before being digitized. Cascading several amplifiers has drawbacks, which depend on whether they are analog or digital. In general, filters introduce time delays, and corrections are needed to keep the signals in sync; the added complexity also increases cost. With digital amplifiers, much laboratory work involves feeding signals back to the networks in a closed loop in real time, so more processing time is needed when more digital amplifiers lie in the signal path. One solution is to use a field-programmable gate array (FPGA), a "blank slate" integrated circuit that can be programmed for the task at hand; using an FPGA sometimes removes the need for a computer, resulting in a speed-up of filtering. Another problem with cascaded filters occurs when the maximum output of the first filter is smaller than the raw signal while the second filter has a higher maximum output than the first; in that case it is impossible to tell whether the signal has reached the maximum output or not.
Design challenges
The trend in the development of electrodes and amplifiers has been to reduce their size for better transportability, as well as to make them implantable or skin-mounted for prolonged recording of signals. Preamplifiers and head-stage amplifiers stay functionally the same but require different form factors: they should be lightweight, waterproof, should not scratch the skin or scalp with the parts that mount them, and should dissipate heat well. Heat dissipation is a major issue, because extra heat may cause the temperature of nearby tissue to rise, potentially changing the physiology of the tissue. One solution for dissipating heat is the use of a Peltier device, which uses the Peltier (thermoelectric) effect to create a heat flux between two different types of materials. A Peltier device actively pumps heat from one side of the device to the other, consuming electrical energy. Conventional cooling using compressed gases would not be a feasible option for cooling an individual integrated circuit, because it requires many other components to operate, such as an evaporator, compressor and condenser. Overall, a compressor-based system is suited to large-scale cooling jobs and is not viable for small-scale systems like bioamplifiers. Passive cooling, such as a heat sink and fan, only limits the rise of temperature above ambient conditions, while Peltier devices can actively pull heat out of a thermal load, much like compressor-based cooling systems. Also, Peltier devices can be manufactured at sizes well below 8 mm square, so they can be integrated into bioamplifiers without making them lose mobility.
See also
Amplifier
Biosignal
Operational amplifier applications
References
Neurophysiology
Electrophysiology
Electronic amplifiers | Bioamplifier | [
"Technology"
] | 2,611 | [
"Electronic amplifiers",
"Amplifiers"
] |
13,359,535 | https://en.wikipedia.org/wiki/Ribat | A ribāṭ (; hospice, hostel, base or retreat) is an Arabic term, initially designating a small fortification built along a frontier during the first years of the Muslim conquest of the Maghreb to house military volunteers, called murabitun, and shortly after they also appeared along the Byzantine frontier, where they attracted converts from Greater Khorasan, an area that would become known as al-ʻAwāṣim in the ninth century CE.
The ribat fortifications later served to protect commercial routes, as caravanserais, and as centers for isolated Muslim communities as well as serving as places of piety.
Islamic meaning
Historical meaning
The word ribat in its abstract refers to voluntary defense of Islam, which is why ribats were originally used to house those who fought to defend Islam in jihad. They can also be referred to by other names such as khanqah, most commonly used in Iran, and tekke, most commonly used in Turkey.
Classically, ribat referred to the guard duty at a frontier outpost in order to defend dar al-Islam. The one who performs ribat is called a murabit.
Contemporary use
Contemporary use of the term ribat is common among jihadi groups such as al-Qaeda or the Islamic State of Iraq and the Levant. The term has also been used by Salafi-Jihadis operating in the Gaza Strip. In their terminology, ʻArḍ al-Ribat "Land of the Ribat" is a name for Palestine, with the literal meaning of "the land of standing vigilant watch on the frontier", understood in the context of their ideology of global jihad, which is fundamentally opposed to Palestinian nationalism.
As caravanserais
In time, some ribats became hostels for voyagers on major trade routes (caravanserai).
As Sufi retreats
Sufi brotherhoods
Ribat was initially used to describe a frontier post where soldiers would stay during the early Muslim conquests and after, such as in al-Awasim. The term transformed over time to refer to a center for Sufi. As they were later no longer needed to house and supply soldiers, ribats became refuges for mystics. The ribat tradition was perhaps one of the early sources of the ṭarīqas, or Sufi mystic brotherhoods, and a type of the later zawiya or Sufi lodge, which spread into North Africa, and from there across the Sahara to West Africa. Here, they are the homes of marabouts: religious teachers, usually Sufis. Such places of spiritual retreat were termed khānqāhs (). Usually, ribats were inhabited by a shaykh, and his family and visitors were allowed to come and learn from him. Many times, the tomb of the founder was also located in the same building. These centers' institutionalization was made possible partly through donations from wealthy merchants, landowners, and influential leaders. Some of these compounds also received regular stipends to maintain them.
Some important ribats to mention are the Rabati Malik (c.1068–80), which is in Uzbekistan in the Kyzylkum Desert and is still partially intact, and the Ribat of Sharaf from the 12th century, which was built in a square shape with a monumental portal, a courtyard, and long vaulted rooms along the walls. Most ribats had a similar architectural appearance which consisted of a surrounding wall with an entrance, living rooms, storehouses for provisions, a watch tower used to signal in the case of an invasion, four to eight towers, and a mosque in large ribats.
These institutions were used as a sort of school house where a shaykh could teach his disciples the ways of a specific ṭarīqa. They were also used as a place of worship where the shaykh could observe the members of the specific Sufi order and help them on their inner path to ḥaqīqa (, ultimate truth or reality).
Female Sufis
Another use of ribat refers to a sort of convent or retreat house for Sufi women. Female shaykhas (شيخة), scholars of law in medieval times, and large numbers of widows or divorcees lived in abstinence and worship in ribats.
See also
Almoravids
Al-Awasim, Muslim side of the frontier between the Byzantine Empire and Early Islamic realm
Khan, Persian word for caravanserai; Turkish variant: han
Khanqah, building used specifically by a Sufi brotherhood
Ksar, North African (usually Berber) fortified village
List of caravanserais
Rabad, Central Asian variant for 'rabat'
Rabat (disambiguation), Semitic word for "fortified town" or "suburb"
Robat (disambiguation), Persian variant for 'ribat'
List of Early Muslim ribats
Cafarlet in Palestine
Minat al-Qal'a in Palestine
References
Further reading
Cache of The Ribat by Hajj Ahmad Thomson, 23 06 2007.
"The Ribats in Morocco and their influence in the spread of knowledge and tasawwuf" from: al-Imra'a al-Maghribiyya wa't-Tasawwuf (The Moroccan Woman and Tasawwuf in the Eleventh Century) by Mustafa 'Abdu's-Salam al-Mahmah)
Majid Khadduri, War And Peace in the Law of Islam (Baltimore, Johns Hopkins University Press, 1955), . p. 81.
Hassan S. Khalilieh, "The Ribat System and Its Role in Coastal Navigation," Journal of the Economic and Social History of the Orient, 42,2 (1999), 212–225.
Jörg Feuchter, "The Islamic Ribаt - A Model for the Christian Military Orders? Sacred Violence, Religious Concepts and the Invention of a Cultural Transfer," in Religion and Its Other: Secular and Sacral Concepts and Practices in Interaction. Edited by Heike Bock, Jörg Feuchter, and Michi Knecht (Frankfurt/M., Campus Verlag, 2008).
External links
With a map and list of Seljuk hans.
Introduction and definition
Origins of the Han. The evolution of stopping posts from the Ancient Near East, through the Early Muslim ribats, to the Seljuk han (Turkish for caravanserai); with a list of "Great Seljuk era hans and ribats in Central Asia and Iran"
ArchNet: Origin and layout of a ribat and its adaptation as a caravanserai. Accessed May 2021.
What is the ribath?
Forts
Muslim conquest of the Maghreb
Maghreb
Islamic architecture
Arabic fortifications
Infrastructure
Building types
Buildings and structures by type
Urban studies and planning terminology | Ribat | [
"Engineering"
] | 1,368 | [
"Construction",
"Buildings and structures by type",
"Infrastructure",
"Architecture"
] |
13,360,078 | https://en.wikipedia.org/wiki/Mammisi | A mammisi (mamisi) is an ancient Egyptian small chapel attached to a larger temple (usually in front of the pylons), built from the Late Period, and associated with the nativity of a god. The word is derived from Coptic – the last phase of the ancient Egyptian language – meaning "birth place". Its usage is attributed to the French egyptologist Jean-François Champollion (1790–1832).
Religious references
Major temples inhabited by a divine triad could be completed by a peristyle-surrounded mammisi, in which the goddess of the triad would give birth to the son of the triad itself. The son, whose divine birth was celebrated annually, was associated with the Pharaoh (even in the hierogamy scenes on the walls).
Taweret, Raet-Tawy and the Seven Hathors who presided over childbirth were particularly revered here, but it is equally common to find references to Bes, Khnum and Osiris himself as fertility deities. Mammisis thus formed an architectural translation of the myth of divine birth and its eternal repetition. From the end of the Late Period these buildings confirm the restoration of royal power that each dynasty will strive to assert in the very heart of the great sanctuaries of the country, including the Roman emperors.
Notable Mammisi
Dendera
The most important surviving examples in Dendera, Edfu, Kom Ombo, Philae, El Kab, Athribis, Armant, the Dakhla Oasis etc. are from the Ptolemaic and Roman periods in Egypt; but the first one, in Dendera, dates back to the 30th dynasty Pharaoh Nectanebo I (379/378–361/360 BC), one of the last native rulers of Egypt. Equipped with a hypostyle by Pharaoh Ptolemy VI Philometor (181–145 BC) and with a peristyle by Ptolemy X Alexander I (110–88 BC), it was dedicated to Harsiese ("Horus son of Isis"). This 30th dynasty project was then repeated, by the Græco-Roman rulers, for some of the major shrines in the country.
The famous Roman mammisi – the less ancient one associated with the Dendera Temple complex – was built by Augustus immediately after his conquest of Egypt (31 BC). The murals show Augustus' far successor Trajan at the sacrificial ceremony for Hathor and are among the most beautiful in Egypt. The mammisi was dedicated to Hathor and her child Ihy. On the abacus above the pillar capitals are representations of Bes as the patron god of birth.
References
Further reading
Alexander Badawy: The Architectural Symbolism of the Mammisi-Chapels in Egypt. In: Chronique d'Égypte. Vol. 38, 1933, p. 87–90.
Ludwig Borchardt: Ägyptische Tempel mit Umgang. Cairo 1938.
François Daumas: Les mammisis des temples égyptiens. Paris 1958.
Francois Daumas: Geburtshaus, in: Lexikon der Ägyptologie. Vol. II, p. 462–475.
Sandra Sandri: Har-Pa-Chered (Harpokrates): Die Genese eines ägyptischen Götterkindes. Peeters, Leuven 2006, .
Daniela Rutica: Kleopatras vergessener Tempel. Das Geburtshaus von Kleopatra VII. in Hermonthis. Eine Rekonstruktion der Dekoration (= Göttinger Miszellen. Occasional Studies Vol. 1). Georg-August-Universität Göttingen, Seminar für Ägyptologie und Koptologie, Göttingen 2015, .
Egyptology
Sacral architecture
Ancient Egyptian architecture
Architectural elements
Types of monuments and memorials | Mammisi | [
"Technology",
"Engineering"
] | 808 | [
"Sacral architecture",
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
13,360,851 | https://en.wikipedia.org/wiki/Nature%20center | A nature center (or nature centre) is an organization with a visitor center or interpretive center designed to educate people about nature and the environment. Usually in a protected open space, nature centers often have trails through their property. Some are in a state or city park, and some have special gardens or an arboretum. Their properties can be characterized as nature preserves and wildlife sanctuaries. Nature centers generally display small live animals, such as reptiles, rodents, insects, or fish. There are often museum exhibits and displays about natural history, or preserved mounted animals or nature dioramas. Nature centers are staffed by paid or volunteer naturalists and most offer educational programs to the general public, as well as summer camp, after-school and school group programs. These educational programs teach people about nature conservation as well as the scientific method, biology, and ecology.
Some nature centers allow free admission but collect voluntary donations in order to help offset expenses. They usually rely on support from volunteers.
Environmental education centers differ from nature centers in that their museum exhibits and education programs are available mostly by appointment, although casual visitors may be allowed to walk on their grounds.
Some city, state and national parks have facilities similar to nature centers, such as museum exhibits, dioramas and trails, and some offer park nature education programs, usually presented by a park ranger.
See also
List of nature centers
National park
References
Natural history
Environmental education
Visitor centers | Nature center | [
"Environmental_science"
] | 286 | [
"Environmental education",
"Environmental social science"
] |
13,361,101 | https://en.wikipedia.org/wiki/Maximum%20Absorbency%20Garment | A Maximum Absorbency Garment (MAG) is an adult-sized diaper with extra absorption material that NASA astronauts wear during liftoff, landing, and extra-vehicular activity (EVA) to absorb urine and feces. It is worn by both male and female astronauts. Astronauts can urinate into the MAG, and usually wait to defecate when they return to the spacecraft. However, the MAG is rarely used for this purpose, since the astronauts use the facilities of the station before EVA and also time the consumption of the in-suit water. Nonetheless, the garment provides peace of mind for the astronauts.
The MAG was developed because astronauts cannot remove their space suits during long operations, such as spacewalks that usually last for several hours. Generally, three MAGs were given during space shuttle missions, one for launch, reentry, and an extra for spacewalking or for a second reentry attempt. Astronauts drink about of salty water before reentry since less fluid is retained in zero G. Without the extra fluids, the astronauts might faint in Earth's gravity, further highlighting the potential necessity of the MAGs. It is worn underneath the Liquid Cooling and Ventilation Garment (LCVG).
History
During the Apollo era, astronauts used urine and fecal containment systems worn under spandex trunks. The fecal containment device (FCD) was a bag attached directly to the body with an adhesive seal, and the urine collection device (UCD) had a condom-like sheath attached to a tube and pouch. Women joined the astronaut corps in 1978 and required devices with similar functions. However, the early attempts to design feminized versions of the male devices were unsuccessful. In the 1980s, NASA designed space diapers which were called Disposable Absorption Containment Trunks (DACTs). These addressed the women's needs since it was comfortable, manageable, and resistant to leaks. These diapers were first used in 1983, during the first Challenger mission.
Disposable underwear, first introduced in the 1960s as baby's diapers then in 1980 for adult incontinence, appealed to NASA as a more practical option. In 1988, the Maximum Absorbency Garment replaced the DACT for female astronauts. NASA created the name Maximum Absorbency Garment to avoid using trade names. Male astronauts then adopted the MAG as well. In the 1990s, NASA ordered 3,200 of the diapers of the brand name Absorbencies, manufactured by a company that has folded. In 2007, about a third of the supply remained.
Usage
The MAGs are pulled up like shorts. A powdery chemical absorbent called sodium polyacrylate is incorporated into the fabric of the garment. Sodium polyacrylate can absorb around 300 times its weight in distilled water. Assuming the astronaut urinates, the diaper would only need to be changed every eight to ten hours. The MAG can hold a maximum of of urine, blood, and/or feces. The MAG absorbs the liquid and pulls it away from the skin.
Media attention
These garments gained attention in February 2007, when astronaut Lisa Nowak drove to attack Air Force officer Colleen Shipman out of jealousy for her former lover. It was stated in a police report that Nowak said she used the diapers to avoid pit stops during her journey. However, Nowak denied these claims and testified that she did not wear these diapers during her trip.
See also
Extravehicular Mobility Unit
References
Space suit components
Undergarments
Diapers
1988 introductions | Maximum Absorbency Garment | [
"Biology"
] | 718 | [
"Diapers",
"Excretion"
] |
13,361,521 | https://en.wikipedia.org/wiki/L-shell | The L-shell, L-value, or McIlwain L-parameter (after Carl E. McIlwain) is a parameter describing a particular set of planetary magnetic field lines. Colloquially, L-value often describes the set of magnetic field lines which cross the Earth's magnetic equator at a number of Earth-radii equal to the L-value. For example, describes the set of the Earth's magnetic field lines which cross the Earth's magnetic equator two earth radii from the center of the Earth. L-shell parameters can also describe the magnetic fields of other planets. In such cases, the parameter is renormalized for that planet's radius and magnetic field model.
Although L-value is formally defined in terms of the Earth's true instantaneous magnetic field (or a high-order model like IGRF), it is often used to give a general picture of magnetic phenomena near the Earth, in which case it can be approximated using the dipole model of the Earth's magnetic field.
Charged particle motions in a dipole field
The motions of low-energy charged particles in the Earth's magnetic field (or in any nearly-dipolar magnetic field) can be usefully described in terms of McIlwain's (B,L) coordinates, the first of which, B is just the magnitude (or length) of the magnetic field vector.
This description is most valuable when the gyroradius of the charged particle orbit is small compared to the spatial scale for changes in the field. Then a charged particle will basically follow a helical path orbiting the local field line. In a local coordinate system {x,y,z} where z is along the field, the transverse motion will be nearly a circle, orbiting the "guiding center", that is the center of the orbit or the local B line, with the gyroradius and frequency characteristic of cyclotron motion for the field strength, while the simultaneous motion along z will be at nearly uniform velocity, since the component of the Lorentz force along the field line is zero.
At the next level of approximation, as the particle orbits and moves along the field line, along which the field changes slowly, the radius of the orbit changes so as to keep the magnetic flux enclosed by the orbit constant. Since the Lorentz force is strictly perpendicular to the velocity, it cannot change the energy of a charged particle moving in it. Thus the particle's kinetic energy remains constant. Then so also must its speed be constant. Then it can be shown that the particle's velocity parallel to the local field must decrease if the field is increasing along its z motion, and increase if the field decreases, while the components of the velocity transverse to the field increase or decrease so as to keep the magnitude of the total velocity constant. Conservation of energy prevents the transverse velocity from increasing without limit, and eventually the longitudinal component of the velocity becomes zero, while the pitch angle, of the particle with respect to the field line, becomes 90°. Then the longitudinal motion is stopped and reversed, and the particle is reflected back towards regions of weaker field, the guiding center now retracing its previous motion along the field line, with the particle's transverse velocity decreasing and its longitudinal velocity increasing.
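The bounce motion described above is usually summarized through the (approximately) conserved first adiabatic invariant; the LaTeX fragment below states that standard relation as a supplementary aid and is not taken from the article itself.

```latex
% Magnetic mirroring from conservation of the first adiabatic invariant
\[
  \mu = \frac{m v_\perp^2}{2B} \approx \text{const}
  \qquad\Longrightarrow\qquad
  \frac{\sin^2\alpha}{B} = \text{const},
\]
% so a particle with equatorial pitch angle \alpha_{eq} mirrors where the field reaches
\[
  B_{\mathrm{mirror}} = \frac{B_{\mathrm{eq}}}{\sin^2\alpha_{\mathrm{eq}}}.
\]
```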
In the (approximately) dipole field of the Earth, the magnitude of the field is greatest near the magnetic poles, and least near the magnetic Equator. Thus after the particle crosses the Equator, it will again encounter regions of increasing field, until it once again stops at the magnetic mirror point, on the opposite side of the Equator. The result is that, as the particle orbits its guiding center on the field line, it bounces back and forth between the north mirror point and the south mirror point, remaining approximately on the same field line. The particle is therefore endlessly trapped, and cannot escape from the region of the Earth. Particles with too-small pitch angles may strike the top of the atmosphere if they are not mirrored before their field line reaches too close to the Earth, in which case they will eventually be scattered by atoms in the air, lose energy, and be lost from the belts.
However, for particles which mirror at safe altitudes, (in yet a further level of approximation) the fact that the field generally increases towards the center of the Earth means that the curvature on the side of the orbit nearest the Earth is somewhat greater than on the opposite side, so that the orbit has a slightly non-circular, with a (prolate) cycloidal shape, and the guiding center slowly moves perpendicular both to the field line and to the radial direction. The guiding center of the cyclotron orbit, instead of moving exactly along the field line, therefore drifts slowly east or west (depending on the sign of the charge of the particle), and the local field line connecting the two mirror points at any moment, slowly sweeps out a surface connecting them as it moves in longitude. Eventually the particle will drift entirely around the Earth, and the surface will be closed upon itself. These drift surfaces, nested like the skin of an onion, are the surfaces of constant L in the McIlwain coordinate system. They apply not only for a perfect dipole field, but also for fields that are approximately dipolar. For a given particle, as long as only the Lorentz force is involved, B and L remain constant and particles can be trapped indefinitely. Use of (B,L) coordinates provides us with a way of mapping the real, non-dipolar terrestrial or planetary field into coordinates that behave essentially like those of a perfect dipole. The L parameter is traditionally labeled in Earth-radii, of the point where the shell crosses the magnetic Equator, of the equivalent dipole. B is measured in gauss.
Equation for L in a Dipole Magnetic Field
In a centered dipole magnetic field model, the path along a given L shell can be described as
$$r = L\cos^2\lambda,$$
where r is the radial distance (in planetary radii) to a point on the line, λ is its geomagnetic latitude, and L is the L-shell of interest.
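As a quick illustration of the relation above, setting r = 1 gives the latitude at which an L-shell's field line reaches the planetary surface, cos²λ = 1/L. The short Python sketch below works through a few values; it is an illustrative aid, not part of the original article.

```python
import math

def footprint_latitude_deg(L):
    """Geomagnetic latitude (deg) where the field line of a given L-shell
    meets r = 1 planetary radius, using r = L * cos(lambda)**2."""
    return math.degrees(math.acos(math.sqrt(1.0 / L)))

def radius_at_latitude(L, lat_deg):
    """Radial distance (planetary radii) of the L-shell at a given latitude."""
    return L * math.cos(math.radians(lat_deg)) ** 2

for L in (2, 4, 6.6):
    print(f"L = {L}: field line reaches the surface near "
          f"{footprint_latitude_deg(L):.1f} degrees latitude")
# L = 2 -> 45.0 deg, L = 4 -> 60.0 deg, L = 6.6 (geostationary distance) -> ~67.1 deg
```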
L-shells on Earth
For the Earth, L-shells uniquely define regions of particular geophysical interest. Certain physical phenomena occur in the ionosphere and magnetosphere at characteristic L-shells. For instance, auroral light displays are most common around L=6, can reach L=4 during moderate disturbances, and during the most severe geomagnetic storms, may approach L=2. The Van Allen radiation belts roughly correspond to L=, and L=. The plasmapause is typically around L=5.
L-shells on Jupiter
The Jovian magnetic field is the strongest planetary field in the solar system. Its magnetic field traps electrons with energies greater than 500 MeV. The characteristic L-shells are L=6, where electron distribution undergoes a marked hardening (increase of energy), and L=20-50, where the electron energy decreases to the VHF regime and the magnetosphere eventually gives way to the solar wind. Because Jupiter's trapped electrons contain so much energy, they more easily diffuse across L-shells than trapped electrons in Earth's magnetic field. One consequence of this is a more continuous and smoothly-varying radio-spectrum emitted by trapped electrons in gyro-resonance.
See also
Earth's magnetic field
Dipole model of the Earth's magnetic field
Guiding center
Geomagnetic latitude
International Geomagnetic Reference Field
TEP
World Magnetic Model
References
Other references
Tascione, Thomas F. (1994), Introduction to the Space Environment (2nd ed.), Malabar, FL: Kreiger
Margaret Kivelson and Christopher Russell (1995), Introduction to Space Physics, New York, NY: Cambridge University Press, pp. 166–167
Geomagnetism
Space physics | L-shell | [
"Astronomy"
] | 1,606 | [
"Outer space",
"Space physics"
] |
13,361,622 | https://en.wikipedia.org/wiki/Die%20Glocke%20%28conspiracy%20theory%29 | (, 'The Bell') was a purported top-secret scientific technological device, wonder weapon, or developed in the 1940s in Nazi Germany. Rumors of this device have persisted for decades after WW2 and were used as a plot trope in the fiction novel Lightning by Dean Koontz (1988). First fully described by Polish journalist and author Igor Witkowski in (2000), it was later popularized by military journalist and author Nick Cook, who associated it with Nazi occultism, antigravity, and free energy suppression research. Mainstream reviewers have criticized claims about Die Glocke as being pseudoscientific, recycled rumors, and a hoax. and other alleged Nazi "miracle weapons" have been dramatized in video games, television shows, and novels.
History
In his 2001 book The Hunt for Zero Point, author Nick Cook identified claims about Die Glocke as having originated in the 2000 Polish book ("The Truth About The Wonder Weapon") by Igor Witkowski. Cook described Witkowski's claims of a device called "The Bell" engineered by Nazi scientists that was "a glowing, rotating contraption" rumored to have "some kind of antigravitational effect", be a "time machine", or part of an "SS antigravity program" for a flying saucer.
According to Cook, Die Glocke was bell-shaped, about high and in diameter, and incorporated "two high-speed, counter-rotating cylinders filled with a purplish, liquid metallic-looking substance that was supposed to be highly radioactive, code-named 'Xerum 525.'" Cook recounts claims that "scientists and technicians who worked on the bell and who did not die of its effects were wiped out by the SS at the close of the war, and the device was moved to an unknown location". Cook proposed that SS official Hans Kammler later secretly traded this technology to the U.S. military in exchange for his freedom. Witkowski suggested that a concrete ring called "The Henge" near the Wenceslaus mine built in 1943 or 1944 and vaguely resembling Stonehenge was used to tether the Bell during tests. According to writer Jason Colavito, the structure is merely the remains of an ordinary industrial cooling tower.
Witkowski's book was translated to English in 2003. He claimed to have discovered evidence of Die Glocke in a review of WWII-era documents that were declassified by the Polish government, which led him to additional research via archives and interviews. The first document, allegedly supplied to Witkowski by an unnamed Polish government official, was an affidavit from the war crimes trial for General Jakob Sporrenberg, who supposedly confessed to ordering the murder of about 60 persons who had knowledge of the secretive project. Kurt Debus, Wernher von Braun and Walther Gerlach were also allegedly implicated in Die Glocke research. Witkowski claims Die Glocke was organized under a division of the Waffen-SS, and operated mainly at facilities in Lower Silesia. Die Glocke was conceived in early 1942, and active experimentation began in mid-1944.
Prisoners from the Gross-Rosen concentration camp were supposedly exposed to radiation from Die Glocke, resulting in many deaths and health problems. Survivors of the camp are alleged to have reported witnessing tests of Die Glocke, reporting a bright bluish light from the object.
Witkowski postulated Xerum 525 was likely an irradiated form of mercury used in the creation of a form of plasma that was intended as a weapon and/or propulsion system, and which may have been capable of distorting spacetime.
Reception
Cook's publication introduced the topic in English without critically discussing the subject. More recently, historian Eric Kurlander has discussed the topic in his 2017 book on Nazi esotericism Hitler's Monsters: A Supernatural History of the Third Reich. According to reviewer Julian Strube, Kurlander "cites from the reservoir of post-war conspiracy theories" and "heavily relies on sensationalist accounts...mixing up contemporary sources with post-war sensationalist literature, half-truths, and fictitious accounts".
According to Salon reviewer Kurt Kleiner, Cook's decade as an editor at Jane's Defence Weekly "is enough to make you take a second look" at Die Glocke theories. Kleiner further notes that anti-gravity per se "can't be completely dismissed" given that it's been the subject of serious research over the years, and also agrees that researchers in Nazi Germany were working on highly advanced technology during the 1940s. Nonetheless, Kleier concludes: "It's a story that strains credulity. But unless we're after cheap laughs, our hope when we pick up a book like this is that the author will, against the odds, build a careful, reasonable and convincing case. Cook isn't that author". Kleiner criticized Cook's work as "ferreting out minor inconsistencies and odd, ambiguous details which he tries to puff up into proof", characterized the process of evaluating Cook's claims as "untangling science from pseudo-science", and concluded that "what is instructive about the book is the insight we get into how conspiracy theories seduce otherwise reasonable people".
Skeptical author Robert Sheaffer criticized Cook's book as "a classic example of how to spin an exciting yarn based on almost nothing. He visits places where it is rumored that secret UFO and antigravity research is going on...and writes about what he feels and imagines, although he discovers nothing more tangible than unsubstantiated rumors". Sheaffer notes that claims about Die Glocke are circulated by UFOlogists and conspiracy-oriented authors such as Jim Marrs, Joseph P. Farrell, and antigravity proponent John Dering.
Jason Colavito wrote that Witkowski's claims were "recycled" from 1960s rumors of Nazi occult science first published in Morning of the Magicians, and describes Die Glocke as "a device few outside of fringe culture think actually existed. In short, it looks to be a hoax, or at least a wild exaggeration". Author Brian Dunning states that Morning of the Magicians helped promote belief in Die Glocke and Nazi occultism, and its absence in the historical record make it "increasingly unlikely that anything like it actually existed". According to Dunning, "all we have in the way of evidence is a third-hand anecdotal account of something that's desperately implausible, backed up by neither evidence nor even a corroborating account".
Author and historian Robert F. Dorr characterizes Die Glocke as among "the most imaginative of the conspiracy theories" that arose in post-World War II years, and typical of the fantasies of magical German weapons often popularized in pulp magazines such as the National Police Gazette.
Some theories circulating on Internet conspiracy sites claim that Die Glocke is located in a Nazi gold train that is buried in a tunnel beneath a mountain in Poland. Duncan Roads, editor of Nexus, has pointed out that the "Nazis on the Moon trope" is linked to wild speculations about Nazi anti-gravitational technology, such as those in Witkowski's book.
Journalist Patrick J. Kiger wrote that German propaganda of fictional wonder weapons, combined with the secrecy surrounding actual advanced technology such as the V-2 rocket captured at war's end by the U.S. military, helped spawn "sensational book-length exposes, web sites, and legions of enthusiasts who revel in rumors of science fiction-like weapons supposedly invented by Hitler's scientists". According to Kiger, Die Glocke is a popular example of such legends and speculation, citing former aerospace scientist David Myhra's contention that if antigravity devices actually existed, the Germans, desperate to stop the Allies' advance, would have used them.
See also
Kecksburg UFO incident
Nazi UFOs
Project Riese
Gross-Rosen concentration camp
References
Further reading
External links
The Nazi Bell - A Detailed Field Investigation to the Flytrap/Henge that Allegedly Held the Bell.
Die Glocke - Hitler's Anti-Gravity Machine?, by Mark Felton
Anti-gravity
Hoaxes in Germany
Military UFO conspiracy theories
Nazi-related conspiracy theories
Occultism in Nazism
Paranormal hoaxes
Supernatural urban legends
UFO-related phenomena | Die Glocke (conspiracy theory) | [
"Astronomy"
] | 1,705 | [
"Astronomical hypotheses",
"Anti-gravity"
] |
13,361,722 | https://en.wikipedia.org/wiki/PowerTOP | PowerTOP is a software utility designed to measure, explain and minimise a computer's electrical power consumption. It was released by Intel in 2007 under the GPLv2 license. It works for Intel, AMD, ARM and UltraSPARC processors.
PowerTOP analyzes the programs, device drivers, and kernel options running on a computer based on the Linux and Solaris operating systems, and estimates the power consumption resulting from their use. This information may be used to pinpoint software that results in excessive power use. This is particularly useful for laptop computer users who wish to prolong battery life, and data center operators, for whom electrical and cooling costs are a major expenditure.
Usage
The original focus was on CPU sleep states, and showing the programs or drivers responsible for "wakeups" which prevent CPUs entering sleep states. A database of known problems automatically provides more user friendly "tips" for specific sources of wakeups. However, it also shows information on CPU frequency scaling. Over time the database has been expanded to include tips on a wide range of power consumption issues.
Project activity
The latest release of PowerTOP (version 2.15) was made public on September 29, 2022. The project is hosted on GitHub.
See also
Power management
Green computing
LatencyTOP
top (software)
Run-time estimation of system and sub-system level power consumption
References
External links
Version Control Repository
Powertop for OpenSolaris – part of Project Tesla
Linux process- and task-management-related software
Computers and the environment | PowerTOP | [
"Technology"
] | 309 | [
"Computers and the environment",
"Computers",
"Computing and society"
] |
13,362,584 | https://en.wikipedia.org/wiki/Smith%E2%80%93Minkowski%E2%80%93Siegel%20mass%20formula | In mathematics, the Smith–Minkowski–Siegel mass formula (or Minkowski–Siegel mass formula) is a formula for the sum of the weights of the lattices (quadratic forms) in a genus, weighted by the reciprocals of the orders of their automorphism groups. The mass formula is often given for integral quadratic forms, though it can be generalized to quadratic forms over any algebraic number field.
In 0 and 1 dimensions the mass formula is trivial, in 2 dimensions it is essentially equivalent to Dirichlet's class number formulas for imaginary quadratic fields, and in 3 dimensions some partial results were given by Gotthold Eisenstein.
The mass formula in higher dimensions was first given by H. J. S. Smith, though his results were forgotten for many years.
It was rediscovered by Hermann Minkowski, and an error in Minkowski's paper was found and corrected by Carl Ludwig Siegel.
Many published versions of the mass formula have errors; in particular the 2-adic densities are difficult to get right, and it is sometimes forgotten that the trivial cases of dimensions 0 and 1 are different from the cases of dimension at least 2.
Conway and Sloane give an expository account and precise statement of the mass formula for integral quadratic forms, which is reliable because they check it on a large number of explicit cases.
For recent proofs of the mass formula see and .
The Smith–Minkowski–Siegel mass formula is essentially the constant term of the Weil–Siegel formula.
Statement of the mass formula
If f is an n-dimensional positive definite integral quadratic form (or lattice) then the mass of its genus is defined to be
$$m(f) = \sum_{\Lambda}\frac{1}{|\operatorname{Aut}(\Lambda)|},$$
where the sum is over all integrally inequivalent forms in the same genus as f, and Aut(Λ) is the automorphism group of Λ.
The form of the mass formula given by states that for n ≥ 2 the mass is given by
where mp(f) is the p-mass of f, given by
for sufficiently large r, where ps is the highest power of p dividing the determinant of f. The number N(pr) is the number of n by n matrices
X with coefficients that are integers mod p r such that
where A is the Gram matrix of f, or in other words the order of the automorphism group of the form reduced mod p r.
Some authors state the mass formula in terms of the p-adic density
instead of the p-mass. The p-mass is invariant under rescaling f but the p-density is not.
In the (trivial) cases of dimension 0 or 1 the mass formula needs some modifications. The factor of 2 in front represents the Tamagawa number of the special orthogonal group, which is only 1 in dimensions 0 and 1. Also the factor of 2 in front of mp(f) represents the index of the special orthogonal group in the orthogonal group, which is only 1 in 0 dimensions.
Evaluation of the mass
The mass formula gives the mass as an infinite product over all primes. This can be rewritten as a finite product as follows. For all but a finite number of primes (those not dividing 2 det(ƒ)) the p-mass mp(ƒ) is equal to the standard p-mass stdp(ƒ), given by
(for n = dim(ƒ) even)
(for n = dim(ƒ) odd)
where the Legendre symbol in the second line is interpreted as 0 if p divides 2 det(ƒ).
If all the p-masses have their standard value, then the total mass is the
standard mass
(For n odd)
(For n even)
where
D = (−1)n/2 det(ƒ)
The values of the Riemann zeta function for even positive integers s are given in terms of Bernoulli numbers by
$$\zeta(s) = \frac{(2\pi)^s}{2\, s!}\,|B_s|.$$
So the mass of ƒ is given as a finite product of rational numbers as
Evaluation of the p-mass
If the form f has a p-adic Jordan decomposition
where q runs through powers of p and fq has determinant prime to p and dimension n(q),
then the p-mass is given by
Here n(II) is the sum of the dimensions of all Jordan components of type 2 and p = 2, and n(I,I) is the total number of pairs of adjacent constituents fq, f2q that are both of type I.
The factor Mp(fq) is called a diagonal factor and is a power of p times the order of a certain orthogonal group over the field with p elements.
For odd p its value is given by
when n is odd, or
when n is even and (−1)n/2dq is a quadratic residue, or
when n is even and (−1)n/2dq is a quadratic nonresidue.
For p = 2 the diagonal factor Mp(fq) is notoriously tricky to calculate. (The notation is misleading as it depends not only on fq but also on f2q and fq/2.)
We say that fq is odd if it represents an odd 2-adic integer, and even otherwise.
The octane value of fq is an integer mod 8; if fq is even its octane value is 0 if the determinant is +1 or −1 mod 8, and is 4 if the determinant is +3 or −3 mod 8, while if fq is odd it can be diagonalized and its octane value is then the number of diagonal entries that are 1 mod 4 minus the number that are 3 mod 4.
We say that fq is bound if at least one of f2q and fq/2 is odd, and say it is free otherwise.
The integer t is defined so that the dimension of fq is 2t if fq is even, and 2t + 1 or 2t + 2 if fq is odd.
Then the diagonal factor Mp(fq) is given as follows.
when the form is bound or has octane value +2 or −2 mod 8 or
when the form is free and has octane value −1 or 0 or 1 mod 8 or
when the form is free and has octane value −3 or 3 or 4 mod 8.
Evaluation of ζD(s)
The required values of the Dirichlet series ζD(s) can be evaluated as follows. We write χ for the Dirichlet character with χ(m) given by 0 if m is even, and the Jacobi symbol if m is odd. We write k for the modulus of this character and k1 for its conductor, and put χ = χ1ψ where χ1 is the principal character mod k and ψ is a primitive character mod k1. Then
The functional equation for the L-series is
where G is the Gauss sum
If s is a positive integer then
where Bs(x) is a Bernoulli polynomial.
Examples
For the case of even unimodular lattices Λ of dimension n > 0 divisible by 8 the mass formula is
$$\sum_{\Lambda}\frac{1}{|\operatorname{Aut}(\Lambda)|} = \frac{|B_{n/2}|}{n}\prod_{1\le j< n/2}\frac{|B_{2j}|}{4j},$$
where Bk is a Bernoulli number.
Dimension n = 0
The formula above fails for n = 0, and in general the mass formula needs to be modified in the trivial cases when the dimension is at most 1. For n = 0 there is just one lattice, the zero lattice, of weight 1, so the total mass is 1.
Dimension n = 8
The mass formula gives the total mass as 1/696729600.
There is exactly one even unimodular lattice of dimension 8, the E8 lattice, whose automorphism group is the Weyl group of E8 of order 696729600, so this verifies the mass formula in this case.
Smith originally gave a nonconstructive proof of the existence of an even unimodular lattice of dimension 8 using the fact that the mass is non-zero.
Dimension n = 16
The mass formula gives the total mass as 691/277667181515243520000.
There are two even unimodular lattices of dimension 16, one with root system E8²
and automorphism group of order 2 × 696729600² = 970864271032320000, and one with root system D16 and automorphism group of order 2^15 × 16! = 685597979049984000.
So the mass formula is verified in this case: 1/970864271032320000 + 1/685597979049984000 = 691/277667181515243520000.
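Both of the checks above can be reproduced with exact rational arithmetic. The following Python sketch evaluates the even-unimodular mass formula stated earlier from hard-coded Bernoulli numbers and compares it against the automorphism-group orders quoted in the text; it is an illustrative verification, not part of the original article.

```python
from fractions import Fraction as F
from math import factorial

# Exact Bernoulli numbers B_2, B_4, ..., B_14
B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42), 8: F(-1, 30),
     10: F(5, 66), 12: F(-691, 2730), 14: F(7, 6)}

def even_unimodular_mass(n):
    """Mass of the genus of even unimodular lattices in dimension n (8 | n),
    using mass = |B_{n/2}|/n * prod_{1<=j<n/2} |B_{2j}|/(4j)."""
    m = abs(B[n // 2]) / n
    for j in range(1, n // 2):
        m *= abs(B[2 * j]) / (4 * j)
    return m

# Dimension 8: the E8 lattice alone, with |Aut| = 696729600.
assert even_unimodular_mass(8) == F(1, 696729600)

# Dimension 16: E8^2 and D16+, with |Aut| = 2*696729600**2 and 2**15*16!.
expected = F(1, 2 * 696729600 ** 2) + F(1, 2 ** 15 * factorial(16))
assert even_unimodular_mass(16) == expected

print(even_unimodular_mass(16))  # 691/277667181515243520000
```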
Dimension n = 24
There are 24 even unimodular lattices of dimension 24, called the Niemeier lattices. The mass formula for them is checked in .
Dimension n = 32
The mass in this case is large, more than 40 million. This implies that there are more than 80 million even
unimodular lattices of dimension 32, as each has automorphism group of order at least 2 so contributes at most 1/2 to the mass. By refining this argument, it has since been shown that there are more than a billion such lattices. In higher dimensions the mass, and hence the number of lattices, increases very rapidly.
Generalizations
Siegel gave a more general formula that counts the weighted number of representations of one quadratic form by forms in some genus; the Smith–Minkowski–Siegel mass formula is the special case when one form is the zero form.
Tamagawa showed that the mass formula was equivalent to the statement that the Tamagawa number of
the orthogonal group is 2, which is equivalent to saying that the Tamagawa number of its simply connected cover the spin group is 1. André Weil conjectured more generally that the Tamagawa number of any simply connected semisimple group is 1, and this conjecture was proved by Kottwitz in 1988.
A mass formula has also been given for unimodular lattices without roots (or with a given root system).
See also
Siegel identity
References
Quadratic forms
Hermann Minkowski | Smith–Minkowski–Siegel mass formula | [
"Mathematics"
] | 1,988 | [
"Quadratic forms",
"Number theory"
] |
13,363,170 | https://en.wikipedia.org/wiki/Xanthosine%20triphosphate | Xanthosine 5'-triphosphate (XTP) is a nucleotide that is not produced by - and has no known function in - living cells. Uses of XTP are, in general, limited to experimental procedures on enzymes that bind other nucleotides. Deamination of purine bases can result in accumulation of such nucleotides as ITP, dITP, XTP, and dXTP.
References
See also
Xanthosine
Xanthosine monophosphate
Nucleotides
Phosphate esters
Xanthines | Xanthosine triphosphate | [
"Chemistry"
] | 118 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
13,363,496 | https://en.wikipedia.org/wiki/Snow%20bridge | A snow bridge is an arc formed by snow across a crevasse, a crack in rock, a creek, or some other opening in terrain. It is typically formed by snow drift, which first creates a cornice, which may then grow to reach the other side of the opening.
Dangers
A snow bridge may completely cover the opening and thus present a danger by creating an illusion of unbroken surface under which the opening is concealed by an unknown thickness of snow, possibly only a few centimetres.
Snow bridges may also form inside a crevasse, making it appear shallow.
A snow bridge is thicker and stronger at the edge of a crevasse; therefore, a fall through a bridge usually happens at some distance from the edge.
See also
Ice bridge
References
Glaciology
Snow
Bridges
Ice bridges | Snow bridge | [
"Engineering"
] | 160 | [
"Structural engineering",
"Bridges"
] |
13,363,621 | https://en.wikipedia.org/wiki/Lundquist%20number | In plasma physics, the Lundquist number (denoted by S) is a dimensionless ratio which compares the timescale of an Alfvén wave crossing to the timescale of resistive diffusion. It is a special case of the magnetic Reynolds number when the Alfvén velocity is the typical velocity scale of the system, and is given by S = L v_A / η,
where L is the typical length scale of the system, η is the magnetic diffusivity and v_A is the Alfvén velocity of the plasma.
High Lundquist numbers indicate highly conducting plasmas, while low Lundquist numbers indicate more resistive plasmas. Laboratory plasma experiments typically have Lundquist numbers between , while in astrophysical situations the Lundquist number can be greater than . Considerations of Lundquist number are especially important in magnetic reconnection.
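As a worked illustration of the definition S = L v_A / η, the Lundquist number can be computed directly from a length scale, an Alfvén speed and a magnetic diffusivity; the numerical values in the sketch below are arbitrary examples.

```python
def lundquist_number(length_scale_m: float,
                     alfven_speed_m_per_s: float,
                     magnetic_diffusivity_m2_per_s: float) -> float:
    """Dimensionless Lundquist number S = L * v_A / eta."""
    return length_scale_m * alfven_speed_m_per_s / magnetic_diffusivity_m2_per_s

# Illustrative values: a 1 m plasma column with v_A = 1e5 m/s and eta = 1 m^2/s.
print(lundquist_number(1.0, 1.0e5, 1.0))  # -> 100000.0, i.e. S ~ 1e5
```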
See also
Magnetic Prandtl number
Péclet number
Stuart number
References
Plasma parameters | Lundquist number | [
"Physics"
] | 172 | [
"Plasma physics stubs",
"Plasma physics"
] |
161,565 | https://en.wikipedia.org/wiki/Olry%20Terquem | Olry Terquem (16 June 1782 – 6 May 1862) was a French mathematician. He is known for his works in geometry and for founding two scientific journals, one of which was the first journal about the history of mathematics. He was also the pseudonymous author (as Tsarphati) of a sequence of letters advocating radical reform in Judaism. He was French Jewish.
Education and career
Terquem grew up speaking Yiddish, and studying only the Hebrew language and the Talmud. However, after the French Revolution his family came into contact with a wider society, and his studies broadened. Despite his poor French he was admitted to study mathematics at the École Polytechnique in Paris, beginning in 1801, as only the second Jew to study there. He became an assistant there in 1803, and earned his doctorate in 1804.
After finishing his studies he moved to Mainz (at that time known as Mayence and part of imperial France), where he taught at the Imperial Lycée. In 1811 he moved to the artillery school in the same city, in 1814 he moved again to the artillery school in Grenoble, and in 1815 he became the librarian of the Dépôt Central de l'Artillerie in Paris, where he remained for the rest of his life. He became an officer of the Legion of Honor in 1852. After he died, his funeral was officiated by Lazare Isidor, the Chief Rabbi of Paris and later of France, and attended by over 12 generals headed by Edmond Le Bœuf.
Mathematics
Terquem translated works concerning artillery, was the author of several textbooks, and became an expert on the history of mathematics. Terquem and Camille-Christophe Gerono were the founding editors of the Nouvelles Annales de Mathématiques in 1842. Terquem also founded another journal in 1855, the Bulletin de Bibliographie, d'Histoire et de Biographie de Mathématiques, which was published as a supplement to the Nouvelles Annales, and he continued editing it until 1861. This was the first journal dedicated to the history of mathematics.
In geometry, Terquem is known for naming the nine-point circle and fully proving its properties. This is a circle that passes through nine special points of any given triangle. Karl Wilhelm Feuerbach had previously observed that the three feet of the altitudes of a triangle and the three midpoints of its sides all lie on a single circle, but Terquem was the first to prove that this circle also contains the midpoints of the line segments connecting each vertex to the orthocenter of the triangle. He also gave a new proof of Feuerbach's theorem that the nine-point circle is tangent to the incircle and excircles of a triangle.
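The nine-point property can be verified numerically for any particular triangle; the following illustrative Python sketch, using an arbitrarily chosen triangle, checks that the three side midpoints, the three feet of the altitudes, and the three midpoints of the vertex-to-orthocenter segments all lie on a single circle.

```python
import math

def circle_through(p, q, r):
    """Center and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def altitude_foot(a, b, c):
    """Foot of the perpendicular dropped from vertex a onto line bc."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    dx, dy = cx - bx, cy - by
    t = ((ax - bx) * dx + (ay - by) * dy) / (dx * dx + dy * dy)
    return (bx + t * dx, by + t * dy)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# An arbitrary scalene triangle.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

# Orthocenter via the identity H = A + B + C - 2 * O, with O the circumcenter.
O, _ = circle_through(A, B, C)
H = (A[0] + B[0] + C[0] - 2 * O[0], A[1] + B[1] + C[1] - 2 * O[1])

nine_points = [midpoint(A, B), midpoint(B, C), midpoint(C, A),   # side midpoints
               altitude_foot(A, B, C), altitude_foot(B, C, A),
               altitude_foot(C, A, B),                           # altitude feet
               midpoint(A, H), midpoint(B, H), midpoint(C, H)]   # Euler points

center, radius = circle_through(*nine_points[:3])
assert all(abs(math.hypot(x - center[0], y - center[1]) - radius) < 1e-9
           for x, y in nine_points)
print("All nine points lie on one circle of radius", radius)
```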
Terquem's other contributions to mathematics include naming the pedal curve of another curve, and counting the number of perpendicular lines from a point to an algebraic curve as a function of the degree of the curve. He was also the first to observe that the minimum or maximum value of a symmetric function is often obtained by setting all variables equal to each other.
Jewish activism
Terquem has been called the first, most radical, and most outspoken of the major proponents of Jewish reform in France, "the enfant terrible of French Judaism". He published 27 "letters of an Israelite" under the name "Tsarphati" (a Hebrew word for a Frenchman), pushing for reforms that in his view would better assimilate Jews into modern life and better accommodate working-class Jews. The first nine of these appeared in L'Israélite Français, and the remaining 18 as letters to the editor in Courrier de la Moselle. Terquem rejected the Talmud, proposed to codify intermarriage between Jews and non-Jews, pushed to move the sabbath to Sunday, advocated using other languages than Hebrew for prayers, and fought against circumcision, regressive attitudes towards women, and the Jewish calendar. However, he had little effect on the Jewish practices of the time.
Despite Terquem's calls for reform, and despite having married a Catholic woman and raised his children as Catholic, he requested that his funeral be held with all the proper Jewish rites.
References
1782 births
1862 deaths
19th-century French mathematicians
18th-century French Jews
Geometers
French historians of mathematics
Officers of the Legion of Honour
French librarians | Olry Terquem | [
"Mathematics"
] | 904 | [
"Geometers",
"Geometry"
] |
161,711 | https://en.wikipedia.org/wiki/Truth%20value | In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false).
Computing
In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null are treated as false, and strings with content (like "abc"), other numbers, and objects evaluate to true. Sometimes these classes of expressions are called falsy and truthy. For example, in Lisp, nil, the empty list, is treated as false, and all other values are treated as true. In C, the number 0 or 0.0 is false, and all other values are treated as true.
In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false are sometimes called falsy (of which the complement is truthy) to distinguish between strictly type-checked and coerced Booleans (see also: JavaScript syntax#Type conversion). As opposed to Python, empty containers (Arrays, Maps, Sets) are considered truthy. Languages such as PHP also use this approach.
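As an illustration of this truthy/falsy coercion, the short Python snippet below shows which sample values Python treats as false in a Boolean context; as noted above, the exact sets of falsy values differ between languages.

```python
samples = [0, 0.0, "", "abc", [], [0], {}, {"key": 1}, None, float("nan")]
for value in samples:
    print(repr(value).ljust(14), "->", bool(value))

# In Python, 0, 0.0, "", [], {} and None are falsy; "abc", [0], {"key": 1}
# and even float("nan") are truthy (unlike JavaScript, where NaN is falsy
# and empty arrays and objects are truthy).
```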
Classical logic
In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤), and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws:
¬(p ∧ q) = ¬p ∨ ¬q
¬(p ∨ q) = ¬p ∧ ¬q
Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is referred to as valuation.
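Because the Boolean domain has only two elements, identities such as De Morgan's laws can be verified by checking every valuation, as in the following minimal Python sketch.

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))

print("De Morgan's laws hold for all four valuations of (p, q)")
```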
Intuitionistic and constructive logic
Whereas in classical logic truth values form a Boolean algebra, in intuitionistic logic, and more generally, constructive mathematics, the truth values form a Heyting algebra. Such truth values may express various aspects of validity, including locality, temporality, or computational content.
For example, one may use the open sets
of a topological space as intuitionistic truth values, in which case the truth value of a formula expresses where the formula holds, not whether it holds.
In realizability truth values are sets of programs, which can be understood as computational evidence of validity of a formula. For example, the truth value of the statement "for every number there is a prime larger than it" is the set of all programs that take as input a number n, and output a prime larger than n.
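One member of that set of programs, that is, one possible realizer of the statement, might look like the following illustrative Python sketch.

```python
def prime_above(n: int) -> int:
    """Given n, return some prime strictly larger than n."""
    def is_prime(m: int) -> bool:
        return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(prime_above(10))   # 11
print(prime_above(100))  # 101
```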
In category theory, truth values appear as the elements of the subobject classifier. In particular, in a topos every formula of higher-order logic may be assigned a truth value in the subobject classifier.
Even though a Heyting algebra may have many elements, this should not be understood as there being truth values that are neither true nor false, because intuitionistic logic proves ¬¬(p ∨ ¬p) ("it is not the case that p is neither true nor false").
In intuitionistic type theory, the Curry-Howard correspondence exhibits an equivalence of propositions and types, according to which validity is equivalent to inhabitation of a type.
For other notions of intuitionistic truth values, see the Brouwer–Heyting–Kolmogorov interpretation and .
Multi-valued logic
Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval [0, 1] such a structure is a total order; this may be expressed as the existence of various degrees of truth.
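One common choice of connectives on the unit interval, used here purely for illustration, is the min/max semantics of basic fuzzy logic, as in the following Python sketch.

```python
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# Degrees of truth are drawn from the unit interval [0, 1].
tall, fast = 0.7, 0.4
print(fuzzy_and(tall, fast))  # 0.4
print(fuzzy_or(tall, fast))   # 0.7
print(fuzzy_not(tall))        # 0.30000000000000004 (0.3 up to rounding)
```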
Algebraic semantics
Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary truth of formulae.
But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, compared to Boolean algebra semantics of classical propositional calculus.
See also
Agnosticism
Bayesian probability
Circular reasoning
Degree of truth
False dilemma
Paradox
Semantic theory of truth
Slingshot argument
Supervaluationism
Truth-value semantics
Verisimilitude
References
External links
Concepts in logic
Propositions
Logical truth
Concepts in epistemology | Truth value | [
"Mathematics"
] | 1,001 | [
"Mathematical logic",
"Logical truth"
] |
161,744 | https://en.wikipedia.org/wiki/Interpersonal%20relationship | In social psychology, an interpersonal relation (or interpersonal relationship) describes a social association, connection, or affiliation between two or more persons. It overlaps significantly with the concept of social relations, which are the fundamental unit of analysis within the social sciences. Relations vary in degrees of intimacy, self-disclosure, duration, reciprocity, and power distribution. The main themes or trends of the interpersonal relations are: family, kinship, friendship, love, marriage, business, employment, clubs, neighborhoods, ethical values, support and solidarity. Interpersonal relations may be regulated by law, custom, or mutual agreement, and form the basis of social groups and societies. They appear when people communicate or act with each other within specific social contexts, and they thrive on equitable and reciprocal compromises.
Interdisciplinary analysis of relationships draws heavily upon the other social sciences, including, but not limited to: anthropology, communication, cultural studies, economics, linguistics, mathematics, political science, social work, and sociology. This scientific analysis had evolved during the 1990s and has become "relationship science", through the research done by Ellen Berscheid and Elaine Hatfield. This interdisciplinary science attempts to provide evidence-based conclusions through the use of data analysis.
Types
Intimate relationships
Romantic relationships
Romantic relationships have been defined in countless ways, by writers, philosophers, religions, scientists, and in the modern day, relationship counselors. Two popular definitions of love are Sternberg's Triangular Theory of Love and Fisher's theory of love. Sternberg defines love in terms of intimacy, passion, and commitment, which he claims exist in varying levels in different romantic relationships. Fisher defines love as composed of three stages: attraction, romantic love, and attachment. Romantic relationships may exist between two people of any gender, or among a group of people, as in polyamory.
On the basis of openness, all romantic relationships are of 2 types: open and closed. Closed relationships are strictly against romantic or sexual activity of partners with anyone else outside the relationships. In an open relationship, all partners remain committed to each other, but allow themselves and their partner to have relationships with others.
On the basis of number of partners, they are of 2 types: monoamorous and polyamorous. A monoamorous relationship is between only two individuals. A polyamorous relationship is among three or more individuals.
Romance
While many individuals recognize the single defining quality of a romantic relationship as the presence of love, it is impossible for romantic relationships to survive without the component of interpersonal communication. Within romantic relationships, love is therefore equally difficult to define. Hazan and Shaver define love, using Ainsworth's attachment theory, as comprising proximity, emotional support, self-exploration, and separation distress when parted from the loved one. Other components commonly agreed to be necessary for love are physical attraction, similarity, reciprocity, and self-disclosure.
Life stages
Early adolescent relationships are characterized by companionship, reciprocity, and sexual experiences. As emerging adults mature, they begin to develop attachment and caring qualities in their relationships, including love, bonding, security, and support for partners. Earlier relationships also tend to be shorter and exhibit greater involvement with social networks. Later relationships are often marked by shrinking social networks, as the couple dedicates more time to each other than to associates. Later relationships also tend to exhibit higher levels of commitment.
Most psychologists and relationship counselors predict a decline of intimacy and passion over time, replaced by a greater emphasis on companionate love (differing from adolescent companionate love in the caring, committed, and partner-focused qualities). However, couple studies have found no decline in intimacy nor in the importance of sex, intimacy, and passionate love to those in longer or later-life relationships. Older people tend to be more satisfied in their relationships, but face greater barriers to entering new relationships than do younger or middle-aged people. Older women in particular face social, demographic, and personal barriers; men aged 65 and older are nearly twice as likely as women to be married, and widowers are nearly three times as likely to be dating 18 months following their partner's loss compared to widows.
Significant other
The term significant other gained popularity during the 1990s, reflecting the growing acceptance of 'non-heteronormative' relationships. It can be used to avoid making an assumption about the gender or relational status (e.g. married, cohabitating, civil union) of a person's intimate partner. Cohabiting relationships continue to rise, with many partners considering cohabitation to be nearly as serious as, or a substitute for, marriage. In particular, LGBTQ+ people often face unique challenges in establishing and maintaining intimate relationships. The strain of internalized discrimination, socially ingrained homophobia, transphobia and other forms of discrimination against LGBTQ+ people, and the social pressure of presenting themselves in line with socially acceptable gender norms can affect their health, quality of life, satisfaction, emotions, etc., inside and outside their relationships. LGBTQ+ youth also lack the social support and peer connections enjoyed by hetero-normative young people. Nonetheless, comparative studies of homosexual and heterosexual couples have found few differences in relationship intensity, quality, satisfaction, or commitment.
Marital relationship
Although nontraditional relationships continue to rise, marriage still makes up the majority of relationships except among emerging adults. It is also still considered by many to occupy a place of greater importance among family and social structures.
Family relationships
Parent-child
In ancient times, parent-child relationships were often marked by fear, either of rebellion or abandonment, resulting in the strict filial roles in, for example, ancient Rome and China. Freud conceived of the Oedipal complex, the supposed obsession that young boys have towards their mothers and the accompanying fear and rivalry with their fathers, and the Electra complex, in which the young girl feels that her mother has castrated her and therefore becomes obsessed with her father. Freud's ideas influenced thought on parent-child relationships for decades.
Another early conception of parent-child relationships was that love only existed as a biological drive for survival and comfort on the child's part. In 1958, however, Harry Harlow's study "The Nature of Love", comparing rhesus infants' reactions to wire surrogate "mothers" and cloth surrogate "mothers", demonstrated that infants sought comfort and affection from a caregiver, not merely food.
The study laid the groundwork for Mary Ainsworth's attachment theory, showing how the infants used their cloth "mothers" as a secure base from which to explore. In a series of studies using the strange situation, a scenario in which an infant is separated from then reunited with the parent, Ainsworth defined three styles of parent-child relationship.
Securely attached infants miss the parent, greet them happily upon return, and show normal exploration and lack of fear when the parent is present.
Insecure avoidant infants show little distress upon separation and ignore the caregiver when they return. They explore little when the parent is present. Infants also tend to be emotionally unavailable.
Insecure ambivalent infants are highly distressed by separation, but continue to be distressed upon the parent's return; these infants also explore little and display fear even when the parent is present.
Some psychologists have suggested a fourth attachment style, disorganized, so called because the infants' behavior appeared disorganized or disoriented.
Secure attachments are linked to better social and academic outcomes and greater moral internalization as research proposes the idea that parent-child relationships play a key role in the developing morality of young children. Secure attachments are also linked to less delinquency for children, and have been found to predict later relationship success.
For most of the late nineteenth through the twentieth century, the perception of adolescent-parent relationships was that of a time of upheaval. G. Stanley Hall popularized the "Sturm und Drang", or storm and stress, model of adolescence. Psychological research has painted a much tamer picture. Although adolescents are more risk-seeking and emerging adults have higher suicide rates, they are largely less volatile and have much better relationships with their parents than the storm and stress model would suggest. Early adolescence often marks a decline in parent-child relationship quality, which then re-stabilizes through adolescence, and relationships are sometimes better in late adolescence than prior to its onset. With the increasing average age at marriage and more youths attending college and living with parents past their teens, the concept of a new period called emerging adulthood gained popularity. This is considered a period of uncertainty and experimentation between adolescence and adulthood. During this stage, interpersonal relationships are considered to be more self-focused, and relationships with parents may still be influential.
Siblings
Sibling relationships have a profound effect on social, psychological, emotional, and academic outcomes. Although proximity and contact usually decrease over time, sibling bonds continue to have an effect throughout life. Sibling bonds are among the few enduring relationships humans may experience. Sibling relationships are affected by parent-child relationships, such that sibling relationships in childhood often reflect the positive or negative aspects of children's relationships with their parents.
Other examples of interpersonal relationship
Egalitarian and platonic friendship
Enemy
Frenemy — a person with whom an individual maintains a friendly interaction despite underlying conflict, possibly encompassing rivalry, mistrust, jealousy or competition
Neighbor
Familiar stranger
Official
Queerplatonic relationship
Business relationships are generally held to be distinct from personal relations; apart from occasional departures from the norm, they are based on impersonal interests and on rational rather than emotional concerns.
Business relationships
Partnership
Employer and employee
Contractor
Customer
Landlord and tenant
Co-worker
Ways that interpersonal relationships begin
Proximity:
Proximity increases the chance of repeated exposure to the same person. Long-term exposure builds familiarity, which makes strong feelings of liking or disliking more likely to develop.
Technological advance:
The Internet removes the problem of lack of communication due to long distance. People can communicate with others who live far away from them through video calls or text. The Internet is a medium for people to be close to others who are not physically near them.
Similarity:
People prefer to make friends with others who are similar to them because their thoughts and feelings are more likely to be understood.
Stages
Interpersonal relationships are dynamic systems that change continuously during their existence. Like living organisms, relationships have a beginning, a lifespan, and an end. They tend to grow and improve gradually, as people get to know each other and become closer emotionally, or they gradually deteriorate as people drift apart, move on with their lives and form new relationships with others. One of the most influential models of relationship development was proposed by psychologist George Levinger. This model was formulated to describe heterosexual, adult romantic relationships, but it has been applied to other kinds of interpersonal relations as well. According to the model, the natural development of a relationship follows five stages:
Acquaintance and acquaintanceship – Becoming acquainted depends on previous relationships, physical proximity, first impressions, and a variety of other factors. If two people begin to like each other, continued interactions may lead to the next stage, but acquaintance can continue indefinitely. Another example is the association.
Buildup – During this stage, people begin to trust and care about each other. The need for intimacy, compatibility and such filtering agents as common background and goals will influence whether or not interaction continues.
Continuation – This stage follows a mutual commitment to quite a strong and close long-term friendship, romantic relationship, or even marriage. It is generally a long, relatively stable period. Nevertheless, continued growth and development will occur during this time. Mutual trust is important for sustaining the relationship.
Deterioration – Not all relationships deteriorate, but those that do tend to show signs of trouble. Boredom, resentment, and dissatisfaction may occur, and individuals may communicate less and avoid self-disclosure. Loss of trust and betrayals may take place as the downward spiral continues, eventually ending the relationship. (Alternately, the participants may find some way to resolve the problems and reestablish trust and belief in others.)
Ending – The final stage marks the end of the relationship, either by breakups, death or by spatial separation for quite some time and severing all existing ties of either friendship or romantic love.
Terminating a relationship
According to the latest Systematic Review of the Economic Literature on the Factors associated with Life Satisfaction (dating from 2007), stable and secure relationships are beneficial, and correspondingly, relationship dissolution is harmful.
The American Psychological Association has summarized the evidence on breakups. Breaking up can actually be a positive experience when the relationship did not expand the self and when the breakup leads to personal growth. They also recommend some ways to cope with the experience:
Purposefully focusing on the positive aspects of the breakup ("factors leading up to the break-up, the actual break-up, and the time right after the break-up")
Minimizing the negative emotions
Journaling the positive aspects of the breakup (e.g. "comfort, confidence, empowerment, energy, happiness, optimism, relief, satisfaction, thankfulness, and wisdom"). This exercise works best, although not exclusively, when the breakup is mutual.
Less time between a breakup and a subsequent relationship predicts higher self-esteem, attachment security, emotional stability, respect for your new partner, and greater well-being. Furthermore, rebound relationships do not last any shorter than regular relationships. 60% of people are friends with one or more ex. 60% of people have had an off-and-on relationship. 37% of cohabiting couples, and 23% of the married, have broken up and gotten back together with their existing partner.
Terminating a marital relationship implies divorce or annulment. One reason cited for divorce is infidelity. The determinants of unfaithfulness are debated by dating service providers, feminists, academics, and science communicators. According to Psychology Today, women's, rather than men's, level of commitment more strongly determines if a relationship will continue.
Pathological relationships
Research conducted in Iran and other countries has shown that conflicts are common between couples, and, in Iran, 92% of the respondents reported that they had conflicts in their marriages. These conflicts can cause major problems for couples and they are caused due to multiple reasons.
Abusive
Abusive relationships involve either maltreatment or violence such as physical abuse, physical neglect, sexual abuse, and emotional maltreatment. Abusive relationships within the family are very prevalent in the United States and usually involve women or children as victims. Common individual factors for abusers include low self-esteem, poor impulse control, external locus of control, drug use, alcohol abuse, and negative affectivity. There are also external factors such as stress, poverty, and loss which contribute to likelihood of abuse.
Codependent
Codependency initially focused on a codependent partner enabling substance abuse, but it has become more broadly defined to describe a dysfunctional relationship with extreme dependence on or preoccupation with another person. There are some who even refer to codependency as an addiction to the relationship. The focus of codependents tends to be on the emotional state, behavioral choices, thoughts, and beliefs of another person. Often those who are codependent neglect themselves in favor of taking care of others and have difficulty fully developing an identity of their own.
Narcissistic
Narcissists focus on themselves and often distance themselves from intimate relationships; the focus of narcissistic interpersonal relationships is to promote one's self-concept. Generally, narcissists show less empathy in relationships and view love pragmatically or as a game involving others' emotions.
Narcissists are usually part of the personality disorder, narcissistic personality disorder (NPD). In relationships, they tend to affect the other person as they attempt to use them to enhance their self-esteem. Specific types of NPD make a person incapable of having an interpersonal relationship due to their being cunning, envious, and contemptuous.
Importance
Human beings are innately social and are shaped by their experiences with others. There are multiple perspectives to understand this inherent motivation to interact with others.
Need to belong
According to Maslow's hierarchy of needs, humans need to feel love (sexual/nonsexual) and acceptance from social groups (family, peer groups). In fact, the need to belong is so innately ingrained that it may be strong enough to overcome physiological and safety needs, such as children's attachment to abusive parents or staying in abusive romantic relationships. Such examples illustrate the extent to which the psychobiological drive to belong is entrenched.
Social exchange
Another way to appreciate the importance of relationships is in terms of a reward framework. This perspective suggests that individuals engage in relations that are rewarding in both tangible and intangible ways. The concept fits into a larger theory of social exchange. This theory is based on the idea that relationships develop as a result of cost–benefit analysis. Individuals seek out rewards in interactions with others and are willing to pay a cost for said rewards. In the best-case scenario, rewards will exceed costs, producing a net gain. This can lead to "shopping around" or constantly comparing alternatives to maximize the benefits or rewards while minimizing costs.
Relational self
Relationships are also important for their ability to help individuals develop a sense of self. The relational self is the part of an individual's self-concept that consists of the feelings and beliefs that one has regarding oneself that develops based on interactions with others. In other words, one's emotions and behaviors are shaped by prior relationships. Relational self theory posits that prior and existing relationships influence one's emotions and behaviors in interactions with new individuals, particularly those individuals that remind them of others in their life. Studies have shown that exposure to someone who resembles a significant other activates specific self-beliefs, changing how one thinks about oneself in the moment more so than exposure to someone who does not resemble one's significant other.
Power and dominance
Power is the ability to influence the behavior of other people. When two parties have or assert unequal levels of power, one is termed "dominant" and the other "submissive". Expressions of dominance can communicate an intention to assert or maintain dominance in a relationship. Being submissive can be beneficial because it saves time, limits emotional stress, and may avoid hostile actions such as withholding of resources, cessation of cooperation, termination of the relationship, maintaining a grudge, or even physical violence. Submission occurs in different degrees; for example, some employees may follow orders without question, whereas others might express disagreement but concede when pressed.
Groups of people can form a dominance hierarchy. For example, a hierarchical organization uses a command hierarchy for top-down management. This can reduce time wasted in conflict over unimportant decisions, prevents inconsistent decisions from harming the operations of the organization, maintain alignment of a large population of workers with the goals of the owners (which the workers might not personally share) and, if promotion is based on merit, help ensure that the people with the best expertise make important decisions. This contrasts with group decision-making and systems which encourage decision-making and self-organization by front-line employees, who in some cases may have better information about customer needs or how to work efficiently. Dominance is only one aspect of organizational structure.
A power structure describes power and dominance relationships in a larger society. For example, a feudal society under a monarchy exhibits a strong dominance hierarchy in both economics and physical power, whereas dominance relationships in a society with democracy and capitalism are more complicated.
In business relationships, dominance is often associated with economic power. For example, a business may adopt a submissive attitude to customer preferences (stocking what customers want to buy) and complaints ("the customer is always right") in order to earn more money. A firm with monopoly power may be less responsive to customer complaints because it can afford to adopt a dominant position. In a business partnership a "silent partner" is one who adopts a submissive position in all aspects, but retains financial ownership and a share of the profits.
Two parties can be dominant in different areas. For example, in a friendship or romantic relationship, one person may have strong opinions about where to eat dinner, whereas the other has strong opinions about how to decorate a shared space. It could be beneficial for the party with weak preferences to be submissive in that area because it will not make them unhappy and avoids conflict with the party that would be unhappy.
The breadwinner model is associated with gender role assignments where the male in a heterosexual marriage would be dominant as they are responsible for economic provision.
Relationship satisfaction
Social exchange theory and Rusbult's investment model show that relationship satisfaction is based on three factors: rewards, costs, and comparison levels (Miller, 2012). Rewards refer to any aspects of the partner or relationship that are positive. Conversely, costs are the negative or unpleasant aspects of the partner or their relationship. The comparison level includes what each partner expects of the relationship. The comparison level is influenced by past relationships, and general relationship expectations they are taught by family and friends.
Individuals in long-distance relationships (LDRs) rated their relationships as more satisfying than individuals in proximal relationships (PRs). Alternatively, Holt and Stone (1988) found that long-distance couples who were able to meet with their partner at least once a month had similar satisfaction levels to unmarried couples who cohabitated. Also, relationship satisfaction was lower for members of LDRs who saw their partner less frequently than once a month. LDR couples reported the same level of relationship satisfaction as couples in PRs, despite only seeing each other on average once every 23 days.
Social exchange theory and the investment model both theorize that relationships that are high in cost would be less satisfying than relationships that are low in cost. LDRs have a higher level of costs than PRs; therefore, one would assume that LDRs are less satisfying than PRs. However, individuals in LDRs are more satisfied with their relationships compared to individuals in PRs. This can be explained by unique aspects of the LDRs, how the individuals use relationship maintenance behaviors, and the attachment styles of the individuals in the relationships. Therefore, the costs and benefits of the relationship are subjective to the individual, and people in LDRs tend to report lower costs and higher rewards in their relationship compared to PRs.
Theories and empirical research
Confucianism
Confucianism is a study and theory of relationships, especially within hierarchies. Social harmony—the central goal of Confucianism—results in part from every individual knowing their place in the social order and playing their part well. Particular duties arise from each person's particular situation in relation to others. The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders; and as a senior in relation to younger siblings, students, and others. Juniors are considered in Confucianism to owe their seniors reverence and seniors have duties of benevolence and concern toward juniors. A focus on mutuality is prevalent in East Asian cultures to this day.
Minding relationships
The mindfulness theory of relationships shows how closeness in relationships may be enhanced. Minding is the "reciprocal knowing process involving the nonstop, interrelated thoughts, feelings, and behaviors of persons in a relationship." Five components of "minding" include:
Knowing and being known: seeking to understand the partner
Making relationship-enhancing attributions for behaviors: giving the benefit of the doubt
Accepting and respecting: empathy and social skills
Maintaining reciprocity: active participation in relationship enhancement
Continuity in minding: persisting in mindfulness
In popular culture
Popular perceptions
Popular perceptions of intimate relationships are strongly influenced by movies and television. Common messages are that love is predestined, love at first sight is possible, and that love with the right person always succeeds. Those who consume the most romance-related media tend to believe in predestined romance and that those who are destined to be together implicitly understand each other. These beliefs, however, can lead to less communication and problem-solving as well as giving up on relationships more easily when conflict is encountered.
Social media
Social media has changed the face of interpersonal relationships. Romantic interpersonal relationships are no less impacted. For example, in the United States, Facebook has become an integral part of the dating process for emerging adults. Social media can have both positive and negative impacts on romantic relationships. For example, supportive social networks have been linked to more stable relationships. However, social media usage can also facilitate conflict, jealousy, and passive-aggressive behaviors such as spying on a partner. Aside from direct effects on the development, maintenance, and perception of romantic relationships, excessive social network usage is linked to jealousy and dissatisfaction in relationships.
A growing segment of the population is engaging in purely online dating, sometimes but not always moving towards traditional face-to-face interactions. These online relationships differ from face-to-face relationships; for example, self-disclosure may be of primary importance in developing an online relationship. Conflict management differs, since avoidance is easier and conflict resolution skills may not develop in the same way. Additionally, the definition of infidelity is both broadened and narrowed, since physical infidelity becomes easier to conceal but emotional infidelity (e.g. chatting with more than one online partner) becomes a more serious offense.
See also
I and Thou
Impact of prostitution on mental health
Interactionism
Interpersonal attraction
Interpersonal tie
Outline of relationships
Relational mobility
Relational models theory
Relationship status
Relationship forming
Social connection
Socionics
Relationship Science
References
Further reading
External links | Interpersonal relationship | [
"Biology"
] | 5,221 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
161,758 | https://en.wikipedia.org/wiki/Poppy | A poppy is a flowering plant in the subfamily Papaveroideae of the family Papaveraceae. Poppies are herbaceous plants, often grown for their colourful flowers. One species of poppy, Papaver somniferum, is the source of the narcotic drug mixture opium, which contains powerful medicinal alkaloids such as morphine and has been used since ancient times as an analgesic and narcotic medicinal and recreational drug. It also produces edible seeds. Following the trench warfare in the poppy fields of Flanders, Belgium, during World War I, poppies have become a symbol of remembrance of soldiers who have died during wartime, especially in the UK, Canada, Australia, New Zealand and other Commonwealth realms.
Description
Poppies are herbaceous annual, biennial or short-lived perennial plants. Some species are monocarpic, dying after flowering.
Poppies can be over tall with flowers up to across. Flowers of species (not cultivars) have 4 or 6 petals, many stamens forming a conspicuous whorl in the center of the flower and an ovary of from 2 to many fused carpels. The petals are showy, may be of almost any colour and some have markings. The petals are crumpled in the bud and as blooming finishes, the petals often lie flat before falling away.
In the temperate zones, poppies bloom from spring into early summer. Most species secrete latex when injured. Bees use poppies as a pollen source. The pollen of the oriental poppy, Papaver orientale, is dark blue, that of the field or corn poppy (Papaver rhoeas) is grey to dark green. The opium poppy, Papaver somniferum, grows wild in Southeast Europe and Southeast Asia. It is believed that it originated in the Mediterranean region.
Poppies belong to the subfamily Papaveroideae of the family Papaveraceae, which includes the following genera:
Papaver – Papaver rhoeas, Papaver somniferum, Papaver orientale, Papaver nudicaule, Papaver cambricum
Eschscholzia – Eschscholzia californica
Meconopsis – Meconopsis napaulensis
Glaucium - the horned poppies including Glaucium flavum and Glaucium corniculatum
Stylophorum – celandine poppy
Argemone – prickly poppy
Romneya – matilija poppy and relatives
Canbya – pygmy poppy
Stylomecon – wind poppy
Arctomecon – desert bearpaw poppy
Hunnemannia – tulip poppy
Dendromecon – tree poppy
Uses and cultivation
The flowers of most poppy species are attractive and are widely cultivated as annual or perennial ornamental plants. This has resulted in a number of commercially important cultivars, such as the Shirley poppy, a cultivar of Papaver rhoeas and semi-double or double (flore plena) forms of the opium poppy Papaver somniferum and oriental poppy (Papaver orientale). Poppies of several other genera are also cultivated in gardens.
Poppy seeds are rich in oil, carbohydrates, calcium and protein. Poppy oil is often used as cooking oil, salad dressing oil, or in products such as margarine. Poppy oil can also be added to spices for cakes or breads. Poppy products are also used in different paints, varnishes, and some cosmetics.
A few species have other uses, principally as sources of drugs and foods. The opium poppy is widely cultivated and its worldwide production is monitored by international agencies. It is used for production of dried latex and opium, the principal precursor of narcotic and analgesic opiates such as morphine, heroin, and codeine.
Traditional medicine
Poppy seeds contain small quantities of both morphine and codeine, which are pain-relieving drugs. Poppy seeds and fixed oils can also be nonnarcotic because when they are harvested about twenty days after the flower has opened, the morphine is no longer present. Poppy cultivation is strictly regulated worldwide, with the exception of India where opium gum, which also contains the analgesic thebaine, is legally produced.
History
Papaver somniferum was domesticated by the indigenous people of Western and Central Europe between 6000 and 3500 BC. However, it is believed that its origins may come from the Sumerian people, where the first use of opium was recognized. Poppies and opium made their way around the world along the silk road. Juglets resembling poppy seed pods have been discovered with trace amounts of opium and the flower appeared in jewelry and on art pieces in Ancient Egypt, dated 1550–1292 BC.
The eradication of poppy cultivation came about in the early 1900s through international conferences due to safety concerns associated with the production of opium. In the 1970s the American war on drugs targeted Turkish production of the plant, leading to a more negative popular opinion of the U.S.
In culture
The girl's given name "Poppy" is taken from the name of the flower.
A poppy flower is depicted on the reverse of the Macedonian 500-denar banknote, issued in 1996 and 2003. The poppy is also part of the coat of arms of North Macedonia.
Canada has issued special quarters (25-cent coins) with a red poppy on the reverse in 2004, 2008, 2010, and 2015. The 2004 Canadian "poppy" quarter was the world's first coloured circulation coin.
Symbolism
Poppies have long been used as a symbol of sleep, peace, and death: Sleep because the opium extracted from them is a sedative, and death because of the common blood-red colour of the red poppy in particular. In Greek and Roman myths, poppies were used as offerings to the dead. Poppies used as emblems on tombstones symbolize eternal sleep. This symbolism was evoked in L. Frank Baum's 1900 children's novel The Wonderful Wizard of Oz, in which a magical poppy field threatened to make the protagonists sleep forever.
A second interpretation of poppies in Classical mythology is that the bright scarlet colour signifies a promise of resurrection after death.
Red-flowered poppy is unofficially considered the national flower of the Albanians in Albania, Kosovo and elsewhere. This is due to its red and black colours, the same as the colours of the flag of Albania. Red poppies are also the national flower of Poland. The California poppy, Eschscholzia californica, is the state flower of California.
The powerful symbolism of Papaver rhoeas has been borrowed by various advocacy campaigns, such as the White Poppy and Simon Topping's black poppy.
Wartime remembrance
The poppy of wartime remembrance is Papaver rhoeas, the red-flowered corn poppy. This poppy is a common plant of disturbed ground in Europe and is found in many locations, including Flanders, which is the setting of the famous poem "In Flanders Fields" by the Canadian surgeon and soldier John McCrae. In Canada, the United Kingdom, Australia, South Africa and New Zealand, artificial poppies (plastic in Canada, paper in the UK, Australia, South Africa, Malta and New Zealand) are worn to commemorate those who died in war. This form of commemoration is associated with Remembrance Day, which falls on November 11. In Canada, Australia and the UK, poppies are often worn from the beginning of November through to the 11th, or Remembrance Sunday if that falls on a later date. In New Zealand and Australia, soldiers are also commemorated on ANZAC day (April 25), although the poppy is still commonly worn around Remembrance Day. Wearing of poppies has been a custom since 1924 in the United States. Moina Michael of Georgia is credited as the founder of the Memorial Poppy in the United States.
Artificial poppies (called "Buddy Poppies") are used in the veterans' aid campaign by the Veterans of Foreign Wars, which provides money to the veterans who assemble the poppies and various aid programs to veterans and their families.
See also
List of poppy seed pastries and dishes
Poppy goddess
References
Plant common names
Symbols
Papaveroideae | Poppy | [
"Mathematics",
"Biology"
] | 1,669 | [
"Plants",
"Symbols",
"Plant common names",
"Common names of organisms"
] |
161,804 | https://en.wikipedia.org/wiki/Africanized%20bee | The Africanized bee, also known as the Africanized honey bee (AHB) and colloquially as the "killer bee", is a hybrid of the western honey bee (Apis mellifera), produced originally by crossbreeding of the East African lowland honey bee (A. m. scutellata) with various European honey bee subspecies such as the Italian honey bee (A. m. ligustica) and the Iberian honey bee (A. m. iberiensis).
The East African lowland honey bee was first introduced to Brazil in 1956 in an effort to increase honey production, but 26 swarms escaped quarantine in 1957. Since then, the hybrid has spread throughout South America and arrived in North America in 1985. Hives were found in south Texas in the United States in 1990.
Africanized honey bees are typically much more defensive, react to disturbances faster, and chase people further () than other varieties of honey bees. They have killed some 1,000 humans, with victims receiving 10 times more stings than from European honey bees. They have also killed horses and other animals.
History
There are 29 recognized subspecies of Apis mellifera based largely on geographic variations. All subspecies are cross-fertile. Geographic isolation led to numerous local adaptations. These adaptations include brood cycles synchronized with the bloom period of local flora, forming a winter cluster in colder climates, migratory swarming in Africa, enhanced (long-distance) foraging behavior in desert areas, and numerous other inherited traits.
The Africanized honey bees in the Western Hemisphere are descended from hives operated by biologist Warwick E. Kerr, who had interbred honey bees from Europe and southern Africa. Kerr was attempting to breed a strain of bees that would produce more honey in tropical conditions than the European strain of honey bee then in use throughout North, Central and South America. The hives containing this particular African subspecies were housed at an apiary near Rio Claro, São Paulo, in the southeast of Brazil, and were noted to be especially defensive. These hives had been fitted with special excluder screens (called queen excluders) to prevent the larger queen bees and drones from getting out and mating with the local population of European bees. According to Kerr, in October 1957 a visiting beekeeper, noticing that the queen excluders were interfering with the worker bees' movement, removed them, resulting in the accidental release of 26 Tanganyikan swarms of A. m. scutellata. Following this accidental release, the Africanized honey bee swarms spread out and crossbred with local European honey bee colonies.
The descendants of these colonies have since spread throughout the Americas, moving through the Amazon basin in the 1970s, crossing into Central America in 1982, and reaching Mexico in 1985. Because their movement through these regions was rapid and largely unassisted by humans, Africanized honey bees have earned the reputation of being a notorious invasive species. The prospect of killer bees arriving in the United States caused a media sensation in the late 1970s, inspired several horror movies, and sparked debate about the wisdom of humans altering entire ecosystems.
The first Africanized honey bees in the U.S. were discovered in 1985 at an oil field in the San Joaquin Valley of California. Bee experts theorized the colony had not traveled overland but instead "arrived hidden in a load of oil-drilling pipe shipped from South America." The first permanent colonies arrived in Texas from Mexico in 1990. In the Tucson region of Arizona, a study of trapped swarms in 1994 found that only 15 percent had been Africanized; this number had grown to 90 percent by 1997.
Characteristics
Though Africanized honey bees display certain behavioral traits that make them less than desirable for commercial beekeeping, excessive defensiveness and swarming foremost, they have now become the dominant type of honey bee for beekeeping in Central and South America due to their genetic dominance as well as ability to out-compete their European counterpart, with some beekeepers asserting that they are superior honey producers and pollinators.
Africanized honey bees, as opposed to other Western bee types:
Tend to swarm more frequently and go farther than other types of honey bees.
Are more likely to migrate as part of a seasonal response to lowered food supply.
Are more likely to "abscond"—the entire colony leaves the hive and relocates—in response to stress.
Have greater defensiveness when in a resting swarm, compared to other honey bee types.
Live more often in ground cavities than the European types.
Guard the hive aggressively, with a larger alarm zone around the hive.
Have a higher proportion of "guard" bees within the hive.
Deploy in greater numbers for defense and pursue perceived threats over much longer distances from the hive.
Cannot survive extended periods of forage deprivation, preventing introduction into areas with harsh winters or extremely dry late summers.
Live in dramatically higher population densities.
North American distribution
Africanized honey bees are considered an invasive species in the Americas. As of 2002, the Africanized honey bees had spread from Brazil south to northern Argentina and north to Central America, Trinidad (the West Indies), Mexico, Texas, Arizona, Nevada, New Mexico, Florida, and southern California. In June 2005, it was discovered that the bees had spread into southwest Arkansas. Their expansion stopped for a time at eastern Texas, possibly due to the large population of European honey bee hives in the area. However, discoveries of the Africanized honey bees in southern Louisiana show that they have gotten past this barrier, or have come as a swarm aboard a ship.
On 11 September 2007, Commissioner Bob Odom of the Louisiana Department of Agriculture and Forestry said that Africanized honey bees had established themselves in the New Orleans area. In February 2009, Africanized honey bees were found in southern Utah. The bees had spread into eight counties in Utah, as far north as Grand and Emery Counties by May 2017.
In October 2010, a 73-year-old man was killed by a swarm of Africanized honey bees while clearing brush on his south Georgia property, as determined by Georgia's Department of Agriculture. In 2012, Tennessee state officials reported that Africanized honey bees had been found for the first time in a beekeeper's colony in Monroe County in the eastern part of the state. In June 2013, 62-year-old Larry Goodwin of Moody, Texas, was killed by a swarm of Africanized honey bees.
In May 2014, Colorado State University confirmed that bees from a swarm which had aggressively attacked an orchardist near Palisade, in west-central Colorado, were from an Africanized honey bee hive. The hive was subsequently destroyed.
In tropical climates they effectively out-compete European honey bees and, at their peak rate of expansion, they spread north at almost two kilometers (about 1¼ mile) a day. There were discussions about slowing the spread by placing large numbers of docile European-strain hives in strategic locations, particularly at the Isthmus of Panama, but various national and international agricultural departments could not prevent the bees' expansion. Current knowledge of the genetics of these bees suggests that such a strategy, had it been tried, would not have been successful.
As the Africanized honey bee migrates further north, colonies continue to interbreed with European honey bees. In a study conducted in Arizona in 2004 it was observed that swarms of Africanized honey bees could take over weakened European honey bee hives by invading the hive, then killing the European queen and establishing their own queen. There are now relatively stable geographic zones in which either Africanized honey bees dominate, a mix of Africanized and European honey bees is present, or only non-Africanized honey bees are found, as in the southern portions of South America or northern North America.
African honey bees abscond (abandon the hive and any food store to start over in a new location) more readily than European honeybees. This is not necessarily a severe loss in tropical climates where plants bloom all year, but in more temperate climates it can leave the colony with not enough stores to survive the winter. Thus Africanized honey bees are expected to be a hazard mostly in the southern states of the United States, reaching as far north as the Chesapeake Bay in the east. The cold-weather limits of the Africanized honey bee have driven some professional bee breeders from Southern California into the harsher wintering locales of the northern Sierra Nevada and southern Cascade Range. This is a more difficult area to prepare bees for early pollination placement in, such as is required for the production of almonds. The reduced available winter forage in northern California means that bees must be fed for early spring buildup.
The arrival of the Africanized honey bee in Central America is threatening the traditional craft of keeping Melipona stingless bees in log gums, although they do not interbreed or directly compete with each other. The honey production from an individual hive of Africanized honey bees is considerably higher than the much smaller yields of the various Melipona stingless bee species. Thus economic pressures are forcing beekeepers to switch from the traditional stingless bees to the new reality of the Africanized honey bee. Whether this will lead to the extinction of the former is unknown, but they are well adapted to exist in the wild, and there are a number of indigenous plants that the Africanized honey bees do not visit, so the fate of the Melipona bees remains to be seen.
Foraging behavior
Africanized honey bees begin foraging at young ages and harvest a greater quantity of pollen compared to their European counterparts (Apis mellifera ligustica). This may be linked to the high reproductive rate of the Africanized honey bee, which requires pollen to feed its greater number of larvae. Africanized honey bees are also sensitive to sucrose at lower concentrations. This adaptation causes foragers to harvest resources with low concentrations of sucrose that include water, pollen, and unconcentrated nectar. A study comparing A. m. scutellata and A. m. ligustica published by Fewell and Bertram in 2002 suggests that the differential evolution of this suite of behaviors is due to the different environmental pressures experienced by African and European subspecies.
Proboscis extension responses
Honey bee sensitivity to different concentrations of sucrose is determined by a reflex known as the proboscis extension response (PER). Different species of honey bees that employ different foraging behaviors will vary in the concentration of sucrose that elicits their proboscis extension response.
For example, European honey bees (Apis mellifera ligustica) forage at older ages and harvest less pollen and more concentrated nectar. The differences in resources collected during harvesting are a result of the European honey bee's sensitivity to sucrose at higher concentrations.
Evolution
The differences in a variety of behaviors between different species of honey bees are the result of a directional selection that acts upon several foraging behavior traits as a common entity. Selection in natural populations of honey bees show that positive selection of sensitivity to low concentrations of sucrose are linked to foraging at younger ages and collecting resources low in sucrose. Positive selection of sensitivity to high concentrations of sucrose were linked to foraging at older ages and collecting resources higher in sucrose. Additionally of interest, "change in one component of a suite of behaviors appear[s] to direct change in the entire suite."
When resource density is low in Africanized honey bee habitats, it is necessary for the bees to harvest a greater variety of resources because they cannot afford to be selective. Honey bees that are genetically inclined towards resources high in sucrose, such as concentrated nectar, will not be able to sustain themselves in harsher environments. The noted sensitivity to low sucrose concentrations in Africanized honey bees may be a result of selective pressure in times of scarcity, when their survival depends on their attraction to low-quality resources.
Morphology and genetics
The popular term "killer bee" has only limited scientific meaning today because there is no generally accepted fraction of genetic contribution used to establish a cut-off between a "killer" honey bee and an ordinary honey bee. Government and scientific documents prefer "Africanized honey bee" as an accepted scientific taxon.
Morphological tests
Although the native East African lowland honey bees (Apis mellifera scutellata) are smaller and build smaller comb cells than the European honey bees, their hybrids are not smaller. Africanized honey bees have slightly shorter wings, which can only be recognized reliably by performing a statistical analysis on micro-measurements of a substantial sample.
One of the problems with this test is that there are other subspecies, such as A. m. iberiensis, which also have shortened wings. This trait is hypothesized to derive from ancient hybrid haplotypes thought to have links to evolutionary lineages from Africa. Some belong to A. m. intermissa, but others have an indeterminate origin; the Egyptian honeybee (Apis mellifera lamarckii), present in small numbers in the southeastern U.S., has the same morphology.
DNA tests
Current testing techniques have moved away from external measurements to DNA analysis, but this means the test can only be done by a sophisticated laboratory. Molecular diagnostics using the mitochondrial DNA (mtDNA) cytochrome b gene can differentiate A. m. scutellata from other A. mellifera lineages, though mtDNA only allows one to detect Africanized colonies that have Africanized queens and not colonies where a European queen has mated with Africanized drones. A test based on single nucleotide polymorphisms was created in 2015 to detect Africanized bees based on the proportion of African and European ancestry.
Western variants
The western honey bee is native to the continents of Europe, Asia, and Africa. In the early 1600s it was introduced to North America, with subsequent introductions of other European subspecies 200 years later. Since then, these bees have spread throughout the Americas. The 29 subspecies can be assigned to one of four major branches based on work by Ruttner and subsequently confirmed by analysis of mitochondrial DNA. African subspecies are assigned to branch A, northwestern European subspecies to branch M, southwestern European subspecies to branch C, and Mideast subspecies to branch O. There are still regions with localized variations that may become identified subspecies in the near future, such as A. m. pomonella from the Tian Shan Mountains, which would be included in the Mideast subspecies branch.
The western honey bee is the third insect whose genome has been mapped, and is unusual in having very few transposons. According to the scientists who analyzed its genetic code, the western honey bee originated in Africa and spread to Eurasia in two ancient migrations. They also discovered that genes in the honey bee related to smell outnumber those related to taste. The genome sequence revealed that several groups of genes, particularly those related to circadian rhythms, were closer to those of vertebrates than to those of other insects. Genes related to enzymes that control other genes were also vertebrate-like.
African variants
There are two lineages of the East African lowland subspecies (Apis mellifera scutellata) in the Americas: actual matrilineal descendants of the original escaped queens and a much smaller number that are Africanized through hybridization. The matrilineal descendants carry African mtDNA, but partially European nuclear DNA, while the honey bees that are Africanized through hybridization carry European mtDNA, and partially African nuclear DNA. The matrilineal descendants are in the vast majority. This is supported by DNA analyses performed on the bees as they spread northwards; those that were at the "vanguard" were over 90% African mtDNA, indicating an unbroken matriline, but after several years in residence in an area interbreeding with the local European strains, as in Brazil, the overall representation of African mtDNA drops to some degree. However, these latter hybrid lines (with European mtDNA) do not appear to propagate themselves well or persist. Population genetics analysis of Africanized honey bees in the United States, using a maternally inherited genetic marker, found 12 distinct mitotypes, and the amount of genetic variation observed supports the idea that there have been multiple introductions of AHB into the United States.
A more recent publication describes the genetic admixture of the Africanized honey bees in Brazil. The small number of honey bees with African ancestry introduced to Brazil in 1956, which dispersed, hybridized with existing managed populations of European origin, and quickly spread across much of the Americas, is an example of a massive biological invasion, as described earlier in this article. The authors analysed whole-genome sequences of 32 Africanized honey bees sampled from throughout Brazil to study the effect of this process on genome diversity. By comparison with ancestral populations from Europe and Africa, they inferred that these samples had 84% African ancestry, with the remainder from western European populations. This proportion varied across the genome, however, and they identified signals of positive selection in regions with high European ancestry proportions. These observations are largely driven by one large, gene-rich 1.4 Mbp segment on chromosome 11 where European haplotypes are present at a significantly elevated frequency and likely confer an adaptive advantage in the Africanized honey bee population.
Consequences of selection
The chief difference between the European subspecies of honey bees kept by beekeepers and the African ones is attributable to both selective breeding and natural selection. By selecting only the most gentle, non-defensive subspecies, beekeepers have, over centuries, eliminated the more defensive ones and created a number of subspecies suitable for apiculture.
In Central and southern Africa there was formerly no tradition of beekeeping, and the hive was destroyed in order to harvest the honey, pollen and larvae. The bees adapted to the climate of Sub-Saharan Africa, including prolonged droughts. Having to defend themselves against aggressive insects such as ants and wasps, as well as voracious animals like the honey badger, African honey bees evolved as a subspecies group of highly defensive bees unsuitable by a number of metrics for domestic use.
As Africanized honey bees migrate into new regions, hives with an old or absent queen can become hybridized by crossbreeding. The aggressive Africanized drones out-compete European drones for a newly developed queen of such a hive, ultimately resulting in hybridization of the existing colony. Requeening, a term for replacing the older existing queen with a new, already fertilized one, can avoid hybridization in apiaries. As a prophylactic measure, the majority of beekeepers in North America tend to requeen their hives annually, maintaining strong colonies and avoiding hybridization.
Defensiveness
Africanized honey bees exhibit far greater defensiveness than European honey bees and are more likely to deal with a perceived threat by attacking in large swarms. These hybrids have been known to pursue a perceived threat for a distance of well over 500 meters (1,640 ft).
The venom of an Africanized honey bee is the same as that of a European honey bee, but since the former tends to sting in far greater numbers, deaths from them are naturally more numerous than from European honey bees. While allergies to the European honey bee may cause death, complications from Africanized honey bee stings are usually not caused by allergies to their venom. Humans stung many times by Africanized honey bees can exhibit serious side effects such as inflammation of the skin, dizziness, headaches, weakness, edema, nausea, diarrhea, and vomiting. Some cases progress to affect other body systems, causing increased heart rate, respiratory distress, and even renal failure. Africanized honey bee sting cases can become very serious, but they remain relatively rare and are often limited to accidental discovery in highly populated areas.
Impact on humans
Fear factor
The Africanized honey bee is widely feared by the public, a reaction that has been amplified by sensationalist movies (such as The Swarm) and some media reports. Stings from Africanized honey bees kill an average of two or three people per year.
As the Africanized honey bee spreads through Florida, a densely populated state, officials worry that public fear may force misguided efforts to combat them.
Misconceptions
"Killer bee" is a term frequently used in media such as movies that portray aggressive behavior or actively seeking to attack humans. "Africanized honey bee" is considered a more descriptive term in part because their behavior is increased defensiveness compared to European honey bees that can exhibit similar defensive behaviors when disturbed.
The sting of the Africanized honey bee is no more potent than any other variety of honey bee, and although they are similar in appearance to European honey bees, they tend to be slightly smaller and darker in color. Although Africanized honey bees do not actively search for humans to attack, they are more dangerous because they are more easily provoked, quicker to attack in greater numbers, and then pursue the perceived threat farther, for as much as a quarter of a mile (400 metres).
While studies have shown that Africanized honey bees can infiltrate European honey bee colonies and then kill and replace their queen (thus usurping the hive), this is less common than other methods. Wild and managed colonies will sometimes be seen to fight over honey stores during the dearth (periods when plants are not flowering), but this behavior should not be confused with the aforementioned activity. The most common way that a European honey bee hive will become Africanized is through crossbreeding during a new queen's mating flight. Studies have consistently shown that Africanized drones are more numerous, stronger and faster than their European cousins and are therefore able to out-compete them during these mating flights. The result of mating between Africanized drones and European queens is almost always Africanized offspring.
Impact on apiculture
In areas of suitable temperate climate, the survival traits of Africanized honey bee colonies help them outperform European honey bee colonies. They also return later and work under conditions that often keep European honey bees hive-bound. This is why they have gained a reputation as superior honey producers, and those beekeepers who have learned to adapt their management techniques now seem to prefer them to their European counterparts. Studies show that in areas of Florida that contain Africanized honey bees, the honey production is higher than in areas in which they do not live. It is also becoming apparent that Africanized honey bees have another advantage over European honey bees in that they seem to show a higher resistance to several health issues, including parasites such as Varroa destructor, some fungal diseases like chalkbrood, and even the mysterious colony collapse disorder which was plaguing beekeepers in the early 2000s. Despite all its negative factors, it is possible that the Africanized honey bee might actually end up being a boon to apiculture.
Queen management
In areas where Africanized honey bees are well established, bought and pre-fertilized (i.e. mated) European queens can be used to maintain a hive's European genetics and behavior. However, this practice can be expensive, since these queens must be bought and shipped from breeder apiaries in areas completely free of Africanized honey bees, such as the northern U.S. states or Hawaii. As such, this is generally not practical for most commercial beekeepers outside the U.S., and it is one of the main reasons why Central and South American beekeepers have had to learn to manage and work with the existing Africanized honey bee. Any effort to crossbreed virgin European queens with Africanized drones will result in the offspring exhibiting Africanized traits; only 26 swarms escaped in 1957, and nearly 60 years later there does not appear to be a noticeable lessening of the typical Africanized characteristics.
Gentleness
Not all Africanized honey bee hives display the typical hyper-defensive behavior, which may provide bee breeders a starting point for breeding a gentler stock (gAHBs). Work has been done in Brazil towards this end, but in order to maintain these traits, it is necessary to develop a queen breeding and mating facility in order to requeen colonies and to prevent reintroduction of unwanted genes or characteristics through unintended crossbreeding with feral colonies. In Puerto Rico, some bee colonies are already beginning to show more gentle behavior. This is believed to be because the more gentle bees contain genetic material that is more similar to the European honey bee, although they also contain Africanized honey bee material. Surprisingly, this degree of aggressiveness appears to be almost unrelated to the genetics of any individual bee; instead it is determined almost entirely by the proportion of aggression-related genetics in the hive as a whole.
Safety
While bee incidents are much less common than they were during the first wave of Africanized honey bee colonization, this can be largely attributed to modified and improved bee management techniques. Prominent among these are locating bee-yards much farther away from human habitation, creating barriers to keep livestock at enough of a distance to prevent interaction, and education of the general public to teach them how to properly react when feral colonies are encountered and what resources to contact. The Africanized honey bee is now considered the honey bee of choice for beekeeping in Brazil.
Impact on pets and livestock
Africanized honey bees are a threat to outdoor pets, especially mammals. The most detailed information available pertains to dogs.
Less is known about livestock as victims. There is a widespread consensus that cattle suffer occasional Africanized honey bee attacks in Brazil, but there is little relevant documentation. It appears that cows sustain hundreds of stings if they are attacked, but can survive such injuries.
See also
Bee removal
Notes
References
Further reading
External links
Lists general information and resources for Africanized Honeybee.
Western honey bee breeds
Hybrid animals
Agricultural pest insects
Invasive insect species
Pest insects
Beekeeping in the United States
Invasive animal species in the United States | Africanized bee | [
"Biology"
] | 5,243 | [
"Hybrid animals",
"Animals",
"Hybrid organisms"
] |
161,875 | https://en.wikipedia.org/wiki/Dana%20Scott | Dana Stewart Scott (born October 11, 1932) is an American logician who is the emeritus Hillman University Professor of Computer Science, Philosophy, and Mathematical Logic at Carnegie Mellon University; he is now retired and lives in Berkeley, California. His work on automata theory earned him the Turing Award in 1976, while his collaborative work with Christopher Strachey in the 1970s laid the foundations of modern approaches to the semantics of programming languages. He has also worked on modal logic, topology, and category theory.
Early career
He received his B.A. in Mathematics from the University of California, Berkeley, in 1954. He wrote his Ph.D. thesis on Convergent Sequences of Complete Theories under the supervision of Alonzo Church while at Princeton, and defended his thesis in 1958. Solomon Feferman (2005) writes of this period.
After completing his Ph.D. studies, he moved to the University of Chicago, working as an instructor there until 1960. In 1959, he published a joint paper with Michael O. Rabin, a colleague from Princeton, titled Finite Automata and Their Decision Problem (Scott and Rabin 1959) which introduced the idea of nondeterministic machines to automata theory. This work led to the joint bestowal of the Turing Award on the two, for the introduction of this fundamental concept of computational complexity theory.
University of California, Berkeley, 1960–1963
Scott took up a post as Assistant Professor of Mathematics, back at the University of California, Berkeley, and involved himself with classical issues in mathematical logic, especially set theory and Tarskian model theory. He proved that the axiom of constructibility is incompatible with the existence of a measurable cardinal, a result considered seminal in the evolution of set theory.
During this period he started supervising Ph.D. students, such as James Halpern (Contributions to the Study of the Independence of the Axiom of Choice) and Edgar Lopez-Escobar (Infinitely Long Formulas with Countable Quantifier Degrees).
Modal and tense logic
Scott also began working on modal logic in this period, beginning a collaboration with John Lemmon, who moved to Claremont, California, in 1963. Scott was especially interested in Arthur Prior's approach to tense logic and the connection to the treatment of time in natural-language semantics, and began collaborating with Richard Montague (Copeland 2004), whom he had known from his days as an undergraduate at Berkeley. Later, Scott and Montague independently discovered an important generalisation of Kripke semantics for modal and tense logic, called Scott-Montague semantics (Scott 1970).
John Lemmon and Scott began work on a modal-logic textbook that was interrupted by Lemmon's death in 1966. Scott circulated the incomplete monograph amongst colleagues, introducing a number of important techniques in the semantics of model theory, most importantly presenting a refinement of the canonical model that became standard, and introducing the technique of constructing models through filtrations, both of which are core concepts in modern Kripke semantics (Blackburn, de Rijke, and Venema, 2001). Scott eventually published the work as An Introduction to Modal Logic (Lemmon & Scott, 1977).
Stanford, Amsterdam and Princeton, 1963–1972
Following an initial observation of Robert Solovay, Scott formulated the concept of the Boolean-valued model, as did Solovay and Petr Vopěnka at around the same time. In 1967, Scott published a paper, A Proof of the Independence of the Continuum Hypothesis, in which he used Boolean-valued models to provide an alternate analysis of the independence of the continuum hypothesis to that provided by Paul Cohen. This work led to the award of the Leroy P. Steele Prize in 1972.
University of Oxford, 1972–1981
Scott took up a post as Professor of Mathematical Logic on the Philosophy faculty of the University of Oxford in 1972. He was member of Merton College while at Oxford and is now an Honorary Fellow of the college.
Semantics of programming languages
This period saw Scott working with Christopher Strachey, and the two managed, despite administrative pressures, to do work on providing a mathematical foundation for the semantics of programming languages, the work for which Scott is best known. Together, their work constitutes the Scott–Strachey approach to denotational semantics, an important and seminal contribution to theoretical computer science. One of Scott's contributions is his formulation of domain theory, allowing programs involving recursive functions and looping-control constructs to be given denotational semantics. Additionally, he provided a foundation for the understanding of infinitary and continuous information through domain theory and his theory of information systems.
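To illustrate the fixed-point idea at the heart of this semantics, the following sketch (written in Python purely for illustration; the factorial functional is a standard textbook example, not one drawn from Scott's or Strachey's own papers) builds successive Kleene approximations of a recursive definition, starting from the everywhere-undefined function:

def F(f):
    # The functional whose least fixed point is the factorial function.
    def g(n):
        return 1 if n == 0 else n * f(n - 1)
    return g

def bottom(n):
    # The everywhere-undefined function, the least element of the function domain.
    raise RuntimeError("undefined")

approx = bottom
for _ in range(6):
    approx = F(approx)   # after k iterations, approx is defined exactly on inputs 0..k-1

print([approx(n) for n in range(6)])   # [1, 1, 2, 6, 24, 120]

Each iteration yields a more fully defined approximation, and the least upper bound of the resulting chain is the factorial function itself; domain theory is what makes this limiting construction precise for recursive programs in general.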
Scott's work of this period led to the bestowal of:
The 1990 Harold Pender Award for his application of concepts from logic and algebra to the development of mathematical semantics of programming languages;
The 1997 Rolf Schock Prize in logic and philosophy from the Royal Swedish Academy of Sciences for his conceptually oriented logical works, especially the creation of domain theory, which has made it possible to extend Tarski's semantic paradigm to programming languages as well as to construct models of Curry's combinatory logic and Church's calculus of lambda conversion; and
The 2001 Bolzano Prize for Merit in the Mathematical Sciences by the Czech Academy of Sciences
The 2007 EATCS Award for his contribution to theoretical computer science.
Carnegie Mellon University, 1981–2003
At Carnegie Mellon University, Scott proposed the theory of equilogical spaces as a successor theory to domain theory; among its many advantages, the category of equilogical spaces is a cartesian closed category, whereas the category of domains is not. In 1994, he was inducted as a Fellow of the Association for Computing Machinery. In 2012 he became a fellow of the American Mathematical Society.
Bibliography
With Michael O. Rabin, 1959. Finite Automata and Their Decision Problem.
1967. A proof of the independence of the continuum hypothesis. Mathematical Systems Theory 1:89–111.
1970. 'Advice on modal logic'. In Philosophical Problems in Logic, ed. K. Lambert, pages 143–173.
With John Lemmon, 1977. An Introduction to Modal Logic. Oxford: Blackwell.
References
Further reading
Blackburn, de Rijke and Venema (2001). Modal logic. Cambridge University Press.
Jack Copeland (2004). Arthur Prior. In the Stanford Encyclopedia of Philosophy.
Anita Burdman Feferman and Solomon Feferman (2004). Alfred Tarski: life and logic. Cambridge University Press.
Solomon Feferman (2005). Tarski's influence on computer science. Proc. LICS'05. IEEE Press.
Joseph E. Stoy (1977). Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press.
External links
DOMAIN 2002 Workshop on Domain Theory — held in honor of Scott's 70th birthday.
Dana Scott interviewed by Gordon Plotkin, as part of the Association for Computing Machinery series of interviews of Turing award winners: Part 1 (Nov 12, 2020) Part 2 (Dec 29, 2020) Part 3 (Jan 12, 2021) Part 4 (Feb 18, 2021)
Selected papers of Dana S. Scott
American computer scientists
American logicians
1932 births
Living people
Members of the United States National Academy of Sciences
Carnegie Mellon University faculty
1994 fellows of the Association for Computing Machinery
Fellows of the American Mathematical Society
Fellows of Merton College, Oxford
Formal methods people
Lattice theorists
Mathematical logicians
Modal logicians
Model theorists
Programming language researchers
Rolf Schock Prize laureates
Semanticists
American set theorists
American topologists
Turing Award laureates
University of Chicago faculty
University of California, Berkeley College of Letters and Science faculty
Princeton University alumni
UC Berkeley College of Letters and Science alumni
People from Berkeley, California
Engineers from California
Scientists from California
20th-century American mathematicians
20th-century American engineers
21st-century American engineers
20th-century American scientists
21st-century American scientists
21st-century American mathematicians | Dana Scott | [
"Mathematics"
] | 1,621 | [
"Model theorists",
"Mathematical logic",
"Model theory",
"Mathematical logicians"
] |
161,879 | https://en.wikipedia.org/wiki/Scalar%20field | In mathematics and physics, a scalar field is a function associating a single number to each point in a region of space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units).
In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
Definition
Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such as requiring that it be continuous, or often continuously differentiable to some order. A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form.
Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields.
Uses in physics
In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the negative gradient of the potential energy scalar field; a worked example follows the list below. Examples include:
Potential fields, such as the Newtonian gravitational potential, or the electric potential in electrostatics, are scalar fields which describe the more familiar forces.
A temperature, humidity, or pressure field, such as those used in meteorology.
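As a worked illustration of the potential-to-force relationship described above (a minimal sketch; the uniform-gravity potential is chosen purely as an example), the potential energy of a mass m near the Earth's surface can be written as a scalar field whose negative gradient recovers the familiar downward force:

U(x, y, z) = m g z

\mathbf{F} = -\nabla U = -\left( \frac{\partial U}{\partial x}, \frac{\partial U}{\partial y}, \frac{\partial U}{\partial z} \right) = (0, 0, -m g)

The scalar field U assigns a single number to each point of space, while its negative gradient is the vector force field, here pointing straight down with magnitude m g.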
Examples in quantum theory and relativity
In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued. Complex scalar fields represent charged particles. These include the Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction.
In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and the spontaneous symmetry breaking. This mechanism is known as the Higgs mechanism. A candidate for the Higgs boson was first detected at CERN in 2012.
In scalar theories of gravitation scalar fields are used to describe the gravitational field.
Scalar–tensor theories represent the gravitational interaction through both a tensor and a scalar. Such attempts are for example the Jordan theory as a generalization of the Kaluza–Klein theory and the Brans–Dicke theory.
Scalar fields like the Higgs field can be found within scalar–tensor theories, using as scalar field the Higgs field of the Standard Model. This field interacts gravitationally and Yukawa-like (short-ranged) with the particles that get mass through it.
Scalar fields are found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor.
Scalar fields are hypothesized to have caused the accelerated expansion of the early universe (inflation), helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless (i.e. long-ranged) scalar fields in this context are known as inflatons. Massive (i.e. short-ranged) scalar fields have also been proposed, using for example Higgs-like fields.
Other kinds of fields
Vector fields, which associate a vector to every point in space. Some examples of vector fields include the electromagnetic field and air flow (wind) in meteorology.
Tensor fields, which associate a tensor to every point in space. For example, in general relativity gravitation is associated with the tensor field called Einstein tensor. In Kaluza–Klein theory, spacetime is extended to five dimensions and its Riemann curvature tensor can be separated out into ordinary four-dimensional gravitation plus an extra set, which is equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the "dilaton". (The dilaton scalar is also found among the massless bosonic fields in string theory.)
See also
Scalar field theory
Vector boson
Vector-valued function
References
Multivariable calculus
Articles containing video clips
Field
Functions and mappings | Scalar field | [
"Physics",
"Mathematics"
] | 1,076 | [
"Scalar physical quantities",
"Functions and mappings",
"Mathematical analysis",
"Physical quantities",
"Calculus",
"Quantity",
"Mathematical objects",
"Mathematical relations",
"Multivariable calculus"
] |
161,881 | https://en.wikipedia.org/wiki/Uniqueness%20type | In computing, a unique type guarantees that an object is used in a single-threaded way, with at most a single reference to it. If a value has a unique type, a function applied to it can be optimized to update the value in-place in the object code. Such in-place updates improve the efficiency of functional languages while maintaining referential transparency. Unique types can also be used to integrate functional and imperative programming.
Introduction
Uniqueness typing is best explained using an example. Consider a function readLine that reads the next line of text from a given file:
function readLine(File f) returns String
return line where
String line = doImperativeReadLineSystemCall(f)
end
end
Now doImperativeReadLineSystemCall reads the next line from the file using an OS-level system call which has the side effect of changing the current position in the file. But this violates referential transparency because calling it multiple times with the same argument will return different results each time as the current position in the file gets moved. This in turn makes readLine violate referential transparency because it calls doImperativeReadLineSystemCall.
However, using uniqueness typing, we can construct a new version of readLine that is referentially transparent even though it's built on top of a function that's not referentially transparent:
function readLine2(unique File f) returns (unique File, String)
return (differentF, line) where
String line = doImperativeReadLineSystemCall(f)
File differentF = newFileFromExistingFile(f)
end
end
The unique declaration specifies that the type of f is unique; that is to say that f may never be referred to again by the caller of readLine2 after readLine2 returns, and this restriction is enforced by the type system. And since readLine2 does not return f itself but rather a new, different file object differentF, this means that it's impossible for readLine2 to be called with f as an argument ever again, thus preserving referential transparency while allowing for side effects to occur.
Programming languages
Uniqueness types are implemented in functional programming languages such as Clean, Mercury, SAC and Idris. They are sometimes used for doing I/O operations in functional languages in lieu of monads.
A compiler extension has been developed for the Scala programming language which uses annotations to handle uniqueness in the context of message passing between actors.
Relationship to linear typing
A unique type is very similar to a linear type, to the point that the terms are often used interchangeably, but there is in fact a distinction: actual linear typing allows a non-linear value to be typecast to a linear form, while still retaining multiple references to it. Uniqueness guarantees that a value has no other references to it, while linearity guarantees that no more references can be made to a value.
Linearity and uniqueness appear particularly distinct when considered in relation to non-linearity and non-uniqueness modalities, but the two can also be unified in a single type system.
See also
Linear type
Linear logic
References
External links
Bibliography on Linear Logic
Uniqueness Typing Simplified
Type theory | Uniqueness type | [
"Mathematics"
] | 665 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
161,883 | https://en.wikipedia.org/wiki/Formal%20methods | In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.
Uses
Formal methods can be applied at various points through the development process.
Specification
Formal methods may be used to give a formal description of the system to be developed, at whatever level of detail desired. Further formal methods may depend on this specification to synthesize a program or to verify the correctness of a system.
Alternatively, specification may be the only stage in which formal methods is used. By writing a specification, ambiguities in the informal requirements can be discovered and resolved. Additionally, engineers can use a formal specification as a reference to guide their development processes.
The need for formal specification systems has been noted for years. In the ALGOL 58 report, John Backus presented a formal notation for describing programming language syntax, later named Backus normal form then renamed Backus–Naur form (BNF). Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs was not completed in time for inclusion in the report, stating that it "will be included in a subsequent paper." However, no paper describing the formal semantics was ever released.
Synthesis
Program synthesis is the process of automatically creating a program that conforms to a specification. Deductive synthesis approaches rely on a complete formal specification of the program, whereas inductive approaches infer the specification from examples. Synthesizers perform a search over the space of possible programs to find a program consistent with the specification. Because of the size of this search space, developing efficient search algorithms is one of the major challenges in program synthesis.
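The following sketch illustrates the inductive, example-driven flavour of this search on a deliberately tiny scale (the grammar of candidate programs and the input/output examples are invented for illustration; real synthesizers search far larger program spaces with much smarter strategies):

def candidates(max_const=3):
    # A tiny grammar of candidate programs over a single integer input x.
    yield ("x", lambda x: x)
    for c in range(1, max_const + 1):
        yield ("x + " + str(c), lambda x, c=c: x + c)
        yield ("x * " + str(c), lambda x, c=c: x * c)
        yield ("x * x + " + str(c), lambda x, c=c: x * x + c)

def synthesize(examples):
    # Return the first candidate program consistent with every (input, output) example.
    for description, program in candidates():
        if all(program(i) == o for i, o in examples):
            return description
    return None

print(synthesize([(1, 2), (2, 5), (3, 10)]))   # prints "x * x + 1"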
Verification
Formal verification is the use of software tools to prove properties of a formal specification, or to prove that a formal model of a system implementation satisfies its specification.
Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification, and by inference, properties of the system implementation.
Sign-off verification
Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can replace traditional verification methods (the tool may even be certified).
Human-directed proof
Sometimes, the motivation for proving the correctness of a system is not the obvious need for reassurance of the correctness of the system, but a desire to understand the system better. Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good" proof is one that is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise.
Automated proof
In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into three general categories:
Automated theorem proving, in which a system attempts to produce a formal proof from scratch, given a description of the system, a set of logical axioms, and a set of inference rules.
Model checking, in which a system verifies certain properties by means of an exhaustive search of all possible states that a system could enter during its execution (a small sketch of this search appears after this list).
Abstract interpretation, in which a system verifies an over-approximation of a behavioural property of the program, using a fixpoint computation over a (possibly complete) lattice representing it.
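The sketch below illustrates the exhaustive state-space search behind explicit-state model checking; the transition system and the safety property are invented toy examples rather than anything drawn from a real verification problem:

from collections import deque

INITIAL = {0}                                   # a toy system: a counter modulo 8

def successors(state):
    return {(state + 1) % 8, (state + 3) % 8}   # two possible transitions per state

def violates(state):
    return state == 6                           # the safety property forbids state 6

def model_check():
    # Breadth-first exploration of every reachable state, recording a witness path.
    frontier = deque((s, [s]) for s in INITIAL)
    visited = set(INITIAL)
    while frontier:
        state, path = frontier.popleft()
        if violates(state):
            return path                          # a counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                  # property holds in every reachable state

print(model_check())   # a counterexample such as [0, 3, 6]

Real model checkers add symbolic representations, abstraction, and partial-order reduction to cope with state spaces far too large to enumerate one state at a time.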
Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in checking millions of uninteresting states if not given a sufficiently abstract model.
Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the program that aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each step in their proof, making it possible to perform, given suitable tools, independent verification.
The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain representing the property to be analyzed, and by applying widening operators to get fast convergence.
Techniques
Formal methods includes a number of different techniques.
Specification languages
The design of a computing system can be expressed using a specification language, which is a formal language that includes a proof system. Using this proof system, formal verification tools can reason about the specification and establish that a system adheres to the specification.
Binary decision diagrams
A binary decision diagram is a data structure that represents a Boolean function. If a Boolean formula expresses that an execution of a program conforms to the specification, a binary decision diagram can be used to determine whether the formula is a tautology; that is, whether it always evaluates to TRUE. If this is the case, then the program always conforms to the specification.
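A full reduced ordered BDD package is beyond the scope of a short example, but the following sketch shows the Shannon cofactor recursion on which BDDs are built, used here to decide whether an invented conformance formula is a tautology (the formulas are placeholders, not drawn from any real verification problem):

def is_tautology(f, variables, assignment=None):
    # Shannon expansion: f is a tautology iff both cofactors (x = False and x = True) are.
    assignment = assignment or {}
    if not variables:
        return f(assignment)
    x, rest = variables[0], variables[1:]
    return (is_tautology(f, rest, {**assignment, x: False}) and
            is_tautology(f, rest, {**assignment, x: True}))

def conforms(env):
    # "Execution conforms to specification" encoded as the implication p -> s.
    p = env["a"] and env["b"]                     # behaviour produced by the program
    s = env["a"] or (not env["c"]) or env["b"]    # behaviour allowed by the specification
    return (not p) or s

print(is_tautology(conforms, ["a", "b", "c"]))    # True: every execution satisfies the spec

A BDD makes the same check efficient in many practical cases by merging identical subproblems into a single shared, canonical graph instead of re-exploring them.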
SAT solvers
A SAT solver is a program that can solve the Boolean satisfiability problem, the problem of finding an assignment of variables that makes a given propositional formula evaluate to true. If a Boolean formula expresses that a specific execution of a program conforms to the specification, then determining that its negation is unsatisfiable is equivalent to determining that all executions conform to the specification. SAT solvers are often used in bounded model checking, but can also be used in unbounded model checking.
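The sketch below stands in for a real SAT solver with a naive enumeration over assignments (production solvers based on DPLL and conflict-driven clause learning prune this search aggressively); the CNF formula is an invented placeholder meant to encode "some execution violates the specification":

from itertools import product

def satisfiable(num_vars, clauses):
    # Clauses are lists of integer literals: +v means variable v, -v means its negation.
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return True
    return False

violation = [[1, 2], [-1, 2], [1, -2], [-1, -2]]   # contradictory, hence unsatisfiable
print(satisfiable(2, violation))                   # False: every execution conforms to the spec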
Applications
Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, security applications, and operating system microkernels such as seL4. There are several examples in which they have been used to verify the functionality of the hardware and software used in data centres. The ACL2 theorem prover was used in the AMD x86 processor development process. Intel uses such methods to verify its hardware and firmware (permanent software programmed into a read-only memory). Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product.
There are several other projects of NASA in which formal methods are applied, such as Next Generation Air Transportation System, Unmanned Aircraft System integration in National Airspace System, and Airborne Coordinated Conflict Resolution and Detection (ACCoRD).
B-Method with Atelier B, is used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. There are many areas of hardware where Intel has used formal methods to verify its products, such as parameterized verification of a cache coherence protocol, Intel Core i7 processor execution engine validation (using theorem proving, BDDs, and symbolic evaluation), optimization for the Intel IA-64 architecture using the HOL Light theorem prover, and verification of a high-performance dual-port gigabit Ethernet controller with support for the PCI Express protocol and Intel advanced management technology using Cadence. Similarly, IBM has used formal methods in the verification of power gates, registers, and functional verification of the IBM Power7 microprocessor.
In software development
In software development, formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards such as DO-178C allow the use of formal methods through supplementation, and the Common Criteria mandates formal methods at the highest levels of categorization.
For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, and the Z notation.
In functional programming, property-based testing has allowed the mathematical specification and testing (if not exhaustive testing) of the expected behaviour of individual functions.
The Object Constraint Language (and specializations such as Java Modeling Language) has allowed object-oriented systems to be formally specified, if not necessarily formally verified.
For concurrent software and systems, Petri nets, process algebra, and finite-state machines (which are based on automata theory; see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behaviour.
Another approach to formal methods in software development is to write a specification in some form of logic—usually a variation of first-order logic—and then to directly execute the logic as though it were a program. The OWL language, based on description logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, as well as executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English–logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.
Semi-formal methods
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test case generators.
Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design. They contend that the expressiveness of the languages involved, as well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, various lightweight formal methods, which emphasize partial specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include the Alloy object modelling notation, Denney's synthesis of some aspects of the Z notation with use case driven development, and the CSK VDM Tools.
Formal methods and notations
There are a variety of formal methods and notations available.
Specification languages
Abstract State Machines (ASMs)
A Computational Logic for Applicative Common Lisp (ACL2)
Actor model
Alloy
ANSI/ISO C Specification Language (ACSL)
Autonomic System Specification Language (ASSL)
B-Method
CADP
Common Algebraic Specification Language (CASL)
Esterel
Java Modeling Language (JML)
Knowledge Based Software Assistant (KBSA)
Lustre
mCRL2
Perfect Developer
Petri nets
Predicative programming
Process calculi
CSP
LOTOS
π-calculus
RAISE
Rebeca Modeling Language
SPARK Ada
Specification and Description Language
TLA+
USL
VDM
VDM-SL
VDM++
Z notation
Model checkers
ESBMC
MALPAS Software Static Analysis Toolset – an industrial-strength model checker used for formal proof of safety-critical systems
PAT – a free model checker, simulator and refinement checker for concurrent systems and CSP extensions (e.g., shared variables, arrays, fairness)
SPIN
UPPAAL
Solvers and competitions
Many problems in formal methods are NP-hard, but can be solved in cases arising in practice. For example, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, but SAT solvers can solve a variety of large instances. There are "solvers" for a variety of problems that arise in formal methods, and there are many periodic competitions to evaluate the state-of-the-art in solving such problems.
The SAT competition is a yearly competition that compares SAT solvers. SAT solvers are used in formal methods tools such as Alloy.
CASC is a yearly competition of automated theorem provers.
SMT-COMP is a yearly competition of SMT solvers, which are applied to formal verification.
CHC-COMP is a yearly competition of solvers of constrained Horn clauses, which have applications to formal verification.
QBFEVAL is a biennial competition of solvers for true quantified Boolean formulas, which have applications to model checking.
SV-COMP is an annual competition for software verification tools.
SyGuS-COMP is an annual competition for program synthesis tools.
Organizations
BCS-FACS
Formal Methods Europe
Z User Group
See also
Abstract interpretation
Automated theorem proving
Design by contract
Formal methods people
Formal science
Formal specification
Formal verification
Formal system
Methodism
Methodology
Model checking
Scientific method
Software engineering
Specification language
References
Further reading
Jonathan P. Bowen and Michael G. Hinchey, Formal Methods. In Allen B. Tucker, Jr. (ed.), Computer Science Handbook, 2nd edition, Section XI, Software Engineering, Chapter 106, pages 106-1 – 106-25, Chapman & Hall / CRC Press, Association for Computing Machinery, 2004.
Hubert Garavel (editor) and Susanne Graf. Formal Methods for Safe and Secure Computer Systems. Bundesamt für Sicherheit in der Informationstechnik, BSI study 875, Bonn, Germany, December 2013.
Michael G. Hinchey, Jonathan P. Bowen, and Emil Vassev, Formal Methods. In Philip A. Laplante (ed.), Encyclopedia of Software Engineering, Taylor & Francis, 2010, pages 308–320.
Marieke Huisman, Dilian Gurov, and Alexander Malkis, Formal Methods: From Academia to Industrial Practice – A Travel Guide, arXiv:2002.07279, 2020.
Jean François Monin and Michael G. Hinchey, Understanding formal methods, Springer, 2003.
External links
Formal Methods Europe (FME)
Formal Methods Wiki
Formal methods from Foldoc
Archival material
Formal method keyword on Microsoft Academic Search via Archive.org
Evidence on Formal Methods uses and impact on Industry supported by the DEPLOY project (EU FP7) in Archive.org
Software development philosophies
Theoretical computer science
Specification languages | Formal methods | [
"Mathematics",
"Engineering"
] | 3,097 | [
"Specification languages",
"Theoretical computer science",
"Applied mathematics",
"Software engineering",
"Formal methods"
] |
161,888 | https://en.wikipedia.org/wiki/HOL%20%28proof%20assistant%29 | HOL (Higher Order Logic) denotes a family of interactive theorem proving systems using similar (higher-order) logics and implementation strategies. Systems in this family follow the LCF approach as they are implemented as a library which defines an abstract data type of proven theorems such that new objects of this type can only be created using the functions in the library which correspond to inference rules in higher-order logic. As long as these functions are correctly implemented, all theorems proven in the system must be valid. As such, a large system can be built on top of a small trusted kernel.
Systems in the HOL family use ML or its successors. ML was originally developed along with LCF as a meta-language for theorem proving systems; in fact, the name stands for "Meta-Language".
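The following sketch illustrates the LCF discipline described above. HOL systems are actually written in ML-family languages, whose module systems genuinely seal the theorem type; Python can only approximate that discipline, and the tiny inference rules here are invented for illustration:

class Theorem:
    # Only the kernel's inference rules below are intended to construct Theorem values.
    _key = object()

    def __init__(self, conclusion, key=None):
        if key is not Theorem._key:
            raise PermissionError("theorems may only be produced by kernel inference rules")
        self.conclusion = conclusion

    def __repr__(self):
        return "|- " + str(self.conclusion)

def new_axiom(p):
    # Admit p as an axiom; real HOL systems offer a similar, sparingly used facility.
    return Theorem(p, key=Theorem._key)

def modus_ponens(th_imp, th_p):
    # From |- (p implies q) and |- p, conclude |- q.
    kind, p, q = th_imp.conclusion
    assert kind == "implies" and p == th_p.conclusion, "rule applied to the wrong theorems"
    return Theorem(q, key=Theorem._key)

rain_wet = new_axiom(("implies", "raining", "wet"))
raining = new_axiom("raining")
print(modus_ponens(rain_wet, raining))   # |- wet

As long as the small set of rule functions is implemented correctly, every Theorem value in a running session represents a genuine theorem, which is exactly the trusted-kernel property described above.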
Underlying logic
HOL systems use variants of classical higher-order logic, which has simple axiomatic foundations with few axioms and well-understood semantics.
The logic used in HOL provers is closely related to Isabelle/HOL, the most widely used logic of Isabelle.
HOL implementations
A number of HOL systems (sharing essentially the same logic) remain active and in use:
HOL4 the only presently maintained and developed system stemming from the HOL88 system, which was the culmination of the original HOL implementation effort, led by Mike Gordon. HOL88 included its own ML implementation, which was in turn implemented on top of Common Lisp. The systems that followed HOL88 (HOL90, hol98 and HOL4) were all implemented in Standard ML; while hol98 is coupled to Moscow ML, HOL4 can be built with either Moscow ML or Poly/ML. All come with large libraries of theorem proving code which implement extra automation on top of the very simple core code. HOL4 is BSD licensed.
HOL Light an experimental "minimalist" version of HOL which has since grown into another mainstream HOL variant; its logical foundations remain unusually simple. HOL Light, originally implemented in Caml Light, now uses OCaml. HOL Light is available under the new BSD license.
ProofPower a collection of tools designed to provide special support for working with the Z notation for formal specification. 5 of the 6 tools are GNU GPL v2 licensed. The sixth (PPDaz) has a proprietary license.
HOL Zero a minimalist implementation focused on trustworthiness. HOL Zero is GNU GPL 3+ licensed.
Candle An end-to-end verified HOL Light implementation on top of CakeML.
Formal proof developments
The CakeML project developed a formally proven compiler for ML. Previously, HOL was used to develop a formally proven Lisp implementation running on ARM, x86 and PowerPC.
HOL was also used to formalize the semantics of x86 multiprocessors as well as the machine code for Power ISA and ARM architectures.
References
Further reading
External links
Documents specifying HOL's basic logic
HOL4 Description manual, includes system logic specification
Virtual library formal methods information
Proof assistants
Logic in computer science
Software using the BSD license | HOL (proof assistant) | [
"Mathematics"
] | 647 | [
"Mathematical logic",
"Logic in computer science"
] |
161,898 | https://en.wikipedia.org/wiki/Cliff%20Robertson | Clifford Parker Robertson III (September 9, 1923 – September 10, 2011) was an American actor whose career in film and television spanned over six decades. Robertson portrayed a young John F. Kennedy in the 1963 film PT 109, and won the 1968 Academy Award for Best Actor for his role in the film Charly.
On television, Robertson portrayed retired astronaut Buzz Aldrin in the 1976 TV film adaptation of Aldrin's autobiographic Return to Earth, played a fictional character based on Director of Central Intelligence Richard Helms in the 1977 miniseries Washington: Behind Closed Doors, and portrayed Henry Ford in Ford: The Man and the Machine (1987). His last well-known film appearances were as Uncle Ben in the 2002–2007 Spider-Man film trilogy.
Robertson was an accomplished aviator who served as the founding chairman of the Experimental Aircraft Association (EAA)'s Young Eagles Program at its inception in the early 1990s. It became the most successful aviation youth advocacy program in history.
Early life and education
Robertson was born in La Jolla, California, the son of Clifford Parker Robertson Jr. (1902–1968) and his first wife, Audrey Olga Robertson (née Willingham; 1903–1925). His Texas-born father was described as "the idle heir to a tidy sum of ranching money". Robertson once said, "[My father] was a very romantic figure – tall, handsome. He married four or five times, and between marriages he'd pop in to see me. He was a great raconteur, and he was always surrounded by sycophants who let him pick up the tab. During the Great Depression, he tapped the trust for $500,000, and six months later he was back for more."
Robertson's parents divorced when he was one, and his mother died of peritonitis a year later in El Paso, Texas, at the age of 21. He was raised by his maternal grandmother, Mary Eleanor "Eleanora" Willingham (née Sawyer, 1875–1957), in California, and rarely saw his father. He graduated in 1941 from La Jolla High School, where he was known as "The Walking Phoenix".
He served as a third mate in the U.S. Merchant Marine during World War II, before attending Antioch College in Yellow Springs, Ohio, and dropping out to work for a short time as a journalist.
Career
Robertson studied at the Actors Studio, becoming a life member. In the early 1950s he worked steadily in television, including a stint as the lead of Rod Brown of the Rocket Rangers (1953–1954). He appeared on Broadway in Late Love (1953–1954) and The Wisteria Trees (1955), the latter written by Joshua Logan.
Columbia
Robertson made his film debut in Picnic (1955), directed by Logan. Robertson played the role of William Holden's best friend – a part originated on stage by Paul Newman. Newman was under contract to Warner Bros. when the film was being made and was then considered too big a star to reprise his stage performance. Logan's wife recommended Robertson after seeing him in a revival of The Wisteria Trees, and the director remembered him from a Chicago production of Mister Roberts.
The film was a box office success and Robertson was promoted to Joan Crawford's co-star in Autumn Leaves (1956), also at Columbia Pictures, playing her mentally unstable younger lover. This meant he had to pass up the chance to replace Ben Gazzara on Broadway in Cat on a Hot Tin Roof. However he did return to Broadway to appear in Orpheus Descending by Tennessee Williams, which only had a short run.
Robertson went to RKO to make two films: The Naked and the Dead (1958), an adaptation of the famous novel, co-starring Aldo Ray; and The Girl Most Likely (1958), a musical – the last film made by RKO Studios. Robertson received superb reviews for Days of Wine and Roses on TV with Piper Laurie.
He was in Columbia's Gidget (1959), appearing opposite Sandra Dee as the Big Kahuna. It was popular and led to two sequels, neither of which Robertson appeared in. Less successful was a war film at Columbia, Battle of the Coral Sea (1959).
In 1961, he was the third lead in Paramount's All in a Night's Work, starred in Samuel Fuller's Underworld U.S.A. at Columbia, and supported Esther Williams in The Big Show. He had his first film hit since Gidget with Columbia's The Interns (1962). After supporting Debbie Reynolds in My Six Loves (1963), Robertson was President John F. Kennedy's personal choice to play him in 1963's PT 109. The film was not a success at the box office.
More popular was Sunday in New York (1963), where Robertson supported Rod Taylor and Jane Fonda, and The Best Man where he was a ruthless presidential candidate.
Robertson appeared in a popular war film 633 Squadron (1964) then supported Lana Turner in a melodrama, Love Has Many Faces (1965). In 1965 he said his contract with Columbia was for one film a year.
Charly
In 1961 Robertson played the lead role in a United States Steel Hour television production titled "The Two Worlds of Charlie Gordon", based on the novel Flowers for Algernon by Daniel Keyes. Frustrated at the progress of his career, Robertson optioned the rights to the teleplay and hired William Goldman to write a script. Before Goldman completed his work, Robertson arranged for Goldman to be hired to Americanize the dialogue for Masquerade (1965), a spy spoof which Robertson starred in, replacing Rex Harrison.
Robertson then made a war film, Up from the Beach (1965) for Fox and guest-starred on that studio's TV show, Batman (1966). He co-starred with Harrison in The Honey Pot (1967) for Joseph L. Mankiewicz then appeared in another war film, The Devil's Brigade (1968) with William Holden.
Robertson disliked Goldman's Algernon script and replaced the writer with Stirling Silliphant for what became Charly (1968). The film was another box office success and Robertson won the 1968 Academy Award for Best Actor for his portrayal of a mentally-challenged man.
Stardom
Charly was made by ABC Pictures, which insisted that Robert Aldrich use Robertson in Too Late the Hero (1970), a war film with Michael Caine that was a disappointment at the box office.
Robertson turned down roles in The Anderson Tapes, Straw Dogs (before Peckinpah was involved), and Dirty Harry. Instead Robertson co-wrote, starred in, and directed J. W. Coop (1972), another commercial disappointment despite excellent reviews.
Looking back on his career, Robertson said: "nobody made more mediocre movies than I did. Nobody ever did such a wide variety of mediocrity".
In 1969, immediately after winning the Academy Award for Charly, Robertson, a lifelong aviation enthusiast, attempted to produce and direct an aviation film, I Shot Down the Red Baron, I Think, featuring World War I aerial combat, using Lynn Garrison's Irish aviation facility. The comedic storyline portrayed the Red Baron as gay. The aircraft featured garish paint schemes. The film was never completed or released.
Robertson played Cole Younger in The Great Northfield Minnesota Raid (1972) and a pilot in Ace Eli and Rodger of the Skies (1973). He appeared in the 1974 thriller Man on a Swing and the 1975 British drama Out of Season.
Later career
Robertson returned to supporting parts in Three Days of the Condor (1975), which was a big hit. He played the lead in Obsession (1976), a popular thriller from Brian De Palma and Paul Schrader, and in the Canadian drama Shoot (1976). He was also one of several stars in Midway (1976).
Robertson turned to television for Washington: Behind Closed Doors (1977), then had the lead in a thriller, Dominique (1978). He returned to directing for The Pilot (1980), also playing the title role, an alcoholic flyer. Robertson played Hugh Hefner in Star 80 (1983). He attempted to make Charly II in 1980 but it did not happen.
From the 1980s and 1990s onwards, Robertson was predominantly a character actor. He played villains in Class (1983) and Brainstorm (1983). He did have the lead in Shaker Run (1985) in New Zealand, and Dreams of Gold: The Mel Fisher Story (1986) on TV.
In addition, he served as the company spokesperson for AT&T from 1983 to 1992 and appeared in various commercials for their long-distance service and consumer telephones.
He was a villain in Malone (1987), did Dead Reckoning (1990) on TV and supported in Wild Hearts Can't Be Broken (1991), Wind (1991), Renaissance Man (1994) and John Carpenter's Escape from L.A. (1996).
Late in his life Robertson's career had a resurgence. He appeared as Uncle Ben Parker in Sam Raimi's Spider-Man (2002), as well as in the sequels Spider-Man 2 (2004) and Spider-Man 3 (2007; his last acting role). He commented on his website: "Since Spider-Man 1 and 2, I seem to have a whole new generation of fans. That in itself is a fine residual." He also starred in and wrote 13th Child (2002) and appeared in Riding the Bullet (2004), both horror films.
In 1989, he was a member of the jury at the 39th Berlin International Film Festival.
Television
Robertson's early television appearances included a starring role in the live space opera Rod Brown of the Rocket Rangers (1953–1954), as well as recurring roles on Hallmark Hall of Fame (1952), Alcoa Theatre (1959), Playhouse 90 (1958, 1960), and Outlaws (three episodes). Robertson also appeared as a special guest star on Wagon Train for one episode, portraying an Irish immigrant.
In 1958, Robertson portrayed Joe Clay in the first broadcast of Playhouse 90's Days of Wine and Roses. In 1960, he was cast as Martinus Van Der Brig, a con man, in the episode "End of a Dream" of Riverboat.
Other appearances included: "Wagon Train" (1958), The Twilight Zone episodes "A Hundred Yards Over the Rim" (1961) and "The Dummy" (1962), followed by The Eleventh Hour in the 1963 episode "The Man Who Came Home Late". He guest-starred on such television series as The Greatest Show on Earth, Breaking Point and ABC Stage 67. He had starring roles in episodes of both the 1960s and 1990s versions of The Outer Limits, including "The Galaxy Being", the first episode of the original series. He was awarded an Emmy for his leading role in a 1965 episode, "The Game" of Bob Hope Presents the Chrysler Theatre. He appeared as a villain on five episodes of ABC's Batman series as the gunfighter "Shame" (1966 and 1968), the second time with his wife, Dina Merrill, as "Calamity Jan".
In 1976, he portrayed a retired Buzz Aldrin in an adaptation of Aldrin's autobiography Return to Earth. The next year, he portrayed a fictional Director of Central Intelligence (based on Richard Helms) in Washington: Behind Closed Doors, an adaptation of John Ehrlichman's roman à clef The Company, in turn based on the Watergate scandal. In 1987, he portrayed Henry Ford in Ford: The Man and The Machine. From 1983 to 1984, he played Dr. Michael Ranson in Falcon Crest.
Columbia Pictures scandal
In 1977, Robertson discovered that his signature had been forged on a $10,000 check payable to him, although it was for work he had not performed. He also learned that the forgery had been carried out by then-Columbia Pictures head David Begelman, and on reporting it he inadvertently triggered one of the biggest Hollywood scandals of the 1970s. Begelman was charged with embezzlement, convicted, and later fired from Columbia. Despite pressure to remain quiet, Robertson and his wife Dina Merrill spoke to the press. As a result, Columbia blacklisted him and would not make another film with him in it until 2002's Spider-Man.
He finally returned to studio film five years later, starring in Brainstorm (1983). The story of the scandal is told in David McClintick's 1982 bestseller, Indecent Exposure: A True Story of Hollywood and Wall Street (William Morrow and Company).
Personal life
In 1957, Robertson married actress Cynthia Stone, the former wife of actor Jack Lemmon. They had a daughter, Stephanie, before divorcing in 1959; he also had a stepson by this marriage, Chris Lemmon. In 1966, he married actress and Post Cereals heiress Dina Merrill, the former wife of Stanley M. Rumbough Jr.; they had a daughter, Heather (1968–2007), before divorcing. He resided in Water Mill, New York.
Robertson was a Democrat and supported Arizona congressman Morris K. Udall during the 1976 Democratic presidential primaries.
Aviation
A certified private pilot, Robertson counted flying among his main hobbies; among other aircraft, he owned several de Havilland Tiger Moths, a Messerschmitt Bf 108, and a genuine World War II–era Mk. IX Supermarine Spitfire (MK923). His first plane flight was in a Lockheed Model 9 Orion. As a 13-year-old, he cleaned hangars for airplane rides. He met Paul Mantz, Art Scholl, and Charles Lindbergh while flying at local California airports. His piloting skills helped him get the part as the squadron leader in the British war film 633 Squadron. He entered balloon races, including one in 1964 from the mainland to Catalina Island that ended with him being rescued from the Pacific Ocean. He was also a glider pilot and owned a Grob Astir.
In 1969, during the civil war in Nigeria, Robertson helped organize an effort to fly food and medical supplies into the area. He also organized flights of supplies to Ethiopia when the country experienced famine in 1978.
Robertson was flying a private Beechcraft Baron over New York City on the morning of September 11, 2001, two days after his 78th birthday. He was directly above the World Trade Center, climbing through 7,500 feet when the first Boeing 767 struck. He was instructed by air traffic control to land immediately at the nearest airport after a nationwide order to ground all civilian and commercial aircraft following the attacks.
Young Eagles
He was a longtime member of the Experimental Aircraft Association (EAA), working his way through the ranks in prominence and eventually co-founding the Young Eagles Program with EAA president Tom Poberezny. Robertson chaired the program from its 1992 inception to 1994 (succeeded by former test pilot Chuck Yeager). Along with educating youth about aviation, the initial goal of the Young Eagles was to fly one million children (many of them never having flown before) prior to the 100th Anniversary of Flight celebration on December 17, 2003. That goal was achieved on November 13, 2003. On July 28, 2016, the two millionth Young Eagle was flown by actor Harrison Ford. Within the EAA, he also founded the Cliff Robertson Work Experience in 1993, which offers youths the chance to work for flight and ground school instruction.
Death
On September 10, 2011, one day after his 88th birthday, Robertson died of natural causes in Stony Brook, New York. His body was cremated; a private funeral was held at St. Luke's Episcopal Church in East Hampton, New York, and his remains were interred at Cedar Lawn Cemetery.
Filmography
Awards
Robertson was inducted into the National Aviation Hall of Fame in 2006. He received the Rebecca Rice Alumni Award from Antioch College in 2007. In addition to his Oscar and Emmy and several lifetime achievement awards from various film festivals, Robertson has a star on the Hollywood Walk of Fame at 6801 Hollywood Blvd. He was also awarded the 2008 Ambassador of Good Will Aviation Award by the National Transportation Safety Board (NTSB) Bar Association in Alexandria, Virginia, for his leadership in and promotion of general aviation. In 2009, Robertson was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum, and was part of the Living Legends of Aviation.
Notes
References
External links
Interview in the Archive of American Television
Warbird Registry entry on MK923
"Cliff Robertson, 1923–2011: Actor, Writer, Producer and Director", a Special English presentation of Voice of America
Biography in the National Aviation Hall of Fame
1923 births
2011 deaths
20th-century American Episcopalians
20th-century American male actors
21st-century American male actors
American aviators
American glider pilots
American male film actors
American male stage actors
American male television actors
American military personnel of World War II
American sailors
Antioch College alumni
Best Actor Academy Award winners
California Democrats
Experimental Aircraft Association
Male actors from San Diego
Military personnel from California
New York (state) Democrats
Outstanding Performance by a Lead Actor in a Miniseries or Movie Primetime Emmy Award winners
People from La Jolla, San Diego
People from Stony Brook, New York
United States Merchant Mariners
United States Merchant Mariners of World War II
Members of The Lambs Club | Cliff Robertson | [
"Engineering"
] | 3,594 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
161,900 | https://en.wikipedia.org/wiki/Logic%20for%20Computable%20Functions | Logic for Computable Functions (LCF) is an interactive automated theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML to allow users to write theorem-proving tactics, supporting algebraic data types, parametric polymorphism, abstract data types, and exceptions.
Basic idea
Theorems in the system are terms of a special "theorem" abstract data type. The general mechanism of abstract data types of ML ensures that theorems are derived using only the inference rules given by the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.
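As an illustration of this idea, the following is a minimal invented sketch in OCaml (not the original Stanford or Edinburgh LCF code): the kernel exposes the theorem type abstractly, so its only constructors are the inference rules it exports.

```ocaml
(* Minimal invented sketch of an LCF-style kernel in OCaml (not the original
   LCF source). The signature keeps `thm` abstract, so client code can obtain
   theorems only through the exported inference rules. *)
module type KERNEL = sig
  type term = Var of string | Imp of term * term
  type thm                                  (* abstract "theorem" type *)
  val assume : term -> thm                  (* {A} |- A *)
  val mp : thm -> thm -> thm                (* from |- A ==> B and |- A, derive |- B *)
  val dest_thm : thm -> term list * term    (* view hypotheses and conclusion *)
end

module Kernel : KERNEL = struct
  type term = Var of string | Imp of term * term
  type thm = Thm of term list * term        (* hypotheses, conclusion *)
  let assume a = Thm ([a], a)
  let mp (Thm (h1, imp)) (Thm (h2, a)) =
    match imp with
    | Imp (x, y) when x = a -> Thm (h1 @ h2, y)
    | _ -> failwith "mp: theorems do not match"
  let dest_thm (Thm (h, c)) = (h, c)
end

(* Client code may perform arbitrary computation, but every value of type
   Kernel.thm it produces was built by the rules above. *)
let () =
  let open Kernel in
  let a = Var "a" in
  let th = mp (assume (Imp (a, a))) (assume a) in   (* {a ==> a, a} |- a *)
  let hyps, concl = dest_thm th in
  Printf.printf "hypotheses: %d, conclusion is the variable a: %b\n"
    (List.length hyps) (concl = a)
```

Any amount of proof-search code can be layered on top of such a kernel; whatever value of type thm it returns was necessarily derived through the primitive rules, which is the guarantee described above. The same theorem record could also optionally carry a proof object if the system were configured to record one.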
Advantages
The LCF approach provides similar trustworthiness to systems that generate explicit proof certificates but without the need to store proof objects in memory. The Theorem data type can be easily implemented to optionally store proof objects, depending on the system's run-time configuration, so it generalizes the basic proof-generation approach. The design decision to use a general-purpose programming language for developing theorems means that, depending on the complexity of programs written, it is possible to use the same language to write step-by-step proofs, decision procedures, or theorem provers.
Disadvantages
Trusted computing base
The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML resulted in a formally verified ML compiler, alleviating some of these concerns.
Efficiency and complexity of proof procedures
Theorem proving often benefits from decision procedures and theorem proving algorithms, whose correctness has been extensively analyzed. A straightforward way of implementing these procedures in an LCF approach requires such procedures to always derive outcomes from the axioms, lemmas, and inference rules of the system, as opposed to directly computing the outcome. A potentially more efficient approach is to use reflection to prove that a function operating on formulas always gives a correct result.
Influences
Among subsequent implementations is Cambridge LCF. Later systems simplified the logic to use total instead of partial functions, leading to HOL, HOL Light, and the Isabelle proof assistant that supports various logics. As of 2019, the Isabelle proof assistant still contains an implementation of an LCF logic, Isabelle/LCF.
Notes
References
Logic in computer science
Proof assistants | Logic for Computable Functions | [
"Mathematics"
] | 513 | [
"Mathematical logic stubs",
"Mathematical logic",
"Logic in computer science"
] |
161,921 | https://en.wikipedia.org/wiki/Predicative%20verb | A predicative verb is a verb that behaves as a grammatical adjective; that is, it predicates (qualifies or informs about the properties of its argument). It is a special kind of stative verb.
Many languages do not use the present forms of the verb "to be" to separate an adjective from its noun: instead, these forms of the verb "to be" are understood as part of the adjective. Egyptian uses this structure: "my mouth is red" is written as "red my mouth" (/dSr=f r=i/). Other languages to use this structure include the Northwest Caucasian languages, the Thai language, Indonesian, the East Slavic languages, the Semitic languages, some Nilotic languages and the Athabaskan languages. Many adjectives in Chinese and Japanese also behave like this.
In the Akkadian languages, the "predicative" (also called the "permansive" or "stative") is a set of pronominal inflections used to convert noun stems into effective sentences, so that the form šarrāku is a single word more or less equivalent to either of the sentences šarrum anāku "I am king" or šarratum anāku "I am queen".
References
Parts of speech
Verb types | Predicative verb | [
"Technology"
] | 269 | [
"Parts of speech",
"Components"
] |
161,935 | https://en.wikipedia.org/wiki/Animator | An animator is an artist who creates images, known as frames, which give an illusion of movement called animation when displayed in rapid sequence. Animators can work in a variety of fields including film, television, and video games. Animation is closely related to filmmaking and like filmmaking is extremely labor-intensive, which means that most significant works require the collaboration of several animators. The methods of creating the images or frames for an animation piece depend on the animators' artistic styles and their field.
Other artists who contribute to animated cartoons, but who are not animators, include layout artists (who design the backgrounds, lighting, and camera angles), storyboard artists (who draw panels of the action from the script), and background artists (who paint the "scenery"). Animated films share some film crew positions with regular live action films, such as director, producer, sound engineer, and editor, but differ radically in that for most of the history of animation, they did not need most of the crew positions seen on a physical set.
In hand-drawn Japanese animation productions, such as in Hayao Miyazaki's films, the key animator handles both layout and key animation. Some animators in Japan such as Mitsuo Iso take full responsibility for their scenes, making them become more than just the key animator.
Specialized fields
Animators often specialize. One important distinction is between character animators (artists who specialize in character movement, dialogue, acting, etc.) and special effects animators (who animate anything that is not a character; most commonly vehicles, machinery, and natural phenomena such as rain, snow, and water).
Stop motion animators do not draw their images, instead they move models or cut-outs frame-by-frame, famous animators of this genre being Ray Harryhausen and Nick Park.
Inbetweeners and cleanup artists
In large-scale productions by major studios, each animator usually has one or more assistants, "inbetweeners" and "clean-up artists", who make drawings between the "key poses" drawn by the animator, and also re-draw any sketches that are too roughly made to be used as such. Usually, a young artist seeking to break into animation is hired for the first time in one of these categories, and can later advance to the rank of full animator (usually after working on several productions).
Methods
Historically, the creation of animation was a long and arduous process. Each frame of a given scene was hand-drawn, then transposed onto celluloid, where it would be traced and painted. These finished "cels" were then placed together in sequence over painted backgrounds and filmed, one frame at a time.
Animation methods have become far more varied in recent years. Today's cartoons could be created using any number of methods, mostly using computers to make the animation process cheaper and faster. These more efficient animation procedures have made the animator's job less tedious and more creative.
Audiences generally find animation to be much more interesting with sound. Voice actors and musicians, among other talent, may contribute vocal or music tracks. Some early animated films asked the vocal and music talent to synchronize their recordings to already-extant animation (and this is still the case when films are dubbed for international audiences). For the majority of animated films today, the soundtrack is recorded first in the language of the film's primary target market and the animators are required to synchronize their work to the soundtrack.
Evolution of animator's roles
As a result of the ongoing transition from traditional 2D to 3D computer animation, the animator's traditional task of redrawing and repainting the same character 24 times a second (for each second of finished animation) has now been superseded by the modern task of developing dozens (or hundreds) of movements of different parts of a character in a virtual scene.
Because of the transition to computer animation, many additional support positions have become essential, with the result that the animator has become but one component of a very long and highly specialized production pipeline. In the 21st century, visual development artists design a character as a 2D drawing or painting, then hand it off to modelers who build the character as a collection of digital polygons. Texture artists "paint" the character with colorful or complex textures, and technical directors set up rigging so that the character can be easily moved and posed. For each scene, layout artists set up virtual cameras and rough blocking. Finally, when a character's bugs have been worked out and its scenes have been blocked, it is handed off to an animator (that is, a person with that actual job title) who can start developing the exact movements of the character's virtual limbs, muscles, and facial expressions in each specific scene.
At that point, the role of the modern computer animator overlaps in some respects with that of his or her predecessors in traditional animation: namely, trying to create scenes already storyboarded in rough form by a team of story artists, and synchronizing lip or mouth movements to dialogue already prepared by a screenwriter and recorded by vocal talent. Despite those constraints, the animator is still capable of exercising significant artistic skill and discretion in developing the character's movements to accomplish the objective of each scene. There is an obvious analogy here between the art of animation and the art of acting, in that actors also must do the best they can with the lines they are given; it is often encapsulated by the common industry saying that animators are "actors with pencils". In 2015, Chris Buck noted in an interview that animators have become "actors with mice." Some studios bring in acting coaches on feature films to help animators work through such issues. Once each scene is complete and has been perfected through the "sweat box" feedback process, the resulting data can be dispatched to a render farm, where computers handle the tedious task of actually rendering all the frames. Each finished film clip is then checked for quality and rushed to a film editor, who assembles the clips together to create the film.
While early computer animation was heavily criticized for rendering human characters that looked plastic or even worse, eerie (see uncanny valley), contemporary software can now render strikingly realistic clothing, hair, and skin. The solid shading of traditional animation has been replaced by very sophisticated virtual lighting in computer animation, and computer animation can take advantage of many camera techniques used in live-action filmmaking (i.e., simulating real-world "camera shake" through motion capture of a cameraman's movements). As a result, some studios now hire nearly as many lighting artists as animators for animated films, while costume designers, hairstylists, choreographers, and cinematographers have occasionally been called upon as consultants to computer-animated projects.
See also
Animation
Computer animation
Computer graphics
Key frame
List of animators
Sweat box
References
External links
Animation Toolworks Glossary: Who Does What In Animation
How An Animated Cartoon Is Made
Visual arts occupations
Filmmaking occupations
Computer occupations | Animator | [
"Technology"
] | 1,433 | [
"Computer occupations"
] |
161,995 | https://en.wikipedia.org/wiki/Quorum%20sensing | In biology, quorum sensing or quorum signaling (QS) is the process of cell-to-cell communication that allows bacteria to detect and respond to cell population density by gene regulation, typically as a means of acclimating to environmental disadvantages.
Quorum sensing is a type of cellular signaling, and more specifically can be considered a type of paracrine signaling. However, it also contains traits of autocrine signaling: a cell produces both an autoinducer molecule and the receptor for the autoinducer. As one example, QS enables bacteria to restrict the expression of specific genes to the high cell densities at which the resulting phenotypes will be most beneficial, especially for phenotypes that would be ineffective at low cell densities and therefore too energetically costly to express. Many species of bacteria use quorum sensing to coordinate gene expression according to the density of their local population. In a similar fashion, some social insects use quorum sensing to determine where to nest. Quorum sensing in pathogenic bacteria activates host immune signaling and prolongs host survival, by limiting the bacterial intake of nutrients, such as tryptophan, which is further converted to serotonin. As such, quorum sensing allows a commensal interaction between host and pathogenic bacteria. Quorum sensing may also be useful for cancer cell communications.
In addition to its function in biological systems, quorum sensing has several useful applications for computing and robotics. In general, quorum sensing can function as a decision-making process in any decentralized system in which the components have: (a) a means of assessing the number of other components they interact with, and (b) a standard response once a threshold number of components is detected.
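The following is a minimal invented sketch (in OCaml; the agent model, threshold, and encounter counts are illustrative and not taken from any cited system) of that decision rule: each component counts its encounters with other components and switches to the group response only once an arbitrary threshold is crossed.

```ocaml
(* Minimal, invented sketch of quorum sensing as a decentralized decision rule:
   each agent counts encounters with neighbours and commits to the collective
   response only once the count crosses a threshold. Parameters are arbitrary. *)
type agent = { mutable encounters : int; mutable committed : bool }

let threshold = 5          (* quorum threshold: invented value *)
let n_agents = 10
let n_meetings = 100       (* higher density => more meetings => quorum reached *)

let () =
  Random.self_init ();
  let agents = Array.init n_agents (fun _ -> { encounters = 0; committed = false }) in
  (* (a) each agent assesses how many other components it interacts with *)
  for _ = 1 to n_meetings do
    let i = Random.int n_agents and j = Random.int n_agents in
    if i <> j then begin
      agents.(i).encounters <- agents.(i).encounters + 1;
      agents.(j).encounters <- agents.(j).encounters + 1
    end
  done;
  (* (b) a standard response once the threshold is detected *)
  Array.iter (fun a -> if a.encounters >= threshold then a.committed <- true) agents;
  let committed =
    Array.fold_left (fun n a -> if a.committed then n + 1 else n) 0 agents in
  Printf.printf "%d of %d agents sensed a quorum\n" committed n_agents
```

Raising the number of meetings, the analogue of population density, makes it more likely that agents cross the threshold, which is the essential density dependence of quorum sensing.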
Discovery
The first observations of an autoinducer-controlled phenotype in bacteria were reported in 1970, by Kenneth Nealson, Terry Platt, and J. Woodland Hastings, who observed what they described as a conditioning of the medium in which they had grown the bioluminescent marine bacterium Aliivibrio fischeri. These bacteria did not synthesize luciferase—and therefore did not luminesce—in freshly inoculated culture but only after the bacterial population had increased significantly.
Etymology
Because Nealson, Platt, and Hastings attributed the conditioning of the growth medium to the growing population of cells itself, they referred to the phenomenon as autoinduction. In 1994, after study of the phenomenon had expanded into several additional bacteria, Stephen Winans did not believe the word autoinduction fully characterized the true process so, in a review article coauthored with W. Claiborne Fuqua and E. Peter Greenberg, he introduced the term quorum sensing. Its use also avoided confusion between the terms autoinduction and autoregulation.
The new term was not stumbled onto, but rather created through trial and error. Among the alternatives that Winans had created and considered were gridlockins, communiolins, and quoromones.
Bacteria
Some of the best-known examples of quorum sensing come from studies of bacteria. Bacteria use quorum sensing to regulate certain phenotype expressions, which in turn, coordinate their behaviors. Some common phenotypes include biofilm formation, virulence factor expression, and motility. Certain bacteria are able to use quorum sensing to regulate bioluminescence, nitrogen fixation and sporulation.
The quorum-sensing function is based on the local density of the bacterial population in the immediate environment. It can occur within a single bacterial species, as well as between diverse species. Both gram-positive and gram-negative bacteria use quorum sensing, but there are some major differences in their mechanisms.
Mechanism
For the bacteria to use quorum sensing constitutively, they must possess three abilities: secretion of a signaling molecule (an autoinducer), detection of the change in concentration of signaling molecules, and regulation of gene transcription as a response. This process is highly dependent on the diffusion mechanism of the signaling molecules. QS signaling molecules are usually secreted at a low level by individual bacteria. At low cell density, the molecules may just diffuse away. At high cell density, the local concentration of signaling molecules may exceed its threshold level and trigger changes in gene expression.
Gram-positive bacteria
Gram-positive bacteria use autoinducing peptides (AIP) as their autoinducers.
Gram-positive bacteria detect high concentrations of AIPs in their environment by way of AIPs binding to a receptor that activates a kinase. The kinase phosphorylates a transcription factor, which regulates gene transcription. This is called a two-component system.
Another possible mechanism is that AIP is transported into the cytosol, and binds directly to a transcription factor to initiate or inhibit transcription.
Gram-negative bacteria
Gram-negative bacteria produce N-acyl homoserine lactones (AHL) as their signaling molecule. Usually AHLs do not need additional processing, and bind directly to transcription factors to regulate gene expression.
Some gram-negative bacteria may use the two-component system as well.
Examples
Aliivibrio fischeri
The bioluminescent bacterium Aliivibrio fischeri is the first organism in which QS was observed. It lives as a mutualistic symbiont in the photophore (or light-producing organ) of the Hawaiian bobtail squid. When A. fischeri cells are free-living (or planktonic), the autoinducer is at low concentration, and, thus, cells do not show luminescence. However, when the population in the photophore reaches a threshold density, transcription of luciferase is induced, leading to bioluminescence.
In A. fischeri, bioluminescence is regulated by AHLs (N-acyl-homoserine lactones), the product of the LuxI gene, whose transcription is regulated by the LuxR activator. LuxR works only when AHL binds to it.
Curvibacter sp.
Curvibacter sp. is a gram-negative, curved, rod-shaped bacterium which is the main colonizer of the epithelial cell surfaces of the early branching metazoan Hydra vulgaris. Sequencing the complete genome uncovered a circular chromosome (4.37 Mb), a plasmid (16.5 kb), and two operons, each coding for an AHL (N-acyl-homoserine lactone) synthase (curI1 and curI2) and an AHL receptor (curR1 and curR2). Moreover, a study showed that these host-associated Curvibacter bacteria produce a broad spectrum of AHLs, explaining the presence of those operons. As mentioned before, AHLs are the quorum sensing molecules of gram-negative bacteria, which means Curvibacter has quorum sensing activity.
Even though their function in host-microbe interaction is largely unknown, Curvibacter quorum-sensing signals are relevant for host-microbe interactions. Indeed, due to the oxidoreductase activity of Hydra, there is a modification of AHL signalling molecules (3-oxo-homoserine lactone into 3-hydroxy-homoserine lactone) which leads to a different host-microbe interaction. On one hand, a phenotypic switch of the colonizer Curvibacter takes place. The most likely explanation is that the binding of 3-oxo-HSL and 3-hydroxy-HSL causes different conformational changes in the AHL receptors curR1 and curR2. As a result, there is a different DNA-binding motif affinity and thereby different target genes are activated. On the other hand, this switch modifies its ability to colonize the epithelial cell surfaces of Hydra vulgaris. Indeed, one explanation is that with a 3-oxo-HSL quorum-sensing signal, there is an up-regulation of flagellar assembly. Yet, flagellin, the main protein component of flagella, can act as an immunomodulator and activate the innate immune response in Hydra. Therefore, bacteria have less chance to evade the immune system and to colonize host tissues. Another explanation is that 3-hydroxy-HSL induces carbon metabolism and fatty acid degradation genes in Hydra. This allows the bacterial metabolism to adjust itself to the host growth conditions, which is essential for the colonization of the ectodermal mucus layer of Hydrae.
Enterococcus faecalis
Enterococcus faecalis is an opportunistic, gram-positive bacterium that forms biofilms in glass, a process also known as forming a biofilm in vitro. The presence of a certain cell surface protein (Esp) aids the formation of a biofilm by E. faecalis.
The ability of E. faecalis to form biofilms contributes to its capacity to survive in extreme environments, and facilitates its involvement in persistent bacterial infection, particularly in the case of multi-drug resistant strains. Biofilm formation in E. faecalis is associated with DNA release, and such release has emerged as a fundamental aspect of biofilm formation. Conjugative plasmid DNA transfer in E. faecalis is enhanced by the release of peptide sex pheromones.
Escherichia coli
In the gram-negative bacterium Escherichia coli, cell division may be partially regulated by AI-2-mediated quorum sensing. This species uses AI-2, which is produced and processed by the lsr operon. Part of it encodes an ABC transporter, which imports AI-2 into the cells during the early stationary (latent) phase of growth. AI-2 is then phosphorylated by the LsrK kinase, and the newly produced phospho-AI-2 can be either internalized or used to suppress LsrR, a repressor of the lsr operon (thereby activating the operon). Transcription of the lsr operon is also thought to be inhibited by dihydroxyacetone phosphate (DHAP) through its competitive binding to LsrR. Glyceraldehyde 3-phosphate has also been shown to inhibit the lsr operon through cAMP-CAPK-mediated inhibition. This explains why, when grown with glucose, E. coli will lose the ability to internalize AI-2 (because of catabolite repression). When grown normally, AI-2 presence is transient.
E. coli and Salmonella enterica do not produce the AHL signals commonly found in other gram-negative bacteria. However, they have a receptor that detects AHLs from other bacteria and change their gene expression in accordance with the presence of other "quorate" populations of gram-negative bacteria. AHL quorum sensing regulates a wide range of genes through cell density. Other species of bacteria produce AHLs that Escherichia and Salmonella can detect: both encode a receptor-like protein (SdiA) whose amino acid sequence is similar to that of AHL receptors such as LuxR. AHLs can be found in the bovine rumen, and E. coli responds to AHLs taken out of the bovine rumen. Most animals do not have AHL in their gastrointestinal tracts.
Salmonella enterica
Salmonella encodes a LuxR homolog, SdiA, but does not encode an AHL synthase. SdiA detects AHLs produced by other species of bacteria including Aeromonas hydrophila, Hafnia alvei, and Yersinia enterocolitica. When AHL is detected, SdiA regulates the rck operon on the Salmonella virulence plasmid (pefI-srgD-srgA-srgB-rck-srgC) and srgE, a single horizontally acquired gene on the chromosome. Salmonella does not detect AHL when passing through the gastrointestinal tracts of several animal species, suggesting that the normal microbiota does not produce AHLs. However, SdiA does become activated when Salmonella transits through turtles colonized with Aeromonas hydrophila or mice infected with Yersinia enterocolitica. Therefore, Salmonella appears to use SdiA to detect the AHL production of other pathogens rather than the normal gut flora.
Myxococcus xanthus
Myxococcus is a genus of gram-negative bacteria in the family Myxococcaceae. Myxococcus xanthus specifically, a rod-shaped (bacillus) myxobacterial species within this family, grows in the upper layers of soil. This bacterium is known for its unique utilization of quorum sensing practices to hunt.
The bacterium uniquely survives not on sugars, but on lipids created by the degradation of macromolecules lysed by the species. It hunts and feeds through a density-regulated method of predation that is "the regulation of gene expression in response to cell density." The pilus-propelled microorganism moves with the use of both S- and A- (or gliding) motility, which provide transportation across a dynamic range of different surfaces. M. xanthus A-motility is most effective in the presence of a single cell or a low number of cells, allowing the bacteria to glide in high agar concentrations. The S-motility, or social motility, is controlled by the process of quorum sensing and is only effective when cells are within one cell length of a neighbor. Although the precise specifics of M. xanthus communication methods for quorum sensing are not well understood, the bacteria mediate the process by using both C-signal and A-factor. The A-factor molecule, produced by M. xanthus, must reach a set concentration to initiate aggregation for hunting. The C-signal concentration, on the other hand, plays a role in fruiting body production.
The species is known for its ability to use quorum sensing to hunt in special packs with thousands of individual cells, lending M. xanthus the nickname "the wolf packs." M. xanthus is inclined to behave in a multicellular fashion. In the presence of many cells, it uses these "wolf packs" to form "highly structured biofilms that include tentacle-like packs of surface-gliding cell groups, synchronized rippling waves of oscillating cells and massive spore-filled aggregates that protrude upwards from the substratum to form fruiting bodies." On the fringes of this biofilm, individual cells can be observed "gliding across the surface, but the majority of cells are observed in large tendril-shaped groups" using S-motility.
Staphylococcus aureus
Staphylococcus aureus is a type of pathogen that causes infections of the skin and soft tissue and can lead to a variety of more severe diseases such as osteomyelitis, pneumonia, and endocarditis. S. aureus uses biofilms in order to increase its chances of survival by becoming resistant to antibiotics. Biofilms help S. aureus become up to 1500 times more resistant to antibiofilm agents, which try to break down biofilms formed by S. aureus.
Pseudomonas aeruginosa
The environmental bacterium and opportunistic pathogen Pseudomonas aeruginosa uses quorum sensing to coordinate the formation of biofilm, swarming motility, exopolysaccharide production, virulence, and cell aggregation. These bacteria can grow within a host without harming it until they reach a threshold concentration. Then they become aggressive, developing to the point at which their numbers are sufficient to overcome the host's immune system, and form a biofilm, leading to disease within the host as the biofilm is a protective layer encasing the bacterial population. The relative ease of growth, handling, and genetic manipulation of P. aeruginosa has lent much research effort to the quorum sensing circuits of this relatively common bacterium. Quorum sensing in P. aeruginosa typically encompasses two complete AHL synthase-receptor circuits, LasI-LasR and RhlI-RhlR, as well as the orphan receptor-regulator QscR, which is also activated by the LasI-generated signal. Together, the multiple AHL quorum sensing circuits of P. aeruginosa influence regulation of hundreds of genes.
Another form of gene regulation that allows the bacteria to rapidly adapt to surrounding changes is through environmental signaling. Recent studies have discovered that anaerobiosis can significantly impact the major regulatory circuit of quorum sensing. This important link between quorum sensing and anaerobiosis has a significant impact on the production of virulence factors of this organism. It is hoped that therapeutic enzymatic degradation of the signaling molecules will be possible for treating illnesses caused by biofilms, preventing the formation of such biofilms and possibly weakening established biofilms. Disrupting the signaling process in this way is called quorum sensing inhibition.
Acinetobacter sp.
It has recently been found that Acinetobacter sp. also show quorum sensing activity. This bacterium, an emerging pathogen, produces AHLs. Acinetobacter sp. shows both quorum sensing and quorum quenching activity. It produces AHLs and can also degrade the AHL molecules.
Aeromonas sp.
This bacterium was previously considered a fish pathogen, but it has recently emerged as a human pathogen. Aeromonas sp. have been isolated from various infected sites from patients (bile, blood, peritoneal fluid, pus, stool and urine). All isolates produced the two principal AHLs, N-butanoylhomoserine lactone (C4-HSL) and N-hexanoyl homoserine lactone (C6-HSL). It has been documented that Aeromonas sobria has produced C6-HSL and two additional AHLs with N-acyl side chain longer than C6.
Yersinia
The YenR and YenI proteins produced by the gammaproteobacterium Yersinia enterocolitica are similar to Aliivibrio fischeri LuxR and LuxI. YenR activates the expression of a small non-coding RNA, YenS. YenS inhibits YenI expression and acylhomoserine lactone production. YenR/YenI/YenS are involved in the control of swimming and swarming motility.
Molecules involved
Three-dimensional structures of proteins involved in quorum sensing were first published in 2001, when the crystal structures of three LuxS orthologs were determined by X-ray crystallography. In 2002, the crystal structure of the receptor LuxP of Vibrio harveyi with its inducer AI-2 (which is one of the few biomolecules containing boron) bound to it was also determined. Many bacterial species, including E. coli, an enteric bacterium and model organism for gram-negative bacteria, produce AI-2. A comparative genomic and phylogenetic analysis of 138 genomes of bacteria, archaea, and eukaryotes found that "the LuxS enzyme required for AI-2 synthesis is widespread in bacteria, while the periplasmic binding protein LuxP is present only in Vibrio strains," leading to the conclusion that either "other organisms may use components different from the AI-2 signal transduction system of Vibrio strains to sense the signal of AI-2 or they do not have such a quorum sensing system at all." Vibrio species utilize Qrr RNAs, small non-coding RNAs, that are activated by these autoinducers to target cell density master regulators. Farnesol is used by the fungus Candida albicans as a quorum sensing molecule that inhibits filamentation.
A database of quorum-sensing peptides is available under the name Quorumpeps.
Certain bacteria can produce enzymes called lactonases that can target and inactivate AHLs.
Researchers have developed novel molecules which block the signalling receptors of bacteria ("Quorum quenching"). mBTL is a compound that has been shown to inhibit quorum sensing and decrease the amount of cell death by a significant amount.
Additionally, researchers are also examining the role of natural compounds (such as caffeine) as potential quorum sensing inhibitors. Research in this area has been promising and could lead to the development of natural compounds as effective therapeutics.
Evolution
Sequence analysis
The majority of quorum sensing systems that fall under the "two-gene" (an autoinducer synthase coupled with a receptor molecule) paradigm as defined by the Vibrio fischeri system occur in the gram-negative Pseudomonadota. A comparison between the Pseudomonadota phylogeny as generated by 16S ribosomal RNA sequences and phylogenies of LuxI-, LuxR-, or LuxS-homologs shows a notably high level of global similarity. Overall, the quorum sensing genes seem to have diverged along with the Pseudomonadota phylum as a whole. This indicates that these quorum sensing systems are quite ancient, and arose very early in the Pseudomonadota lineage.
LuxI and LuxR have coevolved through a long history of horizontal gene transfer (HGT) events. An early study reconciling their gene trees with the rRNA tree suggested frequent HGT events for both LuxI and LuxR, indicating that they are horizontally transferred together and coevolve due to their functional dependency. Similarly, in QS systems in bacteria associated with Populus deltoides, the gene trees for luxI and luxR show high topological similarity, indicating coevolution of cognate pairs. In addition to horizontal transfer of complete LuxI/LuxR-type QS systems, many Proteobacteria genomes exhibit an excess of LuxR genes or cases with only LuxR but not LuxI, acquired from different sources via HGT. Due to the frequent transfer of functional pairs of homologs (i.e., LuxI/LuxR-type systems from multiple independent sources), it is possible that the regulatory hierarchy formed by the LuxI/LuxR and RhlR-RhlI systems is a result of sequential integration of circuits obtained from different sources, due to interactions between multiple homologs. Interestingly, LuxI genes have likely undergone horizontal gene transfer from Proteobacteria to other lineages, as they have been detected in Nitrospira lineage II.
In quorum sensing genes of Gammaproteobacteria, which includes Pseudomonas aeruginosa and Escherichia coli, the LuxI/LuxR genes form a functional pair, with LuxI as the auto-inducer synthase and LuxR as the receptor. Gammaproteobacteria are unique in possessing quorum sensing genes, which, although functionally similar to the LuxI/LuxR genes, have a markedly divergent sequence. This family of quorum-sensing homologs may have arisen in the Gammaproteobacteria ancestor, although the cause of their extreme sequence divergence yet maintenance of functional similarity has yet to be explained. In addition, species that employ multiple discrete quorum sensing systems are almost all members of the Gammaproteobacteria, and evidence of horizontal transfer of quorum sensing genes is most evident in this class.
Interaction of quorum-sensing molecules with mammalian cells and its medical applications
Next to the potential antimicrobial functionality, quorum-sensing derived molecules, especially the peptides, are being investigated for their use in other therapeutic domains as well, including immunology, central nervous system disorders and oncology. Quorum-sensing peptides have been demonstrated to interact with cancer cells, as well as to permeate the blood–brain barrier reaching the brain parenchyma.
Role of quorum sensing in biofilm development
Quorum sensing (QS) is used by bacteria to form biofilms because the process determines whether the minimum number of bacteria necessary for biofilm formation is present. The criterion for forming a biofilm is a certain density of bacteria rather than a certain number of bacteria being present. When aggregated at high enough densities, some bacteria may form biofilms to protect themselves from biotic or abiotic threats. Quorum sensing is used by both Gram-positive and Gram-negative bacteria because it aids cellular reproduction. Once in a biofilm, bacteria can communicate with other bacteria of the same species, and also with other species of bacteria. This communication is enabled through autoinducers used by the bacteria.
Additionally, certain responses can be generated by the host organism in response to the certain bacterial autoinducers. Despite the fact that specific bacterial quorum sensing systems are different, for example the target genes, signal relay mechanisms, and chemical signals used between bacteria, the ability to coordinate gene expression for a specific species of bacteria remains the same. This ability alludes to the larger idea that bacteria have potential to become a multicellular bacterial body.
Secondly, biofilms may also serve to transport nutrients into the microbial community or transport toxins out by means of channels that permeate the extracellular polymeric matrix (like cellulose) that holds the cells together. Finally, biofilms are an ideal environment for horizontal gene transfer through either conjugation or environmental DNA (eDNA) that exists in the biofilm matrix.
The process of biofilm development is often triggered by environmental signals, and bacteria are proven to require flagella to successfully approach a surface, adhere to it, and form the biofilm. As cells either replicate or aggregate in a location, the concentration of autoinducers outside of the cells increases until a critical mass threshold is reached. At this point, it is energetically unfavorable for intracellular autoinducers to leave the cell and they bind to receptors and trigger a signaling cascade to initiate gene expression and begin secreting an extracellular polysaccharide to encase themselves inside.
One modern method of preventing biofilm development without the use of antibiotics is with anti-QS substances, such as naringenin and taxifolin, that can be utilized as an alternative form of therapy against bacterial virulence.
Archaea
Examples
Methanosaeta harundinacea 6Ac
Methanosaeta harundinacea 6Ac, a methanogenic archaeon, produces carboxylated acyl homoserine lactone compounds that facilitate the transition from growth as short cells to growth as filaments.
Viruses
A mechanism involving arbitrium has recently been described in bacteriophages infecting several Bacillus species. The viruses communicate with each other to ascertain their own density compared to potential hosts. They use this information to decide whether to enter a lytic or lysogenic life-cycle. This decision is crucial as it affects their replication strategy and potential to spread within the host population, optimizing their survival and proliferation under varying environmental conditions. This communication mechanism enables a coordinated infection strategy, significantly enhancing the efficiency of phage proliferation. By synchronizing their life cycles, bacteriophages can maximize their impact on the host population, potentially leading to more effective control of bacterial densities.
Plants
QS is important to plant-pathogen interactions, and their study has also contributed to the QS field more generally. The first X-ray crystallography results for some of the key proteins were those of Pantoea stewartii subsp. stewartii in maize/corn and Agrobacterium tumefaciens, a crop pathogen with a wider range of hosts. These interactions are facilitated by quorum-sensing molecules and play a major role in maintaining the pathogenicity of bacteria towards other hosts, such as humans. This mechanism can be understood by looking at the effects of N-Acyl homoserine lactone (AHL), one of the quorum sensing-signaling molecules in gram-negative bacteria, on plants. The model organism used is Arabidopsis thaliana. Further insights reveal that AHLs influence plant immune responses and can alter plant hormone levels, thereby affecting plant growth and susceptibility to infection. Understanding these dynamics is crucial for developing innovative strategies to combat plant diseases and improve agricultural productivity. Researchers have also noted that certain plants can degrade these signaling molecules, potentially as a defensive strategy to disrupt bacterial communication. This interplay between bacterial signaling and plant responses suggests a complex co-evolutionary relationship that could be exploited to enhance crop resistance to bacterial pathogens.
The role of AHLs having long carbon-chains (C12, C14), which have an unknown receptor mechanism, is less well understood than that of AHLs having short carbon-chains (C4, C6, C8), which are perceived by the G protein-coupled receptor. A phenomenon called "AHL priming", an AHL-dependent signalling pathway, enhanced our knowledge of long-chain AHLs. The role of quorum-sensing molecules was better explained according to three categories: host physiology–based impact of quorum sensing molecules; ecological effects; and cellular signaling. Calcium signalling and calmodulin have a large role in short-chain AHLs' response in Arabidopsis. Research conducted on barley and on the crop called yam bean (Pachyrhizus erosus) revealed that the AHL-induced detoxification enzymes, called GSTs, were found at lower levels in yam bean.
Quorum sensing-based regulatory systems are necessary to plant-disease-causing bacteria. Looking towards developing new strategies based on plant-associated microbiomes, the aim of further study is to improve the quantity and quality of the food supply. Further research into this inter-kingdom communication also enhances the possibility of learning about quorum sensing in humans. This exploration could open new avenues for managing microbial communities in agricultural settings, potentially leading to the development of more sustainable farming practices that leverage natural microbial processes to boost crop resilience and productivity.
Quorum quenching
Quorum quenching is the process of preventing quorum sensing by disrupting signalling. This is achieved by inactivating signalling enzymes, by introducing molecules that mimic signalling molecules and block their receptors, by degrading signalling molecules themselves, or by a modification of the quorum sensing signals due to an enzyme activity.
Inhibition
Closantel and triclosan are known inhibitors of quorum sensing enzymes. Closantel induces aggregation of the histidine kinase sensor in two-component signalling. The latter disrupts the synthesis of a class of signalling molecules known as N-acyl homoserine lactones (AHLs) by blocking the enoyl-acyl carrier protein (ACP) reductase.
Mimicry
Two groups of well-known mimicking molecules include halogenated furanones, which mimic AHL molecules, and synthetic AI peptides (AIPs), which mimic naturally occurring AIPs. These groups inhibit receptors from binding substrate or decrease the concentration of receptors in the cell. Furanones have also been found to act on AHL-dependent transcriptional activity, whereby the half-life of the autoinducer-binding LuxR protein is significantly shortened.
Degradation
Recently, a well-studied quorum quenching bacterial strain (KM1S) was isolated and its AHL degradation kinetics were studied using rapid resolution liquid chromatography (RRLC). RRLC efficiently separates components of a mixture to a high degree of sensitivity, based on their affinities for different liquid phases. It was found that the genome of this strain encoded an inactivation enzyme with distinct motifs targeting the degradation of AHLs.
Modifications
As mentioned before, N-acyl homoserine lactones (AHLs) are the quorum-sensing signalling molecules of Gram-negative bacteria. These molecules can carry different functional groups on their acyl chain, and the chain itself can vary in length, so many different AHL signalling molecules exist, for example 3-oxododecanoyl-L-homoserine lactone (3OC12-HSL) and 3-hydroxydodecanoyl-L-homoserine lactone (3OHC12-HSL). Modifying these quorum-sensing (QS) signalling molecules is another form of quorum quenching, and can be carried out by an oxidoreductase activity. An example is the interaction between the host Hydra vulgaris and the main colonizer of its epithelial cell surfaces, Curvibacter spp., which produces 3-oxo-HSL quorum-sensing molecules. The oxidoreductase activity of the polyp Hydra converts the 3-oxo-HSL into its 3-hydroxy-HSL counterpart. This can be characterized as quorum quenching because it interferes with the quorum-sensing molecules, but the outcome differs from simple QS inactivation: the host's modification produces a phenotypic switch in Curvibacter, which alters its ability to colonize the epithelial cell surfaces of H. vulgaris.
Applications
Applications of quorum quenching that have been exploited by humans include the use of AHL-degrading bacteria in aquacultures to limit the spread of diseases in aquatic populations of fish, mollusks and crustaceans. This technique has also been translated to agriculture, to restrict the spread of pathogenic bacteria that use quorum sensing in plants. Anti-biofouling is another process that exploits quorum quenching bacteria to mediate the dissociation of unwanted biofilms aggregating on wet surfaces, such as medical devices, transportation infrastructure and water systems. Quorum quenching has also recently been studied for the control of fouling and emerging contaminants in electro membrane bioreactors (eMBRs) for the advanced treatment of wastewater. Extracts of several traditional medicinal herbs display quorum quenching activity, and have potential antibacterial applications.
Social insects
Social insect colonies are an excellent example of a decentralized system, because no individual is in charge of directing or making decisions for the colony. Several groups of social insects have been shown to use quorum sensing in a process that resembles collective decision-making.
Examples
Ants
Colonies of the ant Temnothorax albipennis nest in small crevices between rocks. When the rocks shift and the nest is broken up, these ants must quickly choose a new nest to move into. During the first phase of the decision-making process, a small portion of the workers leave the destroyed nest and search for new crevices. When one of these scout ants finds a potential nest, she assesses the quality of the crevice based on a variety of factors including the size of the interior, the number of openings (based on light level), and the presence or absence of dead ants. The worker then returns to the destroyed nest, where she waits for a short period before recruiting other workers to follow her to the nest that she has found, using a process called tandem running. The waiting period is inversely related to the quality of the site; for instance, a worker that has found a poor site will wait longer than a worker that encountered a good site. As the new recruits visit the potential nest site and make their own assessment of its quality, the number of ants visiting the crevice increases. During this stage, ants may be visiting many different potential nests. However, because of the differences in the waiting period, the number of ants in the best nest will tend to increase at the greatest rate. Eventually, the ants in this nest will sense that the rate at which they encounter other ants has exceeded a particular threshold, indicating that the quorum number has been reached. Once the ants sense a quorum, they return to the destroyed nest and begin rapidly carrying the brood, queen, and fellow workers to the new nest. Scouts that are still tandem-running to other potential sites are also recruited to the new nest, and the entire colony moves. Thus, although no single worker may have visited and compared all of the available options, quorum sensing enables the colony as a whole to quickly make good decisions about where to move.
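The threshold rule described above can be caricatured in a few lines of code. The following Python sketch is an illustration only: the site qualities, recruitment probabilities and quorum threshold are invented, and real Temnothorax colonies are far more nuanced, but it shows how quality-dependent recruitment combined with a fixed quorum threshold lets the best site win without any single ant comparing all the options.

```python
import random

random.seed(1)

# Invented candidate nests with quality scores in [0, 1]; higher is better.
sites = {"A": 0.3, "B": 0.9, "C": 0.6}
QUORUM = 20            # ants present at a site before the colony commits (illustrative)
ants_at = {s: 0 for s in sites}
step = 0

while max(ants_at.values()) < QUORUM:
    step += 1
    for site, quality in sites.items():
        # Scouts at better sites wait less before tandem-running, so the
        # chance of adding a recruit per step scales with site quality.
        if random.random() < 0.5 * quality:
            ants_at[site] += 1

chosen = max(ants_at, key=ants_at.get)
print(f"Quorum reached at nest {chosen} after {step} steps: {ants_at}")
```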
Honey bees
Honey bees (Apis mellifera) also use quorum sensing to make decisions about new nest sites. Large colonies reproduce through a process called swarming, in which the queen leaves the hive with a portion of the workers to form a new nest elsewhere. After leaving the nest, the workers form a swarm that hangs from a branch or overhanging structure. This swarm persists during the decision-making phase until a new nest site is chosen.
The quorum sensing process in honey bees is similar to the method used by Temnothorax ants in several ways. A small portion of the workers leave the swarm to search out new nest sites, and each worker assesses the quality of the cavity it finds. The worker then returns to the swarm and recruits other workers to her cavity using the honey bee waggle dance. However, instead of using a time delay, the number of dance repetitions the worker performs is dependent on the quality of the site. Workers that found poor nests stop dancing sooner, and can, therefore, be recruited to the better sites. Once the visitors to a new site sense that a quorum number (usually 10–20 bees) has been reached, they return to the swarm and begin using a new recruitment method called piping. This vibration signal causes the swarm to take off and fly to the new nest location. In an experimental test, this decision-making process enabled honey bee swarms to choose the best nest site in four out of five trials.
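A similar toy model can mimic the dance-based recruitment: here the recruitment weight of each candidate cavity grows with both its quality (standing in for the number of dance repetitions) and the number of bees already present. All values are invented for illustration and do not come from the experiments described above.

```python
import random

random.seed(2)

# Invented cavities with quality scores in [0, 1]; dance length scales with quality.
cavities = {"hollow_tree": 0.8, "wall_gap": 0.4, "rock_crevice": 0.6}
QUORUM = 15                         # bees at a site before piping begins (typically 10-20)
bees_at = {c: 1 for c in cavities}  # one initial scout per cavity

while max(bees_at.values()) < QUORUM:
    # Recruitment weight = occupants * quality: more bees dance for a site,
    # and better sites are danced for longer, so they attract recruits faster.
    weights = {c: bees_at[c] * q for c, q in cavities.items()}
    recruited_to = random.choices(list(weights), weights=list(weights.values()))[0]
    bees_at[recruited_to] += 1

chosen = max(bees_at, key=bees_at.get)
print(f"Piping begins; the swarm departs for {chosen}: {bees_at}")
```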
Synthetic biology
Quorum sensing has been engineered using synthetic biological circuits in different systems. Examples include rewiring the AHL components to toxic genes to control population size in bacteria; and constructing an auxin-based system to control population density in mammalian cells. Synthetic quorum sensing circuits have been proposed to enable applications like controlling biofilms or enabling drug delivery. Quorum sensing based genetic circuits have been used to convert AI-2 signals to AI-1 and then subsequently use the AI-1 signal to alter bacterial growth rate, thereby changing the composition of a consortium.
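The population-control idea can be illustrated with a toy model in which cell density drives autoinducer accumulation, and the autoinducer in turn activates a toxin that limits growth. This is a minimal sketch with invented rate constants, not a model of any specific published circuit.

```python
import numpy as np

# Toy population-control circuit: cell density N produces autoinducer A,
# and A activates a toxin that raises the death rate. All constants are
# invented; this is not a model of any specific published circuit.
dt, t_end = 0.01, 100.0
N, A = 0.01, 0.0
growth, capacity = 1.0, 1.0     # logistic growth parameters
prod, decay = 2.0, 0.5          # autoinducer production / degradation
kill, K_half = 0.8, 1.0         # maximal toxin kill rate, half-activation constant

for _ in np.arange(0.0, t_end, dt):
    dN = growth * N * (1.0 - N / capacity) - kill * (A / (K_half + A)) * N
    dA = prod * N - decay * A
    N += dN * dt
    A += dA * dt

print(f"steady-state density = {N:.3f} (carrying capacity = {capacity})")
```

In this sketch the population settles below its carrying capacity because the quorum signal, not an external controller, throttles growth once density rises.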
Recent years have brought remarkable advances in understanding, in synthetic-biology terms, endocrine and paracrine signalling mechanisms and the many modes by which bacteria keep track of their own and foreign cell numbers. Gene expression is modulated in response to oscillations in cell-population density through QS systems that regulate bacterial communication in both natural and artificial cultures. It is also clear that intra- and inter-species cell–cell communication occurs and is regulated by quorum sensing systems, and there is mounting evidence that autoinducer signals elicit specific responses from eukaryotic hosts.
Computing and robotics
Quorum sensing can be a useful tool for improving the function of self-organizing networks such as the SECOAS (Self-Organizing Collegiate Sensor) environmental monitoring system. In this system, individual nodes sense that there is a population of other nodes with similar data to report. The population then nominates just one node to report the data, resulting in power savings. Ad hoc wireless networks can also benefit from quorum sensing, by allowing the system to detect and respond to network conditions.
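A minimal sketch of this idea, with invented readings and thresholds rather than the actual SECOAS protocol, is to let nodes whose data fall within a similarity window form a group and, once the group is large enough (a "quorum"), elect a single reporter.

```python
# Toy quorum-based reporting for a sensor network: nodes whose readings are
# similar form a group, and once the group reaches a quorum only one member
# reports, saving power. Node IDs, readings and thresholds are invented and
# do not describe the actual SECOAS protocol.
readings = {"n1": 20.1, "n2": 20.3, "n3": 19.9, "n4": 25.7, "n5": 20.2}
SIMILARITY = 0.5   # readings within this range count as "the same data"
QUORUM = 3         # minimum group size before duplicate reports are suppressed

groups = []
for node, value in readings.items():
    for group in groups:
        if abs(readings[group[0]] - value) <= SIMILARITY:
            group.append(node)
            break
    else:
        groups.append([node])

for group in groups:
    if len(group) >= QUORUM:
        reporter = min(group)   # deterministic election, e.g. lowest node ID
        print(f"{reporter} reports on behalf of {group}")
    else:
        for node in group:
            print(f"{node} reports individually")
```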
Quorum sensing can also be used to coordinate the behavior of autonomous robot swarms. Using a process similar to that used by Temnothorax ants, robots can make rapid group decisions without the direction of a controller.
Despite recent advancements, the true nature of these back-and-forth conversations remains a mystery, and further rigorous research targeting inter- and intra-species communication is still necessary to maximize knowledge of quorum sensing and its potential to improve research on, and treatments of, cancer and bacterial diseases. The key to understanding these complex bacterial languages is to decipher the impact of the "words".
See also
Cell signaling
Collective behavior
Diffusible signal factor
Interspecies quorum sensing
Microbial intelligence
Pheromone
Signal transduction
Swarm intelligence
References
Further reading
Dedicated issue of Philosophical Transactions B on quorum sensing (2007). Some articles are freely available.
High citation review:
External links
The Quorum Sensing Website
Cell-to-Cell Communication in Bacteria
The SECOAS project—Development of a Self-Organising, Wireless Sensor Network for Environmental Monitoring
Measurement of Space: From Ants to Robots
Instant insight into quorum sensing from the Royal Society of Chemistry
Bonnie Bassler: Discovering bacteria's amazing communication system
Bonnie Bassler's seminar: "Cell-Cell Communication in Bacteria"
Bacteriology
Microbial population biology
Cell communication
Complex systems theory
Superorganisms
Environmental microbiology
Gene expression | Quorum sensing | [
"Chemistry",
"Biology",
"Environmental_science"
] | 8,475 | [
"Superorganisms",
"Cell communication",
"Symbiosis",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Environmental microbiology"
] |
161,999 | https://en.wikipedia.org/wiki/Idea | In common usage and in philosophy, ideas are the results of thought. In philosophy, ideas can also be mental representational images of some object. Many philosophers have considered ideas to be a fundamental ontological category of being. The capacity to create and understand the meaning of ideas is considered to be an essential and defining feature of human beings.
An idea arises in a reflexive, spontaneous manner, even without thinking or serious reflection, for example, when we talk about the idea of a person or a place. A new or an original idea can often lead to innovation. Our actions are based upon beliefs, beliefs are patterns or organized sets of ideas.
Etymology
The word idea comes from Greek ἰδέα idea "form, pattern", from the root of ἰδεῖν idein, "to see."
History
The argument over the underlying nature of ideas is opened by Plato, whose exposition of his theory of forms—which recurs and accumulates over the course of his many dialogs—appropriates and adds a new sense to the Greek word for things that are "seen" (re. εἶδος) that highlights those elements of perception which are encountered without material or objective reference available to the eyes (re. ἰδέα). As this argument is disseminated the word "idea" begins to take on connotations that would be more familiarly associated with the term today. In the fifth book of his Republic, Plato defines philosophy as the love of this formal (as opposed to visual) way of seeing.
Plato advances the theory that perceived but immaterial objects of awareness constituted a realm of deathless forms or ideas from which the material world emanated. Aristotle challenges Plato in this area, positing that the phenomenal world of ideas arises as mental composites of remembered observations. Though it is anachronistic to apply these terms to thinkers from antiquity, it clarifies the argument between Plato and Aristotle if we call Plato an idealist thinker and Aristotle an empiricist thinker.
This antagonism between empiricism and idealism generally characterizes the dynamism of the argument over the theory of ideas up to the present. This schism in theory has never been resolved to the satisfaction of thinkers from both sides of the disagreement and is represented today in the split between analytic and continental schools of philosophy. Persistent contradictions between classical physics and quantum mechanics may be pointed to as a rough analogy for the gap between the two schools of thought.
Philosophy
Plato
Plato in Ancient Greece was one of the earliest philosophers to provide a detailed discussion of ideas and of the thinking process (in Plato's Greek the word idea carries a rather different sense from our modern English term). Plato argued in dialogues such as the Phaedo, Symposium, Republic, and Timaeus that there is a realm of ideas or forms (eidei), which exist independently of anyone who may have thoughts on these ideas, and it is the ideas which distinguish mere opinion from knowledge, for unlike material things, which are transient and liable to contrary properties, ideas are unchanging and nothing but just what they are. Consequently, Plato seems to assert forcefully that material things can only be the objects of opinion; real knowledge can only be had of unchanging ideas. Furthermore, ideas for Plato appear to serve as universals; consider the following passage from the Republic:
René Descartes
Descartes often wrote of the meaning of the idea as an image or representation, often but not necessarily "in the mind", which was well known in the vernacular. Despite Descartes' invention of the non-Platonic use of the term, he at first followed this vernacular use.b In his Meditations on First Philosophy he says, "Some of my thoughts are like images of things, and it is to these alone that the name 'idea' properly belongs." He sometimes maintained that ideas were innate and uses of the term idea diverge from the original primary scholastic use. He provides multiple non-equivalent definitions of the term, uses it to refer to as many as six distinct kinds of entities, and divides ideas inconsistently into various genetic categories. For him knowledge took the form of ideas and philosophical investigation is devoted to the consideration of these entities.
John Locke
John Locke's use of idea stands in striking contrast to Plato's. In his Introduction to An Essay Concerning Human Understanding, Locke defines idea as "that term which, I think, serves best to stand for whatsoever is the object of the understanding when a man thinks, I have used it to express whatever is meant by phantasm, notion, species, or whatever it is which the mind can be employed about in thinking; And I could not avoid frequently using it." He said he regarded the contribution offered in his essay as necessary to examine our own abilities and discern what objects our understandings were, or were not, fitted to deal with. In this style of ideal conception, other outstanding figures followed in his footsteps: Hume and Kant in the 18th century, Arthur Schopenhauer in the 19th century, and Bertrand Russell, Ludwig Wittgenstein, and Karl Popper in the 20th century. Locke always believed in good sense, not pushing things to extremes and taking fully into account the plain facts of the matter. He prioritized common-sense ideas that struck him as "good-tempered, moderate, and down-to-earth."
As John Locke studied humans in his work "An Essay Concerning Human Understanding" he continually referenced Descartes for ideas as he asked this fundamental question: "When we are concerned with something about which we have no certain knowledge, what rules or standards should guide how confident we allow ourselves to be that our opinions are right?" Put in another way, he inquired into how humans might verify their ideas, and considered the distinctions between different types of ideas. Locke found that an idea "can simply mean some sort of brute experience." He shows that there are "No innate principles in the mind." Thus, he concludes that "our ideas are all experienced in nature." An experience can either be a sensation or a reflection: "consider whether there are any innate ideas in the mind before any are brought in by the impression from sensation or reflection." Therefore, an idea was an experience in which the human mind apprehended something.
In a Lockean view, there are really two types of ideas: complex and simple. Simple ideas are the building blocks for more complex ideas, and "While the mind is wholly passive in the reception of simple ideas, it is very active in the building of complex ideas…" Complex ideas, therefore, can either be modes, substances, or relations.
Modes combine simpler ideas in order to convey new information. For instance, David Banach gives the example of beauty as a mode. He points to combinations of color and form as qualities constitutive of this mode. Substances, however, are distinct from modes. Substances convey the underlying formal unity of certain objects, such as dogs, cats, or tables. Relations represent the relationship between two or more ideas that contain analogous elements to one another without the implication of underlying formal unity. A painting or a piece of music, for example, can both be called 'art' without belonging to the same substance. They are related as forms of art (the term 'art' in this illustration would be a 'mode of relations'). In this way, Locke concluded that the formal ambiguity around ideas he initially sought to clarify had been resolved.
David Hume
Hume differs from Locke by limiting idea to only one of two possible types of perception. The other one is called "impression", and is more lively: these are perceptions we have "when we hear, or see, or feel, or love, or hate, or desire, or will." Ideas are more complex and are built upon these more basic and more grounded perceptions. Hume shared with Locke the basic empiricist premise that it is only from life experiences (whether their own or others') that humans' knowledge of the existence of anything outside of themselves can ultimately be derived, and that humans carry on doing what they are prompted to do by their emotional drives of varying kinds. In choosing the means to those ends, they follow their accustomed associations of ideas.d Hume contended that "reason alone is merely the 'slave of the passions'."
Immanuel Kant
Immanuel Kant defines ideas by distinguishing them from concepts. Concepts arise by the compositing of experience into abstract categorial representations of presumed or encountered empirical objects whereas the origin of ideas, for Kant, is a priori to experience. Regulative ideas, for example, are ideals that one must tend towards, but by definition may not be completely realized as objects of empirical experience. Liberty, according to Kant, is an idea whereas "tree" (as an abstraction covering all species of trees) is a concept. The autonomy of the rational and universal subject is opposed to the determinism of the empirical subject. Kant felt that it is precisely in knowing its limits that philosophy exists. The business of philosophy he thought was not to give rules, but to analyze the private judgement of good common sense.e
Rudolf Steiner
Whereas Kant declares limits to knowledge ("we can never know the thing in itself"), in his epistemological work, Rudolf Steiner sees ideas as "objects of experience" which the mind apprehends, much as the eye apprehends light. In Goethean Science (1883), he declares, "Thinking ... is no more and no less an organ of perception than the eye or ear. Just as the eye of perception perceives colors and the ear sounds, so thinking perceives ideas." He holds this to be the premise upon which Goethe made his natural-scientific observations.
Wilhelm Wundt
Wundt widens the term from Kant's usage to include conscious representation of some object or process of the external world. In so doing, he includes not only ideas of memory and imagination, but also perceptual processes, whereas other psychologists confine the term to the first two groups. One of Wundt's main concerns was to investigate conscious processes in their own context by experiment and introspection. He regarded both of these as exact methods, interrelated in that experimentation created optimal conditions for introspection. Where the experimental method failed, he turned to other objectively valuable aids, specifically to those products of cultural communal life which lead one to infer particular mental motives. Outstanding among these are speech, myth, and social custom. Wundt designated apperception, a unifying function which should be understood as an activity of the will, as the basic mental activity. Many aspects of his empirical physiological psychology are used today. One is his principles of mutually enhanced contrasts and of assimilation and dissimilation (i.e. in color and form perception) and his advocacy of objective methods of expression and of recording results, especially in language. Another is the principle of heterogony of ends: multiply motivated acts lead to unintended side effects which in turn become motives for new actions.
Charles Sanders Peirce
C. S. Peirce published the first full statement of pragmatism in his important works "How to Make Our Ideas Clear" (1878) and "The Fixation of Belief" (1877). In "How to Make Our Ideas Clear" he proposed that a clear idea (in his study he uses concept and idea synonymously) is one that is so apprehended that it will be recognized wherever it is met, and no other will be mistaken for it. If it fails of this clearness, it is said to be obscure. He argued that to understand an idea clearly we should ask ourselves what difference its application would make to our evaluation of a proposed solution to the problem at hand. Pragmatism (a term he appropriated for use in this context), he defended, was a method for ascertaining the meaning of terms (as a theory of meaning). The originality of his ideas lies in their rejection of the understanding of knowledge as impersonal facts, a view that had been accepted by scientists for some 250 years. Peirce contended that we acquire knowledge as participants, not as spectators. He felt "the real", sooner or later, is composed of information that has been acquired through ideas and knowledge and ordered by the application of logical reasoning. The rational distinction of the empirical object is not prior to its perception by a knowledgeable subject, in other words. He also published many papers on logic in relation to ideas.
G. F. Stout and J. M. Baldwin
G. F. Stout and J. M. Baldwin, in the Dictionary of Philosophy and Psychology, define the idea as "the reproduction with a more or less adequate image, of an object not actually present to the senses." They point out that an idea and a perception are by various authorities contrasted in various ways. "Difference in degree of intensity", "comparative absence of bodily movement on the part of the subject", "comparative dependence on mental activity", are suggested by psychologists as characteristic of an idea as compared with a perception.
An idea, in the narrower and generally accepted sense of a mental reproduction, is frequently composite. That is, as in the example given above of the idea of a chair, a great many objects, differing materially in detail, all call up a single idea. When a man, for example, has obtained an idea of chairs in general by comparison with which he can say "This is a chair, that is a stool", he has what is known as an "abstract idea" distinct from the reproduction in his mind of any particular chair (see abstraction). Furthermore, a complex idea may not have any corresponding physical object, though its particular constituent elements may severally be the reproductions of actual perceptions. Thus the idea of a centaur is a complex mental picture composed of the ideas of man and horse, that of a mermaid of a woman and a fish.
Walter Benjamin
"Ideas are to objects [of perception] as constellations are to stars," writes Walter Benjamin in the introduction to his The Origin of German Tragic Drama. "The set of concepts which assist in the representation of an idea lend it actuality as such a configuration. For phenomena are not incorporated into ideas. They are not contained in them. Ideas are, rather, their objective virtual arrangement, their objective interpretation."
Benjamin advances, as George Steiner summarizes, that "an idea is that moment in the substance and being of a word in which this word has become, and performs, as a symbol." In this way techne (art and technology) may be represented, ideally, as "discrete, fully autonomous objects...[thus entering] into fusion without losing their identity."
In anthropology and the social sciences
Diffusion studies explore the spread of ideas from culture to culture. Some anthropological theories hold that all cultures imitate ideas from one or a few original cultures, the Adam of the Bible, or several cultural circles that overlap. Evolutionary diffusion theory holds that cultures are influenced by one another but that similar ideas can be developed in isolation.
In the mid-20th century, social scientists began to study how and why ideas spread from one person or culture to another. Everett Rogers pioneered diffusion of innovations studies, using research to prove factors in adoption and profiles of adopters of ideas. In 1976, in his book The Selfish Gene, Richard Dawkins suggested applying biological evolutionary theories to the spread of ideas. He coined the term meme to describe an abstract unit of selection, equivalent to the gene in evolutionary biology.
Ideas and intellectual property
Relationship between ideas and patents
On susceptibility to exclusive property
Patent law regulates various aspects related to the functional manifestation of inventions based on new ideas or incremental improvements to existing ones. Thus, patents have a direct relationship to ideas.
Relationship between ideas and copyrights
In some cases, authors can be granted limited legal monopolies on the manner in which certain works are expressed. This is known colloquially as copyright, although the term intellectual property is sometimes used mistakenly in place of copyright. Copyright law regulating the aforementioned monopolies generally does not cover the actual ideas. The law does not bestow the legal status of property upon ideas per se. Instead, laws purport to regulate events related to the usage, copying, production, sale and other forms of exploitation of the fundamental expression of a work, which may or may not carry ideas. Copyright law is fundamentally different from patent law in this respect: patents do grant monopolies on ideas (as noted above).
A copyright is meant to regulate some aspects of the usage of expressions of a work, not an idea. Thus, copyrights have a negative relationship to ideas.
Work means a tangible medium of expression. It may be an original or derivative work of art, be it literary, dramatic, musical recitation, artistic, related to sound recording, etc. In (at least) countries adhering to the Berne Convention, copyright automatically starts covering the work upon the original creation and fixation thereof, without any extra steps. While creation usually involves an idea, the idea in itself does not suffice for the purposes of claiming copyright.
Relationship of ideas to confidentiality agreements
Confidentiality and nondisclosure agreements are legal instruments that assist corporations and individuals in keeping ideas from escaping to the general public. Generally, these instruments are covered by contract law.
See also
Idealism
Brainstorming
Creativity techniques
Diffusion of innovations
Form
Ideology
List of perception-related topics
Notion (philosophy)
Object of the mind
Think tank
Thought experiment
History of ideas
Intellectual history
Concept
Philosophical analysis
Notes
References
The Encyclopedia of Philosophy, Macmillan Publishing Company, New York, 1973
Dictionary of the History of Ideas Charles Scribner's Sons, New York 1973–74,
- Nous
¹ Volume IV 1a, 3a
² Volume IV 4a, 5a
³ Volume IV 32 - 37
Ideas
Ideology
Authority
Education
Liberalism
Idea of God
Pragmatism
Chain of Being
The Story of Thought, DK Publishing, Bryan Magee, London, 1998,
a.k.a. The Story of Philosophy, Dorling Kindersley Publishing, 2001,
(subtitled on cover: The Essential Guide to the History of Western Philosophy)
a Plato, pages 11 - 17, 24 - 31, 42, 50, 59, 77, 142, 144, 150
b Descartes, pages 78, 84 - 89, 91, 95, 102, 136 - 137, 190, 191
c Locke, pages 59 - 61, 102 - 109, 122 - 124, 142, 185
d Hume, pages 61, 103, 112 - 117, 142 - 143, 155, 185
e Kant, pages 9, 38, 57, 87, 103, 119, 131 - 137, 149, 182
f Peirce, pages 61, How to Make Our Ideas Clear 186 - 187 and 189
g Saint Augustine, pages 30, 144; City of God 51, 52, 53 and The Confessions 50, 51, 52
- additional in the Dictionary of the History of Ideas for Saint Augustine and Neo-Platonism
h Stoics, pages 22, 40, 44; The governing philosophy of the Roman Empire on pages 46 - 47.
- additional in Dictionary of the History of Ideas for Stoics, also here , and here , and here .
The Reader's Encyclopedia, 2nd Edition 1965, Thomas Y. Crowell Company,
An Encyclopedia of World Literature
¹apage 774 Plato (427–348 BC)
²apage 779 Francesco Petrarca
³apage 770 Charles Sanders Peirce
¹bpage 849 the Renaissance
This article incorporates text from the Schaff-Herzog Encyclopedia of Religious Knowledge, a publication now in the public domain.
Further reading
Stephen Laurence and Eric Margolis, The Building Blocks of Thought: A Rationalist Account of the Origins of Concepts (Oxford University Press, 2024)
Jerry Fodor, Hume Variations (Oxford University Press, 2003)
Stephen P. Stitch (ed.), Innate Ideas (University of California Press, 1975)
A. G. Balz, Idea and Essence in the Philosophy of Hobbes and Spinoza (New York 1918)
Gregory T. Doolan, Aquinas on the divine ideas as exemplar causes (Washington, D.C.: Catholic University of America Press, 2008)
Patricia A. Easton (ed.), Logic and the Workings of the Mind. The Logic of Ideas and Faculty Psychology in Early Modern Philosophy (Atascadero, Calif.: Ridgeview 1997)
Pierre Garin, La Théorie de l'idée suivant l'école thomiste (Paris 1932)
Marc A. High, Idea and Ontology. An Essay in Early Modern Metaphysics of Ideas ( Pennsylvania State University Press, 2008)
Lawrence Lessig, The Future of Ideas (New York 2001)
Paul Natorp, Platons Ideenlehre (Leipzig 1930)
W. D. Ross, Plato's Theory of Ideas (Oxford 1951)
Peter Watson, Ideas: A History from Fire to Freud, Weidenfeld & Nicolson (London 2005)
J. W. Yolton, John Locke and the Way of Ideas (Oxford 1956)
Cognition
Creativity
Concepts in metaphysics
Idealism
Innovation
Ontology
Platonism | Idea | [
"Biology"
] | 4,373 | [
"Creativity",
"Behavior",
"Human behavior"
] |
162,015 | https://en.wikipedia.org/wiki/Cellophane | Cellophane is a thin, transparent sheet made of regenerated cellulose. Its low permeability to air, oils, greases, bacteria, and liquid water makes it useful for food packaging. Cellophane is highly permeable to water vapour, but may be coated with nitrocellulose lacquer to prevent this.
Cellophane is also used in transparent pressure-sensitive tape, tubing, and many other similar applications.
Cellophane is compostable and biodegradable, and can be obtained from biomaterials. The original production process uses carbon disulfide (CS2), which has been found to be highly toxic to workers. The newer lyocell process can be used to produce cellulose film without involving carbon disulfide.
"Cellophane" is a generic term in some countries, while in other countries it is a registered trademark.
Production
Cellulose from wood, cotton, hemp, or other sources is dissolved in alkali and carbon disulfide to make a solution called viscose, which is then extruded through a slit into a bath of dilute sulfuric acid and sodium sulfate to reconvert the viscose into cellulose. The film is then passed through several more baths, one to remove sulfur, one to bleach the film, and one to add softening materials such as glycerin to prevent the film from becoming brittle.
A similar process, using a hole (a spinneret) instead of a slit, is used to make a fibre called rayon. Chemically, cellophane, rayon, and cellulose are polymers of glucose; they differ structurally rather than chemically.
History
Cellophane was invented by Swiss chemist Jacques E. Brandenberger while employed by Blanchisserie et Teinturerie de Thaon. In 1900, inspired by seeing wine spill on a restaurant's tablecloth, he decided to create a cloth that could repel liquids rather than absorb them. His first step was to spray a waterproof coating onto fabric, and he opted to try viscose. The resultant coated fabric was far too stiff, but the diaphanous film coating could be separated from the backing cloth easily and in one undamaged piece. Seeing the possibilities of this new material on its own, Brandenberger soon abandoned his original idea.
It took ten years for Brandenberger to perfect his film. His chief improvement over earlier work with such films was adding glycerin to soften the material. By 1912 he had constructed a machine to manufacture the film, which he had named Cellophane, from the words cellulose and diaphane ("transparent"). Cellophane was patented that year. The following year, the company Comptoir des Textiles Artificiels (CTA) bought the Thaon firm's interest in Cellophane and established Brandenberger in a new company, La Cellophane SA.
Whitman's candy company initiated use of cellophane for candy wrapping in the United States in 1912 for their Whitman's Sampler. They remained the largest user of imported cellophane from France until nearly 1924, when DuPont built the first cellophane manufacturing plant in the US. Cellophane saw limited sales in the US at first since while it was waterproof, it was not moisture proof—it held or repelled water but was permeable to water vapor. This meant that it was unsuited to packaging products that required moisture proofing. DuPont hired chemist William Hale Charch (1898–1958), who spent three years developing a nitrocellulose lacquer that, when applied to Cellophane, made it moisture proof. Following the introduction of moisture-proof Cellophane in 1927, the material's sales tripled between 1928 and 1930, and in 1938, Cellophane accounted for 10% of DuPont's sales and 25% of its profits.
Cellophane played a crucial role in developing the self-service retailing of fresh meat. Its transparency let customers judge the quality of meat before buying. Cellophane also worked to consumers' disadvantage when manufacturers learned to manipulate the appearance of a product by controlling oxygen and moisture levels to prevent discolouration of food. It was considered such a useful invention that cellophane was listed alongside other modern marvels in the 1934 song "You're the Top" (from Anything Goes).
The British textile company Courtaulds' viscose technology had allowed it to diversify in 1930 into viscose film, which it named "Viscacelle". However, competition with Cellophane was an obstacle to its sales, and in 1935 it founded British Cellophane Limited (BCL) in conjunction with the Cellophane Company and its French parent company CTA. A major production facility was constructed at Bridgwater, Somerset, England, from 1935 to 1937, employing 3,000 workers. BCL subsequently constructed plants in Cornwall, Ontario (BCL Canada), as an adjunct to the existing Courtaulds viscose rayon plant there (from which it bought the viscose solution), and in 1957 at Barrow-in-Furness, Cumbria. The latter two plants were closed in the 1990s.
Today
Cellulose film has been manufactured continuously since the mid-1930s and is still used today. As well as packaging a variety of food items, there are also industrial applications, such as a base for such self-adhesive tapes as Sellotape and Scotch Tape, a semi-permeable membrane in a certain type of battery, as dialysis tubing (Visking tubing), and as a release agent in the manufacture of fibreglass and rubber products. Cellophane is the most popular material for manufacturing cigar packaging; its permeability to water vapor makes cellophane a good product for this application as cigars must be allowed to "breathe" while wrapped and in storage.
Cellophane sales have dwindled since the 1960s, due to alternative packaging options. The polluting effects of carbon disulfide and other by-products of the process used to make viscose may have also contributed to its falling behind lower-cost petrochemical-based films such as biaxially-oriented polyethylene terephthalate (BoPET) and biaxially oriented polypropylene (BOPP) in the 1980s and 1990s. However, as of 2017 it has made something of a resurgence due to its being biosourced, compostable, and biodegradable. Its sustainability record is clouded by its energy-intensive manufacturing process and the potential negative impact of some of the chemicals used, but significant progress in recent years has been made by leading manufacturers in reducing their environmental footprint.
Material properties
When placed between two plane polarizing filters, cellophane produces prismatic colours due to its birefringent nature. Artists have used this effect to create stained glass-like creations that are kinetic and interactive.
Cellophane is biodegradable, but highly toxic carbon disulfide is used in most cellophane production. Viscose factories vary widely in the amount of CS2 they expose their workers to, and most give no information about their quantitative safety limits or how well they keep to them.
Branding
In the UK and in many other countries, "Cellophane" is a registered trademark and the property of Futamura Chemical UK Ltd, based in Wigton, Cumbria, United Kingdom. In the USA and some other countries "cellophane" has become genericized, and is often used informally to refer to a wide variety of plastic film products, even those not made of cellulose, such as PVC-based plastic wrap.
See also
Bioplastics
British Cellophane
Genericized trademark
References
External links
Cellophane Invention
Biodegradable plastics
Bioplastics
Brands that became generic
Cellulose
Packaging materials
Transparent materials
Swiss inventions
Products introduced in 1912 | Cellophane | [
"Physics"
] | 1,658 | [
"Physical phenomena",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |