Dataset schema: id (int64, 39 to 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items).
16,143,832
https://en.wikipedia.org/wiki/Minimal%20prime%20ideal
In mathematics, especially in commutative algebra, certain prime ideals called minimal prime ideals play an important role in understanding rings and modules. The notion of height and Krull's principal ideal theorem use minimal prime ideals. Definition A prime ideal $P$ is said to be a minimal prime ideal over an ideal $I$ if it is minimal among all prime ideals containing $I$. (Note: if $I$ is a prime ideal, then $I$ is the only minimal prime over it.) A prime ideal is said to be a minimal prime ideal if it is a minimal prime ideal over the zero ideal. A minimal prime ideal over an ideal $I$ in a Noetherian ring $R$ is precisely a minimal associated prime (also called isolated prime) of $I$; this follows for instance from the primary decomposition of $I$. Examples In a commutative Artinian ring, every maximal ideal is a minimal prime ideal. In an integral domain, the only minimal prime ideal is the zero ideal. In the ring $\mathbb{Z}$ of integers, the minimal prime ideals over a nonzero principal ideal $(n)$ are the principal ideals $(p)$, where $p$ is a prime divisor of $n$. The only minimal prime ideal over the zero ideal is the zero ideal itself. Similar statements hold for any principal ideal domain. If $I$ is a $\mathfrak{p}$-primary ideal (for example, a symbolic power of $\mathfrak{p}$), then $\mathfrak{p}$ is the unique minimal prime ideal over $I$. The ideals $(x)$ and $(y)$ are the minimal prime ideals in $\mathbb{C}[x,y]/(xy)$ since they are the extensions of the prime ideals $(x)$ and $(y)$ under the morphism $\mathbb{C}[x,y] \to \mathbb{C}[x,y]/(xy)$, contain the zero ideal (which is not prime since $x \cdot y = 0 \in (0)$, but neither $x$ nor $y$ is contained in the zero ideal) and are not contained in any other prime ideal. In $\mathbb{C}[x,y,z]$ the minimal primes over the ideal $((x^3 - y^3 - z^3)^4 (x^5 + y^5 + z^5)^3)$ are the ideals $(x^3 - y^3 - z^3)$ and $(x^5 + y^5 + z^5)$. Let $A = k[x,y]/(x^2 y, x y^2)$ and let $\overline{x}, \overline{y}$ denote the images of $x, y$ in $A$. Then $(\overline{x})$ and $(\overline{y})$ are the minimal prime ideals of $A$ (and there are no others). Let $D$ be the set of zero-divisors in $A$. Then $\overline{x} + \overline{y}$ is in $D$ (since it kills the nonzero element $\overline{x}\,\overline{y}$) while it lies neither in $(\overline{x})$ nor in $(\overline{y})$; so $(\overline{x}) \cup (\overline{y}) \subsetneq D$. Properties All rings are assumed to be commutative and unital. Every proper ideal $I$ in a ring has at least one minimal prime ideal above it. The proof of this fact uses Zorn's lemma. Any maximal ideal containing $I$ is prime, and such ideals exist, so the set of prime ideals containing $I$ is non-empty. The intersection of a decreasing chain of prime ideals is prime. Therefore, the set of prime ideals containing $I$ has a minimal element, which is a minimal prime over $I$. Emmy Noether showed that in a Noetherian ring, there are only finitely many minimal prime ideals over any given ideal. The fact remains true if "Noetherian" is replaced by the ascending chain condition on radical ideals. The radical $\sqrt{I}$ of any proper ideal $I$ coincides with the intersection of the minimal prime ideals over $I$. This follows from the fact that every prime ideal contains a minimal prime ideal. The set of zero divisors of a given ring contains the union of the minimal prime ideals. Krull's principal ideal theorem says that, in a Noetherian ring, each minimal prime over a principal ideal has height at most one. Each proper ideal $I$ of a Noetherian ring contains a product of the possibly repeated minimal prime ideals over it (Proof: $\sqrt{I} = \mathfrak{p}_1 \cap \cdots \cap \mathfrak{p}_r$ is the intersection of the minimal prime ideals $\mathfrak{p}_i$ over $I$. For some $n$, $(\sqrt{I})^n \subseteq I$, and so $I$ contains $(\mathfrak{p}_1 \cdots \mathfrak{p}_r)^n \subseteq (\sqrt{I})^n$.) A prime ideal $\mathfrak{p}$ in a ring $R$ is the unique minimal prime over an ideal $I$ if and only if $\sqrt{I} = \mathfrak{p}$, and such an $I$ is $\mathfrak{p}$-primary if $\mathfrak{p}$ is maximal. This gives a local criterion for a minimal prime: a prime ideal $\mathfrak{p}$ is a minimal prime over $I$ if and only if $I R_{\mathfrak{p}}$ is a $\mathfrak{p} R_{\mathfrak{p}}$-primary ideal. When $R$ is a Noetherian ring, $\mathfrak{p}$ is a minimal prime over $I$ if and only if $R_{\mathfrak{p}}/I R_{\mathfrak{p}}$ is an Artinian ring (i.e., $\mathfrak{p} R_{\mathfrak{p}}$ is nilpotent modulo $I R_{\mathfrak{p}}$).
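As a concrete check of the local criterion just stated, here is a worked instance in the integers (the choice of $I = (12)$ and $\mathfrak{p} = (2)$ is illustrative and not from the article):

```latex
% I = (12) in Z has minimal primes (2) and (3):
\[
  I = (12) = (2)^{2}(3), \qquad \sqrt{I} = (2) \cap (3) = (6).
\]
% Localizing at p = (2) inverts 3, so
\[
  I\,\mathbb{Z}_{(2)} = 4\,\mathbb{Z}_{(2)} = \bigl(\mathfrak{p}\,\mathbb{Z}_{(2)}\bigr)^{2},
\]
% which is p Z_(2)-primary; likewise Z_(2)/I Z_(2) is isomorphic to Z/4Z,
% an Artinian ring in which p Z_(2) is nilpotent modulo I Z_(2),
% confirming both formulations of the criterion.
```

The same example also checks the product property above: $(\mathfrak{p}_1 \mathfrak{p}_2)^2 = (6)^2 = (36) \subseteq (12) = I$.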
The pre-image of $I R_{\mathfrak{p}}$ under the localization map $R \to R_{\mathfrak{p}}$ is a primary ideal of $R$ called the $\mathfrak{p}$-primary component of $I$. When $(R, \mathfrak{m})$ is a Noetherian local ring with maximal ideal $\mathfrak{m}$, $\mathfrak{m}$ is minimal over $I$ if and only if there exists a number $n$ such that $\mathfrak{m}^n \subseteq I$. Equidimensional ring For a minimal prime ideal $\mathfrak{p}$ in a local ring $A$, in general it need not be the case that $\dim A/\mathfrak{p} = \dim A$, the Krull dimension of $A$. A Noetherian local ring $A$ is said to be equidimensional if $\dim A/\mathfrak{p} = \dim A$ for each minimal prime ideal $\mathfrak{p}$. For example, a local Noetherian integral domain and a local Cohen–Macaulay ring are equidimensional. See also equidimensional scheme and quasi-unmixed ring. See also Extension and contraction of ideals Normalization Notes References Further reading http://stacks.math.columbia.edu/tag/035E http://stacks.math.columbia.edu/tag/035P Commutative algebra Prime ideals
Minimal prime ideal
[ "Mathematics" ]
996
[ "Fields of abstract algebra", "Commutative algebra" ]
16,144,332
https://en.wikipedia.org/wiki/DENIS-P%20J020529.0%E2%88%92115925
DENIS-P J020529.0−115925 is a brown dwarf system in the constellation of Cetus. It is located 64 light-years (19.8 parsecs) away, based on the system's parallax. It was first found in the Deep Near Infrared Survey of the Southern Sky. This is a triple system of brown dwarfs: objects that do not have enough mass to fuse hydrogen as stars do. The two brightest components, designated A and B respectively, are both L-type objects. As of 2003, the two were separated by 0.287″ along a position angle of 246°. Component B was observed to be elongated, suggesting a third component. This third component, named C, is a T-type object. It is separated from B by about 1.9 astronomical units (au), and based on the system's total mass, the two may orbit each other every 8 years. See also DENIS-P J1058.7−1548 DENIS-P J1228.2−1547 DENIS-P J082303.1−491201 b DENIS-P J101807.5−285931 other triple brown dwarfs: 2M1510 2MASS J08381155+1511155 VHS J1256–1257 2MASS J0920+3517 References Triple star systems Brown dwarfs L-type brown dwarfs T-type brown dwarfs Cetus DENIS objects Astronomical objects discovered in 1997
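The 8-year period quoted for the B–C pair can be checked against Kepler's third law, taking the 1.9 au projected separation as the semi-major axis (an approximation made here for the estimate):

```latex
\[
  M_{\mathrm{tot}} \approx \frac{a^{3}}{P^{2}}\;M_\odot
  = \frac{1.9^{3}}{8^{2}}\;M_\odot \approx 0.11\;M_\odot,
\]
% i.e. roughly 110 Jupiter masses in total, consistent with a pair of
% brown dwarfs (each below the ~0.075 solar-mass hydrogen-burning limit).
```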
DENIS-P J020529.0−115925
[ "Astronomy" ]
306
[ "Cetus", "Constellations" ]
16,144,992
https://en.wikipedia.org/wiki/V380%20Orionis
V380 Ori is a young multiple star system located near the Orion Nebula in the constellation Orion, thought to be somewhere between 1 and 3 million years old. It lies at the centre of NGC 1999 and is the primary source lighting up this and other nebulae in the region. System V380 Orionis is a multiple star system containing at least three stars. A very faint cool star 9" away is also thought to be gravitationally bound, making it a hierarchical quadruple system. Two infrared sources within NGC 1999 have been listed as companions in some catalogues, but are not thought to be stars. When discovered, they were referred to as V380 Ori-B and V380 Ori-C, a notation which can lead to confusion. The main component is visible as the 10th magnitude variable star at the centre of NGC 1999, referred to as the primary. Speckle interferometry shows a cool companion separated by 0.15", approximately 62 AU, referred to as the tertiary. Spectroscopy shows a third star at a projected separation less than 0.33 AU, referred to as the secondary. The two closest stars, the primary and tertiary, are surrounded by a circumstellar disk, lying almost edge-on to observers on Earth. The fourth star has a projected separation of 4,000 AU and is receding from the other three. The system is believed to have formed with all four stars close together, but interacted to eject the smallest star into an unstable but gravitationally bound orbit around 20,000 years ago. The primary and secondary, the two closest stars, are calculated to orbit every 104 days. The radial velocity signatures in the spectrum have a large margin of uncertainty and the orbit is poorly defined. Comparing the mass ratio found from the orbit with masses assumed from other physical properties suggests that the orbit is seen close to pole-on. Properties The primary star is a hot white Herbig Ae/Be star that has been variously assigned spectral types between B9 and A1. It has a surface temperature of 10,500 ± 500 K, is around 2.87 times as massive as the Sun, 3 times its radius, and 100 times as luminous. It has a strong magnetic field which varies every 4.1 days, and this is assumed to be the star's rotation period. Models show that the axis of rotation is inclined at 32 degrees. It is a variable star, considered an Orion variable, with occasional fading and other variability caused by obscuration from the surrounding dust. The apparent magnitude varies irregularly between 10.2 and 10.7. The properties of the star are calculated based on its maximum brightness, assumed to be the least obscured. The secondary is a T Tauri star, detected by distinctive spectral lines that could not be produced by the hotter primary star. It has a surface temperature of 5,500 ± 500 K, is around 1.6 times as massive as the Sun, twice its radius, and three times as luminous. The nature of the tertiary component is uncertain. No spectral lines have been seen originating from this component. The fourth star, sometimes called V380 Orionis B, is a small, cool object of spectral type M5 or M6 that is either a red dwarf or brown dwarf. Nebulosity One of the component stars of V380 Orionis appears to have launched an astrophysical jet that helped to clear the keyhole-shaped hole in the surrounding nebula known as NGC 1999. The system is surrounded by a bow shock; the total structure is over 17 light-years (5.3 parsecs) across. References Orion (constellation) 026327 Orionis, V380 A-type stars Herbig Ae/Be stars T Tauri stars 4 Durchmusterung objects
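The quoted companion separations convert between angle and projected distance through the small-angle relation; the distance used below (about 410 pc) is inferred here from the article's own pair of numbers rather than stated in it:

```latex
\[
  a\,[\mathrm{au}] \approx \theta\,[''] \times d\,[\mathrm{pc}]
  \quad\Longrightarrow\quad
  0.15'' \times 410\ \mathrm{pc} \approx 62\ \mathrm{au},
\]
% consistent with the ~400 pc distance usually adopted for the
% Orion Nebula region.
```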
V380 Orionis
[ "Astronomy" ]
772
[ "Constellations", "Orion (constellation)" ]
16,145,231
https://en.wikipedia.org/wiki/Christopher%20Henn-Collins
Lieutenant-Colonel Christopher A Henn-Collins (5 June 1915 – 8 August 2006), CEng, FIEE, FIERE, served in the Second World War, notably in the Polish Campaign under General Adrian Carton de Wiart. After the war Henn-Collins was a prolific inventor, whose inventions included the first transistorised quartz clock. Early life Born in 1915, Christopher Henn-Collins was the third son of Lieutenant-Colonel the Hon. Richard Henn Collins, CMG, DSO, and grandson of Lord Collins, Master of the Rolls from 1901 to 1907. He was educated at Shrewsbury, and destined for a military career in his father's regiment, but pleaded to be allowed to pursue his boyhood ambition to be a telecommunications engineer. In 1934 he enlisted as a Gentleman Cadet at the Royal Military Academy at Woolwich for signals training and was commissioned in 1935. Polish Campaign After service in Palestine he earned the dubious distinction of being possibly the first serving officer to come under enemy fire in the first few hours of the Second World War. In August 1939, when he was Brigade Signals Officer to the 1st Brigade of Guards, he had been ordered to lead a detachment of signallers and their equipment into Poland, as part of a British Military Mission under the command of the battle-scarred veteran General Carton de Wiart, VC, blinded in one eye and with an artificial hand. Their objective was to set up radio communications between Mission HQ in Warsaw, the UK and units of the Polish army. They were to travel in plain clothes, but with battle-dress in their kit, and six tons of equipment, through France to Marseilles, where HMS Shropshire would take them to Alexandria. There they were issued with passports and fictitious occupations, before trans-shipping to a ferry en route to Turkey, by which time Britain and France were at war with Germany. From there they travelled by rail through Romania, setting up radio communications along the way. By the time they crossed the Polish frontier southeast of Warsaw, German armoured divisions were driving east towards the capital, and their reconnaissance planes were taking an interest in this strange convoy, which was now in a war zone. The detachment was ordered to change into uniform. In Lvov they were under heavy fire from low-flying aircraft: they could not move forward, nor could they stay put without risking further attentions from the Luftwaffe. For several nights they shuttled to and fro, a few miles west to east and back again, awaiting instructions, and it was not until 8 September, when they rendezvoused with General de Wiart, who had moved his headquarters from Warsaw to Tarnopol, that their mission was abandoned. They were ordered to destroy their equipment and make their way home in twos and threes as best they could. Back in Alexandria, Henn-Collins's instructions were to return to London, where he was posted to Staff College at Camberley and wrote a critical report on the lessons to be learned from this expedition. Although the mission was aborted, the outcome would have been quite different if the Russians had not invaded: the Poles had plans to conduct a guerrilla war in the east, and a British Signals unit behind the lines would have been of considerable use to the Allies.
Later wartime postings Henn-Collins's various postings during the next three years included a period in the Directorate of Military Training, promotion to major and then, with the rank of lieutenant-colonel, a posting to Allied Forces Headquarters in Algiers as Officer in Charge of Radio Section, to set up links throughout the North African Theatre. Post-war engineering career and retirement He was a resourceful, inventive and practical engineer. He patented an enciphering and deciphering machine, assigned to the Ministry of Supply with no financial benefit to himself; and he had so many ideas for civilian projects which could not be exploited within the service that he resigned his commission in 1947 in order to set up as a consulting engineer. Partly as a result of his wartime contacts, his company, Henn-Collins Associates, undertook a wide range of projects for government agencies and commercial organisations worldwide, mostly in the field of telecommunications, but he had other interests as well, and in the 1950s and 60s he patented a number of devices of an electro-mechanical nature. In his workshop he developed his idea for a quartz crystal clock which, by using transistors in place of thermionic valves, made possible a much smaller quartz clock than was previously feasible. He described his "mantelpiece" clock in the British Horological Journal in 1957 and showed it at an exhibition in Goldsmiths' Hall in 1958, "The Pendulum to the Atom", which was opened by Prince Philip, Duke of Edinburgh. Christopher Henn-Collins and Dr Louis Essen, inventor of the caesium clock, were presented to him. Before he retired to Guernsey in 1970 he represented the Institution of Electrical Engineers and the Institution of Electrical and Radio Engineers on a British Standards Institution committee which produced a Code of Practice for the reception of sound and television broadcasting. He returned to England three years before his death. Personal life He married first Patricia Hooper, who died in 1974, and in 1976 he married Andora de Quehen, who survived him. References Tearle, John Lieutenant Colonel C A Henn-Collins, CEng, FIEE, FIERE, draft notes for Times obituary External links Obituary, The Times, 27 September 2006 1915 births 2006 deaths Military personnel from Shrewsbury Fellows of the Institution of Engineering and Technology British inventors 20th-century British engineers
Christopher Henn-Collins
[ "Engineering" ]
1,123
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
16,145,742
https://en.wikipedia.org/wiki/GJ%203685
GJ 3685 is a star in the constellation of Leo. It is extremely faint; its apparent magnitude is 13.3, and it can only be seen with a ten-inch (25 cm) telescope (see Limiting magnitude). Based on a parallax of 53.1361 milliarcseconds, the system is located about 61 light-years (18.8 parsecs) away from the Earth. This is a part of a binary star system consisting of two components separated by 24″. The primary component, GJ 3685 (also known as GJ 3685 A), is a very old red dwarf that is also a flare star. A 20-minute flare was observed in 2004 by the GALEX satellite. Its companion, GJ 3686, is another faint red dwarf, with a spectral type of M5. It is also known as LP 613-50 and is located at roughly the same distance as its primary. References Leo (constellation) M-type main-sequence stars 3685 Binary stars
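The distance follows from the quoted parallax by simple inversion (the arithmetic behind the figure given above):

```latex
\[
  d = \frac{1000}{\pi\,[\mathrm{mas}]}\ \mathrm{pc}
    = \frac{1000}{53.1361}\ \mathrm{pc}
    \approx 18.8\ \mathrm{pc} \approx 61\ \mathrm{ly}.
\]
```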
GJ 3685
[ "Astronomy" ]
199
[ "Leo (constellation)", "Constellations" ]
16,146,360
https://en.wikipedia.org/wiki/Standardization%20of%20Office%20Open%20XML
The Office Open XML file formats, also known as OOXML, were standardised between December 2006 and November 2008, first by the Ecma International consortium (where they became ECMA-376), and subsequently, after a contentious standardization process, by the ISO/IEC's Joint Technical Committee 1 (where they became ISO/IEC 29500:2008). Standardization within Ecma International More than a year after being asked by the European Union to standardize their Office 2003 XML formats, Microsoft submitted 2,000 pages of documentation for a new file format to the Ecma International consortium for it to be made into an open standard. Ecma formed a technical committee (TC45) in December 2005, in order to produce and maintain a "formal standard for office productivity applications that is fully compatible with the Office Open XML Formats, submitted by Microsoft". The technical committee was chaired by two Microsoft employees and included members drawn from Apple, Canon, Intel, NextPage, Novell, Pioneer, Statoil ASA, Toshiba, The United States Library of Congress, The British Library and the Gnome Foundation. During standardisation within Ecma the specification grew to approximately 6,000 pages. It was approved as an Ecma standard (ECMA-376) on December 7, 2006. The standard can be downloaded from Ecma free of charge. International standardization Using their entitlement as an ISO/IEC JTC 1 external Category A liaison, Ecma International submitted ECMA-376 to the JTC 1 fast track standardization process. To meet the requirements of this process, they submitted the documents "Explanatory report on Office Open XML Standard (Ecma-376) submitted to JTC 1 for fast-track" and "Licensing conditions that Microsoft offers for Office Open XML". ISO and IEC classified the specification as DIS 29500 (Draft International Standard 29500) Information technology – Office Open XML file formats. The fast track process consists of a contradictions phase, a ballot phase, and a ballot resolution phase. During the contradictions phase, ISO and IEC members submitted perceived contradictions to JTC 1. During the ballot phase the members voted on the specification as it was submitted by Ecma and submitted editorial and technical comments with their vote. In the ballot resolution phase the submitted comments were addressed and members were invited to reconsider their vote. Interim ballot result During the standardization of Office Open XML, Ecma International submitted its Office Open XML File Formats standard (ECMA-376) to the ISO Fast Track process. After a comment period, the ISO held a ballot that closed September 2007. This has been observed to be perhaps the most controversial and unusual ISO ballot ever convened, both in the number of comments in opposition, and in unusual actions during the voting process. Various factions have strongly supported and opposed this fast track process. On the supporting side were primarily Microsoft affiliated companies; on the opposing side were free- or open-source software organizations, IBM and affiliates, Sun Microsystems, and Google. There have been reports of attempted vote buying, heated verbal confrontations, refusal to come to consensus and other very unusual behavior in national standards bodies. This is said to be unprecedented for standards bodies, which usually act together and have generally worked to resolve concerns amicably. 87 ISO member countries responded to the five-month ballot. There were 51 votes of "approval", 18 votes of "disapproval" and 18 abstentions. 
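These counts, together with the P-member split reported below (17 of 32 in favour), fail both fast-track approval criteria, as the following arithmetic shows:

```latex
% Criterion 1: at least two-thirds of P-members must approve.
\[
  \frac{17}{32} \approx 53\% \;<\; 66.67\%
\]
% Criterion 2: at most one-quarter of the votes cast
% (abstentions excluded) may be negative.
\[
  \frac{18}{51 + 18} \approx 26\% \;>\; 25\%
\]
```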
For the measure to pass, two-thirds (66.67%) of "P" members (participating, as opposed to "O" members: observing) must approve, and no more than one-quarter (25%) of all voting national members (excluding members that abstain from voting) may disapprove. The ballot showed 53% approval by "P" members and 26% disapproval from the total votes. On 25–29 February 2008, a Ballot Resolution Meeting was held in Geneva, Switzerland, to consider revisions to the OOXML proposal. Under ISO rules, national standards bodies had thirty days following the Ballot Resolution Meeting to reconsider and possibly change their votes. Belgium The Belgian Bureau de Normalisation considered the revisions, but failed to reach a consensus on the proposal. Belgium's initial abstention therefore stood. Czech Republic The Český Normalizační Institut considered the revisions and changed its initial vote against the proposal to a vote in favour. Germany The Normenausschuss Informationstechnik und Anwendungen considered the revisions and reaffirmed Germany's initial vote for the proposal. India The Bureau of Indian Standards considered the revisions and reaffirmed India's initial vote against the proposal. Netherlands The Netherlands Standardization Institute (NEN) considered the revisions and reaffirmed the Netherlands' initial abstention. Trinidad and Tobago The Trinidad and Tobago Bureau of Standards announced that it would change its initial abstention to a vote for the revised proposal. United States The International Committee on Information Technology Standards (INCITS) considered the revisions and reaffirmed the U.S.'s initial vote for the proposal. Eighty-seven ISO and IEC member countries had responded to the ballot by September 2007 (51 votes of "approval", 18 of "disapproval" and 18 abstentions). "P-members", who were required to vote, had to approve by 66.67% for the text to be approved. The P-members voted 17 in favour out of 32, below the required threshold for approval. Also, no more than 25% of the total member votes may be negative for the text to be approved, and this requirement was also not met, since 26% of the total votes were negative (the arithmetic is shown above). The standardization process then entered its ballot resolution phase, described below. Response to ballot comments Ecma produced a draft "Disposition of Comments" document that addressed the 1,027 distinct "NB comments" (that is, comments by national bodies) that had been submitted in the letter ballot phase. This document comprised 1,600 pages of commentary and proposed changes. The ISO and IEC members had 6 weeks to review this draft, and had an opportunity to participate in several informal conference call sessions with the Ecma TC45 to discuss it before the BRM. Ballot resolution process A Ballot Resolution Meeting (BRM) is an integral part of the ballot resolution phase. The outcome of, and period following, this meeting decided whether DIS 29500 succeeded or failed in its bid to become an International Standard. The DIS 29500 BRM took place in late February 2008. At the BRM, 873 proposed changes to the specification were submitted by Ecma (of their 1,027 responses, 154 proposed no change). Of these only 20% were discussed and modified in meeting sessions, given the 5-day time limit of the meeting. The remaining 80% were not discussed and were subject to a voting mechanism approved by the meeting (see Resolution 37 of the meeting resolutions cited below).
Using this voting mechanism NBs could approve, disapprove or abstain on each and every one of these proposed changes. This allowed a set of approved changes to be decided upon without discussion. With the original submitted draft used as the base, all the agreed-upon changes were applied by the Project Editor to create a new set of documents incorporating the changes agreed during the BRM. In parallel with this, NBs had 30 days after the BRM in which to decide whether to amend their votes of September 2, 2007. Ballot result A number of JTC 1 members took the opportunity to amend their votes, predominantly in favour of approval of DIS 29500. Thus, on April 2, 2008, ISO and IEC officially stated that DIS 29500 had been approved for acceptance as an ISO/IEC Standard, pending any appeals. They stated that "75% of the JTC 1 participating member votes cast positive and 14% of the total of national member body votes cast negative". In accordance with the JTC 1 directives, the Project Editor had created a new version of the final text within a month of the BRM. After review, corrections and the resolution of appeals, this text was distributed to the members of SC34. Appeals Four JTC 1 members appealed the standardisation: the national bodies of South Africa, Brazil, India and Venezuela. Since the appeals system is designed to find a solution by consensus, it was unlikely that the process would have resulted in ISO/IEC abandoning progress of DIS 29500. The CEOs of ISO and IEC advised the management board that these appeals should not be processed any further; the Secretary General of ISO is reported as stating: "[t]he processing of the ISO/IEC DIS 29500 project has been conducted in conformity with the ISO/IEC JTC 1 Directives, with decisions determined by the votes expressed by the relevant ISO and IEC national bodies under their own responsibility, and consequently, for the reasons mentioned above, the appeals should not be processed further". The main issue in the appeals was the BRM procedures. The appealing countries did not raise objections during the BRM itself, and all voted to approve the resolution that allowed for voting, by means of a form, on each of the responses that had not been discussed in the plenary meeting. Three of them used that form vote to register disapproval of most of the responses (in total only four countries did so), but failed to have a significant number of responses disapproved. The appeals did not get sufficient support from the National Bodies voting on the ISO and IEC management boards, and consequently the go-ahead was given to publish ISO/IEC DIS 29500, Information technology – Office Open XML formats, as an ISO/IEC International Standard on August 15, 2008. Publication The International Standard ISO/IEC 29500:2008 was published in November 2008. Maintenance regime Following the standardization of ISO/IEC 29500, ISO/IEC JTC 1/SC 34, as the designated maintenance group for the standard, established two ad hoc groups for deciding how the Standard would be maintained: a group to collect comments on the newly approved standard, and a group to decide what structures should be used for long-term maintenance. The resulting recommendation was that ISO/IEC JTC 1/SC 34 should assume full control of the maintenance work on ISO/IEC 29500. This decision was duly ratified at SC 34's September 2008 meeting on Jeju Island, Korea. Ecma was invited as a liaison to provide individual experts to contribute to the maintenance activity.
This decision superseded an earlier proposal from Ecma, in which Ecma itself had proposed that it be responsible for maintenance. On May 21, 2008, Microsoft announced that it would be "an active participant in the future evolution of ODF, Open XML, XPS and PDF standards". ISO/IEC 29500 is maintained within Working Group 4 ("WG 4") of ISO/IEC JTC 1/SC 34 under the convenorship of MURATA Makoto of Japan. Under this maintenance regime the JTC 1 Directives apply, and these stipulate that: proposals to amend the text, and acceptance of any such amendments, are subject to normal JTC 1 voting processes (JTC 1 Directives, clause 15.5); the standard cannot be "stabilised" (no longer subject to periodic maintenance) except through approval in a JTC 1 ballot (JTC 1 Directives, clause 15.6.2); and for the standard to be stabilised it must have passed through one review cycle (JTC 1 Directives, clause 15.6.1). In this review cycle, the text would have to have been re-written to comply with ISO's formatting and verbal requirements (JTC 1 Directives, clause 13.4). WG 4 has a website and an open document register. Defect logs and statistics from WG 4 are available online. At the WG 4 meeting in Copenhagen, June 22–24, 2009, there were 16 people listed as present; 5 of these were employed by Microsoft and 4 by universities. Reactions to standardization Complaints about the national bodies process There have been allegations that the ISO ballot process for Office Open XML was marred by voting irregularities and heavy-handed tactics by some stakeholders. An Ars Technica article cites Groklaw as stating that at Portugal's national body TC meeting, "representatives from Microsoft attempted to argue that Sun Microsystems, the creators and supporters of the competing OpenDocument format (ODF), could not be given a seat at the conference table because there was a lack of chairs." In Sweden, Microsoft notified the Swedish Standards Institute (SIS) that an employee had sent a memo to two of its partners, requesting them to join the SIS committee and vote in favor of Office Open XML in return for "marketing contributions". Jason Matusow, a Director in the Corporate Standards Strategy Team at Microsoft, stated that the memo was the action of an individual employee acting outside company policy, and that the memo was retracted as soon as it was discovered. SIS has since changed its voting procedure so that a member has to actually participate before being allowed to vote. Sweden invalidated its vote (80% was for approval) because one company cast more than one vote, which is against SIS policy. Finnish IT journalists described that meeting as marked by strong differences of opinion. In Switzerland, SNV registered a vote of "approval with comments", and there was some criticism about a "conflict of interest" regarding the chairman of the UK 14 sub-committee, who did not allow discussion of licensing, economic and political arguments. In addition, the chairman of the relevant SNV parent committee is also the secretary general of Ecma International, which approved OOXML as a standard. Further complaints regarded "committee stuffing", which is, however, allowed by present SNV rules, and non-adherence to SNV rules by the UK 14 chairman, which resulted in a re-vote with the same result.
Australia's national standards body, Standards Australia, was criticized for its handling of the OOXML process by the New Zealand Open Source Society, the open source advisory firm Waugh Partners, Australian National University Professor Roger Clarke, OASIS lawyer Andrew Updegrove, IBM and Google. Standards Australia sent ISO SC 34 expert and XML and Schematron specialist Rick Jelliffe to the BRM, despite critics alleging that Jelliffe would not represent the views of those opposing the standardization. Jelliffe had previously been in the news after being offered payment by Microsoft to improve incorrect Wikipedia articles about Office Open XML. Microsoft had bought a schema conversion tool from his company, and he had performed the initial conversion of the Office Open XML schemas from XML Schemas to RELAX NG, both schema languages he had been involved in standardizing. It was alleged that Standards Australia had broken a previous public pledge to send two internal employees to the BRM. However, Standards Australia issued a press release denying this and stating that the Computerworld article was "riddled with inaccuracies and misrepresentations." Norway's vote was decided by Standard Norge; the mostly opposing viewpoints of the technical committee resulted in a disapproval vote in the 2007 ballot. However, the administration of Standard Norge changed Norway's vote to "approval" in 2008 even though the majority of the committee argued in favour of keeping its "disapproval" vote. Membership in the technical committee had risen from 6–7 to 30 members; all of the pre-OOXML members argued in favour of a "no" vote. In October 2008, 13 of the 23 members, 12 of whom were associated with the open-source movement, resigned after OOXML was ratified by ISO and all appeals were rejected. The IDABC community programme (which is managed by the European Commission) runs the "Open Source Observatory", which is "dedicated to Free/Libre/Open Source Software." Via its "Open Source News", it has reported on accounts critical of the standardization process. It states that the German IT news site Heise reported that in Germany, two opponents of Office Open XML, Deutsche Telekom and Google, were not allowed to vote because they tried to join the committee at the last minute. Open Source News says, "Participants described the process as ludicrous." It relays a report from Michiel Leenaars (director of the Internet Society Netherlands) that in the Netherlands, "the chair of the national standardization committee deciding on OOXML, protested that the almost unanimous conditional approval was blocked by Microsoft." It reports that Borys Musielak, a member of Poland's Linux community, wrote on the PolishLinux website that Poland's technical committee KT 171 rejected Office Open XML; the vote was invalidated and the matter reassigned to KT 182. Musielak believes this was due to "reorganisation in the Polish standardisation body." KT 182 voted to approve Office Open XML. It reports that in Andalucía, the director of Andalucía's Department for Innovation complained that Microsoft submitted misinformation to the Spanish National Body stating that it (Andalucía) supported the company's Office Open XML proposal. It reports that in Portugal, eleven companies (including IBM) and open source advocacy groups requested that Portugal's Ministry of Economy and Innovation investigate Portugal's vote on Office Open XML.
In June 2008, the High Court of Justice in the United Kingdom rejected a complaint by the UK Unix and Open Systems User Group (UKUUG) requesting a review of the British Standards Institution's decision to vote in favour of DIS 29500. The judge commented that "this application does not disclose any arguable breach of the procedures of BSI or of rules of procedural fairness". Other complaints A further letter of protest was filed by Open Source Leverandørforeningen, a Danish open source vendor association, although no appeal was filed directly by Dansk Standard itself. In September 2008, a joint letter known as the Consegi declaration was issued and signed by free-software representatives of three of the countries that had issued appeals (South Africa, Brazil and Venezuela) as well as of Ecuador, Cuba and Paraguay. After the specification was officially accepted as an ISO standard, Red Hat and IBM claimed that the ISO was losing credibility, and Ubuntu founder Mark Shuttleworth commented "We're not going to invest in trying to implement a standard that is poorly defined." IBM issued a press release stating: "IBM will continue to be an active supporter of ODF. We look forward to being part of the community that works to harmonize ODF and OOXML for the sake of consumers, companies and governments, when OOXML control and maintenance is fully transferred to JTC1." Examination of fast track process Deutsches Institut für Normung (DIN, Germany) voted "yes" on DIS 29500, and stated that DIN as a whole "recognised that there has been no serious breach of JTC 1 and ISO rules", but that "the conclusion has been reached that the rules for the fast-track procedure need to be amended". At the plenary meeting of JTC 1 in Nara, Japan, that took place in November 2008, a resolution was passed which related to concerns expressed during the standardisation of ISO/IEC 29500. Resolution 49 was entitled "Clarification on Consistency of Standards vs Competing Specifications" and contained the following text: JTC 1 recognizes its commitment to ISO's and IEC's "one standard" principle; however, it recognizes that neither it nor its SCs are in a position to mandate either the creation or the use of a single standard, and that there are times when multiple standards make the most sense in order to respond to the needs of the marketplace and of society at large. It is not practical to define, a priori, criteria for making these decisions. Therefore each standard must be judged by the National Bodies, based on their markets, on its own merits. At a companion meeting of the Special Working Group on Directives (SWG-Directives) in Osaka, a recommendation was made describing a series of "concepts" that would in future be applied to the ballot resolution process of future fast-tracked standards.
These mirrored the process that had taken place for ISO/IEC 29500: the purpose is to review and address ballot comments; the meeting must have a separate agenda and be convened as a separate meeting even if it is held in conjunction with, or co-located with, an SC/WG meeting; the comments must be discussed within a single meeting and not distributed over a series of meetings; the meeting is open to the fast-track submitter and to all National Bodies regardless of whether or not the National Body has voted on the document under review, with no limitation on which National Body can participate; the meeting participants represent their National Body and their National Body's positions; all National Bodies have an equal say in any decisions made during the meeting; the Project Editor must prepare an editor's proposed disposition of ballot comments in sufficient time prior to the BRM to allow consideration by National Bodies, and this proposed disposition of comments document is reviewed during the ballot resolution meeting; a disposition of ballot comments approved during the meeting must be circulated following the meeting for the information of all National Bodies; and when all comments have been addressed and a disposition of comments has been approved by the meeting, the BRM meeting criteria have been met. Standards lawyer Andy Updegrove (whose firm represents OASIS) commented that he was "startled and dismayed" at these concepts, since they "basically add up to a ratification of the conduct of the Geneva BRM." Investigation of Microsoft by the European Commission In January 2008, the European Commission started an antitrust investigation into the interoperability of the Office Open XML format at the request of the European Committee for Interoperable Systems, described as "a coalition of Microsoft's largest competitors". Anonymous sources of the Wall Street Journal claimed that this investigation also included an investigation into whether Microsoft violated antitrust laws in the course of the standardization process. The Financial Times reported that European ISO members had confirmed receipt of a letter from the European Commission "asking how they prepared for votes [...] on acceptance of Microsoft's OOXML document format as a worldwide standard." Microsoft complaints about competitors On February 14, 2007, Microsoft attacked IBM's opposition to the Office Open XML standardization process in an open letter, saying: "On December 7, Ecma approved the adoption of Open XML as an international open standard. The vote was nearly unanimous; of the 21 members, IBM's was the sole dissenting vote. IBM again was the lone dissenter when Ecma also agreed to submit Open XML as a standard for ratification by ISO/IEC JTC1. IBM led a global campaign urging national bodies to demand that ISO/IEC JTC1 not even consider Open XML, because ODF had made it through ISO/IEC JTC1 first." Nicos Tsilas, Microsoft's senior director of interoperability and intellectual property policy, downplaying Microsoft's American and EU convictions as an abuser of monopoly power, expressed concern that IBM and the Free Software Foundation had been lobbying governments to mandate the use of the rival OpenDocument format (ODF) to the exclusion of other formats. In his opinion, they are "using government intervention as a way to compete" as they "couldn't compete technically." IBM has asked governments to adopt exclusively open-source purchasing policies.
Arguments in support and criticism of Office Open XML standard Support Microsoft believes its own format should be adopted. It has presented this argument on its "community web site", a site owned and operated by Microsoft. Sun Microsystems initially voted against approval of DIS 29500 in the INCITS V1 committee, but stated on the committee mailing list "We wish to make it completely clear that we support DIS 29500 becoming an ISO Standard and are in complete agreement with its stated purposes of enabling interoperability among different implementations and providing interoperable access to the legacy of Microsoft Office documents" and that "We voted in the expectation that [...] changes will be made and that a version of DIS 29500 capable of achieving its objectives would be approved as an ISO Standard." ODF Alliance India published an extensive technical report in 2007 containing concrete issues raised by members of the association, as well as replies from Microsoft. In December 2007 Ecma International announced that many reported issues would be taken into account in the next edition of the standardisation proposal to ISO. The British Library and the United States Library of Congress have participated in the work of Ecma TC45 and support the Office Open XML standard. Former Gnome Foundation board member Miguel de Icaza, who started the GNOME and Mono projects, showed support for the Office Open XML document format, stating "OOXML is a superb standard and yet, it has been FUDed so badly by its competitors that serious people believe that there is something fundamentally wrong with it." Patrick Durusau, the editor of the OpenDocument standard, has characterized OOXML as a "poster child for the open standards development process". User base The most widely used office productivity packages currently rely on various proprietary and reverse-engineered binary file formats such as those created by successive releases of Microsoft Word, PowerPoint and Excel. However, OOXML is a new format which is not backwards or forwards compatible with any of the old Microsoft Office formats. Policy arguments With regard to the alleged overlap in scope with the OpenDocument format, Ecma has provided the following policy arguments in favor of standardization: overlap in scope of ISO/IEC standards is common and can serve a practical purpose; Office Open XML addresses distinct user requirements; the OpenDocument Format and Office Open XML are structured to meet different user requirements; and Office Open XML and OpenDocument can serve as duo-standards. Technical arguments A study comparing IS 29500:2008 and IS 26300:2006 (ODF 1.0) by the German Fraunhofer Society found: "It may be concluded that many of the functionalities, especially those found in simpler documents, can be translated between the standards, while the translation of other functionalities can prove complex or even impossible." Points cited in the format's favour include: the use of the Open Packaging Conventions, which allow for indirection, chunking and relative indirection; the use of the ZIP format, making ZIP part of the standard (see the short inspection sketch after this article's text), with files smaller than the current binary formats due to compression; support for custom data elements for integration of data specific to an application or an organisation that wants to use the format; defined spreadsheet formulas; alternate representations of the XML schemas and extensibility mechanisms using RELAX NG (ISO/IEC 19757-2) and NVDL (ISO/IEC 19757-4); and no restriction on image, audio or video types, Book 1 §14.2.12.
Embedded controls can be of any type, such as Java or ActiveX, Book 1 §15.2.8. WordprocessingML font specifications can include font metrics and PANOSE information to assist in finding a substitution font if the original is not available, Book 3 §2.10.5. In the situation where a consuming application might not be capable of interpreting what a producing application wrote, Office Open XML defines an Alternate Content Block which can represent said data in an alternate format, such as an image, Book 3 §2.18.4. There is internationalization support, for example for date representation: in WordprocessingML (Book 4 §2.18.7) and SpreadsheetML (Book 4 §3.18.5), calendar dates after 1900 CE can be written using Gregorian (three variants), Hebrew, Hijri, Japanese (Emperor Era), Korean (Tangun Era), Saka, Taiwanese, and Thai formats. Also, there are several internationalization-related spreadsheet conversion functions. Custom XML schema extensibility allows the addition of features to the format. This can, for instance, facilitate conversion from other formats and future features that are not part of the official specification. Criticism Technical The standard has been the subject of debate within the software industry. At over 6,000 pages, the specification is difficult to evaluate quickly. Objectors also claimed that there could be user confusion regarding the two standards because of the similarity of the "Office Open XML" name to both "OpenDocument" and "OpenOffice". Objectors also argued that an ISO standard for documents already existed and there was no need for a second standard. Google stated that "the ODF standard, which achieves the same goal, is only 867 pages" and that: "If ISO were to give OOXML with its 6546 pages the same level of review that other standards have seen, it would take 18 years (6576 days for 6546 pages) to achieve comparable levels of review to the existing ODF standard (871 days for 867 pages) which achieves the same purpose and is thus a good comparison. Considering that OOXML has only received about 5.5% of the review that comparable standards have undergone, reports about inconsistencies, contradictions and missing information are hardly surprising." Those who support the ODF standard include the FFII, the ODF Alliance and IBM, as well as South Africa and other nations that voiced strong opposition to OOXML during standardization. The ODF Alliance UK Action Group has stated that with OpenDocument an ISO standard for Office files already exists. Further, they argue that the Office Open XML file format is heavily based on Microsoft's own Office applications and is thus not vendor-neutral, and that it has inconsistencies with existing ISO standards, such as time and date formats and color codes. Process manipulation In addition, the standardization process itself has been questioned, including claims of balloting irregularities involving some technical committees, Microsoft representatives and Microsoft partners in trying to get Office Open XML approved. "The editorial group who actually produce the spec is referred to as "ECMA", but in fact the work is mostly done by Microsoft people." Post-adoption quotes During a panel discussion at the Red Hat Summit in Boston in June 2008, Microsoft's national technology officer Stuart McKee said that "ODF has clearly won". He also made the following statement: We found ourselves so far down the path of the standardisation process with no knowledge. We don't have a standards office. We didn't have a standards department in the company.
I think the one thing that we would acknowledge and that we were frustrated with is that, by the time we realised what was going on and the competitive environment that was underway, we were late and there was a lot of catch-up. It was very difficult to enter into conversations around the world where the debate had already been framed. On June 25, 2008, Gray Knowlton, a Group Product Manager for the Microsoft Office system, made the following statements regarding the future of Open XML: Microsoft will continue to support the development of the specification and the adoption of the Open XML formats, in addition to the other work we are driving around document formats in Office. [...] In the end, Open XML is still the better choice for the compatibility and line-of-business interoperability scenarios we have discussed throughout its history. [...] while we are working on ODF moving forward, we will remain committed to Open XML and believe that it will be the format of choice for large parts of the global community. In an interview, Richard Stallman, head of the Free Software Foundation, said: Microsoft corrupted many members of ISO in order to win approval for its phony 'open' document format, OOXML. This was so governments that keep their documents in a Microsoft-only format can pretend that they are using 'open standards.' The government of South Africa has filed an appeal against the decision, citing the irregularities in the process. On March 31, 2010, Dr Alex Brown, who had been the Convener of the February 2008 Ballot Resolution Meeting, posted an entry on his personal blog in which he complained of Microsoft's lack of progress in adapting current and future versions of Microsoft Office to produce files in the Strict (as opposed to the Transitional) ISO 29500 format: On this count Microsoft seems set for failure. In its pre-release form Office 2010 supports not the approved Strict variant of OOXML, but the very format the global community rejected in September 2007, and subsequently marked as not for use in new documents—the Transitional variant. Microsoft are behaving as if the JTC 1 standardisation process never happened... Microsoft responded that the next release of Microsoft Office (version 15) would fully support ISO/IEC 29500 Strict. See also OpenDocument standardization Comparison of Office Open XML and OpenDocument Comparison of document markup languages References External links Non-standard extensions used by Microsoft's applications Ecma standards ISO standards Markup languages Microsoft criticisms and controversies Microsoft Office Office Open XML Open formats Document-centric XML-based standards
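As noted in the technical arguments above, an ECMA-376/ISO 29500 document is an ordinary ZIP package following the Open Packaging Conventions, so its structure can be inspected with Python's standard library alone; the file name below is a placeholder:

```python
import zipfile

def list_ooxml_parts(path: str) -> None:
    """Print the parts of an OOXML package and a slice of its content-types map."""
    with zipfile.ZipFile(path) as pkg:
        # Every part (document.xml, styles, relationships, media, ...)
        # is just an entry in the ZIP archive.
        for info in pkg.infolist():
            print(f"{info.filename} ({info.file_size} bytes)")
        # [Content_Types].xml, required by OPC, maps parts to content types.
        print(pkg.read("[Content_Types].xml").decode("utf-8")[:400])

if __name__ == "__main__":
    list_ooxml_parts("example.docx")  # placeholder file name
```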
Standardization of Office Open XML
[ "Technology" ]
6,768
[ "Computer standards", "Ecma standards" ]
16,146,402
https://en.wikipedia.org/wiki/L1157
L 1157 is a dark nebula in the constellation Cepheus. It was catalogued in 1962 by U.S. astronomer Beverly T. Lynds in her Catalogue of Dark Nebulae, becoming the 1157th entry in the table; hence the designation. The cloud contains an estimated 3,900 Solar masses of material. It includes protostars that are ejecting material in bipolar outflows, forming bow shocks in the surrounding ambient gas. Formamide and HCNO have been detected in these shocked regions, among other compounds. References External links http://jumk.de/astronomie/special-stars/l1157.shtml http://simbad.u-strasbg.fr/simbad/sim-id?protocol=html&Ident=L1157&NbIdent=1&Radius=2&Radius.unit=arcmin&submit=submit+id Dark nebulae Cepheus (constellation)
L1157
[ "Astronomy" ]
203
[ "Nebula stubs", "Astronomy stubs", "Constellations", "Cepheus (constellation)" ]
16,146,715
https://en.wikipedia.org/wiki/HH%2034
HH 34 is a Herbig–Haro object located in the Orion A molecular cloud at a distance of about 460 parsecs (1500 light-years). It is notable for its highly collimated jet and very symmetric bow shocks. A bipolar jet from the young star is ramming into the surrounding medium at supersonic speeds, heating the material to the point of ionization and emission at visual wavelengths. The source star is a class I protostar with a total luminosity of 45 solar luminosities. Two bow shocks separated by 0.44 parsecs make up the primary HH 34 system. Several larger and fainter bow shocks were later discovered on either side, making the extent of the system around 3 parsecs. The jet is blowing away the dusty envelope of the star, giving rise to a 0.3 parsec long molecular outflow. See also HH 46/47 Stellar evolution Hayashi track Pre-main-sequence star References External links Orion (constellation) Orion molecular cloud complex 34
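At the quoted 460 pc distance, the stated linear sizes translate into angular scales on the sky via the small-angle relation (arithmetic added for illustration):

```latex
\[
  \theta \approx 206\,265'' \times \frac{0.44\ \mathrm{pc}}{460\ \mathrm{pc}}
  \approx 2.0 \times 10^{2}\,'' \approx 3.3',
\]
% and the full ~3 pc extent of the system spans roughly 22 arcminutes.
```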
HH 34
[ "Astronomy" ]
203
[ "Nebula stubs", "Astronomy stubs", "Constellations", "Orion (constellation)" ]
16,147,774
https://en.wikipedia.org/wiki/Fundamental%20station
The term fundamental station is used for special observatories which combine several space positioning techniques, such as VLBI, satellite laser ranging, GPS, Glonass, etc. They are the basis of plate tectonic analysis, allowing the monitoring of continental drift rates with millimetre accuracy. A fundamental point is the geometric origin of a geodetic network and defines the geodetic datum of a national survey. Some fundamental stations are also astronomical or satellite geodetic observatories. The geographic latitude and longitude of the station is precisely determined by methods of astrogeodesy and is adopted as the ellipsoidal latitude and longitude on the Earth ellipsoid which is used to calculate the coordinates of the whole network. Also, precise azimuths to one or two network points are observed, and are taken over as the oriented directions of these network lines. By these procedures, the polar axis of the reference ellipsoid becomes parallel to the Earth's rotation axis, and therefore the vertical deflection at the fundamental point is zero. Important fundamental stations include: Grasse Métrologie Optique (MéO) observatory, France Graz-Lustbühel, Austria Herstmonceux Geodetic Observatory, England Onsala Space Observatory, Sweden Metsähovi Radio Observatory, Finland Geodetic Observatory Wettzell, Germany Zimmerwald Observatory, Switzerland Goddard Geophysical and Astronomical Observatory, U.S. Hartebeesthoek Radio Astronomy Observatory, South Africa Yarragadee Geodetic Observatory, Australia. Worldwide about 30 fundamental stations are in existence: about 5 in the United States and in Commonwealth of Independent States countries, and 2–3 in South America, Africa, Eastern Asia, Australia and Antarctica. The basic coordinate system is the ITRF reference frame, which is related to the ICRS celestial inertial system by means of very precise Earth Orientation Parameters (EOPs), containing polar coordinates, Earth rotation and nutation parameters. The ITRF data set is revised every 3–5 years; the current accuracy is at the millimetre level. The ICRS is based on about 500 quasars in the far universe, and on some 3000 fundamental stars of our galaxy. The current coordinates of the latter (FK6) were published in 2000 by the Astronomisches Rechen-Institut (ARI) in Heidelberg. References External links Fundamentalstation Wettzell, Bavaria Two fundamental stations in the southern hemisphere (German website) Fundamental system FK6, Astronomisches Rechen-Institut (ARI), 1999 Geodesy Astrometry
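The "zero vertical deflection" statement can be made explicit with the standard astro-geodetic relations (the notation is introduced here): writing $(\Phi, \Lambda)$ for the astronomically observed latitude and longitude and $(\varphi, \lambda)$ for their ellipsoidal counterparts, the deflection components are

```latex
\[
  \xi = \Phi - \varphi, \qquad \eta = (\Lambda - \lambda)\cos\varphi .
\]
% Adopting the observed astronomical values as the ellipsoidal
% coordinates of the fundamental point forces xi = eta = 0 there.
```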
Fundamental station
[ "Astronomy", "Mathematics" ]
526
[ "Applied mathematics", "Geodesy", "Astrometry", "Astronomical sub-disciplines" ]
16,148,649
https://en.wikipedia.org/wiki/Stereoautograph
The stereoautograph is a complex opto-mechanical measurement instrument for the evaluation of analog or digital photograms. It is based on the stereoscopic effect, using two aerial photos or two photograms of the topography or of buildings taken from different standpoints. It was invented by Eduard von Orel in 1907. The photograms or photographic plates are oriented by means of measured control points ("pass points") in the field or on the building. This procedure can be carried out digitally (by methods of triangulation and projective geometry) or iteratively (repeated angle corrections by congruent rays). The accuracy of modern autographs is about 0.001 mm. Well known are the instruments of Wild Heerbrugg (Leica), e.g. the analog A7 and B8 of the 1980s and the digital autographs beginning in the 1990s, as well as special instruments from Zeiss and Contraves. References Gilbert Willy Military Topography and Photography by Floyd D. Carlock, U.S. Army, 1916, p. 104 ff, with photos (available online at Google Books) Measuring instruments Photogrammetry Optical instruments Cartography Stereoscopic photography
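For the textbook "normal case" of stereo photogrammetry (two parallel camera axes; a general relation rather than a specification of this particular instrument), depth follows from the measured x-parallax $p$:

```latex
\[
  Z = \frac{f\,B}{p}, \qquad
  \mathrm{d}Z = -\frac{Z^{2}}{f\,B}\,\mathrm{d}p ,
\]
% with focal length f and stereo base B. The error-propagation term
% shows why plate-measurement precision near 0.001 mm matters: the
% depth error grows with the square of the object distance Z.
```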
Stereoautograph
[ "Technology", "Engineering" ]
233
[ "Measuring instruments" ]
16,150,021
https://en.wikipedia.org/wiki/Boris%20Mamyrin
Boris Aleksandrovich Mamyrin (25 May 1919 – 5 March 2007) was a Soviet and Russian physicist, best known for his invention of the electrostatic ion mirror mass spectrometer known as the reflectron. Biography Mamyrin was born in 1919 in Lipetsk, Soviet Russia, during the Russian Civil War. Both of his parents were medical doctors and his early aim was to follow in their footsteps. However, shortly after he obtained his M.S. degree in physics from the Leningrad Polytechnic Institute, World War II cut his studies short. He served in the army throughout the war, finally being discharged from military service in 1948. He returned to the Polytechnic Institute and obtained his doctoral degree within a year. He became the head and leading research scientist of the laboratory for mass spectrometry at the Ioffe Physico-Technical Institute of the Russian Academy of Sciences. He was a corresponding member of the Russian Academy of Sciences and a full member of the Russian Academy of Natural Sciences. See also Time-of-flight mass spectrometry References External links 1919 births 2007 deaths 20th-century Russian physicists 21st-century Russian physicists People from Lipetsk Corresponding Members of the Russian Academy of Sciences Peter the Great St. Petersburg Polytechnic University alumni Recipients of the Order of Honour (Russia) Recipients of the Order of the Red Banner of Labour Mass spectrometrists Soviet physicists Russian scientists
Boris Mamyrin
[ "Physics", "Chemistry" ]
283
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
17,383,315
https://en.wikipedia.org/wiki/New%20England%20Hospital%20for%20Women%20and%20Children
The New England Hospital for Women and Children was founded by Marie Zakrzewska on July 1, 1862. The hospital's goal was to provide patients with competent female physicians, educate women in the study of medicine, and train nurses to care for the sick. The hospital remained dedicated to women until 1951, when it was renamed the New England Hospital to include male patients. The hospital was renamed again, becoming the Dimock Community Health Center, in 1969. At present, that institution provides a range of healthcare services including adult & pediatric primary care, women's healthcare, and HIV/AIDS specialty care.

Establishment
Marie Zakrzewska was born on September 9, 1829, in Berlin. In one of her memoirs she wrote "I prefer to be remembered only as a woman who was willing to work for the elevation of woman." She did just that by starting the hospital. As a child she followed her mother (a midwife) around the school of midwifery where she worked. Once she was 18 she applied to study midwifery, the only part of medicine in which women were allowed to work, at the Royal Charité Hospital in Berlin. She was rejected; she applied again at ages 19 and 20, but was rejected both times. Only after Dr. Joseph Hermann Schmidt, who worked at the school, used his influence on her behalf was she finally able to study medicine. Later, she moved to America, where women were allowed to be doctors. Once in New York, she met Dr. Elizabeth Blackwell, who helped her to become a doctor. The two of them opened the New York Infirmary for Women and Children on May 1, 1857. This led her to go to Boston to meet with the board at the New England Female College, where she was offered, and accepted, a position as a professor of Obstetrics and Diseases of Women and Children and the head of the clinical program. She eventually resigned. With the help of members of the board of the New England Female College as well as Ednah D. Cheney (legal sponsor), Lucy Goddard (legal sponsor), Mrs. George G. Lee (who donated $3000), and Samuel E. Sewall (who donated $1000), she opened the New England Hospital for Women and Children on July 1, 1862.

Early history
The goal of the hospital was "1) to provide for women medical aid of competent physicians of their own sex; 2) to assist educated women in the practical study of medicine; and 3) to train nurses for the care of the sick." Zakrzewska was devoted to providing obstetric care to all who needed it, regardless of race or economic status. At the start of the hospital, the staff consisted of Zakrzewska, two interns, and two consulting physicians. By 1900 there were a resident physician; 54 attending, assisting, and advisory physicians; and 13 consulting physicians. All staff and doctors that worked there were women until 1950, when a lack of money led the board to reverse the policy. The policy was reversed again in 1952 to allow only women on the staff. This was reversed a final time in 1962 when the by-laws were rewritten. In 1864, the Massachusetts state legislature gave the hospital a grant of $5000, allowing it to expand down Pleasant Street. The hospital also acquired property behind the hospital at the end of 1864. This allowed the hospital to split into three parts: a hospital, a dispensary, and an inpatient facility. In 1865, Dr. Anita Tyng was hired as a surgeon. She was the first woman in the US ever to be listed as specializing in surgery by a hospital.
She is one of the women who sent a letter to the founder of MIT, in January 1867, requesting to "continue the study of Chemistry in the Technological Institute." Dr. Fanny Berlin, who was later appointed as the hospital's chief surgeon, was also one of the first Jewish-American women to practice surgery in the US. In 1872, the hospital moved to Codman Avenue in suburban Roxbury, and the dispensary stayed in the center of Boston. This new hospital opened a nursing school, the first in America. The first American trained nurse, Linda Richards (graduated 1873), and the first African American trained nurse, Mary Eliza Mahoney (graduated 1879), were both trained at the nursing school. The nursing school was closed in 1951. The hospital remained dedicated to women and children until 1951, when it was renamed the New England Hospital. Part of this decision was to reflect that it was now open to men as well as women and children. The decision to open the hospital to men also resulted from its financial difficulties.

Dimock Center
In 1969, the hospital was renamed again, this time as The Dimock Community Health Center. It was named after Susan Dimock, a resident doctor (surgeon) at the hospital who drowned in the shipwreck of the SS Schiller on May 7, 1875, when she was 28. Because of its history, the clinic's buildings are listed as a National Historic Landmark. All of the buildings at the Dimock Center are named after women. From the center's historic nine-acre campus located in the Egleston Square section of Roxbury, Massachusetts, and several satellite locations, The Dimock Center provides access to healthcare and human services that include: Adult & Pediatric Primary Care, Women's Healthcare, Eye and Dental Care, HIV/AIDS Specialty Care, Outpatient Mental Health services, Residential Programs, The Mary Eliza Mahoney House shelter for families, pre-school, Head Start programs, after-school programs and Adult Basic Education & Workforce Training programs.

See also
Susan Dimock
Mary Eliza Mahoney
Hannah Myrick
Women in medicine
Marie Elizabeth Zakrzewska
Helen Morton (physician)

References

Sources
New England Hospital for Women and Children at Sophia Smith Collection. Accessed May 12, 2008.
Michael Reiskind, "Hospital Founded by Women for Women", Jamaica Plain Historical Society (1995). Accessed April 20, 2010.
"New England Hospital for Women and Children. Records, 1914–1954 (Inclusive), 1950–1954 (Bulk): A Finding Aid." Harvard University Library, Arthur and Elizabeth Schlesinger Library on the History of Women in America, Radcliffe Institute for Advanced Study, Harvard University.
Davis, A. T. "America's first school of nursing: the New England Hospital for Women and Children." The Journal of Nursing Education, U.S. National Library of Medicine, Apr. 1991.
Pula, James S. ""A Passion for Humanity": Founding the New England Hospital for Women and Children." The Polish Review, Vol. 57, No. 3, 2012.
"Changing the Face of Medicine | Marie E. Zakrzewska." U.S. National Library of Medicine, National Institutes of Health, 3 June 2015.

External links
Dimock Center web site
New England Hospital for Women and Children Records at the Sophia Smith Collection, Smith College Special Collections
Records, 1914–1954 (inclusive), 1950–1954 (bulk). Schlesinger Library, Radcliffe Institute, Harvard University.
Click! The Ongoing Feminist Revolution: Women in the workplace

Hospitals in Boston
Organizations for women in science and technology
Women's hospitals
Children's hospitals in the United States
Hospitals established in 1862
History of women in Massachusetts
Women in Boston
New England Hospital for Women and Children
[ "Technology" ]
1,514
[ "Organizations for women in science and technology", "Women in science and technology" ]
17,383,329
https://en.wikipedia.org/wiki/Hemozoin
Haemozoin is a disposal product formed from the digestion of blood by some blood-feeding parasites. These hematophagous organisms, such as malaria parasites (Plasmodium spp.), Rhodnius and Schistosoma, digest haemoglobin and release high quantities of free heme, the non-protein component of haemoglobin. Heme is a prosthetic group consisting of an iron atom contained in the center of a heterocyclic porphyrin ring. Free heme is toxic to cells, so the parasites convert it into an insoluble crystalline form called hemozoin. In malaria parasites, hemozoin is often called malaria pigment. Since the formation of hemozoin is essential to the survival of these parasites, it is an attractive target for developing drugs and is much studied in Plasmodium as a way to find drugs to treat malaria (malaria's Achilles' heel). Several currently used antimalarial drugs, such as chloroquine and mefloquine, are thought to kill malaria parasites by inhibiting haemozoin biocrystallization.

Discovery
A black-brown pigment was observed by Johann Heinrich Meckel in 1847 in the blood and spleen of a person suffering from insanity. However, it was not until 1849 that the presence of this pigment was connected to infection with malaria. Initially, it was thought that this pigment was produced by the body in response to infection, but Charles Louis Alphonse Laveran realized in 1880 that "malaria pigment" is, instead, produced by the parasites as they multiply within the red blood cell. The link between pigment and malaria parasites was used by Ronald Ross to identify the stages in the Plasmodium life cycle that occur within the mosquito, since, although these forms of the parasite are different in appearance to the blood stages, they still contain traces of pigment. Later, T. Carbone in 1891 and W. H. Brown in 1911 published papers linking hemoglobin degradation with pigment production, describing the malaria pigment as a form of hematin and disproving the widely held idea that it is related to melanin. Brown observed that all melanins bleach rapidly in potassium permanganate, whereas malaria pigment shows not the slightest sign of a true bleaching reaction with this reagent. The name "hemozoin" was proposed by Louis Westenra Sambon. In the 1930s several authors identified hemozoin as a pure crystalline form of α-hematin and showed that the substance did not contain proteins within the crystals, but no explanation for the solubility differences between malaria pigment and α-hematin crystals was given.

Formation
During its intraerythrocytic asexual reproduction cycle Plasmodium falciparum consumes up to 80% of the host cell hemoglobin. The digestion of hemoglobin releases monomeric α-hematin (ferriprotoporphyrin IX). This compound is toxic, since it is a pro-oxidant and catalyzes the production of reactive oxygen species. Oxidative stress is believed to be generated during the conversion of heme (ferroprotoporphyrin) to hematin (ferriprotoporphyrin). Free hematin can also bind to and disrupt cell membranes, damaging cell structures and causing the lysis of the host erythrocyte. The unique reactivity of this molecule has been demonstrated under several in vitro and in vivo experimental conditions. The malaria parasite therefore detoxifies the hematin by biocrystallization, converting it into insoluble and chemically inert β-hematin crystals (called hemozoin).
In Plasmodium the food vacuole fills with hemozoin crystals, which are about 100–200 nanometres long and each contain about 80,000 heme molecules. Detoxification through biocrystallization is distinct from the detoxification process in mammals, where an enzyme called heme oxygenase instead breaks excess heme into biliverdin, iron, and carbon monoxide. Several mechanisms have been proposed for the production of hemozoin in Plasmodium, and the area is highly controversial, with membrane lipids, histidine-rich proteins, or even a combination of the two being proposed to catalyse the formation of hemozoin. Other authors have described a heme detoxification protein, which is claimed to be more potent than either lipids or histidine-rich proteins. It is possible that many processes contribute to the formation of hemozoin. The formation of hemozoin in other blood-feeding organisms is not as well studied as in Plasmodium. However, studies on Schistosoma mansoni have revealed that this parasitic worm produces large amounts of hemozoin during its growth in the human bloodstream. Although the shapes of the crystals are different from those produced by malaria parasites, chemical analysis of the pigment showed that it is made of hemozoin. In a similar manner, the crystals formed in the gut of the kissing bug Rhodnius prolixus during digestion of the blood meal also have a unique shape, but are composed of hemozoin. Hemozoin (Hz) formation in the R. prolixus midgut occurs under physiologically relevant physico-chemical conditions, and lipids play an important role in heme biocrystallization. Autocatalytic heme crystallization to Hz has been shown to be an inefficient process, and this conversion is further reduced as the Hz concentration increases. Several other mechanisms have been developed to protect a large variety of hematophagous organisms against the toxic effects of free heme. Mosquitoes digest their blood meals extracellularly and do not produce hemozoin. Heme is retained in the peritrophic matrix, a layer of protein and polysaccharides that covers the midgut and separates gut cells from the blood bolus. Although β-hematin can be produced spontaneously in assays at low pH, the development of a simple and reliable method to measure the production of hemozoin has been difficult. This is partly due to the continued uncertainty over which molecules are involved in producing hemozoin, and partly to the difficulty of distinguishing between aggregated or precipitated heme and genuine hemozoin. Current assays are sensitive and accurate, but they require multiple washing steps, so they are slow and not ideal for high-throughput screening. However, some screens have been performed with these assays.

Structure
β-Hematin crystals are made of dimers of hematin molecules that are, in turn, joined together by hydrogen bonds to form larger structures. In these dimers, an iron–oxygen coordinate bond links the central iron of one hematin to the oxygen of the carboxylate side-chain of the adjacent hematin. These reciprocal iron–oxygen bonds are highly unusual and have not been observed in any other porphyrin dimer. Although β-hematin could in principle be either a cyclic dimer or a linear polymer, a polymeric form has never been found in hemozoin, disproving the widely held idea that hemozoin is produced by an enzyme acting as a "heme polymerase". Hemozoin crystals have a distinct triclinic structure and are weakly magnetic. The difference between diamagnetic low-spin oxyhemoglobin and paramagnetic hemozoin can be used for isolation.
They also exhibit optical dichroism, meaning they absorb light more strongly along their length than across their width, enabling the automated detection of malaria. Under an applied magnetic field, hemozoin gives rise to an induced optical dichroism characteristic of the hemozoin concentration, and precise measurement of this induced dichroism (magnetic circular dichroism) may be used to determine the level of malarial infection.

Inhibitors
Hemozoin formation is an excellent drug target, since it is essential to malaria parasite survival and absent from the human host. The drug target hematin is host-derived and largely outside the genetic control of the parasite, which makes the development of drug resistance more difficult. Many clinically used drugs are thought to act by inhibiting the formation of hemozoin in the food vacuole. This prevents the detoxification of the heme released in this compartment, and kills the parasite. The best-understood examples of such hematin biocrystallization inhibitors are quinoline drugs such as chloroquine and mefloquine. These drugs bind to both free heme and hemozoin crystals, and therefore block the addition of new heme units onto the growing crystals. The small, most rapidly growing face is the face to which inhibitors are believed to bind.

Role in pathophysiology
Hemozoin is released into the circulation during reinfection and is phagocytosed in vivo and in vitro by host phagocytes, altering important functions in those cells. Most functional alterations were long-term postphagocytic effects, including erythropoiesis inhibition shown in vitro. In contrast, a powerful short-term stimulation of the oxidative burst of human monocytes was also shown to occur during phagocytosis of native hemozoin (nHZ). Lipid peroxidation non-enzymatically catalysed by hemozoin iron has been described in immune cells. Lipoperoxidation products, such as hydroxyeicosatetraenoic acids (HETEs) and 4-hydroxynonenal (4-HNE), are functionally involved in immunomodulation.

See also
Biocrystallization
Drug discovery
History of malaria
Parasitic diseases

References

Malaria
Biomolecules
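Because the field-induced dichroism scales with pigment concentration, a magneto-optical malaria test reduces, in its simplest form, to a linear calibration. The sketch below is only a schematic illustration of that idea; the slope value is hypothetical, and a real instrument would be calibrated against reference samples.

```python
# Hypothetical linear calibration of a magneto-optical hemozoin assay:
# S = k * c, where S is the induced dichroism signal (arbitrary units)
# and c is the hemozoin concentration.
K_CAL = 0.042  # hypothetical slope, a.u. per (ng/uL), from calibration standards

def hemozoin_concentration(signal_au):
    """Invert the assumed linear response to estimate concentration."""
    return signal_au / K_CAL

print(hemozoin_concentration(0.21))  # -> 5.0 ng/uL for a 0.21 a.u. reading
```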
Hemozoin
[ "Chemistry", "Biology" ]
2,047
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
17,383,864
https://en.wikipedia.org/wiki/Premazepam
Premazepam is a drug of the pyrrolodiazepine class. It is a partial agonist of benzodiazepine receptors and was shown in 1984 to possess both anxiolytic and sedative properties in humans, but it was never marketed.

Properties
The initial doses of premazepam given to human test subjects demonstrated similar psychological test results to those produced by diazepam. It was also demonstrated that initial dosing with premazepam produces similar sedative effects to diazepam, although psychomotor impairments are greater with premazepam than with diazepam after initial dosing. However, with repeated dosing for more than one day premazepam causes less sedation and less psychomotor impairment than diazepam. Premazepam possesses sedative and anxiolytic properties. Premazepam produces more slow-wave and less fast-wave EEG changes than diazepam. Tests have shown that 7.5 mg of premazepam is approximately equivalent to 5 mg of diazepam.

Pharmacology
Premazepam is a pyrrolodiazepine and acts as a partial agonist at benzodiazepine receptors. The mean time taken to reach peak plasma levels is 2 hours, and the mean half-life of premazepam in humans is 11.5 hours. About 90% of the drug is excreted in unchanged form. Of the remaining 10%, none of the metabolites showed any pharmacological activity. Thus premazepam produces no active metabolites in humans.

See also
Benzodiazepine
Benzodiazepine dependence
Benzodiazepine withdrawal syndrome
Long-term effects of benzodiazepines

References

Abandoned drugs
Designer drugs
GABAA receptor positive allosteric modulators
Lactams
Pyrrolodiazepines
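A half-life of 11.5 hours implies simple first-order elimination arithmetic: the fraction of drug remaining after time t is 0.5^(t/11.5 h). A minimal worked sketch, assuming ideal one-compartment first-order kinetics (which the text does not state explicitly):

```python
import math

T_HALF_H = 11.5  # hours, mean human half-life from the text

def fraction_remaining(t_hours):
    """First-order decay: C(t)/C0 = 0.5 ** (t / t_half)."""
    return 0.5 ** (t_hours / T_HALF_H)

k_elim = math.log(2) / T_HALF_H   # elimination rate constant, ~0.060 per hour
print(fraction_remaining(24))     # ~0.24 of a dose still present after one day
print(5 * T_HALF_H)               # ~57.5 h until ~97% of a dose is eliminated
```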
Premazepam
[ "Chemistry" ]
391
[ "Drug safety", "Abandoned drugs" ]
17,384,208
https://en.wikipedia.org/wiki/In%20Good%20King%20Charles%27s%20Golden%20Days
In Good King Charles's Golden Days is a play by George Bernard Shaw, subtitled A True History that Never Happened. It was written in 1938–39 as an "educational history film" for film director Gabriel Pascal in the aftermath of Pygmalion's cinema triumph. The cast of the proposed film were to be sumptuously clothed in 17th-century costumes, far beyond the resources of most theatre managements. However, by the time of its completion in May 1939, it had turned into a Shavian Restoration comedy. The title of the play is taken from the first line of the traditional song "The Vicar of Bray".

Plot
The setting is the English court during the reign of Charles II. A discussion play, it debates issues of nature, science, power and leadership between Charles ("Mr Rowley"), Isaac Newton, George Fox, and the artist Godfrey Kneller, with interventions by three of the king's mistresses (Barbara Villiers, Louise de Kérouaille, and Nell Gwynn). The short second act involves Charles in conversation with his queen, Catherine of Braganza.

Original production
Billed as "A history lesson in three scenes by Bernard Shaw", the first production was at the Malvern Festival Theatre on 12 August 1939, directed by H. K. Ayliff and designed by Paul Shelving. Cast:
Mrs Basham: Isobel Thornton
Sally: Betty Marsden
Isaac Newton: Cecil Trouncer
George Fox: Herbert Lomas
Mr Rowley (Charles II): Ernest Thesiger
Nell Gwynn: Eileen Beldon
Barbara Villiers: Daphne Heard
Louise de Kérouaille: Ina De La Haye
James, Duke of York: William Hutchison
Godfrey Kneller: Alec Clunes
Catherine of Braganza: Irene Vanbrugh

Ayliff's production first transferred to the Streatham Hill Theatre on 15 April 1940, then to the New Theatre in London on 9 May 1940. James Agate, writing for The Sunday Times, noted that the play was the best to have "come from the Shavian loom since Methuselah".

Revivals
Ernest Thesiger, who again played "Mr Rowley", revived the play at the Malvern Festival on 11 August 1949. It was also revived at the Malvern Festival Theatre in 1983. A radio production was broadcast on the BBC Third Programme on 18 September 1949, with Abraham Sofaer in the title role. The first North American production was on 24 January 1957 at the Downtown Theater on New York's East 4th Street, where it ran for nearly two years, one of the longest runs of any Shaw play in the USA (as noted by Lawrence Langner). A BBC production in the Play of the Month series, starring Sir John Gielgud as King Charles, was broadcast in February 1970.

References
In Good King Charles's Golden Days by Bernard Shaw, with 12 text illustrations by Feliks Topolski, Constable, London (1939)
File on Shaw, compiled by Margery Morgan, Methuen, London (1989)
Bernard Shaw, a biography by Michael Holroyd in five volumes, Chatto and Windus (1988–1992)
Shaw's preface to the play, first published in the collected edition of Geneva, Cymbeline Refinished and In Good King Charles's Golden Days, Constable (1947)
Bernard Shaw: The Complete Prefaces, volume III, 1930–1950, edited by Dan H. Laurence and Daniel J. Leary, Allen Lane, The Penguin Press (1997)

1939 plays
Plays set in the 17th century
Plays by George Bernard Shaw
Cultural depictions of Isaac Newton
Cultural depictions of Charles II of England
Cultural depictions of Barbara Palmer, 1st Duchess of Cleveland
Cultural depictions of Louise de Kérouaille, Duchess of Portsmouth
Cultural depictions of Nell Gwyn
Cultural depictions of Catherine of Braganza
In Good King Charles's Golden Days
[ "Astronomy" ]
785
[ "Cultural depictions of Isaac Newton", "Cultural depictions of astronomers" ]
17,384,910
https://en.wikipedia.org/wiki/Observer%20%28special%20relativity%29
In special relativity, an observer is a frame of reference from which a set of objects or events is being measured. Usually this is an inertial reference frame or "inertial observer". Less often an observer may be an arbitrary non-inertial reference frame, such as a Rindler frame, which may be called an "accelerating observer". The special relativity usage differs significantly from the ordinary English meaning of "observer". Reference frames are inherently nonlocal constructs, covering all of space and time or a nontrivial part of it; thus it does not make sense to speak of an observer (in the special relativistic sense) having a location. Also, an inertial observer cannot accelerate at a later time, nor can an accelerating observer stop accelerating. Physicists use the term "observer" as shorthand for a specific reference frame from which a set of objects or events is being measured. Speaking of an observer in special relativity is not specifically hypothesizing an individual person who is experiencing events, but rather a particular mathematical context from which objects and events are to be evaluated. The effects of special relativity occur whether or not there is a sentient being within the inertial reference frame to witness them.

History
Einstein made frequent use of the word "observer" (Beobachter) in his original 1905 paper on special relativity and in his early popular exposition of the subject. However, he used the term in its vernacular sense, referring for example to "the man at the railway-carriage window" or "observers who take the railway train as their reference-body" or "an observer inside who is equipped with apparatus". Here the reference body or coordinate system (a physical arrangement of metersticks and clocks which covers the region of spacetime where the events take place) is distinguished from the observer: an experimenter who assigns spacetime coordinates to events far from himself by observing (literally seeing) coincidences between those events and local features of the reference body. This distinction between the observer and the observer's "apparatus", such as coordinate systems and measurement tools, was dropped by many later writers, and today it is common to find the term "observer" used to imply an observer's associated coordinate system (usually assumed to be a coordinate lattice constructed from an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector, i.e. a frame field; see Doran). Where Einstein referred to "an observer who takes the train as his reference body" or "an observer located at the origin of the coordinate system", this group of modern writers says, for example, "an observer is represented by a coordinate system in the four variables of space and time" or "the observer in frame S finds that a certain event A occurs at the origin of his coordinate system". However, there is no unanimity on this point, with a number of authors continuing to prefer distinguishing the observer (as a concept related to state of motion) from the more abstract general mathematical notion of coordinate system (which can be, but need not be, related to motion). This approach places more emphasis on the many choices of description open to an observer. The observer is then identified with an observational reference frame, rather than with the combination of coordinate system, measurement apparatus and state of motion.
It has also been suggested that the term "observer" is antiquated and should be replaced by an observer team (or family of observers), in which each observer makes observations in their immediate vicinity, where delays are negligible, cooperating with the rest of the team to set up synchronized clocks across the entire region of observation, with all team members sending their various results back to a data collector for synthesis.

"Observer" as a form of relative coordinates
Relative direction is a concept found in many human languages. In English, a description of the spatial location of an object may use terms such as "left" and "right", which are relative to the speaker or relative to a particular object or perspective (e.g. "to your left, as you are facing the front door"). The degree to which such a description is subjective is rather subtle; see the Ozma Problem for an illustration of this. Some impersonal examples of relative direction in language are the nautical terms bow, aft, port, and starboard. These are relative, egocentric-type spatial terms, but they do not involve an ego: there is a bow, an aft, a port, and a starboard to a ship even when no one is aboard. Special relativity statements involving an "observer" articulate, in some measure, a similar kind of impersonal relative direction. An "observer" is a perspective in that it is a context from which events in other inertial reference frames are evaluated, but it is not the sort of perspective that a single particular person would have: it is not localized and it is not associated with a particular point in space, but rather with an entire inertial reference frame that may exist anywhere in the universe (given certain lengthy mathematical specifications and caveats).

Usage in other scientific disciplines
The term observer also has special meaning in other areas of science, such as quantum mechanics and information theory; see, for example, Schrödinger's cat and Maxwell's demon. In general relativity the term "observer" refers more commonly to a person (or a machine) making passive local measurements, a usage much closer to the ordinary English meaning of the word. In quantum mechanics, "observation" is synonymous with quantum measurement, "observer" with a measurement apparatus, and "observable" with what can be measured. This conflict of usages within physics is sometimes a source of confusion.

See also
Frame of reference
Minkowski diagram
Observer (disambiguation)

References

Special relativity
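In this frame-centric sense, switching "observer" just means re-expressing an event's coordinates via a Lorentz transformation. The following minimal sketch (a standard boost along the x axis, in units where c = 1; the numerical values are arbitrary) illustrates how one and the same event is evaluated from two inertial frames:

```python
import math

def boost_x(t, x, v):
    """Lorentz boost along x (c = 1): coordinates of the same event
    in a frame moving with velocity v relative to the original frame."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

event = (3.0, 4.0)               # (t, x) assigned by the first inertial frame
t2, x2 = boost_x(*event, v=0.6)  # the same event as evaluated in frame S'
print(t2, x2)                    # -> 0.75, 2.75 (gamma = 1.25)
```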
Observer (special relativity)
[ "Physics" ]
1,189
[ "Special relativity", "Theory of relativity" ]
17,385,860
https://en.wikipedia.org/wiki/Foldit
Foldit is an online puzzle video game about protein folding. It is part of an experimental research project developed by the University of Washington's Center for Game Science in collaboration with the UW Department of Biochemistry. The objective of Foldit is to fold the structures of selected proteins as perfectly as possible, using tools provided in the game. The highest-scoring solutions are analyzed by researchers, who determine whether or not there is a native structural configuration (native state) that can be applied to relevant proteins in the real world. Scientists can then use these solutions to target and eradicate diseases and create biological innovations. A 2010 paper in the science journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions.

History

Rosetta
Prof. David Baker, a protein research scientist at the University of Washington, founded the Foldit project. Seth Cooper was the lead game designer. Before starting the project, Baker and his laboratory coworkers relied on another research project named Rosetta to predict the native structures of various proteins using special computer protein structure prediction algorithms. Rosetta was eventually extended to use the power of distributed computing: the Rosetta@home program was made available for public download and displayed its protein-folding progress as a screensaver. Its results were sent to a central server for verification. Some Rosetta@home users became frustrated when they saw ways to solve protein structures but could not interact with the program. Hoping that humans could improve on the computers' attempts to solve protein structures, Baker approached David Salesin and Zoran Popović, computer science professors at the same university, to help conceptualize and build an interactive program, a video game, that would appeal to the public and help efforts to find native protein structures.

Foldit
Many of the same people who created Rosetta@home worked on Foldit. The public beta version was released in May 2008 and has had some 240,000 registered players. Since 2008, Foldit has participated in Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiments, submitting its best solutions to targets based on unknown protein structures. CASP is an international program to assess methods of protein structure prediction and identify those that are most productive.

Goals
Protein structure prediction is important in several fields of science, including bioinformatics, molecular biology, and medicine. Identifying natural proteins' structural configurations enables scientists to understand them better. This can lead to creating novel proteins by design, advances in treating disease, and solutions for other real-world problems such as invasive species, waste, and pollution. The process by which living beings create the primary structure of proteins, protein biosynthesis, is reasonably well understood, as is the means by which proteins are encoded as DNA. However, determining how a given protein's primary structure becomes a functioning three-dimensional structure, i.e. how the molecule folds, is more difficult. The general process is understood, but predicting a protein's eventual, functioning structure is computationally demanding.

Methods
Similarly to Rosetta@home, Foldit is a means to discover native protein structures faster through distributed computing.
However, Foldit has a greater emphasis on community collaboration through its forums, where users can collaborate on certain folds. Furthermore, Foldit's crowdsourced approach places a greater emphasis on the user. Foldit's virtual interaction and gamification create a unique and innovative environment with the potential to greatly advance protein folding research.

Virtual interaction
Foldit attempts to apply the human brain's three-dimensional pattern matching and spatial reasoning abilities to help solve the problem of protein structure prediction. Puzzles are based on well-understood proteins. By analysing how humans intuitively approach these puzzles, researchers hope to improve the algorithms used by protein-folding software. Foldit includes a series of tutorials in which users manipulate simple protein-like structures, and a periodically updated set of puzzles based on real proteins. It shows a graphical representation of each protein, which users can manipulate using a set of tools.

Gamification
Foldit's developers wanted to attract as many people as possible to the cause of protein folding. So, rather than only building a useful science tool, they used gamification (the inclusion of gaming elements) to make Foldit appealing and engaging to the general public. As a protein structure is modified, a score is calculated based on how well-folded the protein is, and a list of high scores for each puzzle is maintained. Foldit users may create and join groups, and members of groups can share puzzle solutions. Groups have been found to be useful in training new players. A separate list of group high scores is maintained, as well as two leaderboards for groups and individuals.

Accomplishments
Results from Foldit have been included in a number of scientific publications. Foldit players have been cited collectively as "Foldit players" or "Players, F." in some cases. Individual players have also been listed as authors on at least one paper, and on four related Protein Data Bank depositions. An August 2010 paper in the journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions, stating "[p]layers working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only conformational space but also the space of possible search strategies". A November 2011 article in PNAS compared "recipes" developed by Foldit players to Rosetta scripts developed by members of the Baker Lab at the University of Washington. The player-developed "Blue Fuse" recipe compared favorably with the scientists' "Fast Relax" algorithm. In 2011, Foldit players helped decipher the crystal structure of a retroviral protease from Mason-Pfizer monkey virus (M-PMV), a monkey virus which causes HIV/AIDS-like symptoms, a scientific problem that had been unsolved for 15 years. While the puzzle was available for three weeks, players produced in only ten days a 3D model of the enzyme that was accurate enough for molecular replacement. In January 2012, Scientific American reported that Foldit gamers achieved the first crowdsourced redesign of a protein, an enzyme that catalyses the Diels–Alder reactions widely used in synthetic chemistry. A team including David Baker in the Center for Game Science at the University of Washington in Seattle computationally designed the enzyme from scratch but found its potency needed improvement.
Foldit players reengineered the enzyme by adding 13 amino acids, increasing its activity by more than 18 times. A September 2016 article in Nature Communications detailed a "crystallographic model-building competition between trained crystallographers, undergraduate students, Foldit players and automatic model-building algorithms" in which "a team of Foldit players achieved the most accurate structure" fitting a protein to the results of an X-ray crystallography experiment. A July 2018 article in Nature Communications reviewed the collaboration between Foldit players and teams in the WeFold consortium in the biennial CASP competitions CASP11 and CASP12. A June 2019 letter in Nature described the analysis of proteins designed by Foldit players. Four player-designed proteins were successfully expressed in E. coli and then "solved" via X-ray crystallography. The proteins were added to the Protein Data Bank as 6MRR, 6MRS, 6MSP, and 6NUK. In November 2019, an article in PLOS Biology reported that Foldit players were able to "build protein structures into crystallographic, high-resolution maps more accurately than expert crystallographers or automated model-building algorithms" using data from cryo-EM experiments.

Future development
Foldit's toolbox is mainly for the design of protein molecules. The game's creator announced a plan to add, by 2013, the chemical building blocks of organic subcomponents to enable players to design small molecules. The small-molecule design system, termed Drugit, was tested on the von Hippel–Lindau tumor suppressor (VHL). Results of the VHL experiment were presented in a March 2023 preprint paper and at an August 2023 American Chemical Society conference session.

See also
Citizen science
Rosetta@home
EteRNA
Eyewire
Folding@home
Human-based computation game
Molecular graphics
Comparison of software for molecular mechanics modeling
Predictor@home
Quantum Moves
Protein structure prediction
Protein structure prediction software
Serious game

References

External links
Official Foldit website

2008 video games
Linux games
MacOS games
Windows games
Puzzle video games
Human-based computation games
Lua (programming language)-scripted video games
Structural bioinformatics software
Computational biology
Molecular biology
Protein folding
Protein structure
Gamification
Video games developed in the United States
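Foldit's real score derives from the Rosetta energy function, which is far richer than anything reproducible here. As a toy illustration of the general idea that "well-folded" is scored through a physical energy, the sketch below scores a set of residue coordinates with a simple pairwise Lennard-Jones term; the parameter values are arbitrary stand-ins, not Rosetta's.

```python
import itertools, math

def lennard_jones(r, epsilon=1.0, sigma=3.8):
    """Toy pairwise interaction energy at distance r (angstroms)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

def toy_score(residues):
    """Higher is better: negate the summed pair energy, Foldit-style."""
    return -sum(lennard_jones(math.dist(a, b))
                for a, b in itertools.combinations(residues, 2))

# Three residue positions (x, y, z); nudging them changes the score,
# just as moving the backbone changes a Foldit score.
print(toy_score([(0, 0, 0), (4.2, 0, 0), (8.4, 0, 0)]))
```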
Foldit
[ "Chemistry", "Biology" ]
1,759
[ "Structural biology", "Computational biology", "Biochemistry", "Protein structure", "Molecular biology" ]
17,387,004
https://en.wikipedia.org/wiki/Fat%20globule
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism's body, and it is found in every type of adipocyte. A fat globule can consist of a vacuole, a droplet of triglyceride, or another blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and their digestive derivatives must be transported in globular form within the cell, blood, and tissue spaces. The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed, which can then partake in neutral lipid synthesis or lipolysis. Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats. Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.

Milk fat globules (MFGs) are another form of intracellular fat, found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the mother to her offspring. They are formed in the endoplasmic reticulum of the lactating mammary epithelial cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins such as adipophilin and TIP47. The proteins are spread throughout the ER membrane and fuse with the droplets before they are released from the ER. The ER releases the droplets into the cytosol of the lactating mammary epithelial cell. While in the cytosol, proteins and polar lipids coat the droplets and form globules of various sizes. MFGs can exist in various diameters, ranging from 1 μm to 8 μm and, on rare occasions, even larger.

See also
Steatosis

Bibliography
Barisch, Caroline; Soldati, Thierry (2017). "Breaking fat! How mycobacteria and other intracellular pathogens manipulate host lipid droplets". Biochimie 141: 54–61. doi:10.1016/j.biochi.2017.06.001. ISSN 0300-9084.
Heid, Hans W.; Keenan, Thomas W. (2005). "Intracellular origin and secretion of milk fat globules". European Journal of Cell Biology: 245–58.
Martini, Mina; Salari, Federica; Altomonte, Iolanda (2016). "The Macrostructure of Milk Lipids: The Fat Globules". Critical Reviews in Food Science and Nutrition 56 (7): 1209–1221. doi:10.1080/10408398.2012.758626. ISSN 1040-8398. PMID 24915408.

Cell biology
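The digestive advantage of emulsification described above is simple geometry: splitting one droplet into many conserves volume while multiplying surface area. A short worked sketch using the text's rough 200-fold size difference between emulsion droplets and micelles (the absolute sizes here are illustrative, not from the source):

```python
# One emulsion droplet of radius R split into micelles of radius r = R/200,
# conserving total volume: n * (4/3)*pi*r**3 = (4/3)*pi*R**3  ->  n = (R/r)**3
R = 100.0    # illustrative emulsion-droplet radius, micrometres
r = R / 200  # micelle radius, per the ~200x size difference in the text

n = (R / r) ** 3               # 8,000,000 micelles from one droplet
area_ratio = n * r**2 / R**2   # = R/r: total surface area grows 200-fold
print(n, area_ratio)
```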
Fat globule
[ "Biology" ]
801
[ "Cell biology" ]
17,387,252
https://en.wikipedia.org/wiki/Pend
In Scotland, a pend is a passageway through a building, often from a street through to a courtyard or "back court", and may provide access for both vehicles and pedestrians or for pedestrians only. The term "common pend" can often be found in descriptions of Scottish property for sale, such as "a common pend shared with the residential dwellings above". A pend is distinct from a vennel or a close, as it has rooms directly above it, whereas vennels and closes tend not to be covered over and are typically passageways between separate buildings. However, a "close" also means a common entry to multi-dwelling tenement properties in Scotland.

Etymology
The OED suggests that the etymology of the word is probably related to the archaic verb pend, meaning "arch, arch over, vault", this in turn being derived from the French pendre and Latin pendēre, "to hang", from which the word pendulum also derives.

References

Architecture in Scotland
Architectural elements
Pend
[ "Technology", "Engineering" ]
204
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
17,387,312
https://en.wikipedia.org/wiki/Cation-anion%20radius%20ratio
In condensed matter physics and inorganic chemistry, the cation-anion radius ratio can be used to predict the crystal structure of an ionic compound based on the relative sizes of its atoms. It is defined as the ratio of the ionic radius of the positively charged cation to the ionic radius of the negatively charged anion in a cation-anion compound. Anions are larger than cations. Large anions occupy lattice sites, while small cations are found in voids. In a given structure, the ratio of cation radius to anion radius is called the radius ratio. This is simply given by \( \rho = r_C / r_A \), where \( r_C \) and \( r_A \) are the ionic radii of the cation and anion, respectively.

Ratio rule and stability
The radius ratio rule defines a critical radius ratio for different crystal structures, based on their coordination geometry. The idea is that the anions and cations can be treated as incompressible spheres, meaning the crystal structure can be seen as a kind of unequal sphere packing. The allowed size of the cation for a given structure is determined by the critical radius ratio. If the cation is too small, it will pull the anions into contact with one another, and the compound will be unstable due to anion-anion repulsion; this occurs when the radius ratio drops below the critical radius ratio for that particular structure. At the stability limit the cation is touching all the anions and the anions are just touching at their edges. For radius ratios greater than the critical radius ratio, the structure is expected to be stable. The rule is not obeyed for all compounds: by one estimate, the crystal structure can only be guessed correctly about 2/3 of the time. Errors in prediction are partly due to the fact that real chemical compounds are not purely ionic; they display some covalent character. The table below gives the relation between the critical radius ratio \( \rho_c \) and the coordination number \( n \), which may be obtained from a simple geometrical proof.

Coordination number    Geometry           Critical radius ratio
3                      trigonal planar    2/√3 − 1 ≈ 0.155
4                      tetrahedral        √(3/2) − 1 ≈ 0.225
6                      octahedral         √2 − 1 ≈ 0.414
8                      cubic              √3 − 1 ≈ 0.732
12                     close-packed       1.000

History
The radius ratio rule was first proposed by Gustav F. Hüttig in 1920. In 1926, Victor Goldschmidt extended its use to ionic lattices. In 1929, the rule was incorporated as the first of Pauling's rules for crystal structures.

See also
Goldschmidt tolerance factor
Pauling's rules
Cubic crystal system
Sphere packing

References

Crystallography
Inorganic chemistry
Ratios
Atomic radius
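The critical ratios in the table follow from elementary trigonometry: at the stability limit the cation touches all surrounding anions while neighbouring anions touch each other. A short sketch verifying those values and applying the rule to one example; the NaCl radii used are common textbook Shannon values quoted from memory, not taken from the source article.

```python
import math

# Critical radius ratios from hard-sphere geometry, by coordination number.
CRITICAL = {
    3: 2 / math.sqrt(3) - 1,   # trigonal planar  ~0.155
    4: math.sqrt(1.5) - 1,     # tetrahedral      ~0.225
    6: math.sqrt(2) - 1,       # octahedral       ~0.414
    8: math.sqrt(3) - 1,       # cubic            ~0.732
}

def predicted_coordination(r_cation, r_anion):
    """Largest coordination number whose critical ratio is still exceeded."""
    rho = r_cation / r_anion
    stable = [n for n, limit in CRITICAL.items() if rho >= limit]
    return max(stable), rho

# NaCl: r(Na+) ~1.02 A, r(Cl-) ~1.81 A -> rho ~0.56, between 0.414 and 0.732
print(predicted_coordination(1.02, 1.81))  # -> (6, 0.5635...): octahedral,
                                           # as observed in the rock-salt structure
```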
Cation-anion radius ratio
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
468
[ "Materials science", "Ratios", "Atomic radius", "Crystallography", "Arithmetic", "Condensed matter physics", "nan", "Atoms", "Matter" ]
17,387,711
https://en.wikipedia.org/wiki/Animal%20pound
An animal pound is a place where stray livestock were impounded. Animals were kept in a dedicated enclosure until claimed by their owners, or sold to cover the costs of impounding.

Etymology
The terms "pinfold" and "pound" are Saxon in origin. Pundfald and pund both mean an enclosure. There appears to be no difference between a pinfold and a village pound. The person in charge of the pinfold was the "pinder", giving rise to the surname Pinder.

Village pound or pinfold
The village pound was a feature of most English medieval villages, and pounds were also found in the English colonies of North America and in Ireland. A high-walled and lockable structure served several purposes; the most common use was to hold stray sheep, pigs and cattle until they were claimed by the owners, usually on payment of a fine or levy. Early pounds had just briar hedges, but most were later built in stone or brick, making them more stock-proof. The size and shape of village pounds varies: some are four-sided (rectangular, square or irregular), while others are circular, and in size they range from a few square metres upwards. Pounds are known to date from the medieval period. By the 16th century most villages and townships would have had a pound. Most of those that remain today date from the 16th and 17th centuries. Some are listed buildings, but most have fallen into disrepair. The Sussex County Magazine commented on village pounds in 1930. Although pounds are most common in England, there are also examples in other countries. In Americans and Their Forests: a Historical Geography, author Michael Williams writes: "There was hardly a town in eighteenth-century New England without its town pound..." In some mountainous areas of northern Spain (such as Cantabria or Asturias), similar enclosures are traditionally used to protect beehives from bear attacks.

Cultural references
The artist Andy Goldsworthy has produced a series of sculptures in several of the pinfolds in Cumbria.

See also
Kraal
Pen (enclosure)
Scarisbrick, Lancashire, in which is the hamlet of Pinfold
List of extant pinfolds in Cheshire
Village lock-up
Poundmaster

Notes

References

External links
Photos of examples of village pounds today on Geograph
Google Maps aerial view of a pinfold in Hougham, Lincolnshire

Agricultural buildings
Animal equipment
Animal welfare
Buildings and structures used to confine animals
Society in medieval England
Animal pound
[ "Biology" ]
515
[ "Animal equipment", "Animals" ]
17,388,115
https://en.wikipedia.org/wiki/Floating%20ground
A floating ground is a reference point for electrical potential in a circuit which is galvanically isolated from actual earth ground. Most electrical circuits have a ground which is electrically connected to the Earth, hence the name "ground". The ground is said to be floating when this connection does not exist. Conductors are also described as having a floating voltage if they are not connected electrically to another, non-floating (grounded) conductor. Without such a connection, voltages and current flows are induced by electromagnetic fields or charge accumulation within the conductor rather than by the usual external potential difference of a power source.

Applications
Electrical equipment may be designed with a floating ground for one of several reasons. One is safety. For example, a low-voltage DC power supply, such as a mobile phone charger, is connected to the mains through a transformer of one type or another, and there is no direct electrical connection between the current return path on the low-voltage side and physical ground (earth). Ensuring that there is no electrical connection between mains voltage and the low-voltage plug makes it much easier to guarantee the safety of the supply. It also allows the charger to connect safely to live and neutral only, which allows a two-prong plug in countries where this is relevant. Indeed, any home appliance with a two-prong plug must have a floating ground. Another application is in electronic test equipment. Suppose you wish to measure a 0.5 V potential difference between two wires that are both approximately 100 V above Earth ground. If your measuring device has to connect to Earth, some of its electronic components must deal with a 100 V potential difference across their terminals. If the whole device floats, then its electronics will only see the 0.5 V difference, allowing more delicate components to be used, which can make more precise measurements. Such devices are often battery powered. Other applications include aircraft and spacecraft, where a direct connection to Earth ground is physically impossible when in flight. Finally, a floating ground can help eliminate ground loops, which reduces the noise coupled into the system. Systems isolated in this manner can and do drift in potential, and if the isolating transformer is capable of supplying much power, they can be dangerous. This is particularly likely if the floated system is near high-voltage power lines. To reduce the danger of electric shocks, the chassis of the instruments are usually connected separately to Earth ground.

Safety
Floating grounds can be dangerous if they are caused by failure to properly ground equipment that was designed to require grounding, because the chassis can be at a very different potential from that of any nearby organisms, who then get an electric shock upon touching it. Live-chassis TVs, also known as hot-chassis TVs, in which the set's ground is derived by rectifying live mains, were common until the 1990s. Exposed live grounds are dangerous: they are live, and can electrocute end users if touched. Headphone sockets fitted by end users to live-chassis TVs are especially dangerous, as not only are they often live, but any electrical shock will pass through the user's head. Sets with a headphone socket and a live chassis use an audio isolation transformer to make the arrangement safe.
Floating grounds can cause problems with audio equipment using RCA connectors (also called phono connectors). With these common connectors, the signal pin connects before the ground, and two pieces of equipment can have a greater difference between their grounds than it takes to saturate the audio input. As a result, plugging or unplugging while powered up can produce very loud noises in the speakers. If the ground voltage difference is small, it tends only to cause hum and clicks. A residual current device can be incorporated into a system to reduce, but not eliminate, the risks caused by a floating ground.

See also
Chassis ground
Floating-gate MOSFET
Cheater plug

References

Electronic circuits
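The benefit of floating the measurement front end in the 0.5 V example above can be put in numbers. An earth-referenced instrument rejects its ~100 V common-mode offset only as well as its common-mode rejection ratio (CMRR) allows; the sketch below assumes a hypothetical 80 dB CMRR purely for illustration.

```python
v_common_mode = 100.0  # both wires sit ~100 V above earth (from the example above)
v_signal = 0.5         # the difference actually being measured, volts
cmrr_db = 80.0         # hypothetical rejection of an earth-referenced front end

# Common-mode leakage that appears as a spurious part of the reading:
v_error = v_common_mode / (10 ** (cmrr_db / 20))  # = 0.01 V
print(v_error, v_error / v_signal)                # 10 mV -> a 2% reading error

# A floating front end sees only the 0.5 V difference, so this error term
# (and the need for 100 V-rated input components) disappears.
```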
Floating ground
[ "Engineering" ]
801
[ "Electronic engineering", "Electronic circuits" ]
17,388,125
https://en.wikipedia.org/wiki/Oxide%20dispersion-strengthened%20alloy
Oxide dispersion-strengthened (ODS) alloys are alloys that consist of a metal matrix with small oxide particles dispersed within it. They have high heat resistance, strength, and ductility. Nickel-based ODS alloys are the most common, but iron-aluminum ODS alloys also exist. Applications include high-temperature turbine blades and heat-exchanger tubing, while ODS steels are used in nuclear applications. ODS materials are used on spacecraft to protect the vehicle, especially during re-entry. Noble-metal ODS alloys, for example platinum-based alloys, are used in glass production. At hypersonic re-entry speeds the properties of gases change dramatically: shock waves are created that can cause serious damage to any structure, and at these speeds and temperatures oxygen becomes aggressive.

Mechanism
Oxide dispersion strengthening is based on the incoherency of the oxide particles within the lattice of the material. Coherent particles have a continuous lattice plane from the matrix to the particle, whereas incoherent particles do not have this continuity, and both lattice planes end at the interface. This mismatch in interfaces results in a high interfacial energy, which impedes dislocation motion. The oxide particles are stable in the matrix, which helps prevent creep. Particle stability implies little dimensional change or embrittlement, little effect on properties, stable particle spacing, and general resistance to change at high temperatures. Since the oxide particles are incoherent, dislocations can only overcome the particles by climb. If instead the particles were semi-coherent or coherent with the lattice, dislocations could simply cut the particles by a more energetically favourable process called dislocation glide, or bow between particles by Orowan bowing, both of which are athermal mechanisms. Dislocation climb is a diffusional process, which is less energetically favourable and mostly occurs at higher temperatures, where enough energy is available to advance via the addition and removal of atoms. Because the particles are incoherent, glide mechanisms alone are not enough, and the more energetically demanding climb process dominates, meaning that dislocations are stopped more effectively. Climb can occur either at the particle-dislocation interface (local climb) or by overcoming multiple particles at once (general climb). In local climb, the part of the dislocation that is between two particles stays in the glide plane while the rest of the dislocation climbs along the surface of the particle. In general climb, the dislocation comes out of the glide plane entirely. General climb requires less energy because the mechanism decreases the dislocation line length, which reduces the elastic strain energy, and is therefore the common climb mechanism. For γ' volume fractions of 0.4 to 0.6 in nickel-based alloys, the threshold stress for local climb is only about 1.25 to 1.40 times higher than that for general climb. Dislocations are not limited to either all-local or all-general climb: whichever path requires less energy is taken. Cooperative climb is an example of a more nuanced mechanism, in which a dislocation travels around a group of particles rather than climbing past each particle individually. McLean stated that the dislocation is most relaxed when climbing over multiple particles, because some of the abrupt transitions between segments in the glide plane and segments travelling along particle surfaces are skipped.
The presence of incoherent particles introduces a threshold stress \( \sigma_t \), since an additional stress must be applied for dislocations to move past the oxides by climb. After overcoming a particle by climb, a dislocation can remain pinned at the particle-matrix interface by an attractive phenomenon called interfacial pinning; freeing the dislocation from this pinning requires an additional threshold stress, which must be overcome for plastic deformation to occur. This detachment phenomenon is a result of the interaction between the particle and the dislocation, in which the total elastic strain energy is reduced. Schroder and Arzt explain that the additional stress required is due to the relaxation caused by the reduction in the stress field as the dislocation climbs and accommodates the shear traction. For detachment-controlled creep, the strain rate and threshold shear stress take the form

Strain rate:
\[ \dot{\varepsilon} = \dot{\varepsilon}_0 \exp\!\left[ -\frac{G b^2 r}{k_B T}\,(1 - k)^{3/2} \left( 1 - \frac{\tau}{\tau_d} \right)^{3/2} \right] \]

Threshold shear stress:
\[ \tau_d = \tau_{Or}\sqrt{1 - k^2}, \qquad \tau_{Or} \approx \frac{G b}{\lambda} \]

where \( G \) is the shear modulus, \( b \) the magnitude of the Burgers vector, \( r \) the particle radius, \( \lambda \) the interparticle spacing, \( \tau_{Or} \) the Orowan stress, and \( k \leq 1 \) a relaxation parameter describing how strongly the dislocation is attracted to the particle interface (at \( k = 1 \) there is no attraction and hence no detachment threshold).

Synthesis

Ball-milling
The creep properties of ODS steels depend on the characteristics of the oxide particles in the metal matrix, specifically their ability to prevent dislocation motion, as well as the size and distribution of the particles. Hoelzer and coworkers showed that an alloy containing a homogeneous dispersion of 1–5 nm Y2Ti2O7 nanoclusters has superior creep properties to an alloy with a heterogeneous dispersion of 5–20 nm nanoclusters of the same composition. ODS steels are commonly produced by ball-milling an oxide of interest (e.g. Y2O3, Al2O3) with pre-alloyed metal powders, followed by compression and sintering. It is believed that the oxides enter into solid solution with the metal during ball-milling and subsequently precipitate during the thermal treatment. This process seems simple, but many parameters need to be carefully controlled to produce a successful alloy. Leseigneur and coworkers carefully controlled some of these parameters and achieved more consistent and better microstructures. In this two-step method the oxide is ball-milled for longer periods to ensure a homogeneous solid solution of the oxide. The powder is annealed at higher temperatures to begin a controlled nucleation of the oxide clusters. Finally the powder is again compressed and sintered to yield the final material.

Additive manufacturing
NASA used ResonantAcoustic mixing and additive manufacturing to synthesize an alloy they termed GRX-810, which survived temperatures over 1,090 °C (2,000 °F). The alloy also featured improved strength, malleability, and durability. The printer dispersed oxide particles uniformly throughout the metal matrix. The alloy was identified using 30 simulations of thermodynamic modeling.

Advantages and disadvantages
Advantages:
Can be machined, brazed, formed, and cut with available processes.
Develops a protective oxide layer that is self-healing.
This oxide layer is stable and has a high emission coefficient.
Allows the design of thin-walled (sandwich) structures.
Resistant to harsh weather conditions in the troposphere.
Low maintenance cost.
Low material cost.

Disadvantages:
Higher expansion coefficient than other materials, causing higher thermal stresses.
Higher density.
Lower maximum allowable temperature.

See also
Superalloy

References

Alloys
Metallurgy
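The detachment-controlled creep expression above is easy to evaluate numerically. The sketch below plugs in round-number material parameters, all hypothetical and chosen only to show the strong sensitivity of the rate to the applied stress and the relaxation parameter k:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def detachment_creep_rate(tau, tau_d, G, b, r, T, k, rate0=1.0):
    """Relative creep rate for detachment-controlled climb (sketch of the
    equation above); returns rate0 once the threshold tau_d is reached."""
    if tau >= tau_d:
        return rate0  # at/above threshold, detachment is no longer rate-limiting
    arg = (G * b**2 * r / (K_B * T)) * (1 - k)**1.5 * (1 - tau / tau_d)**1.5
    return rate0 * math.exp(-arg)

# Hypothetical round numbers: G = 80 GPa, b = 0.25 nm, r = 10 nm, T = 1000 K.
rate_near = detachment_creep_rate(0.9, 1.0, 80e9, 0.25e-9, 10e-9, 1000.0, k=0.9)
rate_far  = detachment_creep_rate(0.5, 1.0, 80e9, 0.25e-9, 10e-9, 1000.0, k=0.9)
print(rate_near, rate_far)  # the rate collapses as stress falls below threshold
```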
Oxide dispersion-strengthened alloy
[ "Chemistry", "Materials_science", "Engineering" ]
1,335
[ "Metallurgy", "Materials science", "Alloys", "Chemical mixtures", "nan" ]
17,389,142
https://en.wikipedia.org/wiki/System%20appreciation
System appreciation is an activity often included in the maintenance phase of software engineering projects. Key deliverables from this phase include documentation that describes what the system does in terms of its functional features, and how it achieves those features in terms of its architecture and design. Software architecture recovery is often the first step in system appreciation. References Further reading Software engineering
System appreciation
[ "Technology", "Engineering" ]
72
[ "Systems engineering", "Computer engineering", "Software engineering stubs", "Software engineering", "Information technology" ]
17,389,565
https://en.wikipedia.org/wiki/Joint%20Battlespace%20Infosphere
The Joint Battlespace Infosphere is a project funded by the AFRL (Air Force Research Lab) intended to provide management for network-centric warfare systems that utilize the GIG (Global Information Grid). References Grid computing Military communications
Joint Battlespace Infosphere
[ "Engineering" ]
48
[ "Military communications", "Telecommunications engineering" ]
17,389,946
https://en.wikipedia.org/wiki/Crying
Crying is the dropping of tears (or welling of tears in the eyes) in response to an emotional state or physical pain. Emotions that can lead to crying include sadness, anger, joy, and fear. Crying can also be caused by relief from a period of stress or anxiety, or can occur as an empathetic response. The act of crying has been defined as "a complex secretomotor phenomenon characterized by the shedding of tears from the lacrimal apparatus, without any irritation of the ocular structures", which instead gives a relief that protects from conjunctivitis. A related medical term is lacrimation, which also refers to the non-emotional shedding of tears. Various forms of crying are known as sobbing, weeping, wailing, whimpering, bawling, and blubbering. For crying to be described as sobbing, it usually has to be accompanied by a set of other symptoms, such as slow but erratic inhalation, occasional instances of breath holding, and muscular tremor. A neuronal connection between the lacrimal gland and the areas of the human brain involved with emotion has been established. Tears produced during emotional crying have a chemical composition which differs from that of other types of tears. They contain significantly greater quantities of the hormones prolactin, adrenocorticotropic hormone, and Leu-enkephalin, and of the elements potassium and manganese.

Function

The question of the function or origin of emotional tears remains open. Theories range from the simple, such as a response to inflicted pain, to the more complex, including nonverbal communication intended to elicit altruistic helping behaviour from others. Some have also claimed that crying can serve several biochemical purposes, such as relieving stress and clearing the eyes. There is some empirical evidence that crying lowers stress levels, potentially due to the release of hormones such as oxytocin. Crying is believed to be an outlet for, or a result of, a burst of intense emotional sensations, such as agony, surprise, or joy. This theory could explain why people cry during cheerful events as well as very painful ones. Individuals tend to remember the positive aspects of crying, and may create a link between it and other simultaneous positive events, such as resolving feelings of grief. Together, these features of memory reinforce the idea that crying helped the individual. In Hippocratic and medieval medicine, tears were associated with the bodily humors, and crying was seen as purgation of excess humors from the brain. William James thought of emotions as reflexes prior to rational thought, believing that the physiological response, as if to stress or irritation, is a precondition to cognitively becoming aware of emotions such as fear or anger. William H. Frey II, a biochemist at the University of Minnesota, proposed that people feel "better" after crying due to the elimination of hormones associated with stress, specifically adrenocorticotropic hormone. This, paired with increased mucosal secretion during crying, could support a theory that crying is a mechanism developed in humans to dispose of this stress hormone when levels grow too high. However, tears have only a limited ability to eliminate chemicals, which weakens this theory. Recent psychological theories of crying emphasize the relationship of crying to the experience of perceived helplessness. From this perspective, an underlying experience of helplessness can usually explain why people cry.
For example, a person may cry after receiving surprisingly happy news, ostensibly because the person feels powerless or unable to influence what is happening. Emotional tears have also been put into an evolutionary context. One study proposes that crying, by blurring vision, can handicap aggressive or defensive actions, and may function as a reliable signal of appeasement, need, or attachment. Oren Hasson, an evolutionary psychologist in the zoology department at Tel Aviv University, believes that crying shows vulnerability and submission to an attacker, solicits sympathy and aid from bystanders, and signals shared emotional attachments. Another theory that follows evolutionary psychology is given by Paul D. MacLean, who suggests that the vocal part of crying was used first as a "separation cry" to help reunite parents and offspring. The tears, he speculates, are a result of a link between the development of the cerebrum and the discovery of fire. MacLean theorizes that since early humans must have relied heavily on fire, their eyes frequently produced reflexive tears in response to the smoke. As humans evolved, the smoke possibly gained a strong association with the loss of life and, therefore, sorrow. In 2017, Carlo Bellieni analysed weeping behavior and concluded that most animals can cry but only humans have psychoemotional shedding of tears, also known as "weeping". Weeping is a behavior that induces empathy, perhaps with the mediation of the mirror-neuron network, and influences mood through the release of hormones elicited by the massage effect of the tears on the cheeks, or through the relief of the sobbing rhythm. Many ethologists would disagree.

Biological response

It can be very difficult to observe the biological effects of crying, especially considering that many psychologists believe the environment in which a person cries can alter the experience of the crier. Laboratory studies have shown several physical effects of crying, such as increased heart rate, sweating, and slowed breathing. The type of effect an individual experiences appears to depend largely on the individual, but for many it seems that the calming effects of crying, such as slowed breathing, outlast the negative effects, which could explain why people remember crying as being helpful and beneficial.

Globus sensation

The most common side effect of crying is feeling a lump in the throat, otherwise known as a globus sensation. Although many things can cause a globus sensation, the one experienced in crying is a response to the stress experienced by the sympathetic nervous system. When an animal is threatened by some form of danger, the sympathetic nervous system triggers several processes to allow the animal to fight or flee. This includes shutting down unnecessary body functions, such as digestion, and increasing blood flow and oxygen to necessary muscles. When an individual experiences emotions such as sorrow, the sympathetic nervous system still responds in this way. Another function increased by the sympathetic nervous system is breathing, which includes opening the throat in order to increase air flow. This is done by expanding the glottis, which allows more air to pass through. As an individual undergoes this sympathetic response, the parasympathetic nervous system eventually attempts to undo it by decreasing high-stress activities and increasing recuperative processes, including digestion.
This involves swallowing, a process which requires closing the fully expanded glottis to prevent food from entering the larynx. The glottis, however, attempts to remain open as an individual cries. This fight to close the glottis creates a sensation that feels like a lump in the individual's throat. Other common side effects of crying are quivering lips, a runny nose, and an unsteady, cracking voice.

Frequency

According to the German Society of Ophthalmology, which has collated different scientific studies on crying, the average woman cries between 30 and 64 times a year, and the average man cries between 6 and 17 times a year. Men tend to cry for between two and four minutes, and women cry for about six minutes. Crying turns into sobbing for women in 65% of cases, compared to just 6% for men. Before adolescence, no difference between the sexes is found. The gap between how often men and women cry is larger in countries with more wealth, democracy, and gender egalitarianism.

In infants

Infants can shed tears from approximately four to eight weeks of age. Crying is critical when a baby is first born: the ability to cry upon delivery signals that the newborn can breathe independently and reflects a successful adaptation to life outside the womb. Although crying is an infant's mode of communication, it is not limited to a monotonous sound. Three different types of cries are apparent in infants. The first of these is the basic cry, a systematic cry with a pattern of crying and silence. The basic cry starts with a cry coupled with a briefer silence, which is followed by a short, high-pitched inspiratory whistle. Then there is a brief silence followed by another cry. Hunger is a main stimulant of the basic cry. The anger cry is much like the basic cry, but with more excess air forced through the vocal cords, making it a louder, more abrupt cry. This type of cry is characterized by the same temporal sequence as the basic pattern but is distinguished by differences in the length of the various phase components. The third cry is the pain cry, which, unlike the other two, has no preliminary moaning. The pain cry is one loud cry followed by a period of breath holding. Most adults can determine whether an infant's cries signify anger or pain. Most parents also have a better ability to distinguish their own infant's cries than those of a different child. A 2009 study found that babies mimic their parents' pitch contour: French infants wail on a rising note while German infants favor a falling melody. Carlo Bellieni found a correlation between the features of babies' crying and the level of pain, though he found no direct correlation between the cause of crying and its characteristics. T. Berry Brazelton has suggested that overstimulation may be a contributing factor to infant crying and that periods of active crying might serve the purpose of discharging overstimulation and helping the baby's nervous system regain homeostasis. Sheila Kitzinger found a correlation between the mother's prenatal stress level and the later amount of crying by the infant. She also found a correlation between birth trauma and crying. Mothers who had experienced obstetrical interventions or who were made to feel powerless during birth had babies who cried more than other babies. Rather than trying one remedy after another to stop this crying, she suggested that mothers hold their babies and allow the crying to run its course. Other studies have supported Kitzinger's findings.
Babies who had experienced birth complications had longer crying spells at three months of age and awakened more frequently at night crying. Based on these various findings, Aletha Solter has proposed a general emotional-release theory of infant crying. When infants cry for no obvious reason after all other causes (such as hunger or pain) are ruled out, she suggests that the crying may signify a beneficial stress-release mechanism. She recommends the "crying-in-arms" approach as a way to comfort these infants. Another way of comforting and calming the baby is to mimic the familiarity and coziness of the mother's womb. Robert Hamilton developed a technique for parents by which a baby may be calmed and stop crying in five seconds. A study published in Current Biology has shown that parents with experience of children are often better at identifying types of cries than those without such experience.

Categorizing dimensions

There have been many attempts to differentiate between the two distinct types of crying: positive and negative. Different perspectives have been broken down into three dimensions to examine the emotions being felt and to grasp the contrast between the two types. The spatial perspective explains sad crying as reaching out to be "there", such as at home or with a person who may have just died. In contrast, joyful crying acknowledges being "here"; it emphasizes the intense awareness of one's location, such as at a relative's wedding. The temporal perspective explains crying slightly differently: sorrowful crying is due to looking to the past with regret or to the future with dread, as when crying results from losing someone and regretting not spending more time with them, or from nervousness about an upcoming event. Crying as a result of happiness would then be a response to a moment as if it were eternal; the person is frozen in a blissful, immortalized present. The last dimension is known as the public-private perspective. This describes the two types of crying as ways to imply details about the self as known privately or about one's public identity. For example, crying due to a loss is a message to the outside world that pleads for help in coping with internal sufferings. Or, as Arthur Schopenhauer suggested, sorrowful crying is a method of self-pity or self-regard, a way one comforts oneself. Joyful crying, in contrast, is a recognition of beauty, glory, or wonderfulness.

Religious views

In Orthodox and Catholic Christianity, tears are considered to be a sign of genuine repentance, and a desirable thing in many cases. Tears of true contrition are thought to be sacramental, helpful in forgiving sins, in that they recall the Baptism of the penitent. The Shia Ithna Ashari (Muslims who believe in Twelve Imams after Muhammad) consider crying to be an important responsibility towards their leaders who were martyred. They believe that a true lover of Imam Hussain can feel the afflictions and oppressions Imam Hussain suffered, and that these feelings are so immense that the lover breaks out into tears and wails. The pain of the beloved is the pain of the lover; crying over Imam Hussain is the sign or expression of true love. The imams of the Shia have encouraged crying, especially over Imam Hussain, and have related the rewards for this act. They support their view through a tradition (saying) from Muhammad who said: (On the Day of Judgment, a group would be seen in the most excellent and honourable of states.
They would be asked if they were of the Angels or of the Prophets.) In reply they would state: "We are neither Angels nor Prophets but of the indigent ones from the ummah of Muhammad". They would then be asked: "How then did you achieve this lofty and honourable status?" They would reply: "We did not perform very many good deeds, nor did we pass all the days in a state of fasting or all the nights in a state of worship, but yes, we used to offer our (daily) prayers (regularly) and whenever we used to hear the mention of Muhammad, tears would roll down our cheeks".

Types of tears

There are three types of tears: basal tears, reflexive tears, and psychic tears. Basal tears are produced at a rate of about 1 to 2 microliters a minute and serve to keep the eye lubricated and to smooth out irregularities in the cornea. Reflexive tears are made in response to irritants to the eye, such as when chopping onions or getting poked in the eye. Psychic tears are produced by the lacrimal system and are the tears expelled during emotional states.

Related disorders

Baby colic, where an infant's excessive crying has no obvious cause or underlying medical disorder.
Bell's palsy, where faulty regeneration of the facial nerve can cause sufferers to shed tears while eating.
Cri du chat syndrome, where the characteristic cry of affected infants, which is similar to that of a meowing kitten, is due to problems with the larynx and nervous system.
Familial dysautonomia, where there can be a lack of overflow tears (alacrima) during emotional crying.
Pseudobulbar affect, uncontrollable episodes of laughing and/or crying.

References

Further reading

: examines the taboo that still surrounds public crying.

External links

Physiological psychology Emotion Reflexes
Crying
[ "Biology" ]
3,167
[ "Crying", "Emotion", "Behavior", "Human behavior" ]
17,390,365
https://en.wikipedia.org/wiki/Natural%20History%20of%20an%20Alien
Natural History of an Alien, also known as Anatomy of an Alien in the US, is an early Discovery Channel pseudo-documentary, similar to Alien Planet, that aired in 1998. The programme featured various alien-ecosystem projects, from the Epona Project to Ringworld. It also featured many notable scientists and science fiction authors, such as Dr. Jack Cohen, Derek Briggs, Christopher McKay, David Wynn-Williams, Emily Holton, Peter Cattermole, Brian Aldiss, Sil Read, Wolf Read, Edward K. Smallwood, Adega Zuidema, Steve Hanly, Kevin Warwick and Dougal Dixon.

Plot

The viewer is aboard an intergalactic spaceship named the S.S. Attenborough, run by a small green alien.

Cambrian Earth

Earth during the Cambrian.

Mars

Asteroids

The documentary visits asteroids and discusses the possibility of panspermia seeding the solar system with life.

Europa

Featured organisms

Europa Cone Bacteria: Orange-gray bacteria that grow in huge towers rising many miles above the ocean floor. Inside these vents, warm water rises, nourishing layer upon layer of bacteria.

Europa Sea Vent Herbivore: A giant, gray, shark-like swimmer that feeds on bacteria in schools, with a suction-cup-like mouth on an extended, Opabinia-like trunk. These trunk-shaped mouths pierce the vents to suck in vast quantities of bacteria. These grazers are territorial and, like squid on Earth, flash warning glows to drive away rivals. They make a series of dolphin-like cries.

Europa Sea Vent Carnivore: A predatory, yellow-green, echolocating, streamlined, shark-like swimmer that is built for speed and preys on the Europa Sea Vent Herbivores. Like its prey, the Europa Sea Vent Carnivore has an Opabinia-like snout, which it uses to kill.

High Gravity Planet

The next world visited is a high-gravity planet home to many insect-like aliens that have adapted to 1.5 times Earth's gravity. High gravity means a thicker atmosphere (the planet in question having an atmosphere 15 times as dense as Earth's) and therefore easier flight.

Featured organisms

Pteropede: A gray-green, millipede-like creature with accordion-folding, dragon-like wings; it resembles a dragonfly when it flies. It can take advantage of the denser air to fly, but high gravity can lead to a bumpy landing. To support its great weight, the creature's eight legs sit directly under its body. It breathes through lungs in the tip of its tail, which is more efficient than the way insects take in air, so the Pteropede is able to pump oxygen through its large, heavy body. To grow, the Pteropede must enter the water: only in the buoyancy of water, where gravity has little effect, can the Pteropede shed its skin and increase in size. Once the new skin has hardened, the Pteropede can return to the demanding heavy-gravity environment on land.

Sputnik Bug: A small, blue, Eoarthropleura-like creature named after Sputnik 1, the first artificial satellite in orbit. It has spines to protect it from dangerous falls; whenever it does fall, it immediately rolls up into a ball as it starts to tumble.

Splatter Bug: A small, brown, eurypterid-like creature. It has nothing to protect its soft body and is presented as an evolutionary dead end.

Helliconia

The documentary visits the science fiction world of Helliconia, created by Brian Aldiss. It is a binary star system, used to show how life can adapt to having two suns.
Featured organism

Helliconian Tree: A strange-looking tree from Helliconia with a cooling-tower-shaped trunk whose branches, at the very top, sprout narrow leaves, making each branch look like a moth antenna during the short summers. Like deciduous trees on Earth, the Helliconian Tree turns dormant during snow-filled winters. It sheds its leaves, but its branches curl up and withdraw inside the Tree, which then shields its top with an ice-like cap it grows.

Sulfuria

The documentary visits the science fiction world of Sulfuria, created by Dougal Dixon. It is a sulfur-rich world similar to Io.

Featured organisms

Sulfurian Balloon Plant: A tall, orange organism that lives off sunlight. These organisms are like giant balloons, anchored to the ground and buoyed up by the gas inside their flattish, pizza-like tops; in this respect they are like kelp on Earth. Babies sprout from the sides of the parent plant and eventually break off, becoming independent adults.

Parachute Worm: A whitish-gray, earthworm-like creature that lives off the gas of the Sulfurian Balloon Plant by sucking it out. Newly born larvae resemble twigs with two umbrella-like extensions. Larvae are born live while the mother is feeding. After a little while, the young depart from their mothers and use the umbrella-like extensions to parachute gently down through the murk of the atmosphere to the planet floor, after which the umbrellas are shed. After falling into water and shedding, the young Parachute Worms feed on the nutritious roots of the Sulfurian Balloon Plants. When they have fattened up, the adults make their epic journey back up the stalks to mate. The Parachute Worm is a perpetual migrant; its lifecycle is a response to this extreme environment.

Epona

The next world visited is Epona, an imaginary ecosystem created by a group of scientists and science fiction writers called the Epona Project and begun by Martyn J. Fogg. Epona is an offshoot of Contact: Cultures of the Imagination (COTI), a bi-yearly conference where scientists and science fiction authors come together to discuss how the human race may progress in space. The Contact idea came from an original premise by Joel Hagen and James Funaro, instructor of anthropology at Cabrillo College. Two groups of COTI attendees are provided with simulated planetary conditions, then have to devise a species that fits that ecology, develops spaceflight, and makes "first contact" with the other group's species; one race may be human. This later developed into COTI, the roleplaying half simulating "first contact", and "The Bateson Project", in which the strictly scientific disciplines come together. Science fiction authors involved include Karen and Poul Anderson.

Featured organisms

Epona Pagoda Tree: A thin-trunked, green, sessile photosynthetic animal that appears like a tree surrounded by large, disk-shaped "leaves", which are evolved limbs. The "leaves" grow very large in order to capture as much carbon dioxide as possible. While sessile, the trees are capable of a remarkable range of movement, a trait inherited from their mobile ancestors. If a "herbivore" comes to nibble on them, they are able to fold back their branches.
Spring Croc: A green, hopping, one-legged, predatory, Venus-flytrap-like creature and the major predator on Epona. It lies in wait for its prey, usually while partially submerged. It is extremely vicious and mostly focused on eating; it does not need to be intelligent, it just has to be quiet.

Uther: A brownish-gray flyer resembling a cross between a Sharovipteryx and a pterosaur. Uthers are descended from flying, fish-like ancestors. They started out in their avian-like lifestyle hunting Salacopods (small, amphibian-like creatures) and later adapted to feeding on larger carcasses, before becoming predators themselves. In order to fly, they use a combination of hydrogen peroxide and ethanol to hyper-oxygenate their blood, which gives them incredible stamina.

Greenworld

The documentary then visits the science fiction world of Greenworld, created by Dougal Dixon. It is an Earth-like planet filled with lush rainforests.

Featured organisms

Curlywhorl: An arboreal, purple-red, centipede/iguana-like creature evolved, like all the other inhabitants of Greenworld, from aquatic, sea-star-like ancestors.

Pud: A small, green, three-eyed, weevil-like creature from Greenworld's equator. There are thousands of Pud species across the planet; they are like the beetles of Earth. The featured species has six limbs, five of which are used for grasping and one for movement, giving the Pud a hopping gait. Puds are often seen in groups, foraging for fallen fruit in the undergrowth. They have many predators and can sense approaching danger with their three sensitive, leaf-like antennae. Puds make a series of chirps and hoots.

Kwank: A large, reddish-brown, robber-crab-like creature with a turtle-like shell on its back. It feeds on Puds.

Unidentified large, lobster-like predator: An inhabitant of Greenworld that sometimes attacks and eats Kwanks. It is only shown in the form of its shadow.

Artificial Life

On Greenworld, the ship encounters an artificial lifeform from a robotic cube ship. It uses solar panels to gather energy and mines asteroids for the resources it needs to grow. It even sends a probe resembling a metallic centipede down to Greenworld to explore it. At the end of the film, the narrator is revealed to be a female alien of the classic "little green man" type.

References

OMNI magazine (ISSN 0149-8711), October 1992, Vol. 15, No. 1, page 50: "How to Build an Alien" by Keith Ferrell.

External links

Discovery Channel: Alien Planet Epona Project Speculative Visions

British television specials 1998 television specials Speculative evolution Extraterrestrial life in popular culture Discovery Channel original programming
Natural History of an Alien
[ "Biology" ]
2,135
[ "Biological hypotheses", "Speculative evolution", "Hypothetical life forms" ]
17,392,109
https://en.wikipedia.org/wiki/Fordham%20Environmental%20Law%20Review
The Fordham Environmental Law Review is a triannual law journal published by students at Fordham University School of Law, addressing topics in environmental law, legislation, and public policy. It was established in 1989 as the Fordham Environmental Law Report and renamed the Fordham Environmental Law Journal in 1993; it adopted its current name in 2004. The journal publishes three issues annually: one centered on a symposium, along with fall and spring issues. It is the tenth-most-cited student-edited environmental journal in other law journals. External links American law journals Environmental law journals Environmental Law Academic journals established in 1989 Law journals edited by students 1989 establishments in New York City Triannual journals English-language journals Fordham University School of Law
Fordham Environmental Law Review
[ "Environmental_science" ]
145
[ "Environmental science journals", "Environmental social science stubs", "Environmental social science", "Environmental science journal stubs" ]
17,392,705
https://en.wikipedia.org/wiki/KioskNet
KioskNet is a system, developed at the University of Waterloo, that provides very low-cost Internet access to rural villages in developing countries, based on the concept of delay-tolerant networking. It uses vehicles, such as buses, to ferry data between village kiosks and Internet gateways in nearby urban centers; the data is reassembled at a proxy server for interaction with legacy servers. The system is free and open source. See also Srinivasan Keshav Delay-tolerant networking References Computer-mediated communication
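To make the "mechanical backhaul" idea concrete, here is a minimal store-and-forward sketch in Python: a bus object picks up data bundles at a village kiosk and drops them off at the urban gateway on its next stop. All class and field names are invented for illustration and are not KioskNet's actual software interface.

```python
# Toy store-and-forward ferrying, loosely in the spirit of KioskNet's
# delay-tolerant design; names are hypothetical, not the real API.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    outbox: list = field(default_factory=list)  # bundles awaiting pickup
    inbox: list = field(default_factory=list)   # bundles delivered here

@dataclass
class Ferry:
    """A bus that stores bundles and forwards them at its next stop."""
    cargo: list = field(default_factory=list)

    def visit(self, node: Node) -> None:
        # Drop off everything addressed to this node...
        for bundle in [b for b in self.cargo if b["dst"] == node.name]:
            self.cargo.remove(bundle)
            node.inbox.append(bundle)
        # ...then pick up whatever the node wants sent onward.
        self.cargo.extend(node.outbox)
        node.outbox.clear()

kiosk = Node("kiosk")
gateway = Node("gateway")
kiosk.outbox.append({"dst": "gateway", "payload": "web request"})

bus = Ferry()
bus.visit(kiosk)      # the bus picks up the request in the village
bus.visit(gateway)    # hours later, it reaches the city gateway
print(gateway.inbox)  # [{'dst': 'gateway', 'payload': 'web request'}]
```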
KioskNet
[ "Technology" ]
129
[ "Computer-mediated communication", "Information systems", "Computing and society" ]
17,392,720
https://en.wikipedia.org/wiki/Journal%20of%20the%20Experimental%20Analysis%20of%20Behavior
The Journal of the Experimental Analysis of Behavior is a peer-reviewed academic journal of psychology that was established in 1958 by B. F. Skinner and Charles Ferster. JEAB publishes empirical research related to the experimental analysis of behavior and is published by Wiley-Blackwell on behalf of the Society for the Experimental Analysis of Behavior. The current editor-in-chief is Mark Galizio (University of North Carolina, Wilmington). The 2022 impact factor is 2.7. The mission of the journal is "the original publication of experiments relevant to the behavior of individual organisms." See also Journal of Applied Behavior Analysis (JABA) Behavior Modification (journal) Society for the Experimental Analysis of Behavior References External links Behaviorism journals Wiley-Blackwell academic journals English-language journals Academic journals established in 1958 Experimental psychology journals
Journal of the Experimental Analysis of Behavior
[ "Biology" ]
175
[ "Behavior", "Behaviorism", "Behaviorism journals" ]
17,393,354
https://en.wikipedia.org/wiki/Matcal%20Tower
Matcal Tower is a 17-floor high-rise building at the Camp Rabin military base in the HaKirya quarter of Tel Aviv, Israel.

History

The tower was originally planned to include only 14 floors and no helipad. It houses the headquarters of the Israel Ministry of Defense and offices of the IDF General Staff (Matcal in Hebrew). It was built in 2003 and is located close to another IDF building, the Marganit Tower, across the road from the civilian Azrieli Center. The tower and the Azrieli Bridge connecting the base with the Azrieli Center were designed by Moore Yaski Sivan Architects.

See also

Architecture of Israel

References

External links

Skyscrapers in Tel Aviv Deconstructivism Futurist architecture Postmodern architecture Skyscraper office buildings in Israel Office buildings completed in 2005 Ministry of Defense (Israel) 2005 establishments in Israel
Matcal Tower
[ "Engineering" ]
181
[ "Postmodern architecture", "Architecture" ]
17,393,629
https://en.wikipedia.org/wiki/Western%20silvereye
The western silvereye (Zosterops lateralis chloronotus) is a small greenish bird in the white-eye family, Zosteropidae. It is a subspecies of the silvereye that occurs in Western Australia and South Australia. It is sometimes called the white-eye or greenie. Aboriginal names for the bird include jule-we-de-lung or julwidilang from the Perth area and poang from the Pallinup River.

Distribution and habitat

The western silvereye is found in Southwest Australia, with its range extending northwards to the vicinity of Shark Bay and Carnarvon, and rarely in winter as far as Point Cloates and the De Grey River. In the south its range extends eastwards along the south coast of Western Australia into South Australia at the head of the Great Australian Bight. It also occurs on many offshore islands, including the Houtman Abrolhos and the Archipelago of the Recherche. Habitats used by the bird include both wet and dry sclerophyll forest, temperate eucalypt woodland, mallee woodland and shrubland, and mangroves, as well as areas of and around human habitation.

Description

The upperparts are entirely bright olive-green, with the wings and tail feathers grey, edged with green. The throat and undertail coverts are yellow-green, with the rest of the underparts grey. Circlets of small white feathers surround the eyes. Males are brighter yellow on the throat than females. The birds are 10–13 cm in length and weigh about 10 g. They give a variety of high-pitched calls, with the distinctive and constantly uttered contact call a thin 'psee'.

Taxonomy and nomenclature

The western silvereye is the only green-backed form of the silvereye found in Australia, the other subspecies there having grey backs. According to Serventy and Whittell, who treat it as a full species, the bird also lacks the pre-nuptial moult which characterises the eastern Australian populations of the species. Because of such differences, the western silvereye has often been considered a full species. However, Schodde and Mason retain it in lateralis because, with a similar niche and voice, it replaces the eastern forms of the species in south-west Australia; because it is connected by a zone of intergradation with Z. l. pinarochrous in South Australia; and because mtDNA data links chloronotus with pinarochrous eastwards to western Victoria, where the latter intergrades with Z. l. westernensis, showing that the various forms meeting in south-eastern Australia are linked by broad zones of morphological intergradation. The specific (or subspecific) name gouldi Bonaparte, 1850, was previously applied to the bird on the mistaken presumption that chloronotus Gould, 1841 was a junior secondary homonym of Dicaeum chloronothos Vieillot, 1817 in Zosterops. Thus chloronotus is the senior synonym and has priority.

Behaviour

Of the general behaviour of the western silvereye, Serventy and Whittell say: "This is perhaps the commonest small bird in the Perth area and over much of the South-West. After the nesting season, by January, the birds gather into foraging flocks, which are noisily on the move until the pairs separate out again next spring. In the city and suburbs they play the role of the Sparrow (Passer domesticus) in the eastern States, or the tits (Parus) in Europe, visiting gardens, shrubberies and even the backyard fowl-run."

Breeding

The western silvereye usually builds a suspended, cup-shaped nest of grasses in a shrub or tree. The grasses are bound with spider web and the inner cup lined with finer grasses, wool or horsehair.
The cup is about 5 cm across and 2–3 cm deep. Clutch size is two or three, sometimes four, pale blue eggs. Both parents incubate the eggs for a period of 10–13 days, with the young birds leaving the nest about 12 days after hatching. Breeding takes place mainly in the wetter, coastal part of the range from September to January, with the birds forming large flocks and moving further afield once breeding has ceased. When breeding conditions are good, pairs can produce and raise up to four broods in a season.

Feeding

Western silvereyes are omnivorous; they eat small insects as well as a wide variety of fruits and nectar. They form mixed-species foraging flocks with several other birds, especially weebills, western gerygones, western, inland and yellow-rumped thornbills, grey fantails and golden whistlers. In summer, when their natural food supplies are scarce, they flock to vineyards and orchards and damage grapes and other soft fruits. When marri trees are flowering and producing large amounts of nectar in summer, damage to fruit is usually minimal.

Relationship with humans

In Western Australia, the western silvereye is a declared pest of agriculture under the provisions of the Agriculture and Related Resources Protection Act 1976, administered by the Western Australian Department of Agriculture and Food.

References

Notes

Sources

Zosterops Birds of South Australia Birds of Western Australia Agricultural pests Birds described in 1841
Western silvereye
[ "Biology" ]
1,081
[ "Pests (organism)", "Agricultural pests" ]
17,393,733
https://en.wikipedia.org/wiki/Western%20Corridor%20Recycled%20Water%20Scheme
The Western Corridor Recycled Water Scheme, a recycled water project, is located in the South East region of Queensland in Australia. The scheme is managed by Seqwater and forms a key part of the SEQ Water Grid, constructed by the Queensland Government in response to population growth, climate change and severe drought. The $2.5 billion project is reported to be the largest recycled water project in Australia. As of 2019, the scheme has been constructed and its performance has been validated. It remains in care-and-maintenance mode, and will commence operation after SEQ Water Grid dam levels reach 60%.

Location and features

The scheme involved the construction of three advanced water treatment plants, at Bundamba, Luggage Point and Gibson Island, which draw water from six existing wastewater treatment plants in the region to produce purified recycled water. The treatment train consists of microfiltration, reverse osmosis, ultraviolet light with advanced oxidation, and chlorine disinfection. The water is distributed via an extensive network of pipelines. Construction of the Recycled Water Project began in 2006 and was completed in late 2008. $408 million of funding was provided by the Australian Government via its Water Smart Australia Program. In Stage 1 of the project, the scheme provided an alternative water source for Swanbank Power Station and for both Tarong Power Station and Tarong North Power Station. Supplies to Swanbank started in 2007, and supplies to Tarong and Tarong North started in June 2008. The system has the capacity to provide water to other industrial users, to agricultural users, and to supplement drinking water supplies in Wivenhoe Dam. Testing of the pipeline to Wivenhoe Dam has been conducted; however, in November 2008, Premier Anna Bligh declared that recycled water would not enter the dam unless levels dropped below 40%. Initially, the three power stations were the main customers of the recycled water. From coming online in August 2007 through July 2010, the Western Corridor Recycled Water Scheme supplied a substantial volume of water into the SEQ Water Grid. In January 2013 it was reported that the Newman government was considering shutting down part or all of the scheme.

See also

Bradfield Scheme
Goldfields Water Supply Scheme
Irrigation in Australia
Swanbank Power Station

References

External links

Seqwater
Bureau of Meteorology

Water management in Queensland South East Queensland 2008 establishments in Australia Water treatment Infrastructure in Australia Infrastructure in Queensland
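Multi-barrier trains like the one described above (microfiltration, reverse osmosis, UV with advanced oxidation, chlorine disinfection) are commonly assessed by summing per-barrier log reduction values (LRVs) for a pathogen group. The sketch below shows only that arithmetic; the LRV figures are invented placeholders, not the scheme's validated credits.

```python
# Toy multi-barrier LRV tally; the per-barrier credits below are
# illustrative assumptions, not the scheme's validated values.
barriers = {
    "microfiltration": 4.0,
    "reverse osmosis": 2.0,
    "UV / advanced oxidation": 4.0,
    "chlorine disinfection": 2.0,
}

total_lrv = sum(barriers.values())
print(f"total credited LRV: {total_lrv:.1f}")
# An LRV of n corresponds to a 10**n-fold reduction in concentration.
print(f"fraction of pathogens remaining: {10 ** -total_lrv:.0e}")
```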
Western Corridor Recycled Water Scheme
[ "Chemistry", "Engineering", "Environmental_science" ]
481
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
17,394,950
https://en.wikipedia.org/wiki/Species-typical%20behavior
The ethological concept of species-typical behavior is based on the premise that certain behavioral similarities are shared by almost all members of a species. Some of these behaviors are unique to certain species, but to be 'species-typical' they do not have to be unique; they simply have to be characteristic of that species.

Neuroscience

Species-typical behaviors are almost always the result of similar nervous systems and of adaptations to the environment in organisms of the same species. They are created and influenced by a species' genetic code and by its social and natural environment, and are hence strongly influenced by evolution. A classic example of species-typical behavior is the breast crawl: the vast majority of human newborns, when placed on a reclined mother's abdomen, will find and begin to suckle on one of the mother's breasts without any assistance.

Brain structures

Species-typical behaviors are occasionally tied to certain structures of the brain. Murphy, MacLean, and Hamilton (1981) gave hamsters brain lesions at birth, destroying certain brain structures. They discovered that while the hamsters still expressed species-typical behavior without a neocortex, they lost much of their species-typical play and maternal behaviors when deprived of the midline limbic convolutions. Likewise, if squirrel monkeys lose their globus pallidus, their ability to engage in certain sexual behavior (e.g. thigh-spreading, groin-thrusting) is either eliminated or impaired. Scientists may also use stimulation to discover the role of a structure in species-typical behavior. In a 1957 experiment, physiologist Walter Hess used an electrode to stimulate a certain part of a resting cat's brainstem; immediately after the stimulation, the cat stood up and arched its back with erect hair, a species-typical behavior in which cats engage when frightened. The behavior lasted as long as the stimulation and ended as soon as the stimulation ended. Later experiments revealed that even if the same part of the brain is stimulated with the same amount of energy for the same period, the intensity of the elicited behavior changes depending on the context. In 1973, behavioral physiologist Erich von Holst attached an electrode to one part of a chicken's brainstem. When briefly stimulated without any unusual environmental factors, the chicken was restless. When briefly stimulated in the presence of a human fist, the chicken reacted with a slightly threatening posture, and in the presence of a weasel, the chicken took a very threatening pose, with feathers bristling. The brainstem, in this case, elicits species-typical behavior appropriate to the surrounding environment.

Hormones and chemicals

The presence or density of certain chemical receptors on cranial structures such as the brainstem often determines how important those structures are to a given species-typical behavior, and this differs across species. For example, monogamous prairie voles have a high density of oxytocin receptors (OTRs) in the nucleus accumbens, while non-monogamous meadow voles do not. The manner in which hormones alter these receptors is an important behavioral regulator. For example, the gonads affect OTRs differently in different rodents. In female rats, gonadal estrogen increases the level of OTR binding and, when the ovarian cycle maximizes the amount of estrogen in the bloodstream, causes OTRs to appear in ventrolateral regions of the structure called the ventromedial nucleus.
This, in turn, increases the likelihood that a female rat will engage in certain species-typical sexual activity by increasing her sexual receptivity. But the effect of this regulatory mechanism differs between species: though a gonadectomy would decrease (and gonadal steroids would increase) sexual receptivity in the female rat, the same manipulations would have the opposite effects in female mice.

Instinct and experience

While some species-typical behavior is learned from the parents, it is also sometimes the product of a fixed action pattern, also known as an innate releasing mechanism (IRM). In these instances, a neural network is 'programmed' to produce a hard-wired, instinctive behavior in response to an external stimulus. When a blind child hears news that makes her happy, she is likely to smile in response; she never had to be taught to smile, and she never learned this behavior by seeing others do it. Similarly, when kittens are shown a picture of a cat in a threatening posture, most of them arch their backs, bare their teeth, and sometimes even hiss, even though they have never seen another cat do this. Many IRMs can be explained by the theory of evolution: if an adaptive behavior helps a species survive long enough to reproduce, such as a cat hissing to discourage an attack from another creature, then the genes that code for those brain circuits are more likely to be passed on. A heavily studied example of a fixed action pattern is the feeding behavior of Helisoma trivolvis, a type of pulmonate snail. A study has shown that the intricate connections within the buccal ganglia (see nervous system of gastropods) form a central system whereby sensory information stimulates feeding in Helisoma. More specifically, a unique system of communication between three classes of neurons in the buccal ganglia is responsible for forming the neural network that influences feeding. A species-typical behavior can be altered by experience, as shown by experiments on Aplysia californica, a sea snail. When its gills are stimulated in a novel manner, it withdraws them into its shell for protection; this is a species-typical behavior. But after a stimulus that was once novel (e.g. a weak jet of water) has been applied repeatedly to the gills, Aplysia no longer withdraws them. It has gone through habituation, a process by which the response to a stimulus becomes weaker with more exposure. This occurs because of changes in the nervous system. Neurons communicate with one another at synapses, which consist of the tip of the communicating cell (the presynaptic membrane), the tip of the receiving cell (the postsynaptic membrane), and the space in between the two (the synaptic cleft). When the presynaptic membrane is stimulated by an influx of calcium ions, it releases a chemical called a neurotransmitter, which travels across the synaptic cleft to bind to the postsynaptic membrane and thereby stimulate the receiving cell. During habituation, fewer calcium ions are brought into the presynaptic membrane, meaning less neurotransmitter is released, meaning that the stimulation of the receiving cell is not as strong and that the action it is supposed to stimulate will be weaker. Likewise, the number of synapses related to a certain behavior decreases as a creature habituates, also resulting in weaker reactions. And the structure of the synapse itself can be altered in any number of ways that weaken communication (e.g.
a decreased number of neurotransmitter receptors on the postsynaptic membrane). It is because of these processes that the species-typical behavior of Aplysia was altered.

Types

Emotional

These behaviors facilitate interactions between members of the same species and are central to a species' connection to the surrounding world. With regard to humans specifically, most people are able to feel the same sorts of complex emotions that other humans feel, and these emotions often elicit certain behaviors.

Remorse is a feeling of sadness and of being sorry for something one has done. People incapable of feeling remorse are often labeled as having antisocial personality disorder (also known as dissocial personality disorder); that the inability to feel or express remorse qualifies as a disorder underlines the degree to which remorse is species-typical. The behavioral manifestations of remorse range from person to person, but many individuals in a state of remorse show signs of sadness and disinhibition. They may withdraw from once-pleasurable activities and social interactions, and may become more or less likely to tell others about an action that causes remorse.

Pride is a feeling of satisfied accomplishment, and/or of hubris and self-importance. People expressing pride tend to show a small smile, tilt their heads back, straighten their posture, and even place their hands on their hips. They also regularly choose to share their accomplishments with others. Pride, distinct from other emotions such as joy or happiness, requires a developed sense of self and is usually expressed through verbal interactions with other humans.

Embarrassment is a state of internal discomfort following a thought or action. Behavioral manifestations of embarrassment are similar to those of remorse. They often include the desire to retreat from socially intense situations where other people may remember an embarrassing incident. When alone, too, an embarrassed individual may try to avoid recollection of the incident because of the feelings of shame it causes. Embarrassed individuals may also blush.

Feeding

These behaviors facilitate survival. Different species are physiologically adapted to consume different foods that must be acquired in different ways, and the manner in which they feed corresponds to these unique characteristics.

Rodents share common species-typical feeding behaviors (also known as order-typical behaviors, since all these creatures are members of the same order, Rodentia). For example, certain types of beavers, squirrels, rats, guinea pigs, hamsters, and prairie dogs all locate food by sniffing for it, grasp food with their mouths, sit on their hindquarters to eat, and manipulate the food with their hands. But each also has more unique feeding behaviors: beavers, for example, grasp food with their mouths but may sometimes use a single paw to grasp and manipulate a food item, and many rodents manipulate food with their digits in unique ways.

A woodpecker consumes insects that can frequently be found inside trees. To access these insects, it uses a jackhammer-like motion to drill into tree wood with its beak, then reaches in and grabs the insects with its beak.

Herons feed on aquatic creatures. To catch them, a heron wades in shallow water, searching for freshwater fish, amphibians, and the occasional reptile, and uses its neck and beak to spear the prey item.
Learning/conditioning

Species with complex nervous systems (especially mammals), in addition to acting on instinct and basic sensory stimuli, need to learn how to engage in certain activities. Because of the ways in which their nervous systems develop, they are frequently adept at learning certain behaviors at specific times in their lives.

White-crowned sparrows are particularly adept at learning songs between the ages of fifteen and fifty days. A marsh wren can learn to sing over 150 bird songs, while the white-crowned sparrow can learn only a single song; the number of songs that can be sung thus varies between bird species, owing to relative limitations in their cognitive processing abilities.

As the above point suggests, birds have species-specific preferences for certain songs that are rooted in their genes. If a young bird is not exposed to birdsong very early in its life but is then suddenly exposed to a variety of different bird songs, including the one typical of its species, it tends to show a preference for that one.

Reproduction

Reproduction is an activity that takes place between members of the same species. In order to interact and reproduce successfully, the members of a species must share common behaviors.

The female fruit bat performs fellatio on a male fruit bat during copulation, which increases overall copulation time. Although fellatio is a common human foreplay activity, it is less common among non-human animal species. It remains unclear exactly what neurological forces motivate fruit bats to engage in fellatio during sex, although researchers have presented various hypotheses.

Not all species-typical reproductive behaviors concern specific reproductive activity between two animals, however. Infanticide is practiced by male hippopotami, most likely to improve their chances of reproductive success. They tend to commit infanticide within 50 days post-parturition, especially when water sources are scarce and dominance hierarchies are challenged.

Sensory/motor activity

Different species perceive the world in different ways. The nervous systems of species develop in concert with certain anatomical features to produce sensory environments common to most members of that species.

Because mantis shrimp can visually sense and process ultraviolet light, they react to it, while animals such as dogs do not.

Mayflies are able to perceive certain patterns of light polarization which suggest to them that they are above water. In response, they release their eggs, since mayfly naiads (aquatic larvae) are biologically developed to live and grow in water.

Dogs have a scratch reflex, meaning that they reflexively scratch an irritated skin region without direction from the brain. A limb (usually a hind leg) is extended to the irritated part of the body; because this is a spinal reflex, a dog will do this even if the spinal connection to the brain is severed.

A rat tends to groom itself using the same procedure in the same order: it sits up, licks its paws, wipes its nose and then its face with its paws, and then licks the fur on its body.

Social activity

Species interact with one another, and certain species exhibit commonly held social traits.

A panda often expresses aggression by lowering its head and directing its gaze at the target of its aggression. This behavior may have developed because the creatures that pandas tend to threaten feel threatened by this form of intimidation, so pandas regularly engage in it.
Cats, ponies, lions, baboons, and many other non-human species partake in social grooming to maintain the hygiene of other individuals. Social grooming among animals can be seen as a form of conflict resolution that also builds trust among other animals who live nearby. Research has shown that grooming influences the endocrine system—it appears to be relaxing to those who participate due to the release of beta-endorphin. In addition, an increase in maternal grooming has been shown to increase the number of glucocorticoid receptors in the brains of newborn rats. Notes References Ethology
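The weakening of synaptic transmission that underlies habituation, described above under "Instinct and experience", can be caricatured with a one-variable toy model in which each repetition of the stimulus shrinks the response. The decay constant below is an arbitrary assumption, not a measured value.

```python
# Toy model of habituation: repeated presentation of the same harmless
# stimulus weakens the gill-withdrawal response via a single synaptic
# "strength" variable. The decay factor 0.7 is an arbitrary assumption.

def habituate(n_trials: int, decay: float = 0.7) -> list[float]:
    """Return the response strength at each of n_trials stimulations."""
    strength = 1.0           # initial (naive) response, arbitrary units
    responses = []
    for _ in range(n_trials):
        responses.append(strength)
        strength *= decay    # less transmitter released on the next trial
    return responses

for trial, r in enumerate(habituate(5), start=1):
    print(f"trial {trial}: response = {r:.2f}")
```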
Species-typical behavior
[ "Biology" ]
2,906
[ "Behavioural sciences", "Ethology", "Behavior" ]
17,395,221
https://en.wikipedia.org/wiki/Subdirect%20product
In mathematics, especially in the areas of abstract algebra known as universal algebra, group theory, ring theory, and module theory, a subdirect product is a subalgebra of a direct product that depends fully on all its factors without, however, necessarily being the whole direct product. The notion was introduced by Birkhoff in 1944, generalizing Emmy Noether's special case of the idea (and decomposition result) for Noetherian rings, and has proved to be a powerful generalization of the notion of direct product.

Definition

A subdirect product is a subalgebra (in the sense of universal algebra) A of a direct product ΠiAi such that every induced projection (the composite pjs: A → Aj of a projection pj: ΠiAi → Aj with the subalgebra inclusion s: A → ΠiAi) is surjective. A direct (subdirect) representation of an algebra A is a direct (subdirect) product isomorphic to A. An algebra is called subdirectly irreducible if it is not subdirectly representable by "simpler" algebras (formally, if in any subdirect representation, one of the projections is an isomorphism). Subdirect irreducibles are to subdirect products of algebras roughly as primes are to multiplication of integers. Birkhoff (1944) proved that every algebra all of whose operations are of finite arity is isomorphic to a subdirect product of subdirectly irreducible algebras.

Examples

Every permutation group is a subdirect product of its restrictions to its orbits.

Any distributive lattice L is subdirectly representable as a subalgebra of a direct power of the two-element distributive lattice. This can be viewed as an algebraic formulation of the representability of L as a set of sets closed under the binary operations of union and intersection, via the interpretation of the direct power itself as a power set. In the finite case such a representation is direct (i.e. the whole direct power) if and only if L is a complemented lattice, i.e. a Boolean algebra.

The same holds for any semilattice when "semilattice" is substituted for "distributive lattice" and "subsemilattice" for "sublattice" throughout the preceding example. That is, every semilattice is representable as a subdirect power of the two-element semilattice.

The chain of natural numbers together with infinity, as a Heyting algebra, is subdirectly representable as a subalgebra of the direct product of the finite linearly ordered Heyting algebras. The situation with other Heyting algebras is treated in further detail in the article on subdirect irreducibles.

The group of integers under addition is subdirectly representable by any (necessarily infinite) family of arbitrarily large finite cyclic groups. In this representation, 0 is the sequence of identity elements of the representing groups, 1 is a sequence of generators chosen from the appropriate group, and integer addition and negation are the corresponding group operations in each group applied coordinate-wise. The representation is faithful (no two integers are represented by the same sequence) because of the size requirement, and the projections are onto because every coordinate eventually exhausts its group.

Every vector space over a given field is subdirectly representable by the one-dimensional space over that field, with the finite-dimensional spaces being directly representable in this way. (For vector spaces, as for abelian groups, direct product with finitely many factors is synonymous with direct sum with finitely many factors, whence subdirect product and subdirect sum are also synonymous for finitely many factors.)
Subdirect products are used to represent many small perfect groups. Every reduced commutative Noetherian ring is a sub-direct product of integral domains (over a field, this corresponds to the decomposition of a variety into its irreducible components). More generally, every commutative Noetherian ring is a sub-direct product of rings whose only zero-divisors are nilpotent. (Originally proved in Section 6 of Noether (1921).) Every commutative reduced ring is a sub-direct product of fields (Lemma 2 of Birkhoff (1944)). See also Semidirect product, another kind of group product Goursat's lemma, which classifies subdirect products of two groups Classification of 3-factor subdirect products of groups by Neuen & Schweitzer References Universal algebra
Subdirect product
[ "Mathematics" ]
1,003
[ "Fields of abstract algebra", "Universal algebra" ]
17,395,232
https://en.wikipedia.org/wiki/Open%20mapping%20theorem%20%28complex%20analysis%29
In complex analysis, the open mapping theorem states that if U is a domain of the complex plane C and f : U → C is a non-constant holomorphic function, then f is an open map (i.e. it sends open subsets of U to open subsets of C, and we have invariance of domain). The open mapping theorem points to the sharp difference between holomorphy and real-differentiability. On the real line, for example, the differentiable function f(x) = x² is not an open map, as the image of the open interval (−1, 1) is the half-open interval [0, 1). The theorem for example implies that a non-constant holomorphic function cannot map an open disk onto a portion of any line embedded in the complex plane. Images of holomorphic functions can be of real dimension zero (if constant) or two (if non-constant) but never of dimension 1. Proof Assume f : U → C is a non-constant holomorphic function and U is a domain of the complex plane. We have to show that every point in f(U) is an interior point of f(U), i.e. that every point in f(U) has a neighborhood (open disk) which is also in f(U). Consider an arbitrary w0 in f(U). Then there exists a point z0 in U such that w0 = f(z0). Since U is open, we can find d > 0 such that the closed disk B around z0 with radius d is fully contained in U. Consider the function g(z) = f(z) − w0. Note that z0 is a root of the function. We know that g is non-constant and holomorphic. The roots of g are isolated by the identity theorem, and by further decreasing the radius d, we can assure that g has only a single root in B (although this single root may have multiplicity greater than 1). The boundary of B is a circle and hence a compact set, on which |g(z)| is a positive continuous function, so the extreme value theorem guarantees the existence of a positive minimum e, that is, e is the minimum of |g(z)| for z on the boundary of B and e > 0. Denote by D the open disk around w0 with radius e. By Rouché's theorem, the function g(z) = f(z) − w0 will have the same number of roots (counted with multiplicity) in B as h(z) = f(z) − w1 for any w1 in D. This is because h(z) = g(z) + (w0 − w1), and for z on the boundary of B, |g(z)| ≥ e > |w0 − w1|. Thus, for every w1 in D, there exists at least one z1 in B such that f(z1) = w1. This means that the disk D is contained in f(B). The image of the ball B, f(B), is a subset of the image of U, f(U). Thus w0 is an interior point of f(U). Since w0 was arbitrary in f(U) we know that f(U) is open. Since U was arbitrary, the function f is open. Applications Maximum modulus principle Rouché's theorem Schwarz lemma See also Open mapping theorem (functional analysis) References Theorems in complex analysis Articles containing proofs
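The quantitative heart of the proof, the boundary minimum e together with Rouché's guarantee, can be illustrated numerically. A minimal sketch, assuming an arbitrary test polynomial f(z) = z² + z with z0 = 0 and d = 0.5: it estimates e on the boundary circle and then verifies, by explicitly solving f(z) = w1, that sampled points of the disk D have preimages inside B.

```python
# Numerical illustration of the proof (a sketch; the polynomial f, the point
# z0 and the radius d are arbitrary test choices, not part of the theorem).
import numpy as np

def f(z):
    return z**2 + z  # non-constant holomorphic, with f(0) = 0

z0 = 0.0
w0 = f(z0)
d = 0.5  # radius of the closed disk B around z0

# e = min |f(z) - w0| on the boundary circle |z - z0| = d (extreme value theorem)
theta = np.linspace(0.0, 2.0 * np.pi, 4000)
e = np.min(np.abs(f(z0 + d * np.exp(1j * theta)) - w0))
print("boundary minimum e =", round(float(e), 4))  # about 0.25 for this f

# Rouche's theorem: every w1 with |w1 - w0| < e is attained inside B.
# For this quadratic f we can verify directly by solving z^2 + z - w1 = 0.
rng = np.random.default_rng(0)
for _ in range(5):
    w1 = w0 + 0.9 * e * np.exp(2j * np.pi * rng.random())  # a point of D
    roots = np.roots([1.0, 1.0, -w1])
    assert any(abs(z - z0) < d for z in roots), "no preimage inside B?"
print("5 sampled points of the disk D all have preimages inside B")
```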
Open mapping theorem (complex analysis)
[ "Mathematics" ]
533
[ "Articles containing proofs", "Theorems in mathematical analysis", "Theorems in complex analysis" ]
17,395,276
https://en.wikipedia.org/wiki/Open%20mapping%20theorem%20%28functional%20analysis%29
In functional analysis, the open mapping theorem, also known as the Banach–Schauder theorem or the Banach theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result that states that if a bounded or continuous linear operator between Banach spaces is surjective then it is an open map. A special case is also called the bounded inverse theorem (also called the inverse mapping theorem or Banach isomorphism theorem), which states that a bijective bounded linear operator T from one Banach space to another has bounded inverse T−1. Statement and proof The proof here uses the Baire category theorem, and completeness of both the domain and the codomain is essential to the theorem. The statement of the theorem is no longer true if either space is assumed to be only a normed vector space; see the counterexample below. The proof is based on the following lemmas, which are also of some independent interest. A linear map between topological vector spaces is said to be nearly open if, for each neighborhood of zero in the domain, the closure of its image contains a neighborhood of zero. The next lemma may be thought of as a weak version of the open mapping theorem. Proof: Shrinking , we can assume is an open ball centered at zero. We have . Thus, some contains an interior point ; that is, for some radius , Then for any in with , by linearity, convexity and , , which proves the lemma by dividing by . (The same proof works if are pre-Fréchet spaces.) The completeness of the domain then allows one to upgrade nearly open to open. Proof: Let be in and some sequence. We have: . Thus, for each and in , we can find an with and in . Thus, taking , we find an such that Applying the same argument with , we then find an such that where we observed . And so on. Thus, if , we have found a sequence such that converges and . Also, Since , by making small enough, we can achieve . (Again the same proof is valid if are pre-Fréchet spaces.) Proof of the theorem: By Baire's category theorem, the first lemma applies. Then the conclusion of the theorem follows from the second lemma. In general, a continuous bijection between topological spaces is not necessarily a homeomorphism. The open mapping theorem, when it applies, implies that bijectivity is enough: Even though the above bounded inverse theorem is a special case of the open mapping theorem, the open mapping theorem in turn follows from it. Indeed, a surjective continuous linear operator factors as Here, is continuous and bijective and thus is a homeomorphism by the bounded inverse theorem; in particular, it is an open mapping. As a quotient map for topological groups is open, the original operator is then open. Because the open mapping theorem and the bounded inverse theorem are essentially the same result, they are often simply called Banach's theorem. Transpose formulation Here is a formulation of the open mapping theorem in terms of the transpose of an operator. Proof: The idea of 1. ⇒ 2. is to show: and that follows from the Hahn–Banach theorem. 2. ⇒ 3. is exactly the second lemma in the proof above. Finally, 3. ⇒ 4. is trivial and 4. ⇒ 1. easily follows from the open mapping theorem. Alternatively, 1. implies that is injective and has closed image and then by the closed range theorem, that implies has dense image and closed image, respectively; i.e., is surjective. Hence, the above result is a variant of a special case of the closed range theorem. Quantitative formulation Terence Tao gives the following quantitative formulation of the theorem: Proof: 2. ⇒ 1. is the usual open mapping theorem. 1. ⇒ 4.: For some , we have where means an open ball. Then for some in . 
That is, with . 4. ⇒ 3.: We can write with in the dense subspace and the sum converging in norm. Then, since is complete, with and is a required solution. Finally, 3. ⇒ 2. is trivial. Counterexample The open mapping theorem may not hold for normed spaces that are not complete. The quickest way to see this is to note that the closed graph theorem, a consequence of the open mapping theorem, fails without completeness. But here is a more concrete counterexample. Consider the space X of sequences x : N → R with only finitely many non-zero terms, equipped with the supremum norm. The map T : X → X defined by (Tx)(n) = x(n)/n is bounded, linear and invertible, but T−1 is unbounded. This does not contradict the bounded inverse theorem since X is not complete, and thus is not a Banach space. To see that it is not complete, consider the sequence of sequences x(n) ∈ X given by x(n)(k) = 1/k for k ≤ n and x(n)(k) = 0 for k > n. It converges as n → ∞ to the sequence x(∞) given by x(∞)(k) = 1/k, which has all its terms non-zero, and so does not lie in X. The completion of X is the space of all sequences that converge to zero, which is a (closed) subspace of ℓ∞(N), the space of all bounded sequences. However, in this case, the map T is not onto, and thus not a bijection. To see this, one need simply note that the sequence y given by y(n) = 1/n is an element of the completion, but is not in the range of T, since its preimage would have to be the constant sequence 1, which does not converge to zero. The same reasoning shows that T is also not onto in ℓ∞(N); for example, the constant sequence y(n) = 1 is not in the range of T, since its preimage would have to be the unbounded sequence x(n) = n. Consequences The open mapping theorem has several important consequences: If is a bijective continuous linear operator between the Banach spaces and then the inverse operator is continuous as well (this is called the bounded inverse theorem). If is a linear operator between the Banach spaces and and if for every sequence in with and it follows that then is continuous (the closed graph theorem). Given a bounded operator between normed spaces, if the image of is non-meager and if is complete, then is open and surjective and is complete (to see this, use the two lemmas in the proof of the theorem). An exact sequence of Banach spaces (or more generally Fréchet spaces) is topologically exact. The closed range theorem, which says an operator (under some assumption) has closed image if and only if its transpose has closed image (see closed range theorem#Sketch of proof). The open mapping theorem does not imply that a continuous surjective linear operator admits a continuous linear section. What we have is: A surjective continuous linear operator between Banach spaces admits a continuous linear section if and only if the kernel is topologically complemented. In particular, the above applies to an operator between Hilbert spaces or an operator with finite-dimensional kernel (by the Hahn–Banach theorem). If one drops the requirement that a section be linear, a surjective continuous linear operator between Banach spaces admits a continuous section; this is the Bartle–Graves theorem. Generalizations Local convexity of the spaces is not essential to the proof, but completeness is: the theorem remains true in the case when both spaces are F-spaces. Furthermore, the theorem can be combined with the Baire category theorem in the following manner: (The proof is essentially the same as the Banach or Fréchet cases; we modify the proof slightly to avoid the use of convexity.) Furthermore, in this latter case, if N is the kernel of the operator, then there is a canonical factorization of the operator through the quotient space (also an F-space) of the domain by the closed subspace N. The quotient mapping is open, and the remaining mapping is an isomorphism of topological vector spaces. 
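The counterexample above is easy to play with numerically. A minimal sketch, truncating the sequences to finite length (the lengths N are arbitrary test values): it shows the norm of T applied to unit vectors staying bounded while the norm of T−1 applied to the same vectors grows without bound, the blow-up that completeness would forbid.

```python
# Numerical sketch of the counterexample, truncating sequences to finite
# length (the lengths N are arbitrary test values). T has operator norm 1
# under the sup norm, yet T^-1 stretches the unit vector e_N by N, so the
# inverse is unbounded; completeness of the space would rule this out.
import numpy as np

def T(x):
    return x / np.arange(1, len(x) + 1)    # (Tx)(n) = x(n)/n

def T_inv(y):
    return y * np.arange(1, len(y) + 1)    # the (unbounded) inverse

for N in (10, 100, 1000):
    e_N = np.zeros(N)
    e_N[-1] = 1.0                          # the unit sequence e_N (sup norm 1)
    print(f"N={N}: ||T e_N|| = {np.max(np.abs(T(e_N))):.4f}, "
          f"||T^-1 e_N|| = {np.max(np.abs(T_inv(e_N))):.0f}")
```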
An important special case of this theorem can also be stated as follows. On the other hand, a more general formulation, which implies the first, can be given: Nearly/Almost open linear maps A linear map between two topological vector spaces (TVSs) is called a nearly open map (or sometimes, an almost open map) if for every neighborhood of the origin in the domain, the closure of its image is a neighborhood of the origin in the codomain. Many authors use a different definition of "nearly/almost open map" that requires that the closure of the image be a neighborhood of the origin in the image of the map rather than in the codomain, but for surjective maps these definitions are equivalent. A bijective linear map is nearly open if and only if its inverse is continuous. Every surjective linear map from a locally convex TVS onto a barrelled TVS is nearly open. The same is true of every surjective linear map from a TVS onto a Baire TVS. Webbed spaces are a class of topological vector spaces for which the open mapping theorem and the closed graph theorem hold. See also References Bibliography Further reading Articles containing proofs Theorems in functional analysis
Open mapping theorem (functional analysis)
[ "Mathematics" ]
1,809
[ "Theorems in mathematical analysis", "Theorems in functional analysis", "Articles containing proofs" ]
13,447,186
https://en.wikipedia.org/wiki/Load%20pull
Load-pull is the colloquial term applied to the process of systematically varying the impedance presented to a device under test (DUT), most often a transistor, to assess its performance and the associated conditions to deliver that performance in a network. While load-pull itself implies impedance variation at the load port, impedance can also be varied at any of the ports of the DUT, most often at the source. Load-pull is required when superposition is no longer applicable, which occurs under large-signal operating conditions that make linear approximations unusable. The term load-pull derives from classical oscillator characterization, whereby variation of the load impedance pulls the oscillation center frequency away from nominal. Source-pull is also used for noise characterization, which, although linear, requires multiple impedances to be presented at the source to enable simultaneous solution of an over-determined system that yields the four noise parameters. Load-pull is the most common method globally for RF and MW power amplifier (PA) design, transistor characterization, semiconductor process development, and ruggedness analysis. A central theme of load-pull is management of nonlinearity versus analysis of nonlinearity, the latter being the domain of advanced mathematics that often yields little physical insight into nonlinear phenomena and suffers from an inability to accurately render actual behavior embedded in a network with significant parasitic and distributed effects. With automated load-pull, it is possible to fully optimize and design a final stage for GSM applications in less than a day, thereby providing a dramatic reduction in design cycle-time while assuring the best possible performance trade-off has been achieved. While there are in theory no physical limits on the frequency at which load-pull can be performed, most load-pull systems are based on passive distributed networks using either the slab transmission line in its TEM mode or the rectangular waveguide in its TE01 mode. Lumped tuners can be made for HF and VHF frequencies, whereas active load-pull is ideal for on-wafer mm-wave environments, where substantial loss between the tuner and DUT reference-plane limits maximum VSWR. See also Impedance matching Electronic test equipment Transistor tester References External links Radio electronics
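For a purely linear network the effect of sweeping the load can be written down in closed form, which makes a toy sweep a useful point of reference. A minimal sketch, assuming a linear Thévenin source with hypothetical values Vs and Zs (a real load-pull measurement exists precisely because the nonlinear DUT obeys no such formula): it grids the load impedance and confirms the delivered power peaks at the conjugate match.

```python
# Toy load-pull sweep (a sketch, not a transistor model): for a linear
# Thevenin source with assumed values Vs and Zs, grid the load impedance ZL
# and record the power it absorbs. The optimum lands on the conjugate match
# ZL = conj(Zs); a real load-pull measurement performs the same sweep on a
# nonlinear DUT, for which no such closed-form optimum exists.
import numpy as np

Vs = 2.0        # source amplitude in volts (arbitrary)
Zs = 10 + 5j    # source impedance in ohms (arbitrary)

R, X = np.meshgrid(np.linspace(1, 30, 291), np.linspace(-20, 20, 401))
ZL = R + 1j * X

# average power delivered to the load: P = |Vs|^2 Re(ZL) / (2 |Zs + ZL|^2)
P = np.abs(Vs) ** 2 * ZL.real / (2.0 * np.abs(Zs + ZL) ** 2)

i = np.unravel_index(np.argmax(P), P.shape)
print("best load found:", ZL[i], "; conjugate match:", np.conj(Zs))
```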
Load pull
[ "Engineering" ]
459
[ "Radio electronics" ]
13,447,866
https://en.wikipedia.org/wiki/EpiData
EpiData is a group of applications used in combination for creating documented data structures and analysis of quantitative data. Overview The EpiData Association, which develops the software, was founded in 1999 and is based in Denmark. EpiData was developed in Pascal and uses open standards such as HTML where possible. EpiData is widely used by organizations and individuals to create and analyze large amounts of data. The World Health Organization (WHO) uses EpiData in its STEPS method of collecting epidemiological, medical, and public health data, for biostatistics, and for other quantitative-based projects. Epicentre, the research wing of Médecins Sans Frontières, uses EpiData to manage data from its international research studies and field epidemiology studies, e.g. Piola P, Fogg C et al.: Supervised versus unsupervised intake of six-dose artemether-lumefantrine for treatment of acute, uncomplicated Plasmodium falciparum malaria in Mbarara, Uganda: a randomised trial. Lancet. 2005 Apr 23–29;365(9469):1467-73. EpiData has two parts: EpiData Entry – used for simple or programmed data entry and data documentation. It handles simple forms or related systems. EpiData Analysis – performs basic statistical analysis, graphs, and comprehensive data management, such as recoding data, labelling values and variables, and basic statistics. This application can create control charts, such as Pareto charts or p-charts, and offers many other methods to visualize and describe statistical data. The software is free; development is funded by governmental and non-governmental organizations like WHO. See also Clinical surveillance Disease surveillance Epidemiological methods Control chart References External links EpiData official site EpiData Wiki EpiData-list – mailing list for EpiData World Health Organization STEPS approach to surveillance Médecins Sans Frontières Epicentre 1999 software Biostatistics Epidemiology Freeware Statistical software
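Where EpiData Analysis draws a p-chart, the underlying arithmetic is the standard attribute control chart. The sketch below computes that statistic directly (the defect counts and subgroup size are hypothetical, and this is plain Python, not EpiData's own interface): the centre line is the pooled proportion and the control limits sit three binomial standard errors away.

```python
# The arithmetic behind a p-chart, the kind of control chart EpiData Analysis
# can draw (a plain-Python sketch of the statistic, not EpiData's interface;
# the counts and subgroup size are hypothetical).
from math import sqrt

defectives = [4, 6, 3, 7, 5, 14, 4, 5]  # events per subgroup (hypothetical)
n = 100                                  # subgroup size (hypothetical)

pbar = sum(defectives) / (len(defectives) * n)   # centre line
sigma = sqrt(pbar * (1.0 - pbar) / n)            # binomial standard error
ucl = pbar + 3.0 * sigma                         # upper control limit
lcl = max(0.0, pbar - 3.0 * sigma)               # lower limit, floored at 0

print(f"centre {pbar:.3f}, LCL {lcl:.3f}, UCL {ucl:.3f}")
for i, d in enumerate(defectives, start=1):
    p = d / n
    note = "  <-- out of control" if not lcl <= p <= ucl else ""
    print(f"subgroup {i}: p = {p:.2f}{note}")
```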
EpiData
[ "Mathematics", "Environmental_science" ]
442
[ "Epidemiology", "Statistical software", "Environmental social science", "Mathematical software" ]
13,450,562
https://en.wikipedia.org/wiki/To%20Hare%20Is%20Human
To Hare is Human is a 1956 Warner Bros. Merrie Melodies cartoon directed by Chuck Jones. The short was released on December 15, 1956, and stars Bugs Bunny and Wile E. Coyote. In this film, Wile builds a UNIVAC computer, and grows to rely on its answers. Plot Wile E. Coyote employs a series of increasingly elaborate schemes to capture Bugs Bunny, utilizing a foldable elevator and a super smart computer called UNIVAC. In one scenario, Wile E. Coyote ensnares Bugs Bunny in a sack rigged with dynamite but is outsmarted when Bugs escapes, causing an explosion that buys him time to flee. Undeterred, Wile E. Coyote utilizes the UNIVAC to devise new strategies, each met with humorous failure. First, Wile E. attempts to unlock Bugs' rabbit hole using the UNIVAC's guidance, only to slip on a banana peel and plummet off a cliff. In another attempt, he substitutes hand grenades for Bugs' breakfast carrots, but the plan backfires when the grenades are launched back at him. A subsequent effort involving a bathroom plunger leads to Wile E. being sucked into his own trap. In a fourth endeavor, Wile E. inserts dynamite into Bugs' vacuum cleaner, resulting in a comedic explosion when Bugs inadvertently reignites the fuse. Finally, Wile E. sets a booby trap in the carrot patch, but it backfires, leading to his own demise. The revelation that Bugs Bunny is the true mastermind behind the UNIVAC's calculations adds a humorous twist to the failed attempts, highlighting Bugs' cleverness and Wile E. Coyote's perpetual misfortune. Production notes The title is a play on the expression, "To err is human; to forgive, divine." This was also the final cartoon to be made at Termite Terrace before the studio moved to the Burbank lot. Home media The short was released on the Looney Tunes Golden Collection: Volume 4, Disc One. References External links 1956 films Merrie Melodies short films Warner Bros. Cartoons animated short films Wile E. Coyote and the Road Runner films Short films directed by Chuck Jones Films scored by Milt Franklyn Bugs Bunny films 1950s Warner Bros. animated short films Films with screenplays by Michael Maltese Films about computing UNIVAC Films produced by Edward Selzer 1950s English-language films English-language short films 1956 animated short films
To Hare Is Human
[ "Technology" ]
503
[ "Works about computing", "Films about computing" ]
13,451,543
https://en.wikipedia.org/wiki/Travelport
Travelport Worldwide Ltd provides distribution, technology, and payment solutions for the travel and tourism industry. It is the smallest, by revenue, of the top three global distribution systems (GDS) after Amadeus IT Group and Sabre Corporation. The company also provides IT services to airlines, such as shopping, ticketing, and departure control. History The company was formed by Cendant in 2001 following its acquisitions of Galileo GDS for $2.9 billion and CheapTickets for $425 million. In 2004, the company acquired Orbitz for $1.25 billion and Flairview Travel for $88 million. In 2005, the company acquired eBookers for $350 million and Gullivers Travel Associates for $1.1 billion. In August 2006, Cendant sold Orbitz and Galileo to The Blackstone Group for $4.3 billion, forming Travelport. In August 2007, Travelport acquired Worldspan for $1.4 billion. In July 2007, the company completed the partial corporate spin-off of Orbitz via an initial public offering. In May 2010, the company acquired Sprice.com. In 2011, the company sold Gullivers Travel Associates to Kuoni Travel for $720 million. On September 25, 2014, the company became a public company via an initial public offering on the New York Stock Exchange. In 2015, Travelport acquired Mobile Travel Technologies for ‎€55 million. On March 10, 2023, the company acquired Deem from Enterprise Holdings for an undisclosed amount. Awards In 2017, Travelport was the first GDS to be awarded the International Air Transport Association NDC (New Distribution Capability) Level 3 certification as an aggregator of travel content. In 2018, it became the first GDS operator to manage the live booking of flights using the NDC standard. Acquisition On May 30, 2019, the company was acquired by affiliates of Siris Capital Group and Evergreen Coast Capital, an affiliate of Elliott Management Corporation, for $4.4 billion. References External links 2006 mergers and acquisitions 2007 initial public offerings 2014 initial public offerings 2019 mergers and acquisitions Blackstone Inc. companies Companies based in Slough Companies formerly listed on the New York Stock Exchange Computer reservation systems Hospitality companies established in 2001 Privately held companies of England Travel technology
Travelport
[ "Technology" ]
454
[ "Computer reservation systems", "Computer systems" ]
13,452,645
https://en.wikipedia.org/wiki/Wick%20Buildings
Wick Buildings, Inc. is a manufacturer of shelter products, which have been offered since its founding in 1954. Wick Buildings are wood-frame structures covered with formed sheet steel. These structures serve a variety of building and shelter needs and desires, including commercial offices, mini-warehouses, churches, agricultural buildings and homes. Wick Buildings currently serves 37 states from Colorado to New York. History Wick Buildings, Inc. ("Wick") is an ESOP (employee stock ownership plan) Wisconsin company. Wick Buildings was founded in 1954 by John F. Wick, incorporated in 1958, and re-organized in March 2010. Through continued growth, Wick Buildings has become a major producer of shelter products. These shelter products include post-frame agricultural and commercial buildings. All Wick shelter products are sold through a network of independent builders. In the early days, these buildings were used primarily on farms in the Midwest as a replacement for the two-story dairy barn. Wick Buildings consist of a wood-frame structure covered with sheet steel panels. Clear span trusses (up to 100 feet in width) are fabricated in the plant. Siding and roofing are formed and cut from rolled steel at Wick's production facilities. These components, together with the necessary pre-cut lumber, doors and trim material, are loaded on semi-trailers and delivered to the buyer's site. The buildings are either assembled on site by company personnel or the crews of Wick Buildings' independent contractors ("Builders"). With the sale of over 75,000 buildings since its founding in 1954, Wick Buildings is one of the nation's largest producers of post-frame buildings. These buildings include, but are not limited to, agricultural and suburban buildings, barndominiums, equestrian facilities, pole barns and pole barn homes. Wick's post-frame buildings are marketed and sold under the "Wick Buildings" trademark. About John Wick - Founder of Wick Buildings John F. Wick is a Wisconsin native and graduate of the University of Wisconsin–Madison. With a background in agriculture and advanced schooling in business finance and civil engineering, John Wick started the business with the sale and construction of post-frame metal buildings (pole buildings) out of Mazomanie, Wisconsin. The production facility and North American headquarters for Wick Buildings and products are located at the Wick Manufacturing Complex in Mazomanie, Wisconsin (25 miles west of Madison, Wisconsin, off Highway 14 and on Walter Road in Mazomanie, Wisconsin). In pop culture Derek Kolstad, grandson of the company's founder John F. Wick and writer for the 2014 American neo-noir action thriller film John Wick, named the film and its eponymous character after his grandfather. John F. Wick stated, "I was tickled by Derek using my name for a movie, and the hit man character was frosting on the cake." See also Wick Buildings Web Site Wick Buildings Facebook Page Wick Building Pinterest Page References External links National Frame Builders Association web site Property Maintenance Guide Barndominium Plans Building engineering organizations Employee-owned companies of the United States
Wick Buildings
[ "Engineering" ]
649
[ "Building engineering", "Building engineering organizations" ]
13,452,936
https://en.wikipedia.org/wiki/International%20Journal%20of%20Systematic%20and%20Evolutionary%20Microbiology
The International Journal of Systematic and Evolutionary Microbiology is a peer-reviewed scientific journal covering research in the field of microbial systematics that was established in 1951. Its scope covers the taxonomy, nomenclature, identification, characterisation, culture preservation, phylogeny, evolution, and biodiversity of all microorganisms, including prokaryotes, yeasts and yeast-like organisms, protozoa and algae. The journal is currently published monthly by the Microbiology Society. An official publication of the International Committee on Systematics of Prokaryotes (ICSP) and International Union of Microbiological Societies (Bacteriology and Applied Microbiology Division), the journal is the single official international forum for the publication of new species names for prokaryotes. In addition to research papers, the journal also publishes the minutes of meetings of the ICSP and its various subcommittees. Background and history From the first identification of a bacterial species in 1872, microbial species were named according to the binomial nomenclature, based on largely subjective descriptive characteristics. By the end of the 19th century, however, it was clear that this nomenclature and classification system required reform. Although several different comprehensive nomenclature systems were invented (most notably, that described in Bergey's Manual of Determinative Bacteriology, first published in 1923), none gained international recognition. In 1930, a single international body, now named the International Committee on Systematics of Prokaryotes (ICSP), was established to oversee all aspects of prokaryotic nomenclature. Work began in 1936 on drafting a Code of Bacteriological Nomenclature, the first version of which was approved in 1947. In 1950, at the 5th International Congress for Microbiology, a journal was established to disseminate the committee's conclusions to the microbiological community. It first appeared the following year under the title of International Bulletin of Bacteriological Nomenclature and Taxonomy. In 1980, the ICSP published an exhaustive list of all existing bacterial species considered valid in the Approved Lists of Bacterial Names. Thereafter, the committee's Code required all new names to be either published or indexed in its journal to be deemed valid. The journal was at first published quarterly by Iowa State College Press, with publication later increasing to bimonthly. In 1966, the journal was renamed the International Journal of Systematic Bacteriology. For decades, the journal's cover quoted the Danish naturalist Otto Friedrich Müller: "the sure and definite determination (of species of bacteria) requires so much time, so much acumen of eye and judgement, so much of perseverance and patience that there is hardly anything else so difficult." Between 1971 and the end of 1997, the journal was published by the American Society for Microbiology. Publication moved to the United Kingdom in 1998, the journal being taken over by the Society for General Microbiology, in conjunction with Cambridge University Press. The title was changed to International Journal of Systematic and Evolutionary Microbiology in 2000, to reflect the broadened focus of the journal. A major redesign brought the journal into line with the three other society journals in 2003, and at the same date the printer/typesetter changed to the Charlesworth Group. The frequency increased to monthly in 2006. 
Role in nomenclature validation The journal publishes research papers establishing novel prokaryotic names, which are summarized in a notification list. Each monthly issue also contains a compilation of validated new names (the validation list) that have been previously published in other scientific journals or books. Since August 2002, publications relating to new bacterial taxa and validation of publication elsewhere have both required type strains to have been deposited at two recognised public collections in different countries. As of 2007, the journal has officially validated around 6500 species and 1500 genera. It was estimated in 2004 that over 300 new names had been published but not validated. Modern journal As of 2017, the editor-in-chief is Martha E. Trujillo (University of Salamanca). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.8. References External links Microbiology journals Delayed open access journals Academic journals established in 1951 Monthly journals English-language journals Academic journals published by learned and professional societies Prokaryote taxonomy Microbiology Society academic journals
International Journal of Systematic and Evolutionary Microbiology
[ "Biology" ]
865
[ "Prokaryotes", "Taxonomy (biology)", "Prokaryote taxonomy" ]
13,454,394
https://en.wikipedia.org/wiki/The%20Planck%20Dive
"The Planck Dive" is a science fiction novelette by Australian writer Greg Egan, published in 1998. It was nominated for the 1999 Hugo Award for Best Novelette. Plot summary The story is set in the polis known as Cartan Null, where five explorers are preparing to send cloned copies of themselves on a scientific journey into a black hole. As they are about to make the dive a biographer from Earth and his daughter arrive with intentions of writing their story. Publication history After the story's initial publication in Isaac Asimov's Science Fiction Magazine in February 1998 it was included in the author's short story collection Luminous later in 1998. See also Diaspora References External links The Planck Dive - freely downloadable from the author's website. Australian science fiction short stories 1998 short stories Works originally published in Asimov's Science Fiction Short stories by Greg Egan Fiction about black holes Fiction about cloning
The Planck Dive
[ "Physics" ]
183
[ "Black holes", "Unsolved problems in physics", "Fiction about black holes" ]
13,454,849
https://en.wikipedia.org/wiki/MUMmer
MUMmer is a bioinformatics software system for sequence alignment. It is based on the suffix tree data structure. It has been used for comparing different genome assemblies to one another, which allows scientists to determine how a genome has changed. The acronym "MUMmer" comes from "Maximal Unique Matches", or MUMs. The original algorithms in the MUMmer software package were designed by Art Delcher, Simon Kasif and Steven Salzberg. MUMmer was the first whole-genome comparison system developed in bioinformatics. It was originally applied to the comparison of two related strains of bacteria. The MUMmer software is open source. The system is maintained primarily by Steven Salzberg and Arthur Delcher at the Center for Computational Biology at Johns Hopkins University. MUMmer is a highly cited bioinformatics system in the scientific literature. According to Google Scholar, as of early 2013 the original MUMmer paper (Delcher et al., 1999) has been cited 691 times; the MUMmer 2 paper (Delcher et al., 2002) has been cited 455 times; and the MUMmer 3.0 article (Kurtz et al., 2004) has been cited 903 times. Overview MUMmer is a fast algorithm used for the rapid alignment of entire genomes. It has been released in four versions. Versions of MUMmer MUMmer1 MUMmer1, or just MUMmer, consists of three parts: the first is the construction of a suffix tree (to find MUMs), the second is a longest increasing subsequence or longest common subsequence computation (to order the MUMs), and the last is local alignment to close gaps. Interruptions between aligned MUMs are known as gaps, which other alignment algorithms then fill. The gaps fall into the following four classes: An SNP – a single character differs between the two sequences. An insertion – a subsequence appears in only one of the two sequences, corresponding to an empty gap in the other sequence. A highly polymorphic region – a subsequence in which every single character differs between the two sequences. A repeat – a repetition of a sequence; since MUMs must be unique, such a gap can contain one repetition of one of the MUMs. MUMmer 2 This algorithm was redesigned to require less memory and increase speed and accuracy. It also allows for the alignment of bigger genomes. The main improvement was a reduction in the memory used by the suffix tree, by employing the representation created by Kurtz. MUMmer 3 According to Stefan Kurtz and his teammates, "the most significant technical improvement in MUMmer 3.0 is a complete rewrite of the suffix-tree code, based on the compact suffix-tree representation of" the tree described in the article "Reducing the space requirement of suffix trees". MUMmer 4 According to Guillaume and his team, there are further improvements in the implementation and also innovation with query parallelism. "MUMmer4 now includes options to save and load the suffix array for a given reference." This allows the suffix array to be built once and reloaded from disk for later runs. Software - Open Source MUMmer is open-source software and can be accessed online. Related Sequence Alignments There are other sequence alignment tools and methods: Edit distance BLAST Bowtie BWA Blat Mauve LASTZ References External links MUMmer home page MUMmer2 Book MUMmer Software MUMmer3 MUMmer1 MUMmer4 Bioinformatics software
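The MUM-then-chain idea is small enough to sketch directly. The toy code below (illustrative only: a quadratic brute-force scan stands in for MUMmer's linear-time suffix tree, and the two sequences are made up) finds maximal matches that occur exactly once in each sequence, then chains them with a longest-increasing-subsequence pass as in MUMmer1.

```python
# Toy MUM finder and chaining step (illustrative only: a quadratic brute-force
# scan stands in for MUMmer's linear-time suffix tree, and the two sequences
# are made up). A MUM is a maximal match occurring exactly once in each input.
def maximal_unique_matches(a, b, min_len=3):
    mums = []
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1  # extend the match to the right as far as possible
            if k < min_len:
                continue
            if i > 0 and j > 0 and a[i - 1] == b[j - 1]:
                continue  # not left-maximal; a longer match covers this one
            s = a[i:i + k]
            if a.count(s) == 1 and b.count(s) == 1:  # unique in both inputs
                mums.append((i, j, k))
    return mums

def chain(mums):
    # order by position in a, then keep a longest increasing subsequence of
    # positions in b (O(n^2) DP): the consistent backbone MUMmer aligns around
    if not mums:
        return []
    mums.sort()
    best = [1] * len(mums)
    prev = [-1] * len(mums)
    for x in range(len(mums)):
        for y in range(x):
            if mums[y][1] < mums[x][1] and best[y] + 1 > best[x]:
                best[x], prev[x] = best[y] + 1, y
    out, x = [], max(range(len(mums)), key=best.__getitem__)
    while x != -1:
        out.append(mums[x])
        x = prev[x]
    return out[::-1]

ref, qry = "acgattaccgtgcat", "acgtttaccgagcat"
print(chain(maximal_unique_matches(ref, qry)))  # [(0, 0, 3), (4, 4, 6), (11, 11, 4)]
```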
MUMmer
[ "Biology" ]
746
[ "Bioinformatics", "Bioinformatics software" ]
13,454,888
https://en.wikipedia.org/wiki/Strong%20focusing
In accelerator physics strong focusing or alternating-gradient focusing is the principle that, using sets of multiple electromagnets, it is possible to make a particle beam simultaneously converge in both directions perpendicular to the direction of travel. By contrast, weak focusing is the principle that nearby circles, described by charged particles moving in a uniform magnetic field, only intersect once per revolution. Earnshaw's theorem shows that simultaneous focusing by a single magnet in both directions transverse to the beam axis is impossible: a magnet which focuses in one direction will defocus in the perpendicular direction. However, the iron "poles" of a cyclotron, or two or more spaced quadrupole magnets (arranged in quadrature), can alternately focus horizontally and vertically, and the net overall effect of a combination of these can be adjusted to focus the beam in both directions. Strong focusing was first conceived by Nicholas Christofilos in 1949 but not published (Christofilos opted instead to patent his idea). In 1952, the strong focusing principle was independently developed by Ernest Courant, M. Stanley Livingston, Hartland Snyder and J. Blewett at Brookhaven National Laboratory, who later acknowledged the priority of Christofilos' idea. The advantages of strong focusing were then quickly realised, and deployed on the Alternating Gradient Synchrotron. Courant and Snyder found that the net effect of alternating the field gradient was that both the vertical and horizontal focusing of protons could be made strong at the same time, allowing tight control of proton paths in the machine. This increased beam intensity while reducing the overall construction cost of a more powerful accelerator. The theory revolutionised cyclotron design and permitted very high field strengths to be employed, while massively reducing the size of the magnets needed by minimising the size of the beam. Most particle accelerators today use the strong-focusing principle. Multipole magnets Modern systems often use multipole magnets, such as quadrupole and sextupole magnets, to focus the beam down, as magnets give a more powerful deflection effect than earlier electrostatic systems at high beam kinetic energies. The multipole magnets refocus the beam after each deflection section, as deflection sections have a defocusing effect that can be countered with a convergent magnet 'lens'. This can be shown schematically as a sequence of divergent and convergent lenses. The quadrupoles are often laid out in what are called FODO patterns (where F focusses vertically and defocusses horizontally, D focusses horizontally and defocusses vertically, and O is a space or deflection magnet). Following the beam particles in their trajectories through the focusing arrangement, an oscillating pattern would be seen. Mathematical modeling The action upon a set of charged particles by a set of linear magnets (i.e. only dipoles, quadrupoles and the field-free drift regions between them) can be expressed as matrices which can be multiplied together to give their net effect, using ray transfer matrix analysis. Higher-order terms such as sextupoles, octupoles etc. may be treated by a variety of methods, depending on the phenomena of interest. See also Electron gun – uses cylindrically symmetric fields such as provided by a Wehnelt cylinder to focus an electron beam Maglev – strong focusing has also been suggested for use in maglev References External links Lawrence Berkeley National Laboratory: World of Beams Accelerator physics
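The ray-transfer-matrix treatment of one FODO cell fits in a few lines. A minimal sketch, assuming thin-lens quadrupoles and arbitrary test values for the drift length L and focal length f: it multiplies the 2x2 matrices for a cell in each transverse plane and applies the standard stability criterion |trace(M)| < 2, showing both planes stable at once.

```python
# Ray-transfer-matrix sketch of one FODO cell with thin-lens quadrupoles
# (L and f are arbitrary test values). A quad of focal length f has matrix
# [[1, 0], [-1/f, 1]] in its focusing plane and +1/f in the other; a drift
# of length L is [[1, L], [0, 1]]. One cell is stable when |trace(M)| < 2.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

L, f = 1.0, 0.8  # drift length and focal length; note f > L/2

# rightmost matrix acts first: F quad, drift, D quad, drift (horizontal plane)
Mx = drift(L) @ quad(-f) @ drift(L) @ quad(f)
# in the vertical plane the same magnets act with opposite sign
My = drift(L) @ quad(f) @ drift(L) @ quad(-f)

for plane, M in (("horizontal", Mx), ("vertical", My)):
    tr = float(np.trace(M))
    print(f"{plane}: trace = {tr:.4f}, stable = {abs(tr) < 2}")  # both stable
```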
Strong focusing
[ "Physics" ]
715
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
13,455,085
https://en.wikipedia.org/wiki/Antec
Antec, Inc. is a Taiwanese manufacturer of personal computer (PC) components and consumer tech products. Antec's principal products are computer cases and power supplies. Antec also offers PC cooling products, notebook accessories, and previously offered a line of Antec Mobile Products (A.M.P.) which included Bluetooth speakers, headphones, portable batteries, and charging hubs. Antec joined the handheld PC gaming market globally with the CORE HS, a 32 GB/2 TB AMD 7840U-based Windows 11 device with a 1080p screen and a gaming controller built into the device. Originally founded in 1986 in Fremont, California, the company is now headquartered in Taipei City, with additional offices in Rotterdam, Beijing, and Fremont, California. Antec pioneered a number of innovations in the PC case industry, including the switch from beige to black exterior color, and the first cases specifically designed for noise reduction with the Sonata and P180. In 2011, Antec was purchased by Ming-Jong Technologies Ltd. of Taiwan, and the combined company adopted the Antec name. Antec products are sold in over 40 countries through its online retail platform, Amazon, and distribution partners. The company is publicly traded on the Taipei stock exchange under ticker 6276. Products Computer enclosures Computer power supplies Computer cooling products Antec Mobile Products (a.m.p) In October 2012, Antec launched the now-defunct Antec Mobile Products (A.M.P.), a wholly owned subsidiary of Antec. The product range included a line of Bluetooth audio devices, USB-powered battery packs and mobile chargers. Console Gaming Antec entered the console gaming sector with the X-1 Cooler for the Xbox One in 2015, which received positive reviews. The cooler prevented the console from overheating while in use and helped preserve the lifespan of the console. References External links Official website Antec Mobile Products website Manufacturing companies established in 1986 Computer companies of Taiwan Computer enclosure companies Computer hardware companies Computer power supply unit manufacturers 1986 establishments in California Computer hardware cooling Companies listed on the Taipei Exchange
Antec
[ "Technology" ]
428
[ "Computer hardware companies", "Computers" ]
13,455,478
https://en.wikipedia.org/wiki/Animal%20coloration
Animal colouration is the general appearance of an animal resulting from the reflection or emission of light from its surfaces. Some animals are brightly coloured, while others are hard to see. In some species, such as the peafowl, the male has strong patterns, conspicuous colours and is iridescent, while the female is far less visible. There are several separate reasons why animals have evolved colours. Camouflage enables an animal to remain hidden from view. Animals use colour to advertise services such as cleaning to animals of other species; to signal their sexual status to other members of the same species; and in mimicry, taking advantage of the warning coloration of another species. Some animals use flashes of colour to divert attacks by startling predators. Zebras may possibly use motion dazzle, confusing a predator's attack by moving a bold pattern rapidly. Some animals are coloured for physical protection, with pigments in the skin to protect against sunburn, while some frogs can lighten or darken their skin for temperature regulation. Finally, animals can be coloured incidentally. For example, blood is red because the haem pigment needed to carry oxygen is red. Animals coloured in these ways can have striking natural patterns. Animals produce colour in both direct and indirect ways. Direct production occurs through the presence of visible coloured cells known as pigments, which are particles of coloured material such as freckles. Indirect production occurs by virtue of cells known as chromatophores, which are pigment-containing cells such as hair follicles. The distribution of the pigment particles in the chromatophores can change under hormonal or neuronal control. For fishes it has been demonstrated that chromatophores may respond directly to environmental stimuli like visible light, UV-radiation, temperature, pH, chemicals, etc. Colour change helps individuals become more or less visible and is important in agonistic displays and in camouflage. Some animals, including many butterflies and birds, have microscopic structures in scales, bristles or feathers which give them brilliant iridescent colours. Other animals including squid and some deep-sea fish can produce light, sometimes of different colours. Animals often use two or more of these mechanisms together to produce the colours and effects they need. History Animal coloration has been a topic of interest and research in biology for centuries. In the classical era, Aristotle recorded that the octopus was able to change its coloration to match its background, and when it was alarmed. In his 1665 book Micrographia, Robert Hooke describes the "fantastical" (structural, not pigment) colours of the Peacock's feathers: According to Charles Darwin's 1859 theory of natural selection, features such as coloration evolved by providing individual animals with a reproductive advantage. For example, individuals with slightly better camouflage than others of the same species would, on average, leave more offspring. In his Origin of Species, Darwin wrote: Henry Walter Bates's 1863 book The Naturalist on the River Amazons describes his extensive studies of the insects in the Amazon basin, and especially the butterflies. He discovered that apparently similar butterflies often belonged to different families, with a harmless species mimicking a poisonous or bitter-tasting species to reduce its chance of being attacked by a predator, in the process now called after him, Batesian mimicry. 
Edward Bagnall Poulton's strongly Darwinian 1890 book The Colours of Animals, their meaning and use, especially considered in the case of insects argued the case for three aspects of animal coloration that are broadly accepted today but were controversial or wholly new at the time. It strongly supported Darwin's theory of sexual selection, arguing that the obvious differences between male and female birds such as the argus pheasant were selected by the females, pointing out that bright male plumage was found only in species "which court by day". The book introduced the concept of frequency-dependent selection, as when edible mimics are less frequent than the distasteful models whose colours and patterns they copy. In the book, Poulton also coined the term aposematism for warning coloration, which he identified in widely differing animal groups including mammals (such as the skunk), bees and wasps, beetles, and butterflies. Frank Evers Beddard's 1892 book, Animal Coloration, acknowledged that natural selection existed but examined its application to camouflage, mimicry and sexual selection very critically. The book was in turn roundly criticised by Poulton. Abbott Handerson Thayer's 1909 book Concealing-Coloration in the Animal Kingdom, completed by his son Gerald H. Thayer, argued correctly for the widespread use of crypsis among animals, and in particular described and explained countershading for the first time. However, the Thayers spoilt their case by arguing that camouflage was the sole purpose of animal coloration, which led them to claim that even the brilliant pink plumage of the flamingo or the roseate spoonbill was cryptic—against the momentarily pink sky at dawn or dusk. As a result, the book was mocked by critics including Theodore Roosevelt as having "pushed [the "doctrine" of concealing coloration] to such a fantastic extreme and to include such wild absurdities as to call for the application of common sense thereto." Hugh Bamford Cott's 500-page book Adaptive Coloration in Animals, published in wartime 1940, systematically described the principles of camouflage and mimicry. The book contains hundreds of examples, over a hundred photographs and Cott's own accurate and artistic drawings, and 27 pages of references. Cott focussed especially on "maximum disruptive contrast", the kind of patterning used in military camouflage such as disruptive pattern material. Indeed, Cott describes such applications: Animal coloration provided important early evidence for evolution by natural selection, at a time when little direct evidence was available. Evolutionary reasons for animal coloration Camouflage One of the pioneers of research into animal coloration, Edward Bagnall Poulton classified the forms of protective coloration, in a way which is still helpful. He described: protective resemblance; aggressive resemblance; adventitious protection; and variable protective resemblance. These are covered in turn below. Protective resemblance is used by prey to avoid predation. It includes special protective resemblance, now called mimesis, where the whole animal looks like some other object, for example when a caterpillar resembles a twig or a bird dropping. In general protective resemblance, now called crypsis, the animal's texture blends with the background, for example when a moth's colour and pattern blend in with tree bark. Aggressive resemblance is used by predators or parasites. 
In special aggressive resemblance, the animal looks like something else, luring the prey or host to approach, for example when a flower mantis resembles a particular kind of flower, such as an orchid. In general aggressive resemblance, the predator or parasite blends in with the background, for example when a leopard is hard to see in long grass. For adventitious protection, an animal uses materials such as twigs, sand, or pieces of shell to conceal its outline, for example when a caddis fly larva builds a decorated case, or when a decorator crab decorates its back with seaweed, sponges and stones. In variable protective resemblance, an animal such as a chameleon, flatfish, squid or octopus changes its skin pattern and colour using special chromatophore cells to resemble whatever background it is currently resting on (as well as for signalling). The main mechanisms to create the resemblances described by Poulton – whether in nature or in military applications – are crypsis, blending into the background so as to become hard to see (this covers both special and general resemblance); disruptive patterning, using colour and pattern to break up the animal's outline, which relates mainly to general resemblance; mimesis, resembling other objects of no special interest to the observer, which relates to special resemblance; countershading, using graded colour to create the illusion of flatness, which relates mainly to general resemblance; and counterillumination, producing light to match the background, notably in some species of squid. Countershading was first described by the American artist Abbott Handerson Thayer, a pioneer in the theory of animal coloration. Thayer observed that whereas a painter takes a flat canvas and uses coloured paint to create the illusion of solidity by painting in shadows, animals such as deer are often darkest on their backs, becoming lighter towards the belly, creating (as zoologist Hugh Cott observed) the illusion of flatness, and against a matching background, of invisibility. Thayer's observation "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and vice versa" is called Thayer's Law. Signalling Colour is widely used for signalling in animals as diverse as birds and shrimps. Signalling encompasses at least three purposes: advertising, to signal a capability or service to other animals, whether within a species or not sexual selection, where members of one sex choose to mate with suitably coloured members of the other sex, thus driving the development of such colours warning, to signal that an animal is harmful, for example can sting, is poisonous or is bitter-tasting. Warning signals may be mimicked truthfully or untruthfully. Advertising services Advertising coloration can signal the services an animal offers to other animals. These may be of the same species, as in sexual selection, or of different species, as in cleaning symbiosis. Signals, which often combine colour and movement, may be understood by many different species; for example, the cleaning stations of the banded coral shrimp Stenopus hispidus are visited by different species of fish, and even by reptiles such as hawksbill sea turtles. Sexual selection Darwin observed that the males of some species, such as birds-of-paradise, were very different from the females. Darwin explained such male-female differences in his theory of sexual selection in his book The Descent of Man. 
Once the females begin to select males according to any particular characteristic, such as a long tail or a coloured crest, that characteristic is emphasized more and more in the males. Eventually all the males will have the characteristics that the females are sexually selecting for, as only those males can reproduce. This mechanism is powerful enough to create features that are strongly disadvantageous to the males in other ways. For example, some male birds-of-paradise have wing or tail streamers that are so long that they impede flight, while their brilliant colours may make the males more vulnerable to predators. In the extreme, sexual selection may drive species to extinction, as has been argued for the enormous horns of the male Irish elk, which may have made it difficult for mature males to move and feed. Different forms of sexual selection are possible, including rivalry among males, and selection of females by males. Warning Warning coloration (aposematism) is effectively the "opposite" of camouflage, and a special case of advertising. Its function is to make the animal, for example a wasp or a coral snake, highly conspicuous to potential predators, so that it is noticed, remembered, and then avoided. As Peter Forbes observes, "Human warning signs employ the same colours – red, yellow, black, and white – that nature uses to advertise dangerous creatures." Warning colours work by being associated by potential predators with something that makes the warning coloured animal unpleasant or dangerous. This can be achieved in several ways, by being any combination of: distasteful, for example caterpillars, pupae and adults of the cinnabar moth, the monarch and the variable checkerspot butterfly have bitter-tasting chemicals in their blood. One monarch contains more than enough digitalis-like toxin to kill a cat, while a monarch extract makes starlings vomit. foul-smelling, for example the skunk can eject a liquid with a long-lasting and powerful odour aggressive and able to defend itself, for example honey badgers. venomous, for example a wasp can deliver a painful sting, while snakes like the viper or coral snake can deliver a fatal bite. Warning coloration can succeed either through inborn behaviour (instinct) on the part of potential predators, or through a learned avoidance. Either can lead to various forms of mimicry. Experiments show that avoidance is learned in birds, mammals, lizards, and amphibians, but that some birds such as great tits have inborn avoidance of certain colours and patterns such as black and yellow stripes. Mimicry Mimicry means that one species of animal resembles another species closely enough to deceive predators. To evolve, the mimicked species must have warning coloration, because appearing to be bitter-tasting or dangerous gives natural selection something to work on. Once a species has a slight, chance, resemblance to a warning coloured species, natural selection can drive its colours and patterns towards more perfect mimicry. There are numerous possible mechanisms, of which the best known are: Batesian mimicry, where an edible species resembles a distasteful or dangerous species. This is most common in insects such as butterflies. A familiar example is the resemblance of harmless hoverflies (which have no sting) to bees. Müllerian mimicry, where two or more distasteful or dangerous animal species resemble each other. This is most common among insects such as wasps and bees (hymenoptera). 
Batesian mimicry was first described by the pioneering naturalist Henry W. Bates. When an edible prey animal comes to resemble, even slightly, a distasteful animal, natural selection favours those individuals that even very slightly better resemble the distasteful species. This is because even a small degree of protection reduces predation and increases the chance that an individual mimic will survive and reproduce. For example, many species of hoverfly are coloured black and yellow like bees, and are in consequence avoided by birds (and people). Müllerian mimicry was first described by the pioneering naturalist Fritz Müller. When a distasteful animal comes to resemble a more common distasteful animal, natural selection favours individuals that even very slightly better resemble the target. For example, many species of stinging wasp and bee are similarly coloured black and yellow. Müller's explanation of the mechanism for this was one of the first uses of mathematics in biology. He argued that a predator, such as a young bird, must attack at least one insect, say a wasp, to learn that the black and yellow colours mean a stinging insect. If bees were differently coloured, the young bird would have to attack one of them also. But when bees and wasps resemble each other, the young bird need only attack one from the whole group to learn to avoid all of them. So, fewer bees are attacked if they mimic wasps; the same applies to wasps that mimic bees. The result is mutual resemblance for mutual protection. Distraction Startle Some animals, such as many moths, mantises and grasshoppers, have a repertoire of threatening or startling behaviour, such as suddenly displaying conspicuous eyespots or patches of bright and contrasting colours, so as to scare off or momentarily distract a predator. This gives the prey animal an opportunity to escape. The behaviour is deimatic (startling) rather than aposematic as these insects are palatable to predators, so the warning colours are a bluff, not an honest signal. Motion dazzle Some prey animals such as zebra are marked with high-contrast patterns which possibly help to confuse their predators, such as lions, during a chase. The bold stripes of a herd of running zebra have been claimed to make it difficult for predators to estimate the prey's speed and direction accurately, or to identify individual animals, giving the prey an improved chance of escape. Since dazzle patterns (such as the zebra's stripes) make animals harder to catch when moving, but easier to detect when stationary, there is an evolutionary trade-off between dazzle and camouflage. There is evidence that the zebra's stripes could provide some protection from flies and biting insects. Physical protection Many animals have dark pigments such as melanin in their skin, eyes and fur to protect themselves against sunburn (damage to living tissues caused by ultraviolet light). Another example of photoprotective pigments is the GFP-like proteins in some corals. In some jellyfish, rhizostomins have also been hypothesized to protect against ultraviolet damage. Temperature regulation Some frogs, such as Bokermannohyla alvarengai, which basks in sunlight, lighten their skin colour when hot (and darken it when cold), making the skin reflect more heat and so avoiding overheating. Incidental coloration Some animals are coloured purely incidentally because their blood contains pigments. 
For example, amphibians like the olm that live in caves may be largely colourless as colour has no function in that environment, but they show some red because of the haem pigment in their red blood cells, needed to carry oxygen. They also have a little orange coloured riboflavin in their skin. Human albinos and people with fair skin have a similar colour for the same reason. Mechanisms of colour production in animals Animal coloration may be the result of any combination of pigments, chromatophores, structural coloration and bioluminescence. Coloration by pigments Pigments are coloured chemicals (such as melanin) in animal tissues. For example, the Arctic fox has a white coat in winter (containing little pigment), and a brown coat in summer (containing more pigment), an example of seasonal camouflage (a polyphenism). Many animals, including mammals, birds, and amphibians, are unable to synthesize most of the pigments that colour their fur or feathers, other than the brown or black melanins that give many mammals their earth tones. For example, the bright yellow of an American goldfinch, the startling orange of a juvenile red-spotted newt, the deep red of a cardinal and the pink of a flamingo are all produced by carotenoid pigments synthesized by plants. In the case of the flamingo, the bird eats pink shrimps, which are themselves unable to synthesize carotenoids. The shrimps derive their body colour from microscopic red algae, which like most plants are able to create their own pigments, including both carotenoids and (green) chlorophyll. Animals that eat green plants do not become green, however, as chlorophyll does not survive digestion. Variable coloration by chromatophores Chromatophores are special pigment-containing cells that may change their size, but more often retain their original size while allowing the pigment within them to be redistributed, thus varying the colour and pattern of the animal. Chromatophores may respond to hormonal and/or neuronal control mechanisms, but direct responses to stimulation by visible light, UV radiation, temperature, pH changes, chemicals, etc. have also been documented. The voluntary control of chromatophores is known as metachrosis. For example, cuttlefish and chameleons can rapidly change their appearance, both for camouflage and for signalling, as Aristotle first noted over 2000 years ago. Cephalopod molluscs like squid can voluntarily change their coloration by contracting or relaxing small muscles around their chromatophores. The energy cost of the complete activation of the chromatophore system is very high, nearly equalling all the energy used by an octopus at rest. Amphibians such as frogs have three kinds of star-shaped chromatophore cells in separate layers of their skin. The top layer contains 'xanthophores' with orange, red, or yellow pigments; the middle layer contains 'iridophores' with a silvery light-reflecting pigment; while the bottom layer contains 'melanophores' with dark melanin. Structural coloration While many animals are unable to synthesize carotenoid pigments to create red and yellow surfaces, the green and blue colours of bird feathers and insect carapaces are usually not produced by pigments at all, but by structural coloration.
Structural coloration means the production of colour by microscopically-structured surfaces fine enough to interfere with visible light, sometimes in combination with pigments: for example, peacock tail feathers are pigmented brown, but their structure makes them appear blue, turquoise and green. Structural coloration can produce the most brilliant colours, often iridescent. For example, the blue/green gloss on the plumage of birds such as ducks, and the purple/blue/green/red colours of many beetles and butterflies are created by structural coloration. Animals use several methods to produce structural colour, as described in the table. Bioluminescence Bioluminescence is the production of light, such as by the photophores of marine animals, and the tails of glow-worms and fireflies. Bioluminescence, like other forms of metabolism, releases energy derived from the chemical energy of food. A pigment, luciferin, is catalysed by the enzyme luciferase to react with oxygen, releasing light. Comb jellies such as Euplokamis are bioluminescent, creating blue and green light, especially when stressed; when disturbed, they secrete an ink which luminesces in the same colours. Since comb jellies are not very sensitive to light, their bioluminescence is unlikely to be used to signal to other members of the same species (e.g. to attract mates or repel rivals); more likely, the light helps to distract predators or parasites. Some species of squid have light-producing organs (photophores) scattered all over their undersides that create a sparkling glow. This provides counter-illumination camouflage, preventing the animal from appearing as a dark shape when seen from below. Some anglerfish of the deep sea, where it is too dark to hunt by sight, contain symbiotic bacteria in the 'bait' on their 'fishing rods'. These emit light to attract prey. See also Albinism in biology Chromatophore Dog coat colours and patterns Cat coat genetics Deception in animals Equine coat colour Equine coat colour genetics Roan (colour) Fish coloration References Sources Cott, Hugh Bamford (1940). Adaptive Coloration in Animals. Methuen, London. Forbes, Peter (2009). Dazzled and Deceived: Mimicry and Camouflage. Yale, New Haven and London. External links Theme issue 'Animal coloration: production, perception, function and application' (Royal Society) NatureWorks: Coloration (for children and teachers) HowStuffWorks: How Animal Camouflage Works University of British Columbia: Sexual Selection (a lecture for Zoology students) Nature's Palette: How animals, including humans, produce colours Zoology Evolution of animals Mimicry Warning coloration Camouflage
Animal coloration
[ "Biology" ]
4,680
[ "Camouflage", "Animals", "Biological defense mechanisms", "Mimicry", "Zoology", "Evolution of animals" ]
13,455,599
https://en.wikipedia.org/wiki/International%20Eugenics%20Conference
Three International Eugenics Congresses took place between 1912 and 1932 and were the global venue for scientists, politicians, and social leaders to plan and discuss the application of programs to improve human heredity in the early twentieth century. Background Assessing the work of Charles Darwin, and pondering the experience of animal breeders and horticulturists, Francis Galton wondered if the human genetic make-up could be improved: “The question was then forced upon me - Could not the race of men be similarly improved? Could not the undesirables be got rid of and the desirables multiplied?” This concept of eugenics - a term he introduced - soon won many adherents, notably in North America and England. The first practical steps were taken in the United States. The government under Theodore Roosevelt created a national Heredity Commission that was charged to investigate the genetic heritage of the country and to “(encourage) the increase of families of good blood and (discourage) the vicious elements in the cross-bred American civilization”. Charles Davenport, supported by the Carnegie Institution, established the Eugenics Record Office. Further significant funding for the eugenics movement came from E. H. Harriman and Vernon Kellogg. In an effort to eradicate unfit offspring, sterilization laws were passed, the first one in Indiana (1907), then in other states, many strictly for eugenic reasons, "to better the race," allowing for compulsory sterilization. Other eugenic laws limited the right to marry. The First International Eugenics Congress (1912) The First International Eugenics Congress took place in London on July 24–29, 1912. It was organized by the British Eugenics Education Society and dedicated to Galton, who had died the year prior. Major Leonard Darwin, the son of Charles Darwin, presided. The five-day meeting saw about 400 delegates at the University of London (Pearl, 1912). Luminaries included Winston Churchill, First Lord of the British Admiralty; Lord Alverstone, the Chief Justice; and Arthur Balfour; as well as the ambassadors of Norway, Greece, and France. In his opening address Darwin indicated that the introduction of principles of better breeding procedures for humans would require moral courage. The American exhibit was sponsored by the American Breeders' Association and demonstrated the incidence of hereditary defects in human pedigrees. A report by Bleeker van Wagenen presented information about American sterilization laws and propagated compulsory sterilization as the best method to cut off “defective germ-plasm”. In the final address, Major Darwin extolled eugenics as the practical application of the principle of evolution. The Second International Eugenics Congress (1921) The second Congress, originally scheduled for New York in 1915, met at the American Museum of Natural History in New York on September 25–27, 1921 with Henry Fairfield Osborn presiding. Alexander Graham Bell was the honorary president. The State Department mailed the invitations around the world. Under American leadership and dominance - forty-one out of fifty-three scientific papers - the work of the eugenists disrupted by World War I in Europe was to resume. Delegates participated not only from Europe and North America, but also from Latin America (Mexico, Cuba, Venezuela, El Salvador, and Uruguay), and Asia (Japan, India, Siam).
The major guest speaker, Major Darwin, advocated eugenic measures that needed to be taken, namely the "elimination of the unfit", the discouragement of large families in the "ill-endowed", and the encouragement of large families in the "well-endowed". The Average Young American Male composite statue created by Jane Davenport Harris was exhibited during this congress and again at the Third as a visual representation of the degeneracy of the white male body that would continue if the advised eugenic measures were not taken. The Third International Eugenics Congress (1932) The third meeting was arranged at the American Museum of Natural History in New York City on August 22–23, 1932, dedicated to Mary Williamson Averell, who had provided significant financial support, and presided over by Davenport. Osborn's address emphasized birth selection over birth control as the method to better the offspring. F. Ramos from Cuba proposed that immigrants should be carefully checked for harmful traits, and suggested deportations of their descendants if inadmissible traits became apparent later. Major Darwin, now 82 years old, was unable to attend but sent a report presented by Ronald Fisher predicting the doom of civilization unless eugenic measures were implemented. Ernst Rüdin was unanimously elected president of the International Federation of Eugenics Organizations (IFEO). The congress published "A Decade of Progress in Eugenics", Scientific Papers of the Third International Congress of Eugenics. A Fourth International Eugenics Conference was not convened. The IFEO held two more international meetings, one at Zurich in 1934 and the last one at Scheveningen in 1936. In 1932, Hermann Joseph Muller gave a speech to the Third International Eugenics Congress, and stated "eugenics might yet perfect the human race but only in a society consciously organized for the common good." See also British Eugenics Society Eugenics in the United States Nazi eugenics References External links A Decade of Progress in Eugenics, Scientific Papers of the Third International Congress of Eugenics 1912 establishments in England 1932 disestablishments in New York (state) Recurring events established in 1912 Recurring events disestablished in 1932 20th-century conferences Eugenics organizations Bioethics Winston Churchill Arthur Balfour
International Eugenics Conference
[ "Technology" ]
1,128
[ "Bioethics", "Ethics of science and technology" ]
13,457,263
https://en.wikipedia.org/wiki/Wireless%20Session%20Protocol
Wireless Session Protocol (WSP) is an open standard for maintaining high-level wireless sessions. A session starts when the user connects to a URL and ends when the user leaves that URL. The session-wide properties are defined once at the beginning of the session, saving bandwidth compared with renegotiating them for each request. Session establishment avoids lengthy connection-setup procedures. WSP is based on HTTP 1.1 with a few enhancements. WSP provides the upper-level application layer of WAP with a consistent interface for two session services. The first is a connection-oriented service that operates above the transaction layer protocol WTP, and the second is a connectionless service that operates above a secure or non-secure datagram transport service. Therefore, WSP exists for two reasons: First, the connection mode enhances HTTP 1.1's performance over the wireless environment. Second, it provides a session layer so the whole WAP environment resembles the OSI Reference Model. References External links Open Mobile Alliance WSP on the Wireshark Wiki Open Mobile Alliance standards Session layer protocols Wireless Application Protocol
Wireless Session Protocol
[ "Technology" ]
227
[ "Computing stubs", "Wireless networking", "Wireless Application Protocol", "Computer network stubs" ]
2,212,195
https://en.wikipedia.org/wiki/Beta%20Draconis
Beta Draconis (β Draconis, abbreviated Beta Dra, β Dra) is a binary star system and the third-brightest star in the northern circumpolar constellation of Draco. The two components are designated Beta Draconis A (officially named Rastaban, the traditional name of the system) and B respectively. With a combined apparent visual magnitude of 2.79, it is bright enough to be easily seen with the naked eye. Based upon parallax measurements from the Hipparcos astrometry satellite, it lies at a distance of about from the Sun. The system is drifting closer with a radial velocity of −21 km/s. The binary system consists of a bright giant orbited by a dwarf companion once every four millennia or so. The companion is about 11 magnitudes fainter than the primary star, and the two are separated by . The spectrum of the primary, Beta Draconis A, matches a stellar classification of G2Ib-IIa, showing mixed features of a bright giant and a supergiant star, and is listed as a standard star for that spectral class. It is about 65 million years old and is currently undergoing its first convective dredge-up. Compared to the Sun, Beta Draconis A is an enormous star with six times the mass and roughly 40 times the radius. At this size, it is emitting about 950 times the luminosity of the Sun from its outer envelope at an effective temperature of 5,160 K, giving it the yellow hue of a G-type star. The star has a particularly strong chromospheric emission that is generating X-ray and far-UV radiation. There is a detectable magnetic field with a longitudinal field strength of . Beta Draconis lies on or near the Cepheid instability strip, yet only appears to be a microvariable with a range of about 1/100 of a magnitude. This variability was confirmed by Gabriel Cristian Neagu using data from the TESS and Hipparcos missions, and was reported to the AAVSO (American Association of Variable Star Observers) in the Variable Star Index. Nomenclature β Draconis (Latinised to Beta Draconis) is the system's Bayer designation. The designations of the two components as Beta Draconis A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). It bore the traditional name Rastaban, which has also been used for Gamma Draconis. This name, less commonly written Rastaben, derives from the Arabic phrase ra's ath-thu'ban "head of the serpent/dragon". It was also known as Asuia and Alwaid, the latter from the Arabic al-ʽawāʼidh "the old mother camels". In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Rastaban for the component Beta Draconis A on 21 August 2016 and it is now so included in the List of IAU-approved Star Names. Beta Draconis is part of the asterism of the Mother Camels (Arabic al'awa'id), along with Gamma Draconis (Eltanin), Mu Draconis (Erakis), Nu Draconis (Kuma) and Xi Draconis (Grumium), which was later known as the Quinque Dromedarii. In Chinese, (), meaning Celestial Flail, refers to an asterism consisting of Beta Draconis, Xi Draconis, Nu Draconis, Gamma Draconis and Iota Herculis. Consequently, the Chinese name for Beta Draconis itself is known as (, ). References External links G-type bright giants G-type supergiants Binary stars Draco (constellation) Draconis, Beta BD+52 2065 Draconis, 23 159181 085670 6536 Rastaban
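As a plausibility check on the quoted luminosity (our own back-of-envelope estimate, not part of the source), the Stefan–Boltzmann law with the stated radius and effective temperature, taking the solar effective temperature as roughly 5,772 K, gives:

```latex
% Luminosity from radius and effective temperature (Stefan-Boltzmann law):
\frac{L}{L_\odot}
  = \left(\frac{R}{R_\odot}\right)^{\!2}\left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{\!4}
  \approx 40^{2}\times\left(\frac{5160}{5772}\right)^{\!4}
  \approx 1.0\times 10^{3}
```

This is broadly consistent with the quoted ~950 solar luminosities, given that the radius is only "roughly" 40 times the Sun's.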
Beta Draconis
[ "Astronomy" ]
874
[ "Constellations", "Draco (constellation)" ]
2,212,334
https://en.wikipedia.org/wiki/Biscuit%20porcelain
Biscuit porcelain, bisque porcelain or bisque is unglazed, white porcelain treated as a final product, with a matte appearance and texture to the touch. It has been widely used in European pottery, mainly for sculptural and decorative objects that are not tableware and so do not need a glaze for protection. The term "biscuit" refers to any type of fired but unglazed pottery in the course of manufacture, but only in porcelain is biscuit or bisque a term for a final product. Unglazed earthenware as a final product is often called terracotta, and in stoneware equivalent unglazed wares (such as jasperware) are often called "dry-bodied". Many types of pottery, including most porcelain wares, have a glaze applied, either before a single firing, or at the biscuit stage, with a further firing. Small figurines and other decorative pieces have often been made in biscuit, as well as larger portrait busts and other sculptures; the appearance of biscuit is very similar to that of carved and smoothed marble, the traditional prestige material for sculpture in the West. It is hardly used in Chinese porcelain or that of other East Asian countries, but in Europe it became very popular for figures in the second half of the 18th century, as Neoclassicism dominated contemporary styles. It was first used at Vincennes porcelain in 1751 by Jean-Jacques Bachelier. Biscuit figures have to be free from the common small imperfections that a glaze and painted decoration could cover up, and were therefore usually more expensive than glazed ones. They are also more difficult to keep clean. A popular use for biscuit porcelain was the manufacture of bisque dolls in the 19th century, where the porcelain was typically tinted or painted in flesh tones. In the doll world, "bisque" is usually the term used, rather than "biscuit". Parian ware is a 19th-century type of biscuit. Lithophanes were normally made with biscuit. Colour Although the great majority of biscuit figures (other than dolls) are entirely in white, there are a number of ways of using colour in the technique. Jasperware, developed by Wedgwood in the 1770s and soon very popular all over Europe, is usually classed as stoneware rather than porcelain, but the style of using two contrasting colours of biscuit was sometimes used in porcelain. The Real Fábrica del Buen Retiro in Madrid made a porcelain room in the Casita del Principe, El Escorial, decorated with 234 plaques in the style, with a "Wedgwood blue" ground and the design in white biscuit porcelain in low relief. These were applied as sprigs, meaning that they were made separately as thin pieces and stuck to the main blue body before firing. The plaques are framed like paintings; they were made between 1790 and 1795. The figure by the same factory illustrated here uses elements modelled in a coloured paste, and is all biscuit. Biscuit porcelain could also be painted with unfired paint rather than the enamels normal overglaze decoration uses, the lack of a shiny surface giving a strikingly different effect in the best examples. This rare technique is called "coloured biscuit", and is found from the 19th century onwards. As with 18th-century pieces painted over the glaze, the paint may peel if not well looked after. A piece could be made with some areas left as biscuit while others are glazed and enamelled in the usual way. A Chelsea-Derby figure of George II of Great Britain (1773–1774) leaning on a classical plinth and standing on a high base has only the figure in biscuit.
This part-glazing also occurs in other types of pottery, and for example is very common in the earthenware Chinese Tang dynasty tomb figures. Other pieces "reserve" areas in biscuit, by giving them a temporary coating of wax or something similar to keep the glaze off; this is a fairly common feature of Longquan celadon (which is porcelain in Chinese terms), and also found in Ming dragons. Some Chinese pieces are described as "porcelain with polychrome enamels on the biscuit" – that is, using the normal "overglaze" technique on biscuit, but with no actual glaze, often a revivalist style evoking earlier sancai wares (which were not in porcelain). The laborious and mostly 19th-century pâte-sur-pâte technique often uses biscuit for at least one of the colours. Notes References Battie, David, ed., Sotheby's Concise Encyclopedia of Porcelain, 1990, Conran Octopus. Porcelain Ceramic materials
Biscuit porcelain
[ "Engineering" ]
981
[ "Ceramic engineering", "Ceramic materials" ]
2,212,479
https://en.wikipedia.org/wiki/Georgi%E2%80%93Jarlskog%20mass%20relation
In grand unified theories of the SU(5) or SO(10) type, there is a mass relation predicted between the electron and the down quark, the muon and the strange quark and the tau lepton and the bottom quark called the Georgi–Jarlskog mass relations. The relations were formulated by Howard Georgi and Cecilia Jarlskog. At the GUT scale, these are sometimes quoted as $m_b \approx m_\tau$, $m_s \approx \tfrac{1}{3} m_\mu$, and $m_d \approx 3 m_e$. Combining these relations gives, for example, $m_\mu / m_e \approx 9\, m_s / m_d$ and $m_d\, m_s\, m_b \approx m_e\, m_\mu\, m_\tau$. References Grand Unified Theory
Georgi–Jarlskog mass relation
[ "Physics" ]
105
[ "Unsolved problems in physics", "Particle physics", "Grand Unified Theory", "Particle physics stubs", "Physics beyond the Standard Model" ]
2,212,557
https://en.wikipedia.org/wiki/Crystallographic%20defects%20in%20diamond
Imperfections in the crystal lattice of diamond are common. Such defects may be the result of lattice irregularities or extrinsic substitutional or interstitial impurities, introduced during or after the diamond growth. The defects affect the material properties of diamond and determine to which type a diamond is assigned; the most dramatic effects are on the diamond color and electrical conductivity, as explained by the electronic band structure. The defects can be detected by different types of spectroscopy, including electron paramagnetic resonance (EPR), luminescence induced by light (photoluminescence, PL) or electron beam (cathodoluminescence, CL), and absorption of light in the infrared (IR), visible and UV parts of the spectrum. The absorption spectrum is used not only to identify the defects, but also to estimate their concentration; it can also distinguish natural from synthetic or enhanced diamonds. Labeling of diamond centers There is a tradition in diamond spectroscopy to label a defect-induced spectrum by a numbered acronym (e.g. GR1). This tradition has been followed in general with some notable deviations, such as A, B and C centers. Many acronyms are confusing though: Some symbols are too similar (e.g., 3H and H3). Coincidentally, the same labels were given to different centers detected by EPR and optical techniques (e.g., the N3 EPR center and the N3 optical center have no relation). Whereas some acronyms are logical, such as N3 (N for natural, i.e. observed in natural diamond) or H3 (H for heated, i.e. observed after irradiation and heating), many are not. In particular, there is no clear distinction between the meaning of the labels GR (general radiation), R (radiation) and TR (type-II radiation). Defect symmetry The symmetry of defects in crystals is described by the point groups. They differ from the space groups describing the symmetry of crystals by the absence of translations, and thus are much fewer in number. In diamond, only defects of the following symmetries have been observed thus far: tetrahedral (Td), tetragonal (D2d), trigonal (D3d, C3v), rhombic (C2v), monoclinic (C2h, C1h, C2) and triclinic (C1 or CS). The defect symmetry allows predicting many optical properties. For example, one-phonon (infrared) absorption in the pure diamond lattice is forbidden because the lattice has an inversion center. However, introducing any defect (even a "very symmetrical" one, such as an N-N substitutional pair) breaks the crystal symmetry, resulting in defect-induced infrared absorption, which is the most common tool to measure the defect concentrations in diamond. In synthetic diamond grown by high-pressure high-temperature synthesis or chemical vapor deposition, defects with symmetry lower than tetrahedral align to the direction of the growth. Such alignment has also been observed in gallium arsenide and thus is not unique to diamond. Extrinsic defects Various elemental analyses of diamond reveal a wide range of impurities. They mostly originate, however, from inclusions of foreign materials in diamond, which could be nanometer-small and invisible in an optical microscope. Also, virtually any element can be hammered into diamond by ion implantation. More essential are elements that can be introduced into the diamond lattice as isolated atoms (or small atomic clusters) during the diamond growth. As of 2008, those elements are nitrogen, boron, hydrogen, silicon, phosphorus, nickel, cobalt and perhaps sulfur.
Manganese and tungsten have been unambiguously detected in diamond, but they might originate from foreign inclusions. Detection of isolated iron in diamond was later re-interpreted in terms of micro-particles of ruby produced during the diamond synthesis. Oxygen is believed to be a major impurity in diamond, but it has not been spectroscopically identified in diamond yet. Two electron paramagnetic resonance centers (OK1 and N3) were initially assigned to nitrogen–oxygen complexes, and later to titanium-related complexes. However, the assignment is indirect and the corresponding concentrations are rather low (a few parts per million). Nitrogen The most common impurity in diamond is nitrogen, which can comprise up to 1% of a diamond by mass. Previously, all lattice defects in diamond were thought to be the result of structural anomalies; later research revealed nitrogen to be present in most diamonds and in many different configurations. Most nitrogen enters the diamond lattice as a single atom (i.e. nitrogen-containing molecules dissociate before incorporation); however, molecular nitrogen incorporates into diamond as well. Absorption of light and other material properties of diamond are highly dependent upon nitrogen content and aggregation state. Although all aggregate configurations cause absorption in the infrared, diamonds containing aggregated nitrogen are usually colorless, i.e. have little absorption in the visible spectrum. The four main nitrogen forms are as follows: C-nitrogen center The C center corresponds to electrically neutral single substitutional nitrogen atoms in the diamond lattice. These are easily seen in electron paramagnetic resonance spectra (in which they are confusingly called P1 centers). C centers impart a deep yellow to brown color; these diamonds are classed as type Ib and are commonly known as "canary diamonds", which are rare in gem form. Most synthetic diamonds produced by the high-pressure high-temperature (HPHT) technique contain a high level of nitrogen in the C form; the nitrogen impurity originates from the atmosphere or from the graphite source. One nitrogen atom per 100,000 carbon atoms will produce yellow color. Because the nitrogen atoms have five available electrons (one more than the carbon atoms they replace), they act as "deep donors"; that is, each substituting nitrogen has an extra electron to donate and forms a donor energy level within the band gap. Light with energy above ~2.2 eV can excite the donor electrons into the conduction band, resulting in the yellow color. The C center produces a characteristic infrared absorption spectrum with a sharp peak at 1344 cm−1 and a broader feature at 1130 cm−1. Absorption at those peaks is routinely used to measure the concentration of single nitrogen. Another proposed way, using the UV absorption at ~260 nm, was later discarded as unreliable. Acceptor defects in diamond ionize the fifth nitrogen electron in the C center, converting it into the C+ center. The latter has a characteristic IR absorption spectrum with a sharp peak at 1332 cm−1 and broader and weaker peaks at 1115, 1046 and 950 cm−1. A-nitrogen center The A center is probably the most common defect in natural diamonds. It consists of a neutral nearest-neighbor pair of nitrogen atoms substituting for the carbon atoms. The A center produces a UV absorption threshold at ~4 eV (310 nm, i.e. invisible to the eye) and thus causes no coloration. Diamond containing nitrogen predominantly in the A form is classed as type IaA.
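The routine conversion from defect-induced IR absorption to nitrogen concentration mentioned above is, in practice, a linear calibration per defect center. The Python sketch below illustrates the bookkeeping; the calibration factors and the helper function are illustrative placeholders (not values taken from this article), so treat the numbers as assumptions.

```python
# Sketch: estimating nitrogen content from defect-induced IR absorption.
# The linear calibration factors below are HYPOTHETICAL placeholders;
# real values come from the spectroscopy literature, not from this article.

CALIBRATION_PPM_PER_CM = {
    "C": 25.0,   # single substitutional N (assumed factor)
    "A": 16.5,   # N-N pairs, peak near 1282 cm^-1 (assumed factor)
    "B": 79.4,   # 4N + vacancy aggregates (assumed factor)
}

def nitrogen_ppm(absorption_coeff_per_cm: float, center: str) -> float:
    """Convert an absorption coefficient (cm^-1) measured at the center's
    characteristic IR peak into a nitrogen concentration (ppm), assuming
    a simple linear calibration."""
    return CALIBRATION_PPM_PER_CM[center] * absorption_coeff_per_cm

# Example: a diamond showing 2.0 cm^-1 of absorption attributed to A centers
print(f"A-form nitrogen: {nitrogen_ppm(2.0, 'A'):.0f} ppm")
```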
The A center is diamagnetic, but if ionized by UV light or deep acceptors, it produces an electron paramagnetic resonance spectrum W24, whose analysis unambiguously proves the N=N structure. The A center shows an IR absorption spectrum with no sharp features, which is distinctly different from that of the C or B centers. Its strongest peak at 1282 cm−1 is routinely used to estimate the nitrogen concentration in the A form. B-nitrogen center There is a general consensus that the B center (sometimes called B1) consists of a carbon vacancy surrounded by four nitrogen atoms substituting for carbon atoms. This model is consistent with other experimental results, but there is no direct spectroscopic data corroborating it. Diamonds where most nitrogen forms B centers are rare and are classed as type IaB; most gem diamonds contain a mixture of A and B centers, together with N3 centers. Similar to the A centers, B centers do not induce color, and no UV or visible absorption can be attributed to the B centers. The early assignment of the N9 absorption system to the B center was later disproven. The B center has a characteristic IR absorption spectrum with a sharp peak at 1332 cm−1 and a broader feature at 1280 cm−1. The latter is routinely used to estimate the nitrogen concentration in the B form. Many optical peaks in diamond accidentally have similar spectral positions, which causes much confusion among gemologists. Spectroscopists use the whole spectrum rather than one peak for defect identification and consider the history of the growth and processing of the individual diamond. N3 nitrogen center The N3 center consists of three nitrogen atoms surrounding a vacancy. Its concentration is always just a fraction of the A and B centers. The N3 center is paramagnetic, so its structure is well justified from the analysis of the EPR spectrum P2. This defect produces a characteristic absorption and luminescence line at 415 nm and thus does not induce color on its own. However, the N3 center is always accompanied by the N2 center, having an absorption line at 478 nm (and no luminescence). As a result, diamonds rich in N3/N2 centers are yellow in color. Boron Diamonds containing boron as a substitutional impurity are termed type IIb. Only one percent of natural diamonds are of this type, and most are blue to grey. Boron is an acceptor in diamond: boron atoms have one less available electron than the carbon atoms; therefore, each boron atom substituting for a carbon atom creates an electron hole in the band gap that can accept an electron from the valence band. This allows red light absorption, and due to the small energy (0.37 eV) needed for the electron to leave the valence band, holes can be thermally released from the boron atoms to the valence band even at room temperature. These holes can move in an electric field and render the diamond electrically conductive (i.e., a p-type semiconductor). Very few boron atoms are required for this to happen: a typical ratio is one boron atom per 1,000,000 carbon atoms. Boron-doped diamonds transmit light down to ~250 nm and absorb some red and infrared light (hence the blue color); they may phosphoresce blue after exposure to shortwave ultraviolet light. Apart from optical absorption, boron acceptors have been detected by electron paramagnetic resonance. Phosphorus Phosphorus can be intentionally introduced into diamond grown by chemical vapor deposition (CVD) at concentrations up to ~0.01%.
Phosphorus substitutes for carbon in the diamond lattice. Similar to nitrogen, phosphorus has one more electron than carbon and thus acts as a donor; however, the ionization energy of phosphorus (0.6 eV) is much smaller than that of nitrogen (1.7 eV) and is small enough for room-temperature thermal ionization. This important property of phosphorus in diamond favors electronic applications, such as UV light-emitting diodes (LEDs, at 235 nm). Hydrogen Hydrogen is one of the most technologically important impurities in semiconductors, including diamond. Hydrogen-related defects are very different in natural diamond and in synthetic diamond films. Those films are produced by various chemical vapor deposition (CVD) techniques in an atmosphere rich in hydrogen (typical hydrogen/carbon ratio >100), under strong bombardment of the growing diamond by plasma ions. As a result, CVD diamond is always rich in hydrogen and lattice vacancies. In polycrystalline films, much of the hydrogen may be located at the boundaries between diamond 'grains', or in non-diamond carbon inclusions. Within the diamond lattice itself, hydrogen-vacancy and hydrogen-nitrogen-vacancy complexes have been identified in negative charge states by electron paramagnetic resonance. In addition, numerous hydrogen-related IR absorption peaks are documented. It has been experimentally demonstrated that hydrogen passivates electrically active boron and phosphorus impurities. As a result of such passivation, shallow donor centers are presumably produced. In natural diamonds, several hydrogen-related IR absorption peaks are commonly observed; the strongest ones are located at 1405, 3107 and 3237 cm−1. The microscopic structure of the corresponding defects is still unknown and it is not even certain whether those defects originate in diamond or in foreign inclusions. Gray color in some diamonds from the Argyle mine in Australia is often associated with those hydrogen defects, but again, this assignment is still unproven. Nickel, cobalt and chromium When diamonds are grown by the high-pressure high-temperature technique, nickel, cobalt, chromium or some other metals are usually added into the growth medium to facilitate catalytically the conversion of graphite into diamond. As a result, metallic inclusions are formed. In addition, isolated nickel and cobalt atoms incorporate into the diamond lattice, as demonstrated through characteristic hyperfine structure in electron paramagnetic resonance, optical absorption and photoluminescence spectra, and the concentration of isolated nickel can reach 0.01%. This is quite unusual considering the large difference in size between carbon and transition metal atoms and the superior rigidity of the diamond lattice. Numerous Ni-related defects have been detected by electron paramagnetic resonance, optical absorption and photoluminescence, both in synthetic and natural diamonds. Three major structures can be distinguished: substitutional Ni, the nickel-vacancy complex, and the nickel-vacancy complex decorated by one or more substitutional nitrogen atoms. The "nickel-vacancy" structure, also called "semi-divacancy", is specific for most large impurities in diamond and silicon (e.g., tin in silicon). Its production mechanism is generally accepted as follows: a large nickel atom incorporates substitutionally, then expels a nearby carbon atom (creating a neighboring vacancy), and shifts in between the two sites.
Although the physical and chemical properties of cobalt and nickel are rather similar, the concentrations of isolated cobalt in diamond are much smaller than those of nickel (parts per billion range). Several defects related to isolated cobalt have been detected by electron paramagnetic resonance and photoluminescence, but their structure is still unknown. A chromium-related optical center was reported after ion implantation and subsequent annealing of Type IIA synthetic diamonds. However, a subsequent study repeating the annealing conditions but without chromium implantation has questioned the original attribution of the defect centre to chromium. Silicon, germanium, tin and lead Silicon is a common impurity in diamond films grown by chemical vapor deposition and it originates either from the silicon substrate or from the silica windows or walls of the CVD reactor. It was also observed in natural diamonds in dispersed form. Isolated silicon defects have been detected in the diamond lattice through the sharp optical absorption peak at 738 nm and electron paramagnetic resonance. Similar to other large impurities, the major form of silicon in diamond has been identified with a Si-vacancy complex (semi-divacancy site). This center is a deep donor having an ionization energy of 2 eV, and thus again is unsuitable for electronic applications. Si-vacancies constitute a minor fraction of total silicon. It is believed (though no proof exists) that much silicon substitutes for carbon, thus becoming invisible to most spectroscopic techniques because silicon and carbon atoms have the same configuration of the outer electronic shells. Germanium, tin and lead are normally absent from diamond, but they can be introduced during the growth or by subsequent ion implantation. Those impurities can be detected optically via the germanium-vacancy, tin-vacancy and lead-vacancy centers, respectively, which have similar properties to those of the Si-vacancy center. Similar to N-V centers, Si-V, Ge-V, Sn-V and Pb-V complexes all have potential applications in quantum computing. Sulfur Around the year 2000, there was a wave of attempts to dope synthetic CVD diamond films with sulfur, aiming at n-type conductivity with low activation energy. Successful reports were published, but then dismissed, as the conductivity proved to be p-type instead of n-type and was associated not with sulfur but with residual boron, which is a highly efficient p-type dopant in diamond. So far (2009), there is only one piece of reliable evidence (through hyperfine interaction structure in electron paramagnetic resonance) for isolated sulfur defects in diamond. The corresponding center, called W31, has been observed in natural type-Ib diamonds in small concentrations (parts per million). It was assigned to a sulfur-vacancy complex – again, as in the case of nickel and silicon, a semi-divacancy site. Intrinsic defects The easiest way to produce intrinsic defects in diamond is by displacing carbon atoms through irradiation with high-energy particles, such as alpha particles (helium nuclei), beta particles (electrons) or gamma rays, protons, neutrons, ions, etc. The irradiation can occur in the laboratory or in nature (see Diamond enhancement – Irradiation); it produces primary defects named Frenkel defects (carbon atoms knocked off their normal lattice sites to interstitial sites) and remaining lattice vacancies.
An important difference between the vacancies and interstitials in diamond is that whereas interstitials are mobile during the irradiation, even at liquid nitrogen temperatures, vacancies start migrating only at temperatures of ~700 °C. Vacancies and interstitials can also be produced in diamond by plastic deformation, though in much smaller concentrations. Isolated carbon interstitial The isolated interstitial has never been observed in diamond and is considered unstable. Its interaction with a regular carbon lattice atom produces a "split-interstitial", a defect where two carbon atoms share a lattice site and are covalently bonded with the carbon neighbors. This defect has been thoroughly characterized by electron paramagnetic resonance (R2 center) and optical absorption, and unlike most other defects in diamond, it does not produce photoluminescence. Interstitial complexes The isolated split-interstitial moves through the diamond crystal during irradiation. When it meets other interstitials, it aggregates into larger complexes of two and three split-interstitials, identified by electron paramagnetic resonance (R1 and O3 centers), optical absorption and photoluminescence. Vacancy-interstitial complexes Most high-energy particles, besides displacing a carbon atom from its lattice site, also transfer to it enough surplus energy for rapid migration through the lattice. However, when relatively gentle gamma irradiation is used, this extra energy is minimal. Thus the interstitials remain near the original vacancies and form vacancy–interstitial pairs identified through optical absorption. Vacancy-di-interstitial pairs have also been produced, though by electron irradiation and through a different mechanism: individual interstitials migrate during the irradiation and aggregate to form di-interstitials; this process occurs preferentially near the lattice vacancies. Isolated vacancy The isolated vacancy is the most studied defect in diamond, both experimentally and theoretically. Its most important practical property is optical absorption, as in color centers, which gives diamond a green, or sometimes even green–blue, color (in pure diamond). The characteristic feature of this absorption is a series of sharp lines called GR1-8, where the GR1 line at 741 nm is the most prominent and important. The vacancy behaves as a deep electron donor/acceptor, whose electronic properties depend on the charge state. The energy level for the +/0 states is at 0.6 eV and for the 0/- states is at 2.5 eV above the valence band. Multivacancy complexes Upon annealing of pure diamond at ~700 °C, vacancies migrate and form divacancies, characterized by optical absorption and electron paramagnetic resonance. Similar to single interstitials, divacancies do not produce photoluminescence. Divacancies, in turn, anneal out at ~900 °C, creating multivacancy chains detected by EPR and presumably hexavacancy rings. The latter should be invisible to most spectroscopies, and indeed, they have not been detected thus far. Annealing of vacancies changes diamond color from green to yellow-brown. A similar mechanism (vacancy aggregation) is also believed to cause the brown color of plastically deformed natural diamonds. Dislocations Dislocations are the most common structural defect in natural diamond. The two major types of dislocations are the glide set, in which bonds break between layers of atoms with different indices (those not lying directly above each other), and the shuffle set, in which the breaks occur between atoms of the same index.
The dislocations produce dangling bonds which introduce energy levels into the band gap, enabling the absorption of light. Broadband blue photoluminescence has been reliably identified with dislocations by direct observation in an electron microscope; however, it was noted that not all dislocations are luminescent, and there is no correlation between the dislocation type and the parameters of the emission. Platelets Most natural diamonds contain extended planar defects in the <100> lattice planes, which are called "platelets". Their size ranges from nanometers to many micrometers, and large ones are easily observed in an optical microscope via their luminescence. For a long time, platelets were tentatively associated with large nitrogen complexes (nitrogen sinks produced as a result of nitrogen aggregation at the high temperatures of diamond synthesis). However, the direct measurement of nitrogen in the platelets by EELS (an analytical technique of electron microscopy) revealed very little nitrogen. The currently accepted model of platelets is a large regular array of carbon interstitials. Platelets produce sharp absorption peaks at 1359–1375 and 330 cm−1 in IR absorption spectra; remarkably, the position of the first peak depends on the platelet size. As with dislocations, a broad photoluminescence centered at ~1000 nm was associated with platelets by direct observation in an electron microscope. By studying this luminescence, it was deduced that platelets have a "bandgap" of ~1.7 eV. Voidites Voidites are octahedral nanometer-sized clusters present in many natural diamonds, as revealed by electron microscopy. Laboratory experiments demonstrated that annealing of type-IaB diamond at high temperatures and pressures (>2600 °C) results in break-up of the platelets and formation of dislocation loops and voidites, i.e. that voidites are a result of thermal degradation of platelets. In contrast to platelets, voidites do contain much nitrogen, in the molecular form. Interaction between intrinsic and extrinsic defects Extrinsic and intrinsic defects can interact, producing new defect complexes. Such interaction usually occurs if a diamond containing extrinsic defects (impurities) is either plastically deformed or is irradiated and annealed. Most important is the interaction of vacancies and interstitials with nitrogen. Carbon interstitials react with substitutional nitrogen, producing a bond-centered nitrogen interstitial showing strong IR absorption at 1450 cm−1. Vacancies are efficiently trapped by the A, B and C nitrogen centers. The trapping rate is the highest for the C centers, 8 times lower for the A centers and 30 times lower for the B centers. The C center (single nitrogen), by trapping a vacancy, forms the famous nitrogen-vacancy center, which can be neutral or negatively charged; the negatively charged state has potential applications in quantum computing. The A and B centers, upon trapping a vacancy, create the corresponding 2N-V (H3 and H2 centers, where H2 is simply a negatively charged H3 center) and the neutral 4N-2V (H4 center). The H2, H3 and H4 centers are important because they are present in many natural diamonds and their optical absorption can be strong enough to alter the diamond color (H3 or H4 – yellow, H2 – green). Boron interacts with carbon interstitials, forming a neutral boron–interstitial complex with a sharp optical absorption at 0.552 eV (2250 nm). No evidence is known so far (2009) for complexes of boron and vacancies.
In contrast, silicon does react with vacancies, creating the optical absorption at 738 nm described above. The assumed mechanism is trapping of a migrating vacancy by substitutional silicon, resulting in the Si-V (semi-divacancy) configuration. A similar mechanism is expected for nickel, for which both substitutional and semi-divacancy configurations are reliably identified (see the subsection on nickel and cobalt above). In an unpublished study, diamonds rich in substitutional nickel were electron irradiated and annealed, with careful optical measurements performed after each annealing step, but no evidence for creation or enhancement of Ni-vacancy centers was obtained. See also Chemical vapor deposition of diamond Crystallographic defect Diamond color Diamond enhancement Gemstone irradiation Material properties of diamond Nitrogen-vacancy center Synthetic diamond References Diamond Crystallographic defects
Crystallographic defects in diamond
[ "Chemistry", "Materials_science", "Engineering" ]
5,211
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
2,212,621
https://en.wikipedia.org/wiki/Fishing%20sinker
A fishing sinker, plummet, or knoch is a weight used in conjunction with a fishing lure or hook to increase its rate of sink, anchoring ability, and/or casting distance. Fishing sinkers may be as small as for applications in shallow water, and even smaller for fly fishing applications, or as large as several pounds (>1 kg) or considerably more for deep sea fishing. They are formed into many different shapes for diverse fishing applications. Environmental concerns surround the use of lead and other materials in fishing sinkers. Types A large variety of sinkers exist which are used depending on the fish being pursued, the environment, the current and personal preference. Pyramid sinkers Pyramid sinkers are shaped like a pyramid and are used when it is desirable to anchor on the bottom of water bodies. They are attached to the terminal end of the fishing line by loops of brass wire. Barrel or egg sinkers Barrel or egg sinkers are rounded and often bead-like with a narrow hole through which fishing line is threaded. These sinkers are desirable on rock- or debris-covered substrates. Split-shot sinkers Split-shot sinkers are small and round with a split cutting halfway through the sinker. The split can be placed on a piece of fishing line and then crimped closed. This feature makes adding and removing the weights easy and quick. Bullet sinkers Bullet sinkers are bullet-shaped and used widely in largemouth bass fishing for rigging plastic worms "Texas-style". Dipsey Dipsey sinkers are ovate or egg-shaped and are attached to the fishing line with a loop of brass wire embedded in the sinker. Bank sinker Bank sinkers are long and ovate and have a small hole at the top for the fishing line to thread through. Claw sinker A claw sinker consists of a sinker weight which is typically round, and a number of metal wire spikes grouped around the sinker weight acting as barbs. Claw sinkers are used in surf fishing on sandy bottoms with strong currents, mainly to prevent the sinker from getting carried off with the current. Upon casting a claw sinker, the line is briefly tugged so that the claws will dig themselves into the sand, allowing the rig to stay in place. Deep drop weight A deep drop weight is used to reach the bottom in deeper offshore fishing applications. These fishing weights are typically cylindrical in shape, with a brass eyelet at the top for attaching to a rig. Weights for this style of sinker range from one pound to as much as fourteen pounds. Target species include tilefish, grouper, and swordfish, among others. Materials An ideal material for a fishing sinker is environmentally acceptable, cheap and dense. Density is desirable as weights must be as small as possible, in order to minimize visual cues which could drive fish away from a fishing operation. In ancient times as well as sometimes today, fishing sinkers consisted of materials found ordinarily in the natural environment, such as stones, rocks, or bone. Later, lead became the material of choice for sinkers due to its low cost, ease of production and casting, chemical inertness (resistance to corrosion), and density. However, lead is known to cause lead poisoning and enter the environment as a result of the inevitable occasional loss of fishing sinkers during routine fishing. Thus, most lead-based fishing sinkers have been outlawed in the United Kingdom (under 1 oz weight), Canada, and some states in the United States. Lead-based fishing sinkers are banned in all US and Canadian national parks.
These bans have motivated the use of various other materials in sinkers. Steel, brass, and bismuth sinkers have been marketed, but anglers have not widely adopted them due to their lower density and higher cost compared to lead. Sandsinkers have also been developed, using sand as the weight. However, sand has a low density compared to lead and makes a poor replacement. Tungsten is now in use, especially among largemouth bass anglers. Although several times costlier than lead, tungsten is just under twice as dense as lead and thus found desirable. The environmental effects of tungsten, however, are essentially unknown. Another variant of lead-free fishing weights is presented by the Czech brand UFO Sinker, which offers weights made of heavy concrete that can be pulled out of the water using a magnet. More recently, terminal tackle manufacturers are experimenting with high-density composite resins. These materials present a non-toxic alternative to lead sinkers at a lower monetary cost than alternative metallic sinkers. References External links Do lead fishing sinkers threaten the environment? (from The Straight Dope) Toxic Tackle (article by Aquarium Monsters Australia) Let’s Get the Lead Out (article by the Minnesota Pollution Control Agency) Stone plummets discovered in Canada Fishing equipment Weights
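To put the density comparison above in concrete terms, here is a small Python sketch computing the volume of a one-ounce sinker in each candidate material. The densities are rough handbook values for illustration only, since real alloys and composites vary.

```python
# Rough handbook densities (g/cm^3); real alloys and composites vary.
DENSITY = {"lead": 11.34, "tungsten": 19.25, "steel": 7.85,
           "brass": 8.50, "bismuth": 9.78}

MASS_G = 28.35  # a one-ounce sinker

lead_volume = MASS_G / DENSITY["lead"]
for material, rho in sorted(DENSITY.items(), key=lambda kv: -kv[1]):
    volume_cm3 = MASS_G / rho
    print(f"{material:8s} {volume_cm3:5.2f} cm^3 "
          f"({volume_cm3 / lead_volume:.2f}x lead volume)")
```

The output shows why tungsten is attractive despite its price (roughly 0.6 times the volume of a lead sinker of the same mass) and why steel and brass sinkers, at around 1.3 to 1.4 times the volume, have not been widely adopted.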
Fishing sinker
[ "Physics" ]
990
[ "Weights", "Physical objects", "Matter" ]
2,212,817
https://en.wikipedia.org/wiki/Umklapp%20scattering
In crystalline materials, Umklapp scattering (also U-process or Umklapp process) is a scattering process that results in a wave vector (usually written k) which falls outside the first Brillouin zone. If a material is periodic, it has a Brillouin zone, and any point outside the first Brillouin zone can also be expressed as a point inside the zone. So, the wave vector is then mathematically transformed to a point inside the first Brillouin zone. This transformation allows for scattering processes which would otherwise violate the conservation of momentum: two wave vectors pointing to the right can combine to create a wave vector that points to the left. This non-conservation is why crystal momentum is not a true momentum. Examples include electron-lattice potential scattering or an anharmonic phonon-phonon (or electron-phonon) scattering process, reflecting an electronic state or creating a phonon with a momentum k-vector outside the first Brillouin zone. Umklapp scattering is one process limiting the thermal conductivity in crystalline materials, the others being phonon scattering on crystal defects and at the surface of the sample. The left panel of Figure 1 schematically shows the possible scattering processes of two incoming phonons with wave-vectors (k-vectors) k1 and k2 (red) creating one outgoing phonon with a wave vector k3 (blue). As long as the sum of k1 and k2 stays inside the first Brillouin zone (grey squares), k3 is the sum of the former two, thus conserving phonon momentum. This process is called normal scattering (N-process). With increasing phonon momentum and thus larger wave vectors k1 and k2, their sum might point outside the first Brillouin zone (k'3). As shown in the right panel of Figure 1, k-vectors outside the first Brillouin zone are physically equivalent to vectors inside it and can be mathematically transformed into each other by the addition of a reciprocal lattice vector G. These processes are called Umklapp scattering and change the total phonon momentum. Umklapp scattering is the dominant process for electrical resistivity at low temperatures for low-defect crystals (as opposed to phonon-electron scattering, which dominates at high temperatures, and high-defect lattices, which lead to scattering at any temperature). Umklapp scattering is the dominant process for thermal resistivity at high temperatures for low-defect crystals. The thermal conductivity for an insulating crystal where the U-processes are dominant has 1/T dependence. History The name derives from the German word umklappen (to turn over). Rudolf Peierls, in his autobiography Bird of Passage, states he was the originator of this phrase and coined it during his 1929 crystal lattice studies under the tutelage of Wolfgang Pauli. Peierls wrote, "…I used the German term Umklapp (flip-over) and this rather ugly word has remained in use…". The term Umklapp already appears in Wilhelm Lenz's 1920 paper, the seed paper of the Ising model. See also Sampling theorem References Scattering
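To make the zone-folding step concrete, the following Python sketch (a one-dimensional toy model, not from the source) maps the sum of two wave vectors back into the first Brillouin zone by subtracting a reciprocal lattice vector G:

```python
import math

def fold_to_first_bz(k: float, a: float) -> float:
    """Map a 1D wave vector k into the first Brillouin zone around zero
    by subtracting the nearest reciprocal lattice vector G = n * (2*pi/a)."""
    g = 2 * math.pi / a          # primitive reciprocal lattice vector
    n = round(k / g)             # nearest multiple of G
    return k - n * g

a = 1.0                                  # lattice constant (arbitrary units)
k1, k2 = 0.8 * math.pi, 0.7 * math.pi    # two right-moving phonons
k3 = fold_to_first_bz(k1 + k2, a)
print(k3)  # -0.5*pi: the resulting phonon moves left (an Umklapp process)
```

Here k1 + k2 = 1.5π/a lies outside the first zone (−π/a, π/a], so subtracting G = 2π/a yields −0.5π/a: two right-moving phonons combine into a left-moving one, which is exactly the non-conservation of crystal momentum described above.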
Umklapp scattering
[ "Physics", "Chemistry", "Materials_science" ]
657
[ "Condensed matter physics", "Scattering", "Particle physics", "Nuclear physics" ]
2,212,867
https://en.wikipedia.org/wiki/Detailed%20balance
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process. History The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility. Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64). In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles $A_1 \to A_2 \to \cdots \to A_n \to A_1$ are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry. Albert Einstein in 1916 used the principle of detailed balance as a background for his quantum theory of emission and absorption of radiation. The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state. Now, the principle of detailed balance is a standard part of university courses in statistical mechanics, physical chemistry, chemical and physical kinetics. Microscopic background The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction $\sum_i \alpha_i A_i \to \sum_i \beta_i A_i$ transforms into $\sum_i \beta_i A_i \to \sum_i \alpha_i A_i$ and conversely. (Here, $A_i$ are symbols of components or states, $\alpha_i, \beta_i$ are coefficients.) The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process. This reasoning is based on three assumptions: the microscopic dynamics do not change under time reversal; equilibrium is invariant under time reversal; the macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events. Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as $A_{v} + A_{w} \to A_{v'} + A_{w'}$, where $A_{v}$ is a particle with velocity $v$. Under time reversal, $A_{v}$ transforms into $A_{-v}$. Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of collisions' dynamics, not just T-invariance. Indeed, after the time reversal the collision $A_{v} + A_{w} \to A_{v'} + A_{w'}$ transforms into $A_{-v'} + A_{-w'} \to A_{-v} + A_{-w}$. For the detailed balance we need transformation into $A_{v'} + A_{w'} \to A_{v} + A_{w}$. For this purpose, we need to apply additionally the space reversal P. Therefore, for the detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed.
Equilibrium may not be T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance. If different macroscopic processes are sampled from the same elementary microscopic events, then macroscopic detailed balance may be violated even when microscopic detailed balance holds. Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear.

Detailed balance

Reversibility
A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations
$\pi_i P_{ij} = \pi_j P_{ji},$
where $P_{ij}$ is the Markov transition probability from state i to state j, i.e. $P_{ij} = \Pr(X_{t+1} = j \mid X_t = i)$, and $\pi_i$ and $\pi_j$ are the equilibrium probabilities of being in states i and j, respectively. When $\Pr(X_t = i) = \pi_i$ for all i, this is equivalent to the joint probability matrix $\Pr(X_t = i, X_{t+1} = j)$ being symmetric in i and j, or symmetric in t and t + 1.

The definition carries over straightforwardly to continuous variables, where π becomes a probability density and $P(s', s)$ a transition kernel probability density from state s′ to state s:
$\pi(s') P(s', s) = \pi(s) P(s, s').$
The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance. Transition matrices that are symmetric ($P_{ij} = P_{ji}$ or $P(s', s) = P(s, s')$) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution.

Kolmogorov's criterion
Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions. For example, it implies that, for all a, b and c,
$P_{ab} P_{bc} P_{ca} = P_{ac} P_{cb} P_{ba}.$
For example, if we have a Markov chain with three states such that only the transitions $A \to B \to C \to A$ are possible, then they violate Kolmogorov's criterion.

Closest reversible Markov chain
For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states. For a given Markov transition matrix and stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem.

Detailed balance and entropy increase
For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance. Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle A1 -> A2 -> A3 -> A1, entropy production is positive, but the principle of detailed balance does not hold. Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics.
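These Markov-chain conditions are easy to test numerically. The following is a minimal sketch (Python with NumPy; the helper names are illustrative, not a standard API) that computes a stationary distribution and checks the detailed balance equations; the second test matrix is the one-way three-state cycle just discussed, which has a stationary distribution but violates Kolmogorov's criterion:

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def is_detailed_balanced(P, tol=1e-10):
    """Check pi_i P_ij == pi_j P_ji for all pairs of states i, j."""
    pi = stationary_distribution(P)
    flux = pi[:, None] * P          # flux[i, j] = pi_i P_ij
    return np.allclose(flux, flux.T, atol=tol)

# A reversible chain: a random walk on a three-node path graph.
P_path = np.array([[0.5,  0.5,  0.0],
                   [0.25, 0.5,  0.25],
                   [0.0,  0.5,  0.5]])

# An irreversible chain: the one-way cycle A -> B -> C -> A.
P_cycle = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])

print(is_detailed_balanced(P_path))   # True
print(is_detailed_balanced(P_cycle))  # False: violates Kolmogorov's criterion
```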
These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules. Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles), the "semi-detailed balance" or the "complex balance". In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain a remarkable generalization of the detailed balance.

Wegscheider's conditions for the generalized mass action law
In chemical kinetics, the elementary reactions are represented by the stoichiometric equations
$\sum_i \alpha_{ri} A_i \to \sum_j \beta_{rj} A_j,$
where $A_i$ are the components and $\alpha_{ri}, \beta_{rj} \geq 0$ are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to the systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism.

The stoichiometric matrix is $\Gamma = (\gamma_{ri})$, $\gamma_{ri} = \beta_{ri} - \alpha_{ri}$ (gain minus loss). This matrix need not be square. The stoichiometric vector $\gamma_r$ is the rth row of $\Gamma$ with coordinates $\gamma_{ri}$. According to the generalized mass action law, the reaction rate for an elementary reaction is
$w_r = k_r \prod_i a_i^{\alpha_{ri}},$
where $a_i$ is the activity (the "effective concentration") of $A_i$.

The reaction mechanism includes reactions with the reaction rate constants $k_r > 0$. For each r the following notations are used: $k_r^+ = k_r$; $w_r^+ = w_r$; $k_r^-$ is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; $w_r^-$ is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, $K_r = k_r^+ / k_r^-$ is the equilibrium constant.

The principle of detailed balance for the generalized mass action law is: For given values $k_r$ there exists a positive equilibrium that satisfies detailed balance, that is, $w_r^+ = w_r^-$. This means that the system of linear detailed balance equations
$\sum_i \gamma_{ri} x_i = \ln k_r^+ - \ln k_r^-$
is solvable ($x_i = \ln a_i^{\mathrm{eq}}$). The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium with detailed balance (see, for example, the textbook).

Two conditions are sufficient and necessary for solvability of the system of detailed balance equations:
If $k_r^+ > 0$ then $k_r^- > 0$ and, conversely, if $k_r^- > 0$ then $k_r^+ > 0$ (reversibility);
For any solution $\boldsymbol{\lambda} = (\lambda_r)$ of the system $\boldsymbol{\lambda}\Gamma = 0$ the Wegscheider identity holds:
$\prod_r (k_r^+)^{\lambda_r} = \prod_r (k_r^-)^{\lambda_r}.$

Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system $\boldsymbol{\lambda}\Gamma = 0$.

In particular, for any cycle in the monomolecular (linear) reactions, the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition).
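As a concrete illustration of the cycle condition just stated, here is a small sketch (Python; the function name is our own, not a standard API) that checks whether the clockwise and counterclockwise products of rate constants agree for a monomolecular cycle, which is the Wegscheider identity with λ = (1, 1, 1):

```python
import numpy as np

def wegscheider_cycle_ok(k_plus, k_minus, tol=1e-12):
    """For a monomolecular cycle A1 <=> A2 <=> ... <=> An <=> A1,
    detailed balance requires prod(k_plus) == prod(k_minus).
    (An absolute tolerance is fine for this small illustration.)"""
    return abs(np.prod(k_plus) - np.prod(k_minus)) < tol

# Constants chosen so that k1+ k2+ k3+ == k1- k2- k3- (both equal 6).
k_plus  = [2.0, 3.0, 1.0]   # A1->A2, A2->A3, A3->A1
k_minus = [1.0, 2.0, 3.0]   # A2->A1, A3->A2, A1->A3

print(wegscheider_cycle_ok(k_plus, k_minus))        # True: 6 == 6
print(wegscheider_cycle_ok([1, 1, 1], [2, 1, 1]))   # False: a biased cycle
```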
A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step:
A1 <=> A2
A2 <=> A3
A3 <=> A1
A1 + A2 <=> 2A3
There are two nontrivial independent Wegscheider identities for this system:
$k_1^+ k_2^+ k_3^+ = k_1^- k_2^- k_3^-$ and $k_3^+ k_4^+ / k_2^+ = k_3^- k_4^- / k_2^-.$
They correspond to the following linear relations between the stoichiometric vectors:
$\gamma_1 + \gamma_2 + \gamma_3 = 0$ and $\gamma_3 + \gamma_4 - \gamma_2 = 0.$

The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors. The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action).

Dissipation in systems with detailed balance
To describe the dynamics of the systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations cj and temperature. For this purpose, use the representation of the activity through the chemical potential:
$a_i = \exp\left(\frac{\mu_i - \mu_i^{\circ}}{RT}\right),$
where μi is the chemical potential of the species under the conditions of interest, $\mu_i^{\circ}$ is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature. The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components cj. For the ideal systems, $\mu_i = RT \ln c_i + \mu_i^{\circ}$ and $a_i = c_i$: the activity is the concentration and the generalized mass action law is the usual law of mass action.

Consider a system in isothermal (T = const) isochoric (the volume V = const) condition. For these conditions, the Helmholtz free energy $F(T, V, N)$ measures the "useful" work obtainable from a system. It is a function of the temperature T, the volume V and the amounts of chemical components Nj (usually measured in moles); N is the vector with components Nj. For the ideal systems,
$F = RT \sum_i N_i \left( \ln\left(\frac{N_i}{V}\right) - 1 + \frac{\mu_i^{\circ}(T)}{RT} \right).$
The chemical potential is a partial derivative: $\mu_i = \partial F(T, V, N) / \partial N_i$.

The chemical kinetic equations are
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} (w_r^+ - w_r^-).$
If the principle of detailed balance is valid, then for any value of T there exists a positive point of detailed balance ceq:
$w_r^+(c^{\mathrm{eq}}, T) = w_r^-(c^{\mathrm{eq}}, T) = w_r^{\mathrm{eq}}.$
Elementary algebra gives
$w_r^+ = w_r^{\mathrm{eq}} \exp\left(\sum_i \frac{\alpha_{ri}(\mu_i - \mu_i^{\mathrm{eq}})}{RT}\right); \quad w_r^- = w_r^{\mathrm{eq}} \exp\left(\sum_i \frac{\beta_{ri}(\mu_i - \mu_i^{\mathrm{eq}})}{RT}\right),$
where $\mu_i^{\mathrm{eq}} = \mu_i(c^{\mathrm{eq}}, T)$. For the dissipation we obtain from these formulas:
$\frac{dF}{dt} = \sum_i \mu_i \frac{dN_i}{dt} = -VRT \sum_r (\ln w_r^+ - \ln w_r^-)(w_r^+ - w_r^-) \leq 0.$
The inequality holds because ln is a monotone function and, hence, the expressions $\ln w_r^+ - \ln w_r^-$ and $w_r^+ - w_r^-$ always have the same sign. Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases; for the isochoric systems with the constant internal energy (isolated systems) the entropy increases, as well as for isobaric systems with the constant enthalpy.
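To see the dissipation inequality at work, one can integrate the kinetic equations for a small ideal system and watch the free-energy-type Lyapunov function decrease. The sketch below (Python; an assumed ideal isothermal isochoric system, explicit Euler integration, and hypothetical helper names) uses the reversible monomolecular cycle with rate constants satisfying the Wegscheider identity:

```python
import numpy as np

# Reversible cycle A1 <=> A2 <=> A3 <=> A1 with constants satisfying
# the Wegscheider identity (prod k+ == prod k- == 6), so a
# detailed-balance equilibrium exists.
k_plus  = np.array([2.0, 3.0, 1.0])   # A1->A2, A2->A3, A3->A1
k_minus = np.array([1.0, 2.0, 3.0])   # reverse rate constants

# Stoichiometric vectors gamma_r (gain minus loss) for the three steps.
gamma = np.array([[-1, 1, 0],
                  [0, -1, 1],
                  [1, 0, -1]], dtype=float)

def rates(c):
    """Net mass action rates w_r = k+ c_reactant - k- c_product."""
    return k_plus * c - k_minus * np.roll(c, -1)

# Detailed-balance equilibrium: c2/c1 = k1+/k1-, c3/c2 = k2+/k2-.
c_eq = np.array([1.0, 2.0, 3.0])

def free_energy(c):
    """Ideal free energy (per RT, per unit volume) relative to c_eq."""
    return np.sum(c * (np.log(c / c_eq) - 1.0))

c = np.array([4.0, 1.0, 1.0])          # same total mass, far from c_eq
dt, prev = 1e-3, free_energy(c)
for _ in range(20000):
    c = c + dt * gamma.T @ rates(c)    # dc/dt = sum_r gamma_r w_r
    g = free_energy(c)
    assert g <= prev + 1e-12, "free energy must not increase"
    prev = g

print(np.round(c, 4))                  # converges to c_eq = (1, 2, 3)
```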
Onsager reciprocal relations and detailed balance
Let the principle of detailed balance be valid. Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as:
$w_r = w_r^+ - w_r^- \approx -w_r^{\mathrm{eq}} \sum_i \frac{\gamma_{ri}(\mu_i - \mu_i^{\mathrm{eq}})}{RT}.$
Therefore, again in the linear response regime near equilibrium, the kinetic equations are:
$\frac{dN_i}{dt} = -V \sum_j \left( \sum_r \frac{w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj}}{R} \right) \frac{\mu_j - \mu_j^{\mathrm{eq}}}{T}.$
This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces and the matrix of coefficients in the form
$X_j = \frac{\mu_j - \mu_j^{\mathrm{eq}}}{T}, \quad \frac{dN_i}{dt} = \sum_j L_{ij} X_j, \quad L_{ij} = -\frac{V}{R} \sum_r w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj}.$
The coefficient matrix is symmetric:
$L_{ij} = L_{ji}.$
These symmetry relations, $L_{ij} = L_{ji}$, are exactly the Onsager reciprocal relations. The coefficient matrix $L$ is non-positive. It is negative on the linear span of the stoichiometric vectors $\gamma_r$. So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium.

Semi-detailed balance
To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form:
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r = V \sum_r (\beta_{ri} - \alpha_{ri}) w_r.$
Let us use the notations $\alpha_r = (\alpha_{ri})$, $\beta_r = (\beta_{ri})$ for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let $Y$ be the set of all these vectors $\alpha_r, \beta_r$.

For each $y \in Y$, let us define two sets of numbers:
$r \in R_y^+$ if and only if $y$ is the vector of the input stoichiometric coefficients $\alpha_r$ for the rth elementary reaction;
$r \in R_y^-$ if and only if $y$ is the vector of the output stoichiometric coefficients $\beta_r$ for the rth elementary reaction.

The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every $y \in Y$,
$\sum_{r \in R_y^+} w_r = \sum_{r \in R_y^-} w_r.$
The semi-detailed balance condition is sufficient for stationarity: it implies that
$\frac{dN}{dt} = V \sum_r \gamma_r w_r = 0.$
For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, a sufficient but not necessary condition for stationarity.

The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds, then the condition of semi-detailed balance also holds.

For systems that obey the generalized mass action law, the semi-detailed balance condition is sufficient for the dissipation inequality $dF/dt \leq 0$ (for the Helmholtz free energy under isothermal isochoric conditions, and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials).

Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guarantees the positivity of the entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972.

The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem.
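The semi-detailed (complex) balance condition is also straightforward to check mechanically: for each complex y, the total rate of the reactions consuming y must equal the total rate of the reactions producing it. A minimal sketch (Python; the function name is our own) applied to the irreversible three-state cycle shows that complex balance can hold where detailed balance fails:

```python
from collections import defaultdict

def semi_detailed_balance_residuals(alpha, beta, w):
    """For each complex y, compare the total rate of reactions consuming y
    (those with alpha_r == y) with the total rate producing y (beta_r == y)."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for a, b, rate in zip(alpha, beta, w):
        totals[tuple(a)][0] += rate   # y consumed as an input complex
        totals[tuple(b)][1] += rate   # y produced as an output complex
    return {y: round(inp - out, 12) for y, (inp, out) in totals.items()}

# One-way cycle A1 -> A2 -> A3 -> A1 at its uniform steady state:
alpha = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # input complexes
beta  = [(0, 1, 0), (0, 0, 1), (1, 0, 0)]   # output complexes
w = [1.0, 1.0, 1.0]                          # equal steady-state rates

# All residuals are zero: semi-detailed balance holds for this
# irreversible cycle, even though detailed balance does not.
print(semi_detailed_balance_residuals(alpha, beta, w))
```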
Dissipation in systems with semi-detailed balance
Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process $\sum_i \alpha_{ri} A_i \to \sum_i \beta_{ri} A_i$ is
$w_r = \varphi_r \exp\left(\sum_i \frac{\alpha_{ri}\mu_i}{RT}\right),$
where $\mu_i = \partial F / \partial N_i$ is the chemical potential and $F(T, V, N)$ is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier $\varphi_r \geq 0$ is the kinetic factor. Let us count the direct and reverse reactions in the kinetic equation separately:
$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r.$
An auxiliary function $\theta(\lambda)$ of one variable $\lambda \in [0, 1]$ is convenient for the representation of dissipation for the mass action law:
$\theta(\lambda) = \sum_r \varphi_r \exp\left(\sum_i \frac{(\lambda \alpha_{ri} + (1 - \lambda)\beta_{ri})\mu_i}{RT}\right).$
This function may be considered as the sum of the reaction rates for the deformed input stoichiometric coefficients $\tilde{\alpha}_r(\lambda) = \lambda \alpha_r + (1 - \lambda)\beta_r$. For $\lambda = 1$ it is just the sum of the reaction rates. The function $\theta(\lambda)$ is convex because $\theta''(\lambda) \geq 0$.

Direct calculation gives that, according to the kinetic equations,
$\frac{dF}{dt} = -VRT\,\theta'(1).$
This is the general dissipation formula for the generalized mass action law.

Convexity of $\theta(\lambda)$ gives $\theta'(1) \geq \theta(1) - \theta(0)$ and, therefore, a sufficient condition for the proper dissipation inequality: if $\theta(0) \leq \theta(1)$ then $dF/dt \leq 0$. The semi-detailed balance condition can be transformed into the identity $\theta(0) = \theta(1)$. Therefore, for the systems with semi-detailed balance, $dF/dt \leq 0$.

Cone theorem and local equivalence of detailed and complex balance
For any reaction mechanism and a given positive equilibrium, a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N as the conical hull of the stoichiometric vectors $\gamma_r$ taken with the signs $\operatorname{sgn}(w_r^+(N) - w_r^-(N))$. These piecewise-constant sign functions do not depend on the (positive) values of the equilibrium reaction rates and are defined by thermodynamic quantities under the assumption of detailed balance. The cone theorem states that, for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to this cone. That is, there exists a system with detailed balance, with the same reaction mechanism and the same positive equilibrium, that gives the same velocity at state N. According to the cone theorem, for a given state N, the set of velocities of the semi-detailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance.

Detailed balance for systems with irreversible reactions
Detailed balance states that, in equilibrium, each elementary process is equilibrated by its reverse process, and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions, etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A1 -> A2 -> A3 -> A1 cannot be obtained as such a limit, but the reaction mechanism A1 -> A2 -> A3 <- A1 can.

Gorban–Yablonsky theorem. A system of reactions with some irreversible reactions is a limit of systems with detailed balance, when some constants tend to zero, if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways.

See also
T-symmetry
Microscopic reversibility
Master equation
Balance equation
Gibbs sampling
Metropolis–Hastings algorithm
Atomic spectral line (deduction of the Einstein coefficients)
Random walks on graphs

References

Non-equilibrium thermodynamics Statistical mechanics Markov models Chemical kinetics
Detailed balance
[ "Physics", "Chemistry", "Mathematics" ]
4,096
[ "Chemical reaction engineering", "Non-equilibrium thermodynamics", "Statistical mechanics", "Chemical kinetics", "Dynamical systems" ]
2,212,989
https://en.wikipedia.org/wiki/Transverse%20engine
A transverse engine is an engine mounted in a vehicle so that the engine's crankshaft axis is perpendicular to the direction of travel. Many modern front-wheel drive vehicles use this engine mounting configuration. Most rear-wheel drive vehicles use a longitudinal engine configuration, where the engine's crankshaft axis is parallel with the direction of travel, except for some rear-mid engine vehicles, which use a transverse engine and transaxle mounted in the rear instead of the front. Despite typically being used in light vehicles, the layout is not restricted to such designs and has also been used on armoured fighting vehicles to save interior space.

History
The Critchley light car, made by the Daimler Motor Company in 1899, had a transverse engine with belt drive to the rear axle. The first successful transverse-engine cars were the two-cylinder DKW F1 series of cars, which first appeared in 1931. During the Second World War, transverse engines were developed for armored vehicles, with the Soviet T-44 and T-54/T-55 tanks being equipped with transverse engines to save space within the hull. The T-54/55 eventually became the most produced tank in history.

Postwar use
After the Second World War, Saab used the configuration in their first model, the Saab 92, in 1947. The arrangement was also used for Borgward's Goliath and Hansa brand cars. The East German-built Trabant, which appeared in 1957, also had a transversely mounted two-stroke engine, and this design was kept until the end of production in 1991. However, it was with Alec Issigonis's Mini, introduced by the British Motor Corporation in 1959, that the design gained acclaim. Issigonis incorporated the car's transmission into the engine's sump, producing a drivetrain unit narrow enough to install transversely even in a very small car. While previous DKW and Saab cars used small, unrefined air-cooled two-stroke engines with poor performance, the gearbox-in-sump arrangement meant that an 848 cc four-cylinder water-cooled engine could be fitted to the Mini, providing strong performance for a car of its size. Coupled with the much greater amount of interior space afforded by the layout (the entire drivetrain only took up 20% of the car's length), this made the Mini a genuine alternative to the conventional small family car.

This design reached its peak starting with Dante Giacosa's elaboration of it for Fiat. He connected the engine to its gearbox by a shaft and set the differential off-center so that it could be connected to the gearbox more easily. The half shafts from the differential to the wheels therefore differed in length, which would have made the car's steering asymmetrical were it not for their torsional stiffness being made the same. Giacosa's layout was first used in the Autobianchi Primula in 1964 and later in the popular Fiat 128. With the gearbox mounted separately from the engine, these cars were by necessity larger than the Mini, but this proved to be no disadvantage. This layout, still in use today, also provided superior refinement and easier repair, and was better suited to adopting five-speed transmissions than the original Issigonis in-sump design.

The Lamborghini Miura used a transverse mid-mounted 4.0-litre V12. This configuration was unheard of in 1965, but became more common in the following decades, with cars such as the Lancia Montecarlo, Noble M12, Toyota MR2, Pontiac Fiero, and first-generation Honda NSX using such a powertrain design.
The Land Rover Freelander (LR2), along with all Volvo models from 1998 on (including V8 models), employs a transversely mounted engine in order to increase passenger space inside the vehicle. This has also allowed for improved safety in a frontal impact, due to more longitudinal engine compartment space being created. The result is a larger front crumple zone.

Transverse engines have also been widely used in buses. In the United States, they were offered in the early 1930s by Twin Coach and used with limited success in Dwight Austin's Pickwick Nite-Coach. Transverse bus engines first appeared widely in the Yellow Coach 719, using Dwight Austin's V-drive; they continued in common use until the 1990s, though shorter V-configuration engines in a longitudinal "T-drive" configuration became common in the 1960s. Transverse engines were also used in the British Leyland Atlantean, in many transit buses, and in nearly all modern double decker buses. They have also been widely used by Scania, MAN, Volvo and Renault's bus divisions.

Position placement of transverse engines
Engines may be placed in two main positions within the motor car:
Front-engine transversely-mounted / Front-wheel drive
Rear mid-engine transversely-mounted / Rear-wheel drive

Common types of transversely placed engines
Space allowed for engines within the front wheel wells is commonly limited to the following:
Single cylinder
Inline-two
Inline-three
Inline-four
Inline-five
Inline-six (for rear-engined buses)
V4
V6

Less common types of transversely-placed engines
Inline-six: Austin Kimberley, Tasman; Austin 2200, Morris 2200, Wolseley Six; Austin/Morris 2200 HL, Wolseley Saloon, Princess; Daewoo Magnus (aka Chevrolet Epica/Evanda, Suzuki Verona); Daewoo Tosca (aka Chevrolet/Holden Epica); Land Rover Freelander 2 (aka LR2); Volvo S80; Volvo S60 (2nd generation, also V60); Volvo V70 (3rd generation); Volvo XC60; Volvo XC90
V8: Buick LaCrosse Super (2008–2009), Cadillac Allanté, Ford Taurus SHO (3rd generation), Hyundai Equus/Centennial (1st generation), Lancia Thema 8.32, Lincoln Continental (1995–2002 only), Mitsubishi Dignity, Mitsubishi Proudia, Oldsmobile Aurora, Volvo S80, Volvo XC90
V12 (mid-engine only): Lamborghini Miura
V16 (mid-engine only): Cizeta-Moroder V16T

Alternative convention with twin-cylinder motorcycles
The description of the orientation of V-twin and flat-twin motorcycle engines sometimes differs from the convention as stated above. Motorcycles with a V-twin engine mounted with its crankshaft parallel to the direction of travel, e.g. the AJS S3 V-twin, Indian 841, Victoria Bergmeister, Honda CX series and several Moto Guzzis since the 1960s, are said to have "transverse" engines, while motorcycles with a V-twin mounted with its crankshaft perpendicular to the direction of travel, e.g. most Ducatis since the 1970s and most Harley-Davidsons, are said to have "longitudinal" engines. This convention uses the longest horizontal dimension (length or width) of the engine as its reference axis instead of the crankshaft.

Notes

References

Engine technology Automotive technologies
Transverse engine
[ "Technology" ]
1,460
[ "Engine technology", "Engines" ]
2,213,137
https://en.wikipedia.org/wiki/Longitudinal%20engine
In automotive engineering, a longitudinal engine is an internal combustion engine in which the crankshaft is oriented along the long axis of the vehicle, from front to back. See also: transverse engine.

Use
This layout is usually used for rear-wheel drive cars, with exceptions such as some Audi and Saab models, the Oldsmobile Toronado, and the 1967 Cadillac Eldorado, which combined longitudinal engines with front-wheel drive. In front-wheel drive cars a transverse engine is usually used. Trucks often have longitudinal engines with rear-wheel drive. For motorcycles, the use of a particular type depends on the drive: with a chain or belt drive a transverse engine is usually used, and with shaft drives a longitudinal engine. Longitudinal engines in motorcycles do have one disadvantage: the torque reaction of the crankshaft tilts the entire motorcycle to a greater or lesser degree when accelerating. This is partly resolved by having other components, such as the generator and the gearbox, rotate in the opposite direction to the crankshaft.

Most larger, "premium" vehicles use the longitudinal engine orientation in combination with rear wheel drive, because powerful engines such as the inline-6 and 90° big-bore V8 are usually too long to fit in an FF transverse engine bay. By contrast, most mainstream modern vehicles use front wheel drive along with a transverse engine arrangement, since they are usually equipped with inline-4 or V6 engines. While both layouts can be adapted for all-wheel drive, the longitudinal engine orientation has a more balanced weight distribution leading to superior handling characteristics, but is less efficient in terms of packaging and interior space.

Cars with longitudinal engines usually have a smaller minimum turning circle than those with transverse engines. This is because there is more space to the sides of the engine, allowing deeper wheel arches, so the front wheels are able to turn through a greater angle.

In the late 1960s, GM divisions Oldsmobile and Cadillac had the front-wheel drive models Toronado and Eldorado respectively, with a longitudinal V8 engine and an integrated automatic transmission and differential unit powering the front wheels. Honda and Toyota also offered front-wheel drive cars with longitudinal engines, namely the Honda Vigor, Acura/Honda Legend/RL, and Toyota Tercel.

Common types
This is a list of typical examples of types of engines which can be placed in motor vehicles:
In-line or straight engine – where two, three, four, five, six, and even eight cylinders are placed in a single plane.
V engine – where two, four, six, eight, ten, twelve, or even sixteen cylinders are placed in two separate planes, looking like a "V" when viewed from the end of the crankshaft.
Flat or boxer engine – where two, four, six or more cylinders are arranged in two diametrically opposed horizontal planes.
W engine – where two (narrow angle) vee engines are siamesed together (within 180°), so that eight, twelve or sixteen cylinders are arranged in four separate planes.

References

Engine technology Automotive technologies
Longitudinal engine
[ "Technology" ]
611
[ "Engine technology", "Engines" ]
2,213,191
https://en.wikipedia.org/wiki/Church%20architecture
Church architecture refers to the architecture of Christian buildings, such as churches, chapels, convents, seminaries, etc. It has evolved over the two thousand years of the Christian religion, partly by innovation and partly by borrowing other architectural styles, as well as by responding to changing beliefs, practices and local traditions. From Early Christianity to the present, the most significant objects of transformation for Christian architecture and design were the great churches of Byzantium, the Romanesque abbey churches, the Gothic cathedrals and the Renaissance basilicas, with their emphasis on harmony. These large, often ornate and architecturally prestigious buildings were dominant features of the towns and countryside in which they stood. However, far more numerous were the parish churches in Christendom, the focus of Christian devotion in every town and village. While a few are counted as sublime works of architecture to equal the great cathedrals and churches, the majority developed along simpler lines, showing great regional diversity and often demonstrating local vernacular technology and decoration.

Buildings were at first adapted from those originally intended for other purposes but, with the rise of distinctively ecclesiastical architecture, church buildings came to influence secular ones, which have often imitated religious architecture. In the 20th century, the use of new materials, such as steel and concrete, has had an effect upon the design of churches.

The history of church architecture divides itself into periods, into countries or regions, and by religious affiliation. The matter is complicated by the fact that buildings put up for one purpose may have been re-used for another, that new building techniques may permit changes in style and size, that changes in liturgical practice may result in the alteration of existing buildings, and that a building built by one religious group may be used by a successor group with different purposes.

Origins and development of the church building
The simplest church building comprises a single meeting space, built of locally available material and using the same skills of construction as the local domestic buildings. Such churches are generally rectangular, but in African countries where circular dwellings are the norm, vernacular churches may be circular as well. A simple church may be built of mud brick, wattle and daub, split logs or rubble. It may be roofed with thatch, shingles, corrugated iron or banana leaves. However, church congregations, from the 4th century onwards, have sought to construct church buildings that were both permanent and aesthetically pleasing. This has led to a tradition in which congregations and local leaders have invested time, money and personal prestige into the building and decoration of churches.

Within any parish, the local church is often the oldest building and is larger than any pre-19th-century structure except perhaps a barn. The church is often built of the most durable material available, often dressed stone or brick. The requirements of liturgy have generally demanded that the church should extend beyond a single meeting room to two main spaces, one for the congregation and one in which the priest performs the rituals of the Mass. To this two-room structure are often added aisles, a tower, chapels, and vestries, and sometimes transepts and mortuary chapels.
The additional chambers may be part of the original plan, but in the case of a great many old churches, the building has been extended piecemeal, its various parts testifying to its long architectural history.

Beginnings
In the first three centuries of the early Christian Church, the practice of Christianity was illegal and few churches were constructed. In the beginning, Christians worshipped along with Jews in synagogues and in private houses. After the separation of Jews and Christians, the latter continued to worship in people's houses, known as house churches. These were often the homes of the wealthier members of the faith. Saint Paul, in his first letter to the Corinthians, writes: "The churches of Asia send greetings. Aquila and Prisca, together with the church in their house, greet you warmly in the Lord."

Some domestic buildings were adapted to function as churches. One of the earliest adapted residences is the Dura Europos church, built shortly after 200 AD, where two rooms were made into one by removing a wall, and a dais was set up. To the right of the entrance a small room was made into a baptistry. Some church buildings were specifically built for church assembly, such as that opposite the emperor Diocletian's palace in Nicomedia. Its destruction was recorded thus:

When that day dawned, in the eighth consulship of Diocletian and seventh of Maximian, suddenly, while it was yet hardly light, the prefect, together with chief commanders, tribunes, and officers of the treasury, came to the church in Nicomedia, and the gates having been forced open, they searched everywhere for an idol of the Divinity. The books of the Holy Scriptures were found, and they were committed to the flames; the utensils and furniture of the church were abandoned to pillage: all was rapine, confusion, tumult. That church, situated on rising ground, was within view of the palace; and Diocletian and Galerius stood as if on a watchtower, disputing long whether it ought to be set on fire. The sentiment of Diocletian prevailed, who dreaded lest, so great a fire being once kindled, some part of the city might be burnt; for there were many and large buildings that surrounded the church. Then the Pretorian Guards came in battle array, with axes and other iron instruments, and having been let loose everywhere, they in a few hours leveled that very lofty edifice with the ground.

From house church to church
From the first to the early fourth centuries most Christian communities worshipped in private homes, often secretly. Some Roman churches, such as the Basilica of San Clemente in Rome, are built directly over the houses where early Christians worshipped. Other early Roman churches are built on the sites of Christian martyrdom or at the entrance to catacombs where Christians were buried.

With the victory of the Roman emperor Constantine at the Battle of Milvian Bridge in 312 AD, Christianity became a lawful and then the privileged religion of the Roman Empire. The faith, already spread around the Mediterranean, now expressed itself in buildings. Christian architecture was designed to correspond to the civic and imperial forms of ancient Roman architecture, because the architects and building craftsmen had mastered these forms.
The Aula regia, the typical audience hall of imperial palaces with a throne apse, became the model for aisleless churches (later developed into hall churches), whereas the basilica building type, with a higher central nave flanked by two or more lower longitudinal aisles, commonly used for market halls in the Roman era, became the most widespread building type for churches in the East and West, sometimes with galleries and clerestories. While civic basilicas mostly had no apses, or sometimes apses at either end, the Christian basilica usually had a single apse (like the aula regia) where the bishop and presbyters sat on a dais behind the altar. While pagan basilicas had as their focus a statue of the emperor, Christian basilicas focused on the altar as the place of the Eucharist.

Central buildings, often modeled on Roman official buildings, with circular (rotunda, like the Pantheon in Rome), oval, square, cruciform, hexagonal, octagonal, nonagonal or higher polygonal building shapes, also served as models, for example for the 6th-century Basilica of San Vitale in Ravenna. The Roman temple, on the other hand, was only suitable as a design for smaller chapels, as it had only a small cella inside, to which only the priests had access, but not the congregation, as in Christian churches.

The first very large Christian churches were built in Rome in the early 4th century: Old St. Peter's Basilica, the Basilica of Saint Paul Outside the Walls, San Giovanni in Laterano, Santa Maria Maggiore, and, in the early 5th century, Santa Sabina. The Mausoleum of Constantina, Emperor Constantine's daughter, was built as a mausoleum, just like her grandmother's Mausoleum of Helena, and only later converted into a church (Santa Costanza). In Ravenna, the rulers' residence shortly before and after the fall of the Western Roman Empire, many early Christian churches were not only built but have also been preserved to this day: in the 5th century the Mausoleum of Galla Placidia and the Baptistery of Neon were built, and in the 6th century the Basilica of Sant'Apollinare Nuovo, the Basilica of Sant'Apollinare in Classe, the Basilica of San Vitale, the Arian Baptistery and the Archbishop's Chapel.

These early churches distinguished themselves from pagan temples by the simplicity of their execution: much brickwork and little marble, no plastic arts, no "moving" scenes. The glass mosaics were suggestive (a poster function) but made of comparatively cheap material. Depictions of saints like those in Ravenna were deliberately not lifelike, but rather "disembodied". The outer walls were only lightened up by the partially large windows. It was only later that the upper part of the facade was decorated with mosaics.

Characteristics of the early Christian church building
The church building as we know it grew out of a number of features of the Ancient Roman period:
The house church
The atrium
The basilica
The bema
The mausoleum: centrally-planned building
The cruciform ground plan: Latin or Greek cross

Atrium
When early Christian communities began to build churches, they drew on one particular feature of the houses that preceded them, the atrium, or courtyard with a colonnade surrounding it. Most of these atriums have disappeared. A fine example remains at the Basilica of San Clemente in Rome, and another was built in the Romanesque period at Sant'Ambrogio, Milan.
The descendants of these atria may be seen in the large square cloisters that can be found beside many cathedrals, in the huge colonnaded squares or piazze at the Basilicas of St Peter's in Rome and St Mark's in Venice, and in the Camposanto (Holy Field) at the Cathedral of Pisa.

Basilica
Early church architecture did not draw its form from Roman temples, as they did not have large internal spaces where worshipping congregations could meet. It was the Roman basilica, used for meetings, markets, and courts of law, that provided a model for the large Christian church and that gave its name to the Christian basilica. Both Roman basilicas and Roman bath houses had at their core a large vaulted building with a high roof, braced on either side by a series of lower chambers or a wide arcaded passage. An important feature of the Roman basilica was that at either end it had a projecting exedra, or apse, a semicircular space roofed with a half-dome. This was where the magistrates sat to hold court. It passed into the church architecture of the Roman world and was adapted in different ways as a feature of cathedral architecture.

The earliest large churches, such as the Cathedral of San Giovanni in Laterano in Rome, consisted of a single-ended basilica with one apsidal end and a courtyard, or atrium, at the other end. As Christian liturgy developed, processions became part of the proceedings. The processional door was that which led from the furthest end of the building, while the door most used by the public might be that central to one side of the building, as in a basilica of law. This is the case in many cathedrals and churches.

Bema
As numbers of clergy increased, the small apse which contained the altar, or table upon which the sacramental bread and wine were offered in the rite of Holy Communion, was not sufficient to accommodate them. A raised dais called a bema, a concept taken from synagogue architecture, formed part of many large basilican churches. In the case of St. Peter's Basilica and San Paolo Fuori le Mura (St Paul's outside the Walls) in Rome, this bema extended laterally beyond the main meeting hall, forming two arms so that the building took on the shape of a T with a projecting apse. From this beginning, the plan of the church developed into the so-called Latin Cross, which is the shape of most Western cathedrals and large churches. The arms of the cross are called the transept.

Mausoleum
One of the influences on church architecture was the mausoleum. The mausoleum of a noble Roman was a square or circular domed structure which housed a sarcophagus. The Emperor Constantine built for his daughter Costanza a mausoleum which has a circular central space surrounded by a lower ambulatory or passageway separated by a colonnade. Santa Costanza's burial place became a place of worship as well as a tomb. It is one of the earliest church buildings that was centrally, rather than longitudinally, planned. Constantine was also responsible for the building of the circular, mausoleum-like Church of the Holy Sepulchre in Jerusalem, which in turn influenced the plan of a number of buildings, including that constructed in Rome to house the remains of the proto-martyr Stephen, Santo Stefano Rotondo, and the Basilica of San Vitale in Ravenna.

Ancient circular or polygonal churches are comparatively rare. A small number, such as the Temple Church, London, were built during the Crusades in imitation of the Church of the Holy Sepulchre, as isolated examples in England, France, and Spain.
In Denmark such churches in the Romanesque style are much more numerous. In parts of Eastern Europe there are also round tower-like churches of the Romanesque period, but they are generally vernacular architecture and of small scale. Others, like St Martin's Rotunda at Vyšehrad in the Czech Republic, are finely detailed.

The circular or polygonal form lent itself to those buildings within church complexes that perform a function in which it is desirable for people to stand or sit around with a centralized focus, rather than an axial one. In Italy the circular or polygonal form was used throughout the medieval period for baptisteries, while in England it was adapted for chapter houses. In France the aisled polygonal plan was adopted as the eastern terminal, and in Spain the same form is often used as a chapel.

Other than Santa Costanza and Santo Stefano, there was another significant place of worship in Rome that was also circular, the vast Ancient Roman Pantheon, with its numerous statue-filled niches. This too was to become a Christian church and lend its style to the development of cathedral architecture.

Latin cross and Greek cross
Most cathedrals and great churches have a cruciform groundplan. In churches of Western European tradition, the plan is usually longitudinal, in the form of the so-called Latin Cross, with a long nave crossed by a transept. The transept may be as strongly projecting as at York Minster, or not project beyond the aisles, as at Amiens Cathedral.

Many of the earliest churches of Byzantium have a longitudinal plan. At Hagia Sophia, Istanbul, there is a central dome, framed on one axis by two high semi-domes and on the other by low rectangular transept arms, the overall plan being square. This large church was to influence the building of many later churches, even into the 21st century. A square plan in which the nave, chancel and transept arms are of equal length, forming a Greek cross, the crossing generally surmounted by a dome, became the common form in the Eastern Orthodox Church, with many churches throughout Eastern Europe and Russia being built in this way. Churches of the Greek Cross form often have a narthex or vestibule which stretches across the front of the church. This type of plan was also later to play a part in the development of church architecture in Western Europe, most notably in Bramante's plan for St. Peter's Basilica.

Divergence of Eastern and Western church architecture
The division of the Roman Empire in the fourth century AD resulted in Christian ritual evolving in distinctly different ways in the eastern and western parts of the empire. The final break was the Great Schism of 1054.

Eastern Orthodoxy and Byzantine architecture
Eastern Christianity and Western Christianity began to diverge from each other from an early date. Whereas the basilica was the most common form in the west, a more compact centralized style became predominant in the east. These churches were in origin martyria, constructed as mausoleums housing the tombs of the saints who had died during the persecutions, which only fully ended with the conversion of Emperor Constantine. An important surviving example is the Mausoleum of Galla Placidia in Ravenna, which has retained its mosaic decorations. Dating from the 5th century, it may have been briefly used as an oratory before it became a mausoleum. These buildings copied pagan tombs and were square, cruciform with shallow projecting arms, or polygonal. They were roofed by domes, which came to symbolize heaven.
The projecting arms were sometimes roofed with domes or semi-domes that were lower and abutted the central block of the building. Byzantine churches, although centrally planned around a domed space, generally maintained a definite axis towards the apsidal chancel, which generally extended further than the other apses. This projection allowed for the erection of an iconostasis, a screen on which icons are hung and which conceals the altar from the worshippers except at those points in the liturgy when its doors are opened.

The architecture of Constantinople (Istanbul) in the 6th century produced churches that effectively combined centralized and basilica plans, having semi-domes forming the axis, and arcaded galleries on either side. The church of Hagia Sophia (now a mosque) was the most significant example and had an enormous influence on both later Christian and Islamic architecture, such as the Dome of the Rock in Jerusalem and the Umayyad Great Mosque in Damascus. Many later Eastern Orthodox churches, particularly large ones, combine a centrally planned, domed eastern end with an aisled nave at the west.

A variant form of the centralized church was developed in Russia and came to prominence in the sixteenth century. Here the dome was replaced by a much thinner and taller hipped or conical roof, which perhaps originated from the need to prevent snow from remaining on roofs. One of the finest examples of these tented churches is St. Basil's in Red Square in Moscow.

Medieval West
Participation in worship, which gave rise to the porch church, began to decline as the church became increasingly clericalized; with the rise of the monasteries, church buildings changed as well. The 'two-room' church became, in Europe, the norm. The first 'room', the nave, was used by the congregation; the second 'room', the sanctuary, was the preserve of the clergy and was where the Mass was celebrated. This could then only be seen from a distance by the congregation through the arch between the rooms (from late mediaeval times closed by a wooden partition, the rood screen), and the elevation of the host, the bread of the communion, became the focus of the celebration: it was not at that time generally partaken of by the congregation. Given that the liturgy was said in Latin, the people contented themselves with their own private devotions until this point. Because of the difficulty of sight lines, some churches had holes, 'squints', cut strategically in walls and screens, through which the elevation could be seen from the nave. Again, from the twin principles that every priest must say his mass every day and that an altar could only be used once, in religious communities a number of altars were required, for which space had to be found, at least within monastic churches.

Apart from changes in the liturgy, the other major influence on church architecture was the use of new materials and the development of new techniques. In northern Europe, early churches were often built of wood, for which reason almost none survive. With the wider use of stone by the Benedictine monks in the tenth and eleventh centuries, larger structures were erected. The two-room church, particularly if it were an abbey or a cathedral, might acquire transepts. These were effectively arms of the cross which now made up the ground plan of the building. The buildings became more clearly symbolic of what they were intended for.
Sometimes this crossing, now the central focus of the church, would be surmounted by its own tower, in addition to the west end towers, or instead of them. (Such precarious structures were known to collapse – as at Ely – and had to be rebuilt.) Sanctuaries, now providing for the singing of the offices by monks or canons, grew longer and became chancels, separated from the nave by a screen. Practical function and symbolism were both at work in the process of development.

Factors affecting the architecture of churches
Across Europe, the process by which church architecture developed and individual churches were designed and built was different in different regions, and sometimes differed from church to church in the same region and within the same historic period. Among the factors that determined how a church was designed and built are the nature of the local community, the location in city, town or village, whether the church was an abbey or a collegiate church, whether it had the patronage of a bishop or the ongoing patronage of a wealthy family, and whether it contained relics of a saint or other holy objects that were likely to draw pilgrimage.

Collegiate churches and abbey churches, even those serving small religious communities, generally demonstrate a greater complexity of form than parochial churches in the same area and of a similar date. Churches built under the patronage of a bishop have generally employed a competent church architect and demonstrate a refinement of style unlike that of the parochial builder.

Many parochial churches have had the patronage of wealthy local families. The degree to which this affects the architecture can differ greatly. It may entail the design and construction of the entire building having been financed and influenced by a particular patron. On the other hand, the evidence of patronage may be apparent only in the accretion of chantry chapels, tombs, memorials, fittings, stained glass, and other decorations.

Churches that contain famous relics or objects of veneration, and have thus become pilgrimage churches, are often very large and have been elevated to the status of basilica. However, many other churches enshrine the bodies or are associated with the lives of particular saints without having attracted continuing pilgrimage and the financial benefit that it brought. The popularity of saints, the veneration of their relics, and the size and importance of the church built to honor them are without consistency and can be dependent upon entirely different factors. Two virtually unknown warrior saints, San Giovanni and San Paolo, are honoured by one of the largest churches in Venice, built by the Dominican friars in competition with the Franciscans, who were building the Frari Church at the same time. The much smaller church that contained the body of Saint Lucy, a martyr venerated by Catholics and Protestants across the world and the titular saint of numerous locations, was demolished in the late 19th century to make way for Venice's railway station.

The first truly baroque façade was built in Rome between 1568 and 1584 for the Church of the Gesù, the mother church of the Society of Jesus (Jesuits). It introduced the baroque style into architecture. Corresponding with the Society's theological task as the spearhead of the Counter-Reformation, the new style soon became a triumphant feature in Catholic church architecture.
After the Second World War, modern materials and techniques such as concrete and metal panels were introduced in Norwegian church construction. Bodø Cathedral, for instance, was built in reinforced concrete, allowing a wide basilica. During the 1960s there was a more pronounced break from tradition, as in the Arctic Cathedral, built in lightweight concrete and covered in aluminum siding.

Wooden churches
In Norway, church architecture has been affected by wood as the preferred material, particularly in sparsely populated areas. Apart from the medieval constructions, about 90% of the churches built before the Second World War are wooden. During the Middle Ages all wooden churches in Norway (about 1000 in total) were constructed in the stave church technique, against only 271 masonry constructions. After the Protestant Reformation, when the construction of new (or replacement of old) churches was resumed, wood was still the dominant material, but the log technique became dominant. The log construction gave a lower, sturdier style of building compared to the light and often tall stave churches. Log construction became structurally unstable for long and tall walls, particularly if cut through by tall windows. Adding transepts improved the stability of the log technique, and this is one reason why the cruciform floor plan was widely used during the 1600s and 1700s. For instance, the Old Olden Church (1759) replaced a building damaged by a hurricane; the new church was constructed in cruciform shape to make it withstand the strongest winds. The length of trees (logs) also determined the length of walls, according to Sæther. In Samnanger church, for instance, the outside corners have been cut to avoid splicing logs; the result is an octagonal floor plan rather than a rectangular one. The cruciform constructions provided a more rigid structure and larger churches, but the view to the pulpit and altar was obstructed by the interior corners for seats in the transept. The octagonal floor plan offers good visibility as well as a rigid structure allowing a relatively wide nave to be constructed – Håkon Christie believes that this is a reason why the octagonal church design became popular during the 1700s. Vreim believes that the introduction of the log technique after the Reformation resulted in a multitude of church designs in Norway.

In Ukraine, wooden church construction originated with the introduction of Christianity and continued to be widespread, particularly in rural areas, at a time when masonry churches dominated in the cities and in Western Europe.

Regional styles
Church architecture varies depending on both the sect of the faith and the geographical location and the influences acting upon it. Variances from the typical church architecture, as well as unique characteristics, can be seen in many areas around the globe.

England
The style of churches in England has gone through many changes under the influence of geographical, geological, climatic, religious, social and historical factors. One of the earliest style changes is shown in Westminster Abbey, which was built in a foreign style and was a cause for concern for many, as it heralded change. A second example is the current St Paul's Cathedral in London. There are many other notable churches that have each had their own influence on the ever-changing style in England, such as Truro, Westminster Cathedral, Liverpool and Guildford. Between the thirteenth and fourteenth centuries, the style of church architecture could be called 'Early English' and 'Decorated'.
This period is considered the prime of English church building. It was after the Black Death that the style went through another change, the 'perpendicular style', in which ornamentation became more extravagant. An architectural element that appeared soon after this change, and that is observed extensively in medieval English styles, is fan vaulting, seen in the Chapel of Henry VII and King's College Chapel in Cambridge. After this, the prevalent style was Gothic for around 300 years, though the style was clearly present for many years before that as well. In these late Gothic times there was a specific way in which the foundations of the churches were built: first a stone skeleton would be erected, then the spaces between the vertical supports filled with large glass windows, with those windows supported by their own transoms and mullions. Church windows are somewhat controversial, as some argue that the church should be flooded with light and some argue that it should be dim for an ideal praying environment. Most church plans in England have their roots in one of two styles, Basilican and Celtic; later came the emergence of a 'two-cell' plan consisting of nave and sanctuary.

In the years before the Second World War, there was a movement towards a new style of architecture, one that was more functional than embellished. There was an increased use of steel and concrete, and a rebellion against the romantic nature of the traditional style. This resulted in a 'battle of the styles', in which one side was leaning towards the modernist, functional way of design, and the other was following the traditional Romanesque, Gothic, and Renaissance styles, as reflected in the architecture of all buildings, not just churches.

Wallachia
In the early Romanian territory of Wallachia, three major influences can be seen. The first were the western influences of the Gothic and Romanesque styles, which later gave way to the greater influence of the Byzantine styles. The early western influences can be seen in two places: the first is a church in Câmpulung that showcases distinctly Romanesque styles, and the second are the remnants of a church in Drobeta-Turnu Severin, which has features of the Gothic style. There are not many remaining examples of those two styles, but the Byzantine influence is much more prominent. A few prime examples of the direct Byzantine influence are the St. Nicoara and Domneasca churches in Curtea de Arges, and the church at Nicopolis in Bulgaria. These all show characteristic features such as sanctuaries, rectangular naves, circular interiors with non-circular exteriors, and small chapels. The Nicopolis church and the Domneasca both have Greek-inspired plans, but the Domneasca is far more developed than the Nicopolis church. Alongside these are also traces of Serbian, Georgian, and Armenian influences that found their way to Wallachia through Serbia.

United States
The split between Eastern and Western church architecture extended its influence into the churches seen in America today as well. America's churches are an amalgamation of the many styles and cultures that collided there, examples being St. Constantine, a Ukrainian Greek Catholic church in Minneapolis, Polish Cathedral style churches, and Russian Orthodox churches, found all across the country.
There are remnants of Byzantine-inspired architecture in many of the churches, such as large domed ceilings, extensive stonework, and a maximizing of wall space for religious iconography. Churches classified as Ukrainian or Catholic also tend to be much more elaborately decorated and accentuated than their Protestant counterparts, in which decoration is simple. In Texas specifically, remnants of Anglo-American colonization are visible in the architecture itself. Texas was a religious hotbed, and so ecclesiastical architecture developed at a faster pace there than in other areas. During the Antebellum period (1835–1861), church architecture showed the values and personal beliefs of the architects who created it, while also showcasing Texan cultural history. Both Catholic and Protestant buildings reflected the architectural traditions, economic circumstances, religious ordinances, and aesthetic tastes of those involved. The movement to keep ethnicities segregated during this time was also present in the very foundations of this architecture. Physical appearances varied widely from area to area, though, as each church served its own local purpose and, given the multitude of religious groups, each held a different set of beliefs. Similarly, many Catholic churches in the southwestern United States – especially in the coastal portions of California – are built with exterior elements in the Mission Revival style, as a tribute to the Spanish missions in California, though often with stained glass windows added and more modern interior elements. Ethiopia–Eritrea Although it has its roots in the traditions of Eastern Christianity – especially the Syrian church – and was later exposed to European influences, the traditional architectural style of Orthodox Tewahedo (Ethiopian Orthodox–Eritrean Orthodox) churches has followed a path all its own. The earliest known churches show the familiar basilican layout. For example, the church of Debre Damo is organized around a nave of four bays separated by re-used monolithic columns; at the western end is a low-roofed narthex, while at the eastern end is the maqdas, or Holy of Holies, separated by the only arch in the building. The next period, beginning in the second half of the first millennium AD and lasting into the 16th century, includes both structures built of conventional materials and those hewn from rock. Although most surviving examples of the first kind are now found in caves, Thomas Pakenham discovered an example in Wollo, protected inside the circular walls of a later construction. An example of these built-up churches is the church of Yemrehana Krestos, which resembles the church of Debre Damo in both plan and construction. The other style of this period, perhaps the most famous architectural tradition of Ethiopia, comprises the numerous monolithic churches. These include houses of worship carved out of mountainsides, such as Abreha we Atsbeha, which, although approximately square in plan, has a nave and transepts that combine to form a cruciform outline – leading experts to categorize it as an example of the cross-in-square church. Then there are the churches of Lalibela, which were created by excavating into "a hillside of soft, reddish tuff, variable in hardness and composition". 
Some of the churches, such as Biete Amanuel and the cross-shaped Bete Giyorgis, are entirely free-standing, with the volcanic tuff removed from all sides, while others, such as Biete Gabriel-Rufael and Biete Abba Libanos, are only detached from the living rock on one or two sides. All of the churches are accessed through a labyrinth of tunnels. The final period of Ethiopian church architecture, which extends to the present day, is characterized by round churches with conical roofs – quite similar to the ordinary houses of the inhabitants of the Ethiopian highlands. Despite this resemblance, the interiors are laid out quite differently, based on a three-part division: a maqdas, where the tabot is kept and which only priests may enter; an inner ambulatory called the qiddist, used by communicants at mass; and an outer ambulatory, the qene mehlet, used by the dabtaras and accessible to anyone. East and Southeast Asia Chinese, Vietnamese, Korean, and Japanese architecture has been integrated into church building design. Hundreds of timber-framed churches in Northern Vietnam are constructed with traditional methods, exhibiting great cultural and historical value. During the first decades of the 20th century, a new Sino-Christian church architecture emerged. Some churches across Southeast Asia have also incorporated traditional architecture, such as the Church of the Sacred Heart, Ganjuran, Java, Indonesia, and the Holy Redeemer Church, Bangkok, Thailand. Taiwan In East Asia, Taiwan is one of several countries known for its church architecture. The Spanish Fort Santo Domingo in the 17th century had an adjacent church, and the Dutch Fort Zeelandia in Tainan also included a chapel. Several modern churches have been inspired to use traditional designs. These include the Church of the Good Shepherd in Shihlin (Taipei), designed by Su Hsi Tsung and built in the traditional siheyuan style. The chapel of Taiwan Theological College and Seminary includes a pagoda shape and a traditional tile-style roof. The Zhongshan and Jinan Presbyterian churches were built during the Japanese era (1895–1945) and reflect a Japanese aesthetic. Tunghai University's Luce Memorial Chapel, designed by I. M. Pei's firm, is often held up as an example of a modern, contextualized style. The Philippines Spanish, Austronesian, and Chinese construction ideas merged during the Spanish era of the Philippines (late 16th to late 19th century); the Philippines is one of only two Christian-majority nations in the Far East, together with the small island nation of East Timor. These traditions had to adapt to the tropical climate and earthquake-prone environment, resulting in new types of arquitectura mestiza unique to the archipelago, developed over three centuries. Convents and monasteries were primarily built in the bahay na bato tradition, which combined a native Austronesian framework, stone masonry introduced by the Spaniards, and ornament contributed by both as well as by Chinese artisans. 
Most early churches – built with limited formal training through the cooperation of Spanish friars and Chinese artisans with native manpower – drew from Renaissance and Baroque traditions while accommodating the tropical climate and earthquake-prone environment of the islands. The result was an architectural style known as Filipino Baroque or Earthquake Baroque, characterized by fortress-like thick walls and contrafuertes (buttresses); squat cylindrical, rectangular or octagonal belfries that also served as watchtowers; local motifs; and, to some extent, Asian guardian lions as grotesques. Though still retaining its unique local characteristics, the style became more refined as more architects arrived from other parts of the Spanish Empire, and it even began incorporating newer styles such as Neoclassical, Neo-Gothic and Neo-Romanesque. Gothic era church architecture Gothic-era architecture, originating in 12th-century France, is a style in which curves, arches, and complex geometry are highly emphasized. These intricate structures, often of immense size, required great amounts of planning, effort and resources; involved large numbers of engineers and laborers; and often took hundreds of years to complete – all of which was considered a tribute to God. Characteristics The characteristics of a Gothic-style church are largely in congruence with the ideology that the more breathtaking a church is, the better it reflects the majesty of God. This was accomplished through clever mathematics and engineering in a period when complex shapes, especially in huge cathedrals, were not typically found in structures. Through this newly implemented skill of designing complex shapes, churches featured pointed arches, curved lights and windows, and rib vaults. Since these newly popular designs were implemented with respect to the width of the church rather than its height, greater width became the more desired quality. Art Gothic architecture in churches had a heavy emphasis on art. As with the structure of the building, there was an emphasis on complex geometric shapes. An example is stained glass windows, which can still be found in modern churches. Stained glass windows were both artistic and functional, in that they allowed colored light to enter the church and create a heavenly atmosphere. Sculpture was another popular art form of the Gothic era, creating lifelike depictions of figures – again with the use of complex curves and shapes. Artists would include a high level of detail to best preserve and represent their subject. Time periods and styles The Gothic era, the term for which was first used by the historiographer Giorgio Vasari, began in northeastern France and slowly spread throughout Europe. It was perhaps most characteristically expressed in the Rayonnant style, originating in the 13th century and known for exaggerated geometrical features that made everything as astounding and eye-catching as possible. Gothic churches were often highly decorated, with geometrical features applied to already complex structural forms. By the time the Gothic period neared its close, its influence had spread to residences, guild halls, and public and government buildings. Notable examples Chartres Cathedral Santa Maria del Fiore Cologne Cathedral Notre Dame de Paris Monastery of Batalha Metz Cathedral The Reformation and its influence on church architecture In the early 16th century, the Reformation brought a period of radical change to church design. 
On Christmas Day 1521, Andreas Karlstadt performed the first reformed communion service. In early January 1522, the Wittenberg city council authorized the removal of imagery from churches and affirmed the changes introduced by Karlstadt on Christmas. According to the ideals of the Protestant Reformation, the spoken word – the sermon – should be the central act in the church service. This implied that the pulpit became the focal point of the church interior and that churches should be designed so that all could hear and see the minister. Pulpits had always been a feature of Western churches. The birth of Protestantism led to extensive changes in the way that Christianity was practiced, and hence in the design of churches. During the Reformation period there was an emphasis on "full and active participation". The focus of Protestant churches was on the preaching of the Word rather than on a sacerdotal emphasis. Holy Communion tables were made of wood, to emphasise that Christ's sacrifice was made once for all, and were placed closer to the congregation to emphasise man's direct access to God through Christ. Catholic churches were therefore redecorated when they became reformed: paintings and statues of saints were removed, and sometimes the altar table was placed in front of the pulpit, as in Strasbourg Cathedral in 1524. The pews were turned towards the pulpit. Wooden galleries were built to allow more worshippers to follow the sermon. The first newly built Protestant church was the court chapel of Neuburg Castle in 1543, followed by the court chapel of Hartenfels Castle in Torgau, consecrated by Martin Luther on 5 October 1544. Images and statues were sometimes removed in disorderly attacks and unofficial mob actions (in the Netherlands called the Beeldenstorm). Medieval churches were stripped of their decorations, as at the Grossmünster in Zürich in 1524 – a stance intensified by the Calvinist reformation, beginning with its main church, St. Pierre Cathedral in Geneva, in 1535. At the Peace of Augsburg of 1555, which ended a period of armed conflict between Roman Catholic and Protestant forces within the Holy Roman Empire, the rulers of the German-speaking states and Charles V, the Habsburg Emperor, agreed to accept the principle cuius regio, eius religio, meaning that the religion of the ruler was to dictate the religion of those ruled. In the Netherlands, the Reformed church in Willemstad, North Brabant, built in 1607, was the first Protestant church building in the country – a domed church with an octagonal shape, in keeping with Calvinism's focus on the sermon. The Westerkerk in Amsterdam was built between 1620 and 1631 in Renaissance style and remains the largest church in the Netherlands built for Protestants. By the beginning of the 17th century, in spite of the cuius regio principle, the majority of the peoples in the Habsburg monarchy had become Protestant, sparking the Counter-Reformation by the Habsburg emperors, which resulted in the Thirty Years' War in 1618. In the Peace of Westphalia treaties of 1648, which ended the war, the Habsburgs were obliged to tolerate three Protestant churches in their province of Silesia, where the Counter-Reformation had not been as successful as in Austria, Bohemia and Hungary, and about half of the population still remained Protestant. 
However, the government ordered that these three churches be located outside the towns and not be recognisable as churches: they had to be wooden structures that looked like barns or residential houses, and they were not allowed to have towers or bells. Construction had to be accomplished within a year. Accordingly, the Protestants built their three Churches of Peace, each large enough to hold more than 5,000 people. Two of them still exist and have been declared UNESCO World Heritage Sites. When Protestant troops under Swedish leadership again threatened to invade the Habsburg territories during the Great Northern War, the Habsburgs were forced by the Treaty of Altranstädt (1707) to allow more Protestant churches within their empire, though under similar restrictions; these were the so-called Gnadenkirchen (Churches of Grace). They were mostly smaller wooden structures, like the one of 1726 in Hronsek (Slovakia). In Britain during the seventeenth and eighteenth centuries, it became usual for Anglican churches to display the Royal Arms inside, either as a painting or as a relief, to symbolise the monarch's role as head of the church. During the 17th and 18th centuries, Protestant churches were built in the Baroque style that had originated in Italy, though consciously more simply decorated. Some could still become fairly grand, for instance the Katarina Church, Stockholm, St. Michael's Church, Hamburg, or the Dresden Frauenkirche, built between 1726 and 1743 as a sign of the will of the citizens to remain Protestant after their ruler had converted to Catholicism. Some churches were built with a new and genuinely Protestant alignment: the transept became the main church while the nave was omitted, as at the Ludwigskirche in Saarbrücken of 1762; this building scheme was also quite popular in Switzerland, the largest examples being the churches of Wädenswil (1767) and Horgen (1782). A new Protestant interior design scheme was established in many German Lutheran churches during the 18th century, following the example of the court chapel of Wilhelmsburg Castle of 1590: the connection of altar, baptismal font, pulpit and organ in a vertical axis, with the central painting above the altar replaced by the pulpit. Neo-Lutheranism in the early 19th century criticized this scheme as being too profane. The German Evangelical Church Conference therefore recommended the Gothic language of forms for church building in 1861, and Gothic Revival architecture began its triumphal march. With regard to Protestant churches, it was not only an expression of historicism but also of a new theological programme which put the Lord's Supper above the sermon again. The Berlin Cathedral, built in 1893 under Emperor Wilhelm II in a high Neo-Renaissance style, is a triumphal Lutheran cathedral. Around 1880, two decades after the Neo-Gothic recommendation, liberal Lutherans and Calvinists expressed their wish for a new genuinely Protestant church architecture, conceived on the basis of liturgical requirements. The spaces for altar and worshippers should no longer be separated from each other; churches should give space not only for services but also for the social activities of the parish. Churches were to be seen as meeting houses for the celebrating faithful. The Ringkirche in Wiesbaden was the first church realised according to this ideology, in 1892–94. 
The unity of the parish was expressed by an architecture that united the pulpit and the altar in its circle, following early Calvinist tradition. These ideas have also had an impact on modern Catholic church architecture: when St. Hedwig's Cathedral in Berlin was rebuilt after the Second World War, a similar arrangement was chosen, and it was retained in the most recent redesign of the interior. Modernism The idea that worship is a corporate activity and that the congregation should in no way be excluded from sight or participation derives from the Liturgical Movement. Simple one-room plans are almost of the essence of modernity in architecture. Some of the major developments took place in France and Germany between the First and Second World Wars. The church at Le Raincy near Paris by Auguste Perret is cited as the starting point of the process, not only for its plan but also for its material, reinforced concrete. More central to the development of the process was Schloss Rothenfels-am-Main in Germany, remodelled in 1928. Rudolf Schwarz, its architect, was hugely influential on later church building, not only on the continent of Europe but also in the United States of America. Schloss Rothenfels was a large rectangular space, with solid white walls, deep windows and a stone pavement. It had no decoration. The only furniture consisted of a hundred little black cuboid moveable stools. For worship, an altar was set up and the faithful surrounded it on three sides. Corpus Christi in Aachen was Schwarz's first parish church and adheres to the same principles, very much reminiscent of the Bauhaus movement in art. Externally it is a plain cube; internally it is a Langbau, i.e. a narrow rectangle with white walls and colourless windows, at the end of which is the altar. It was to be, said Schwarz, not 'christocentric' but 'theocentric'. In front of the altar were simple benches; behind it, a great white void of a back wall, signifying the region of the invisible Father. The influence of this simplicity spread to Switzerland through such architects as Fritz Metzger and Dominikus Böhm. After the Second World War, Metzger continued to develop his ideas, notably with the church of St. Franziskus at Basel-Riehen. Another notable building is Notre Dame du Haut at Ronchamp by Le Corbusier (1954). Similar principles of simplicity and continuity of style throughout can be found in the United States, in particular at the Roman Catholic abbey church of St. Procopius in Lisle, near Chicago (1971). A theological principle which resulted in change was the decree Sacrosanctum Concilium of the Second Vatican Council, issued in December 1963. This encouraged 'active participation' (in Latin: participatio actuosa) by the faithful in the celebration of the liturgy and required that new churches be built with this in mind (para. 124). Subsequently, rubrics and instructions encouraged the use of a freestanding altar allowing the priest to face the people. The effect of these changes can be seen in churches such as the Roman Catholic metropolitan cathedrals of Liverpool and Brasília, both circular buildings with a free-standing altar. Different principles and practical pressures produced other changes. Parish churches were inevitably built more modestly. Shortage of finances, as well as a 'market place' theology, often suggested the building of multi-purpose churches, in which secular and sacred events might take place in the same space at different times. 
Again, the emphasis on the unity of the liturgical action was countered by a return to the idea of movement. Three spaces – one for baptism, one for the liturgy of the word, and one for the celebration of the Eucharist with the congregation standing around an altar – were promoted by Richard Giles in England and the United States. The congregation was to process from one place to another. Such arrangements were less appropriate for large congregations than for small ones; for the former, proscenium-arch arrangements with huge amphitheatres, such as at Willow Creek Community Church near Chicago, have been one answer. Postmodernism As with other Postmodern movements, the Postmodern movement in architecture formed in reaction to the ideals of modernism, as a response to the perceived blandness, hostility, and utopianism of the Modern movement. While rare in church architecture, there are nonetheless some notable designs that recover and renew historical styles and the "cultural memory" of Christian architecture. Notable practitioners include Dr. Steven Schloeder, Duncan Stroik, and Thomas Gordon Smith. The functional and formalized shapes and spaces of the modernist movement are replaced by unapologetically diverse aesthetics: styles collide, form is adopted for its own sake, and new ways of viewing familiar styles and spaces abound. Perhaps most obviously, architects rediscovered the expressive and symbolic value of architectural elements and forms that had evolved through centuries of building – often retaining meaning in literature, poetry and art – but which had been abandoned by the modern movement. Church buildings in Nigeria have evolved from the foreign monumental look of old to contemporary designs that can resemble factories. See also Akron plan Bell-gable Cathedral architecture Church porch Gothic cathedrals and churches Marian and Holy Trinity columns Mathematics and architecture Monastery Oldest churches in the world Polish Cathedral style churches in North America Parish close Religious architecture Protestantism in Germany Tin tabernacle External links Oldest Christian chapel in the Holy Land found EnVisionChurch.org, Commentaries and case studies on modern church building and architecture Photographs of European cathedrals, monasteries and cloisters Digital collection with floor plans, details, sections, and elevations of three Buffalo churches from the University at Buffalo Libraries
Church architecture
[ "Engineering" ]
10,598
[ "Sacral architecture", "Architecture" ]
2,213,205
https://en.wikipedia.org/wiki/PSR%20B0950%2B08
PSR B0950+08 is a young pulsar that may have originated in a supernova that occurred in Leo 1.8 million years ago. The large and old remnant of this supernova, located in the constellation of Antlia, may be the nearest such remnant apart from the Local Bubble, and the supernova would have appeared as bright as the Moon. Off-pulse emission from the young pulsar was detected by the Expanded Long Wavelength Array, suggesting the presence of a pulsar wind nebula around it. PSR B0950+08 was the fourth of the initial radio pulsars discovered in 1968. External links The Astrophysical Journal Letters, vol. 576, p. L41, August 2002 New Scientist, August 24, 2002 Image PSR B0950+08
PSR B0950+08
[ "Astronomy" ]
172
[ "Leo (constellation)", "Constellations" ]
2,213,219
https://en.wikipedia.org/wiki/Glycogenin
Glycogenin is an enzyme involved in converting glucose to glycogen. It acts as a primer by polymerizing the first few glucose molecules, after which other enzymes take over. It is a homodimer of 37-kDa subunits and is classified as a glycosyltransferase. It catalyzes the chemical reactions:
UDP-alpha-D-glucose + glycogenin ⇌ UDP + alpha-D-glucosylglycogenin
UDP-alpha-D-glucose + a glucosyl-glycogenin ⇌ a (1,4-alpha-D-glucosyl)n-glucosylglycogenin + UDP + H+
Thus, the two substrates of this enzyme are UDP-alpha-D-glucose and glycogenin, whereas its two products are UDP and alpha-D-glucosylglycogenin. Nomenclature This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-alpha-D-glucose:glycogenin alpha-D-glucosyltransferase. Other names in common use include glycogenin, priming glucosyltransferase, and UDP-glucose:glycogenin glucosyltransferase. The naming of glycogenin also hints at its function: the glyco- prefix refers to a carbohydrate, and the -genin suffix derives from the Greek genesis, meaning source or beginning. This reflects the role of glycogenin, which is simply to start glycogen synthesis before glycogen synthase takes over. Discovery Glycogenin was discovered in 1984 by Dr. William J. Whelan, a fellow of the Royal Society of London and former professor of Biochemistry at the University of Miami. Function In both the liver and muscle, glycogen synthesis is initiated from UDP-glucose, but the main enzyme involved in glycogen polymerisation, glycogen synthase, can only add to an existing chain of at least 3 glucose residues. Glycogenin acts as the primer to which further glucose monomers may be added. It achieves this by catalyzing the addition of glucose to itself (autocatalysis), first binding glucose from UDP-glucose to the hydroxyl group of Tyr-194. Seven more glucoses can be added, each derived from UDP-glucose, by glycogenin's glucosyltransferase activity. Once sufficient residues have been added, glycogen synthase takes over extending the chain. Glycogenin remains covalently attached to the reducing end of the glycogen molecule. Evidence is accumulating that a priming protein may be a fundamental property of polysaccharide synthesis in general; the molecular details of mammalian glycogen biogenesis may serve as a useful model for other systems. Glycogenin is able to use the two other pyrimidine nucleotides as well, namely CDP-glucose and TDP-glucose, in addition to its native substrate, UDP-glucose. Structure Isozymes In humans, there are two isoforms of glycogenin — glycogenin-1, encoded by GYG1 and expressed in muscle, and glycogenin-2, encoded by GYG2 and expressed in the liver and cardiac muscle but not in skeletal muscle. Patients have been found with defective GYG1, resulting in muscle cells unable to store glycogen, with consequent weakness and heart disease. Further reading Berman, M.C. and Opie, L.A. (Eds.), Membranes and Muscle, ICSU Press/IRL Press, Oxford, 1985, pp. 65–84. External links molbio.med.miami.edu
Glycogenin
[ "Chemistry" ]
883
[ "Carbohydrate metabolism", "Carbohydrate chemistry", "Metabolism" ]
2,213,247
https://en.wikipedia.org/wiki/Generic%20filter
In the mathematical field of set theory, a generic filter is a kind of object used in the theory of forcing, a technique used for many purposes, but especially to establish the independence of certain propositions from certain formal theories, such as ZFC. For example, Paul Cohen used forcing to establish that ZFC, if consistent, cannot prove the continuum hypothesis, which states that there are exactly aleph-one real numbers. In the contemporary re-interpretation of Cohen's proof, one proceeds by constructing a generic filter that codes more than aleph-one reals, without changing the value of aleph-one. Formally, let P be a partially ordered set, and let F be a filter on P; that is, F is a subset of P such that: F is nonempty; if p, q ∈ P, p ≤ q and p is an element of F, then q is an element of F (F is closed upward); and if p and q are elements of F, then there is an element r of F such that r ≤ p and r ≤ q (F is downward directed). Now if D is a collection of dense open subsets of P, in the topology whose basic open sets are all sets of the form {q | q ≤ p} for particular p in P, then F is said to be D-generic if F meets all sets in D; that is, F ∩ E ≠ ∅ for all E ∈ D. Similarly, if M is a transitive model of ZFC (or some sufficient fragment thereof), with P an element of M, then F is said to be M-generic, or sometimes generic over M, if F meets all dense open subsets of P that are elements of M. See also genericity in computability
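To make the definitions concrete, here is a minimal illustrative sketch, not part of the original article: it meets only a finite, invented family of dense sets, in the style of the Rasiowa–Sikorski construction, using the poset of finite binary strings ordered by reverse extension (Cohen-style conditions).

```python
# A minimal illustrative sketch (not from the article): meeting a *finite*
# family of dense sets. The poset P is the set of finite binary strings,
# with p <= q meaning that p extends q. A truly generic filter would meet
# infinitely many dense sets and thereby code a new real.

def is_stronger(p: str, q: str) -> bool:
    # p <= q in the forcing order iff the string p extends the string q.
    return p.startswith(q)

def D(n: int):
    # The dense open set D_n = {p : len(p) >= n}, presented as a function
    # returning, for any condition p, some stronger condition inside D_n.
    return lambda p: p if len(p) >= n else p + "0" * (n - len(p))

def descending_chain(dense_sets, start=""):
    # Build p_0 >= p_1 >= ..., with the (i+1)-th condition in the i-th set.
    chain, p = [start], start
    for d in dense_sets:
        p = d(p)
        chain.append(p)
    return chain

chain = descending_chain([D(1), D(3), D(5)])
assert all(is_stronger(chain[i + 1], chain[i]) for i in range(len(chain) - 1))

# The filter generated by the chain is its upward closure: all weaker
# conditions, i.e. all prefixes of elements of the chain. It is nonempty,
# upward closed, downward directed, and meets each listed dense set.
F = {p[:i] for p in chain for i in range(len(p) + 1)}
assert all(any(len(p) >= n for p in F) for n in (1, 3, 5))
print(sorted(F, key=len))
```

The upward closure step is what turns a mere descending chain into a filter satisfying the three conditions above.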
Generic filter
[ "Mathematics" ]
354
[ "Forcing (mathematics)", "Mathematical logic" ]
2,213,491
https://en.wikipedia.org/wiki/Kaliyan
Kali (Kaliyan in Tamil) was the sixth fragment of the primordial manifestation of Kroni (evil) according to Akilathirattu, the source of Ayyavazhi mythology and the holy book of the Ayyavazhi religion. Unlike the previous manifestations, Kali spread in this yugam (yukam in Tamizh) as maya (illusion). Details of Kali are restated in the Ayyavazhi religion, and he is the same Kali mentioned in the Kalki Purana. Kali Yugam As the time of the ascendancy of the Kali Yuga drew close, a sage named Guru Muni told Shiva, "Your greatness, Kroni was created and fragmented into six parts. Five of those parts were made to take birth. However, because none of them obeyed you, all were destroyed by Vishnu (Mayon/Thirumal in Tamizh), and his spirit (the spirit Vishnu took only in the Avatars) was kept in Parvatha Ucchi Malai (in Sanskrit 'uccha' means high; in Tamil 'malai' means mountain). Nonetheless, through the sixth fragment, Kroni still has a birth to take in the Kali Yugam." He also requested that in this yugam the sixth fragment should be created with a body of eight chans (spans – the distance from the end of the thumb to the end of the extended little finger). Shiva made a proposal to create Kali in the following manner. Accepting Guru Muni's request, Shiva replied, "Good. Didn't Vishnu have to come?" Deva Muni replied, "Vishnu was in his 'Sleep of Wisdom' (sleeping) in the world." Hearing this, Shiva assembled the following beings: Vasishta, Guru of Govuha, the Devas of Deiva Lokam, the members of Vaikunta Lokam, the Kinneras (Kina nathar in Tamizh), the Kimpurusha (Kimburudar in Tamizh), and members from all other worlds. He also assembled all the Deiva Ganas. Then he asked the gathering, "Is Vishnu in Vaikuntam? If he is not there, then where is he?" The conclave repeated that Vishnu was lying dead (without any activities) in the lower world (Earth). Hearing this, Shiva replied, "All the previous five fragments were destroyed since they did not respect us. Now it is his sixth birth, but even now he does not realize that no other chances will be provided and he will be sentenced to death. And so, in this birth, he will be created as a human being with the following talents: aesthetics, wisdom, beauty, and sharp intelligence. In the previous Yugas, Vishnu took a body of four spans and a head of one span. In this yuga, Kali will be given a body the same size as that of Vishnu in all previous yugas. That way, Kali will have no reason, upon his defeat, to argue that it was similar to the previous ones. Does anyone have any opinions or objections about it?" The proposal was accepted unanimously by the Devas, the chiranjeevin sages, and the brahmins, those who read scriptures. Therefore, Shiva decided to create Kali. Birth of Kali At the very moment Shiva made his decision, Kali formed as a human male and pushed himself out of the earth in an inverted posture. All of Shiva's advisors were amazed and moved by this sight. All of those who witnessed the creation of Kali went to Shiva and reported it to him. At once, Shiva rose from his seat and started walking to see this wonder. Then Nandhi/Nandheesha (Nantheesurar in Tamizh) came across him and said, "Your greatness, you are the one who cannot be wholly known, even by the devas. Why are you here? May I know the reason?" Shiva replied, "I heard that there is a being, born inverted, against dhanam (the rule of the land), whose legs were towards the sky and head towards the flower (flower here meaning the earth). So I am going there, because I want to see it for myself." 
Hearing this, Nandhi complained, "Dhanam was overtaken, not there, but here. If something happened, you would normally remain on your seat, up in Kayilai, and control everything. But now all the procedures have disintegrated. Because you stood up from your seat, the whole world will suffer and the dharmam will dissolve. Warfare and deception will rule the world. The land will lack rainfall. The true scripture, which tells of the Brahmam, will be substantially lost." Because of this complaint, Nandhi, along with Shiva, returned to Kailaasham (Kayilai in Tamizh). Nature of Kaliyan After getting back to Kailaasham, Nandhi told Shiva, "We'll call Chithira Buthira and ask him to describe Kali." This request was honored. Chithira Buthira described Kali thus: "He was born from the sixth fragment of Kroni. His body was made by assembling earth, sky and fire. Water will be Kali's strength. The vayu (air) will be the prana for him. Kali will be the most cruel of all the six fragments." Chithira Buthira also noted that Mayon had taken a four-span body in the previous yugas, as Narasimha, Rama, and Krishna, and so Kali, in this yuga, was born with a body of the same size, and he pushed himself out of the earth. After hearing this, Shiva asked Chithira Buthira to tell of Kali's lifespan, qualities, and power. Chithira Buthira responded with a narrative. He said, "Because [Kali] was born without the instrumentality of normal human parentage, all the 96 Tatvas in his physique are rude and unrefined. Also, because of his unusual birth, he will have his intelligence and five senses rooted in falsehood. His eyes, legs and head will lead Kali towards sin. His nature is not to know the true nature of the things here. He will have a lifespan of about a hundred years, and he will attain maturity at the age of fourteen. He will have about ten hundred thousand drops of semen in his body." (Some sources give this as 10 lakh drops of blood, not semen.) "He will attain full bodily maturity at the age of 31. Kali's body, built up of blood, bone, veins and muscle, is made of water and earth, and is used for nothing (useless). His body will have 9 openings to the outside. This toy (i.e., body) will be controlled by a bird (i.e., soul), and when it is time for the bird to go, it will fly away, leaving the toy here, and thereafter the bird will have no relation with the toy. This bird and the body are free for him in this yuga. And as long as the bird is with him, Kali's savageness cannot be tolerated. Since his nature is like this, and since he is built up of savageness, he will not be thankful." Kali in Kailasham (Kayilai in Tamizh) Shiva, now even more interested in Kali, wanted to see him, so after a deep discussion Shiva asked Yama, the demons, Durga, and 3 crore (30 million) ghosts to bring him to Kailasham. All of them went before Kali and told him, "You are asked by Shiva to come to Kailasham." Kali gave a lackluster response. He rolled over, but his head was still inside the earth, and the earth was sticky; he was not able to come out. The sky ignored the goings-on as well. Because of this failure to come out, Kailasham trembled and the witnesses shivered. So some of the spooked witnesses headed to Kailasham and told Shiva that Kali was unable to come out, and that the earth was suffering because of that failure. Shiva made a plan. He asked a kammalan, or craftsman, to make a single fork, and asked Nandhi to carry Kali out using that fork. 
This proved successful: Nandhi took the fork, walked towards Kali, and lifted him out of the earth with it. As soon as Kali was out, the resulting opening in the earth promptly shut. All of the parties at the site then brought Kali to Kailasham and presented him to Shiva. Birth of Durukthi (Kalicchi in Tamizh) When Kali was presented to Shiva, Shiva asked him, "What would you like to have?" Kali, turning to the devas, said, "Is that fellow ready to give me what I need – he who wears a snake on his neck and ash on his forehead, and who sits on an elephant skin?" Kali disgraced Shiva with this question, causing the devas to advise him, "Do not disgrace Shiva. He was the one who created the world. He was the one who offered food for every life in the world. He was the light which was not even visible to Mayon and Vethan." (This is an allusion to Shiva appearing in the form of an endless lingam of light, in order to subdue the pride of Mayon and Nathan when they disputed who was the greatest.) "Don't worry, he will be able to give what you claim." Kali replied, "If he was the one who made the world, ask him to create a beautiful girl for me." As per his request, a lady named Durukthi was created from his left rib-bone. And then Kali understood the power of Shiva. Kali claiming boons Seeing all this, the Devas asked Kali to thank Shiva, and they also asked him to claim the boons (blessings and permissions). Kali accepted this, saying, "Your highness, thank you for creating a lady for me. Would you mind giving me the other items I need?" Shiva said, "What do you want?" Since Shiva had agreed to offer Kali boons, Kali started asking for boons which would control the whole Universe. This panicked all of the lokas, which shivered on hearing the boons Kali claimed. Upon hearing of this fright, Shiva asked Parvathi what to do. She replied, "You must satisfy all of Kali's requests, but the fulfillment should be performed with technique." Shiva provided those boons in a roundabout way: he created with his mind a man named Agastya (Agatheesar in Tamizh, Agasthyar in Grantha Tamizh), so that the boons would be given through that man. Agastya was also created with great knowledge of every subject. Having done this, Shiva ordered the newly made Agastya to give Kali all of the boons that he had claimed. Obeying this, Agastya Rushi offered all the boons to Kali and taught him all of the subjects, including the 'Technique of Living Forever'. Subsequently, Agastya reported those accomplishments to Shiva. Among the boons reportedly given (or sold) to Kali, according to Agastya, were the Chakram and Crown of Vishnu. Agastya concluded in his report, "If all these boons are with [Kali], it is impossible to destroy him." The boons were: The Crown, Chakram and features of Thirumal. The sacred ash of Shiva. The birth of Brahmins. The power and features of Shiva. The power and features of Shakti. The power of austerity. The power and features of Nathan. The power and features of Lakshmi. The power and features of the Devas. The power and features of Yama. The power and features of the virgin Saraswati. The power and features of Kali. The power and features of Ganesh. The power and features of Muruga. The power to screen the activities of Ekam. The qualities and features of the Prophets. The power and features of the whole Universe. The technique of transferring from one body to another. The technique of destroying the world by serious disease and robbery. The technique of making the whole world fall asleep, by which he might fulfill his needs. 
The capability of sensing danger. The technique of controlling one's power of speech. The technique of separating husband and wife. The technique of creating frustration among common people, by which to destroy them. The technique of killing by practicing magic. The technique of arresting the actions of Nature. The embryo of nature (from which it forms). The rules and regulations for practicing witchcraft, black magic, etc. The capability of controlling and creating desires. The rules and regulations of Puja. The rules and regulations of Theetchai. The fate of Sivam. The technique of floating on water and fire. The capability to land on and control the moon. The technique of commanding and controlling animals. The technique of controlling the planets and astrological phenomena which might disturb him. The formula for curing disease (medicine). The forms of the Trimurti, and the technique of knowing their origin and the formula for commanding them. The birth of the Devas. The formula and technique of flying. The formula for commanding various gods. The formula of screening. The formula for commanding Thirumal. The formula for commanding Shakti. The formula for commanding Kali. The formula for commanding the Devas. The formula for knowing the fate of the future. The technique of stopping various exploding weapons, and escaping from them. The formula for controlling various venomous beings. He asked that by these techniques he might live in this world with the dynasty of people born from his semen. And he also asked that his five senses should never forget the woman, i.e., his wife Kalicchi. Promise of Kali Meanwhile, Vishnu, in the form of an old beggar (Pantaram in Tamizh) on his way to say good-bye to Shiva, met with Kali. Vishnu asked Kali to donate to him some of the boons which had been given to him. Otherwise, Vishnu threatened, he would overwhelm Kali physically and then take Kali's boons. Kali retorted, "You are an age-old person. Also, you have no army and no sword or any weapons. If I quarrel with you, even the lady at my side will degrade me. So how about this: why not move aside, and leave?" Then Vishnu replied, "Ok, make me a promise [to behave nicely]." Kali asked, "What should I make a promise on?" Vishnu responded, "Promise on your boons, your kingdom, your lady, your military and your dynasty." Kali thus declared: "I promise that if I create any trouble for beggars on earth, I, my lady, my boons, my kingdoms, and my military will all fail; my dynasty and I will die, and both my dynasty and I will go to hell." As Kali made his oath, Vishnu took the Chakram from Kali, cursed the Chakram into being Money, and gave it back to him. As soon as it was cursed, the Chakram asked Vishnu, "When will your curse end?" Vishnu replied, "It will leave you when Kali is defeated." Now that Kali had this new money, he told Durukthi, "We now have whatever we need." Kali went to Shiva and asked to be allowed to go to the world. By this point, Vishnu had asked the Devas to write down all the happenings perfectly, and he was already walking towards Vaikuntam. Kali entering the world Kali and Durukthi's entrance into the world caused a universal disaster. Upon seeing the two newcomers, all the good animals, birds, and reptiles, and even the ethics (Neethi in Samskrtham), quit the world. The animal kingdom started to experience immense torture. 
A large number of species departed for Vaikuntam when Kali entered the world: white elephants, white lions, white tigers, five-headed snakes, white swans, white cuckoos, white doves, white peacocks, white cobras, white wolves, white Garudas, Hanuman, white crows, white deer, and the good rhinoceros. The good pearls, the good gems, the old Vedas, and the good Shastras also disappeared from the world. The Trishanku (Trichangu in Tamizh) went deep into the sea, along with everything originating in the sea. Gold went into the earth. All the idols of the gods, the temples and the sutras (formulas) went into the water and earth. The rain, which up to this point had fallen three times a month, stopped. Beautiful flowers vanished. As the wicked Kali came into the world, the ocean waves became angry and washed away many parts of the land. All the ethical people left the country for the forest. There the eldest of the Pancha Pandavas, Yudhishthira, asked the Dharma Neethi, "If all of you have gone away, how can we get to Vaikuntam?" The Dharma Neethi replied that the advent of Kali barred ethical people from reaching Vaikuntam, and that they were therefore going to Vishnu. So the Pancha Pandavas followed them and headed to Vishnu. See also Boons offered to Kaliyan Kalicchi Kali (Demon) Further reading G. Patrick (2003), Mythography of Ayyavali, University of Madras, p. 203. "Holy Akilathirattu", R. Hari Gopalan Citar, Thenthamarikualam, 10 December 1841, first published 1939. "Holy Akilathirattu Scripture", R. Gopalakrishnan, Chennai, first published 2019, Akilattirattu India Mission.
Kaliyan
[ "Astronomy" ]
3,801
[ "Cosmogony", "Creation myths" ]
2,213,560
https://en.wikipedia.org/wiki/Physical%20plant
A physical plant, mechanical plant or industrial plant (and, where context is given, often just plant) refers to the necessary infrastructure used in the operation and maintenance of a given facility. The operation of these facilities, or the department of an organization which does so, is called "plant operations" or facility management. An industrial plant should not be confused with a "manufacturing plant" in the sense of a factory. This is a holistic look at the architecture, design, equipment, and other peripheral systems linked with a plant that are required to operate or maintain it. Power plants Nuclear power The design and equipment of nuclear power plants have, for the most part, remained static over the last 30 years. There are three types of reactor cooling mechanisms: light water reactors, liquid metal reactors, and high-temperature gas-cooled reactors. While equipment has largely stayed the same, there have been some minimal modifications to existing reactors improving safety and efficiency. There have also been significant design changes proposed for all these reactors; however, they remain theoretical and unimplemented. Nuclear power plant equipment can be separated into two categories: primary systems and balance-of-plant systems. Primary systems are the equipment involved in the production and safety of nuclear power. The reactor specifically has equipment such as the reactor vessel, usually surrounding the core for protection, and the reactor core, which holds the fuel rods. It also includes reactor cooling equipment consisting of liquid cooling loops and circulating coolant; these loops are usually separate systems, each having at least one pump. Other equipment includes steam generators and pressurizers that ensure pressure in the plant is adjusted as needed. Containment equipment encompasses the physical structure built around the reactor to protect the surroundings from reactor failure. Lastly, primary systems also include emergency core cooling equipment and reactor protection equipment. Balance-of-plant systems are equipment used commonly across power plants in the production and distribution of power: turbines, generators, condensers, feedwater equipment, auxiliary equipment, fire protection equipment, emergency power supply equipment and used-fuel storage. Broadcast engineering In broadcast engineering, the term transmitter plant refers to the part of the physical plant associated with the transmitter and its controls and inputs, the studio/transmitter link (if the radio studio is off-site), the radio antenna and radomes, the feedline and desiccation/nitrogen system, the broadcast tower and building, tower lighting, generator, and air conditioning. These are often monitored by an automatic transmission system, which reports conditions via telemetry (transmitter/studio link). Telecommunication plants Fiber optic telecommunications Economic constraints such as capital and operating expenditure have led to passive optical networks (PONs) becoming the primary fiber optic model used for connecting users to the fiber optic plant. A central office hub utilizes transmission equipment allowing it to send signals to between one and 32 users per line. The equipment at the head of the main fiber backbone of a PON is called an optical line terminal. Operational requirements, such as maintenance, equipment-sharing efficiency, sharing of the actual fiber and the potential need for future expansion, determine which specific variant of PON is used. 
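The split ratio just mentioned (up to 32 users per line) directly sets the optical power budget, since splitting light N ways costs roughly 10·log10(N) dB. A rough illustration follows; this is my own sketch, not from the source text, and the excess-loss figure is an assumed illustrative value rather than a manufacturer's specification.

```python
# Ideal power-splitting loss of a 1xN PON splitter (sketch; excess_db is an
# assumed illustrative figure, not a spec).
import math

def splitter_loss_db(n_ports: int, excess_db: float = 1.0) -> float:
    # An ideal 1xN split divides optical power N ways: 10*log10(N) dB,
    # plus some excess loss in any real device.
    return 10 * math.log10(n_ports) + excess_db

for n in (2, 8, 16, 32):
    print(f"1x{n:>2} splitter: ~{splitter_loss_db(n):.1f} dB")
# A 1x32 split alone costs about 16 dB of the link's power budget, which is
# one reason PON designs cap how many users share a line.
```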
A fiber optic splitter is used when multiple users must be connected to the same fiber backbone. EPON is a variant of PON which can hold 704 connections in one line. Fibre networks based on a PON backbone have several options for connecting individuals to the network, such as fibre to the curb, building, or home. This equipment utilises different wavelengths to send and receive data simultaneously and without interference. Cellular telecommunications Base stations are a key component of mobile telecommunications infrastructure; they connect the end user to the main network. They have physical barriers protecting transmission equipment and are placed on masts or on the roofs or sides of buildings. Their location is determined by the local radio frequency coverage that is required. These base stations utilize different kinds of antennas, either on buildings or in the landscape, to transmit signals back and forth. Directional antennas are used to direct signals in different directions, whereas line-of-sight radio-communication antennas allow for communication between base stations. Base stations are of three types: macro-, micro- and pico-cell sub-stations. Macro cells are the most widely used base station, utilizing omnidirectional antennas or radio-communication dishes. Micro cells are more specialized; these expand and provide additional coverage in areas where macro cells cannot. They are typically placed on streetlights and usually do not require radio-communication dishes, because they are physically interconnected via fiber-optic cables. Pico cell stations are more specific still, providing additional coverage only within a building where coverage is poor; they will usually be placed on a roof or a wall in each building. Desalination plants Desalination plants are responsible for removing salt from water sources so that the water becomes usable for human consumption. Reverse osmosis, multi-stage flash and multi-effect distillation are the three main types of equipment and processes that differentiate desalination plants. Thermal technologies such as MSF and MED are the most used in the Middle East, as the region has little access to fresh water but abundant energy. Reverse osmosis Reverse osmosis plants use semi-permeable polymer membranes that allow water to pass through unabated while blocking molecules not suitable for drinking. Reverse osmosis plants typically use intake pipes, which allow water to be abstracted at its source. This water is then taken to pre-treatment centers, where particles in the water are removed and chemicals are added to prevent damage to the plant. High-pressure pumps and booster pumps provide pressure and move the water between the different stages of the facility, after which it is transferred to a reverse osmosis module. This equipment, depending on the specifications, effectively filters out between 98 and 99.5% of the salt in the water. Waste separated in the pre-treatment and reverse osmosis modules is taken to an energy recovery module, and any further excess is pumped back out through an outfall pipe. Control equipment is used to monitor this process and ensure it continues to run smoothly. Once the water is separated, it is delivered to households via a distribution network for consumption. Pre-treatment systems have intake screening equipment such as forebays and screens. Intake equipment can vary in design; open ocean intakes are placed either onshore or offshore. 
Offshore intakes transfer water through concrete channels into screening chambers, from which it is moved directly to pre-treatment centers using intake pumps; chemicals are added there. Suspended solids are then separated out using a dissolved-air flotation device before the water is pumped through a semi-permeable membrane. Electrodialysis Electrodialysis competes with reverse osmosis systems and has been used industrially since the 1960s. It uses cathodes and anodes at multiple stages to draw ionic compounds out into a concentrated stream, leaving purer, safer drinking water. This technology has a higher energy cost, so unlike reverse osmosis it is mainly used for brackish water, which has a lower salt content than seawater. Multi-stage flash distillation Thermal distillation equipment is commonly used in the Middle East; as with reverse osmosis, it has water abstraction and pre-treatment equipment, although in MSF different chemicals, such as anti-scalants and anti-corrosives, are added. Heating equipment is used at successive stages at different pressure levels until the water reaches a brine heater, which provides the steam that changes the boiling point of the water at these different stages. Traditional water treatment plants Conventional water treatment plants extract, purify and then distribute water from already drinkable bodies of water. Water treatment plants require a large network of equipment to retrieve, store and transfer water to a plant for treatment. Water from underground sources is typically extracted via wells to be transported to a plant; typical well equipment includes pipes, pumps, and shelters. If the underground source is distant from the treatment plant, aqueducts are commonly used to transport the water. Much transport equipment, such as aqueducts, pipes, and tunnels, utilizes open-channel flow to ensure delivery of the water: geography and gravity allow the water to flow naturally from one place to another without the need for additional pumps. Flow measurement equipment is used to monitor that the flow remains consistent and that no issues occur. Watersheds are areas where the surface water in a given area naturally flows, and where it is usually stored after collection. For stormwater runoff, natural bodies of water as well as filtration systems are used to store and transfer water. Non-stormwater runoff uses equipment such as septic tanks, which treat water on site, or sewer systems, in which the water is collected and transferred to a water treatment plant. Once water arrives at a plant, it undergoes a pre-treatment process in which it is passed through screens, such as passive screens or bar screens, to stop certain kinds of debris from entering equipment further down the facility and damaging it. After that, a mix of chemicals is added using either a dry chemical feeder or solution metering pumps. To prevent the water from becoming unusable or from damaging equipment, these chemicals are measured using an electromechanical chemical feed device to ensure the correct levels of chemicals in the water are maintained. Corrosion-resistant pipe materials such as PVC, aluminum and stainless steel are used to transfer the water safely, owing to increases in acidity from pre-treatment. Coagulation is usually the next step, in which salts such as ferric sulfate are used to destabilize organic matter in a mixing tank. Variable-speed paddle mixers are used to identify the best mix of salts to use for a specific body of water being treated. 
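The metering described above reduces to simple mass-balance arithmetic. Here is a minimal sketch, not from the source text, of the dosing calculation a chemical feed controller performs; the dose and flow values are invented for illustration.

```python
# Chemical feed rate from dose and plant flow (sketch; values invented).
def feed_rate_kg_per_day(dose_mg_per_l: float, flow_m3_per_day: float) -> float:
    # 1 mg/L equals 1 g/m^3, so dose * flow gives g/day; divide by 1000 for kg/day.
    return dose_mg_per_l * flow_m3_per_day / 1000.0

# Example: dosing 25 mg/L of ferric sulfate coagulant at 40,000 m^3/day.
print(feed_rate_kg_per_day(25.0, 40_000.0), "kg/day")  # -> 1000.0 kg/day
```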
Flocculation basins use temperature to condense unsafe particles together. Settling tanks are then used to perform sedimentation, which removes certain solids using gravity so that they accumulate at the bottom of the tank. Rectangular and center-feed basins are used to remove the sediment, which is taken to sludge processing centers. Filtration then separates the larger materials that remain in the water using pressure filtration, diatomaceous earth filtration, and direct filtration. The water is then disinfected, after which it is either stored or distributed for use. Plant responsibility Stakeholders have different responsibilities for the maintenance of equipment in a water treatment plant. For the distribution equipment reaching the end user, it is mainly the plant owners who are responsible for maintenance. An engineer's role is more focused on maintaining the equipment used to treat the water. Public regulators are responsible for monitoring water supply quality and ensuring it is safe to drink. These stakeholders have active responsibility for these processes and equipment, whereas the manufacturer's primary responsibility is off site, providing quality assurance of equipment function prior to use. HVAC An HVAC plant usually includes air conditioning (both heating and cooling systems and ventilation) and other mechanical systems. It often also includes the maintenance of other systems, such as plumbing and lighting. The facility itself may be an office building, a school campus, a military base, an apartment complex, or the like. HVAC systems can be used to transport heat to specific areas within a given facility or building. Heat pumps are used to push heat in a certain direction; the specific heat pumps used vary, potentially including solar thermal and ground source pumps. Other common components are finned-tube heat exchangers and fans; however, these are limited and can lead to heat loss. HVAC ventilation systems primarily remove airborne particles through forced circulation. See also Activity relationship chart Building information modeling Computerized maintenance management system Property maintenance 1:5:200, an engineering rule of thumb Property management
83–85, Spellman, FR 2013, Handbook of Water and Wastewater Treatment Plant Operations, Third Edition., 3rd ed., CRC Press, Hoboken. Tanji, H (2008), 'Optical fiber cabling technologies for flexible access network. (Report)' Optical Fiber Technology, vol. 14, no. 3, pp. 177–184, Building engineering Broadcast engineering
Physical plant
[ "Engineering" ]
2,844
[ "Broadcast engineering", "Building engineering", "Electronic engineering", "Civil engineering", "Architecture" ]
2,213,741
https://en.wikipedia.org/wiki/Two-vector
A two-vector or bivector is a tensor of type (2,0), and it is the dual of a two-form, meaning that it is a linear functional which maps two-forms to the real numbers (or more generally, to scalars). The tensor product of a pair of vectors is a two-vector, and any two-vector can be expressed as a linear combination of tensor products of pairs of vectors, especially a linear combination of tensor products of pairs of basis vectors. If f is a two-vector, then f = f^{αβ} e_α ⊗ e_β, where the f^{αβ} are the components of the two-vector with respect to the basis vectors e_α. Notice that both indices of the components are contravariant. This is always the case for two-vectors, by definition. A bivector may operate on a one-form ω, yielding a vector with components f^{αβ} ω_β, although a problem might be which of the upper indices of the bivector to contract with. (This problem does not arise with mixed tensors because only one of such a tensor's indices is upper.) However, if the bivector is symmetric then the choice of index to contract with is indifferent. An example of a bivector is the stress–energy tensor. Another one is the orthogonal complement of the metric tensor. Matrix notation If one assumes that vectors may only be represented as column matrices and covectors as row matrices, then, since a square matrix operating on a column vector must yield a column vector, it follows that square matrices can only represent mixed tensors. However, there is nothing in the abstract algebraic definition of a matrix that says that such assumptions must be made. Dropping that assumption, matrices can be used to represent bivectors as well as two-forms. If f is symmetric, i.e., f^{αβ} = f^{βα}, then the representing matrix equals its own transpose. See also Two-point tensor Bivector § Tensors and matrices (but note that the stress–energy tensor is symmetric, not skew-symmetric) Dyadics References Tensors
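The component calculus above is easy to make concrete numerically. Below is a minimal sketch (Python with NumPy; the particular vectors and one-form are invented for illustration) of a symmetric bivector built from tensor products and contracted with a one-form on either index:

```python
import numpy as np

# A symmetric bivector f^{ab} = u^a v^b + v^a u^b built from tensor
# products of two vectors in R^3 (vectors invented for illustration).
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
f = np.outer(u, v) + np.outer(v, u)   # both indices contravariant

# A one-form (covector) with components w_a.
w = np.array([2.0, -1.0, 0.5])

first = f.T @ w    # contract the first index:  w_a f^{ab}
second = f @ w     # contract the second index: f^{ab} w_b

# For a symmetric bivector the two contractions agree.
assert np.allclose(first, second)
print(first)
```

Dropping the symmetrization (keeping only np.outer(u, v)) makes the two contractions differ, which is exactly the index-choice ambiguity noted above.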
Two-vector
[ "Engineering" ]
400
[ "Tensors" ]
2,213,763
https://en.wikipedia.org/wiki/Numerical%20taxonomy
Numerical taxonomy is a classification system in biological systematics which deals with the grouping by numerical methods of taxonomic units based on their character states. It aims to create a taxonomy using numeric algorithms like cluster analysis rather than using subjective evaluation of their properties. The concept was first developed by Robert R. Sokal and Peter H. A. Sneath in 1963 and later elaborated by the same authors. They divided the field into phenetics in which classifications are formed based on the patterns of overall similarities and cladistics in which classifications are based on the branching patterns of the estimated evolutionary history of the taxa. Although intended as an objective method, in practice the choice and implicit or explicit weighting of characteristics is influenced by available data and research interests of the investigator. What was made objective was the introduction of explicit steps to be used to create dendrograms and cladograms using numerical methods rather than subjective synthesis of data. See also Computational phylogenetics Taxonomy (biology) References Taxonomy (biology)
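As an illustration of the numerical approach, the sketch below (Python with SciPy; the character matrix and OTU names are made up) clusters operational taxonomic units by overall similarity and derives a dendrogram, the phenetic recipe described above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Hypothetical character-state matrix: rows are operational taxonomic
# units (OTUs), columns are coded characters. Values are invented.
names = ["OTU_A", "OTU_B", "OTU_C", "OTU_D"]
characters = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
])

# Overall similarity via a distance matrix, then UPGMA ('average')
# agglomerative clustering -- one classic phenetic procedure.
distances = pdist(characters, metric="hamming")
tree = linkage(distances, method="average")

# dendrogram() returns plot data; with matplotlib installed it draws the tree.
info = dendrogram(tree, labels=names, no_plot=True)
print(info["ivl"])  # leaf order of the resulting dendrogram
```

The choice of distance metric and linkage method is exactly the kind of explicit, repeatable step that the field introduced in place of subjective synthesis.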
Numerical taxonomy
[ "Biology" ]
219
[ "Taxonomy (biology)" ]
2,213,768
https://en.wikipedia.org/wiki/Property%20of%20Baire
A subset A of a topological space X has the property of Baire (Baire property, named after René-Louis Baire), or is called an almost open set, if it differs from an open set by a meager set; that is, if there is an open set U such that A Δ U is meager (where Δ denotes the symmetric difference). Definitions A subset A of a topological space X is called almost open and is said to have the property of Baire or the Baire property if there is an open set U such that A Δ U is a meager subset, where Δ denotes the symmetric difference. Further, A has the Baire property in the restricted sense if for every subset E of X the intersection A ∩ E has the Baire property relative to E. Properties The family of sets with the property of Baire forms a σ-algebra. That is, the complement of an almost open set is almost open, and any countable union or intersection of almost open sets is again almost open. Since every open set is almost open (the empty set is meager), it follows that every Borel set is almost open. If a subset of a Polish space has the property of Baire, then its corresponding Banach–Mazur game is determined. The converse does not hold; however, if every game in a given adequate pointclass Γ is determined, then every set in Γ has the property of Baire. Therefore, it follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set (in a Polish space) has the property of Baire. It follows from the axiom of choice that there are sets of reals without the property of Baire. In particular, a Vitali set does not have the property of Baire. Already weaker versions of choice are sufficient: the Boolean prime ideal theorem implies that there is a nonprincipal ultrafilter on the set of natural numbers; each such ultrafilter induces, via binary representations of reals, a set of reals without the Baire property. References External links Springer Encyclopaedia of Mathematics article on Baire property Descriptive set theory Determinacy
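The complement step of the σ-algebra claim above deserves one line of justification. Here is a standard sketch (notation ours, not from the article), using the fact that the boundary of an open set is nowhere dense:

```latex
% Sketch: complements of almost open sets are almost open.
% Suppose A \triangle U is meager with U open. Put V = \operatorname{int}(X \setminus U).
% Since (X \setminus A) \triangle (X \setminus U) = A \triangle U, and since
% (X \setminus U) \setminus V = \partial U is nowhere dense (boundary of an open set),
\[
  (X \setminus A) \;\triangle\; V
  \;\subseteq\; \bigl(A \,\triangle\, U\bigr) \;\cup\; \partial U ,
\]
% so X \setminus A differs from the open set V by a meager set.
```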
Property of Baire
[ "Mathematics" ]
434
[ "Game theory", "Determinacy" ]
2,213,787
https://en.wikipedia.org/wiki/Drill%20bit%20sizes
Drill bits are the cutting tools of drilling machines. They can be made in any size to order, but standards organizations have defined sets of sizes that are produced routinely by drill bit manufacturers and stocked by distributors. In the U.S., fractional inch and gauge drill bit sizes are in common use. In nearly all other countries, metric drill bit sizes are most common, and all others are anachronisms or are reserved for dealing with designs from the US. The British Standard on replacing gauge size drill bits with metric sizes in the UK was first published in 1959. A comprehensive table for metric, fractional, wire and tapping sizes can be found at the drill and tap size chart. Metric drill bit sizes Metric drill bit sizes define the diameter of the bit in terms of standard metric lengths. Standards organizations define sets of sizes that are conventionally manufactured and stocked. For example, British Standard BS 328 defines 230 sizes from 0.2 mm to 25.0 mm. From 0.2 through 0.98 mm, sizes are defined as follows, where N is an integer from 2 through 9:
N · 0.1 mm
N · 0.1 + 0.02 mm
N · 0.1 + 0.05 mm
N · 0.1 + 0.08 mm
From 1.0 through 2.95 mm, sizes are defined as follows, where N is an integer from 10 through 29:
N · 0.1 mm
N · 0.1 + 0.05 mm
From 3.0 through 13.9 mm, sizes are defined as follows, where N is an integer from 30 through 139:
N · 0.1 mm
From 14.0 through 25.0 mm, sizes are defined as follows, where M is an integer from 14 through 25:
M · 1 mm
M · 1 + 0.25 mm
M · 1 + 0.5 mm
M · 1 + 0.75 mm
In smaller sizes, bits are available in smaller diameter increments. This reflects both the smaller drilled-hole diameter tolerance possible on smaller holes and the wish of designers to have drill bit sizes available within at most 10% of an arbitrary hole size. The price and availability of particular sizes do not change uniformly across the size range. Bits at size increments of 1 mm are most commonly available and lowest in price. Sets of bits in 1 mm increments might be found on a market stall; in 0.5 mm increments, at any hardware store; in 0.1 mm increments, at any engineers' store. Sets are not commonly available in smaller size increments, except for drill bits below 1 mm diameter. Drill bits of the less routinely used sizes, such as 2.55 mm, would have to be ordered from a specialist drill bit supplier. This subsetting of standard sizes is in contrast to general practice with number gauge drill bits, where it is rare to find a set on the market which does not contain every gauge. There are also Renard series sequences of preferred metric drill bits:
R5 (factor 1.58): M2.5, M4, M6, M10, M16, M24
R10 (factor 1.26): M3, M5, M8, M12, M20, M30
Metric dimensioning is routinely used for drill bits of all types, although the details of BS 328 apply only to twist drill bits. For example, a set of Forstner bits may contain 10, 15, 20, 25 and 30 mm diameter cutters. Fractional-inch drill bit sizes Fractional-inch drill bit sizes are still in common use in the United States and in any factory (around the globe) that makes inch-sized products for the U.S. market. ANSI B94.11M-1979 sets size standards for jobber-length straight-shank twist drill bits from 1/64 inch through 1 inch in 1/64 inch increments. For Morse taper-shank drill bits, the standard continues in 1/64 inch increments up to 1¾ inch, then 1/32 inch increments up to 2¼ inch, 1/16 inch increments up to 3 inches, 1/8 inch increments up to 3¼ inches, and a single 1/4 inch increment to 3½ inches.
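The stepped metric scheme described earlier in this section lends itself to direct enumeration. Below is a short sketch (Python), written from the BS 328 rules exactly as quoted above; the count it produces depends on how the range endpoints are read, so it may differ slightly from the 230 sizes the standard is said to list:

```python
# Enumerate the preferred metric sizes from the BS 328 rules quoted above.
def bs328_sizes():
    sizes = []
    for n in range(2, 10):                     # 0.2 mm through 0.98 mm
        for extra in (0.0, 0.02, 0.05, 0.08):
            sizes.append(round(n * 0.1 + extra, 2))
    for n in range(10, 30):                    # 1.0 mm through 2.95 mm
        for extra in (0.0, 0.05):
            sizes.append(round(n * 0.1 + extra, 2))
    for n in range(30, 140):                   # 3.0 mm through 13.9 mm
        sizes.append(round(n * 0.1, 1))
    for m in range(14, 25):                    # 14.0 mm through 24.75 mm
        for extra in (0.0, 0.25, 0.5, 0.75):
            sizes.append(m + extra)
    sizes.append(25.0)                         # the final listed size
    return sizes

sizes = bs328_sizes()
print(len(sizes), sizes[:6], sizes[-3:])
```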
One aspect of this method of sizing is that the size increment between drill bits becomes proportionally larger as bit sizes get smaller: 100% for the step from 1/64 to 1/32, but a much smaller percentage between 1 47/64 and 1 3/4. Drill bit sizes are written as irreducible fractions; so, instead of 78/64 inch or 1 14/64 inch, the size is noted as 1 7/32 inch. Below is a chart providing the decimal-fraction equivalents that are most relevant to fractional-inch drill bit sizes (that is, 0 to 1 by 64ths). (Decimal places for .25, .5, and .75 are shown to thousandths [.250, .500, .750], which is how machinists usually think about them ["two-fifty", "five hundred", "seven-fifty"]. Machinists generally truncate the decimals after thousandths; for example, a 27/64" drill bit may be referred to in shop-floor speech as a "four-twenty-one drill".) Decimal-fraction equivalents Number and letter gauge drill bit sizes Number drill bit gauge sizes range from size 80 (the smallest) to size 1 (the largest), followed by letter gauge sizes from A (the smallest) to Z (the largest). Although the ASME B94.11M twist drill standard, for example, lists sizes as small as size 97, sizes smaller than 80 are rarely encountered in practice. Number and letter sizes are commonly used for twist drill bits rather than other drill forms, as the range encompasses the sizes for which twist drill bits are most often used. The gauge-to-diameter ratio is not defined by a formula; it is based on, but is not identical to, the Stubs Steel Wire Gauge, which originated in Britain during the 19th century. The accompanying graph illustrates the change in diameter with change in gauge, as well as the reduction in step size as the gauge size decreases; each step along the horizontal axis is one gauge size. Number and letter gauge drill bits are still in common use in the U.S. and, to a lesser extent, the UK, where they have largely been supplanted by metric sizes. Other countries that formerly used the number series have for the most part also abandoned it in favour of metric sizes. Drill bit conversion table Screw-machine-length drill The shortest standard-length drills (that is, those with the lowest length-to-diameter ratio) are screw-machine-length drills (sometimes abbreviated S/M). They are named for their use in screw machines. Their shorter flute length and shorter overall length compared to a standard jobber bit result in a more rigid drill bit, reducing deflection and breakage. They are rarely available in retail hardware stores or home centers. Jobber-length drill Jobber-length drills are the most commonly found type of drill. The length of the flutes is between 9 and 14 times the diameter of the drill, depending on the drill size, so a drill at the lower end of the range can drill a hole about 9 diameters deep, while one at the upper end can drill a correspondingly deeper hole. The term jobber refers to a wholesale distributor: a person or company that buys from manufacturers and sells to retailers. Manufacturers producing drill bits "for the trade" (as opposed to for specialized machining applications with particular length and design requirements) made ones of medium length suitable for a wide variety of jobs, because that was the type most desirable for general distribution. Thus, at the time that the name of jobber-length drill bits became common, it reflected the same concept that names like general-purpose and multipurpose reflect.
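The decimal-equivalent chart referred to above is not reproduced here, but the values are trivial to regenerate; a short sketch (Python):

```python
from fractions import Fraction

# Decimal equivalents for fractional-inch sizes from 1/64" to 1".
# Fraction(...) reduces automatically, matching the convention that
# sizes are written as irreducible fractions (16/64 prints as 1/4).
for n in range(1, 65):
    frac = Fraction(n, 64)
    print(f'{str(frac):>6}"  =  {n / 64:.4f}"')  # 27/64 -> 0.4219 ("four-twenty-one")
```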
Aircraft-length drill Extended-reach or long-series drills are commonly called aircraft-length from their original use in manufacturing riveted aluminum aircraft. For bits thicker than a certain minimum size, they are available in a few fixed lengths rather than the progressive lengths of jobber drills. The image shows a long-series drill compared to its diametric equivalents, all of the same diameter: the equivalent Morse taper drill, shown in the middle, is of the usual length for a taper-shank drill, and the lower drill bit is the jobber or parallel-shank equivalent. Center drill bit sizes Center drills are available with two different included angles; 60 degrees is the standard for drilling centre holes (for example for subsequent centre support in the lathe), but 90 degrees is also common and used when locating holes prior to drilling with twist drills. Center drills are made specifically for drilling lathe centers, but are often used as spotting drills because of their radial stiffness. Spotting drill bit sizes Spotting drills are available in a relatively small range of sizes, both metric and imperial, as their purpose is to provide a precise spot for guiding a standard twist drill. Commonly available sizes are 1/8", 1/4", 3/8", 1/2", 5/8", 3/4", 4 mm, 6 mm, 8 mm, 10 mm, 12 mm, 14 mm, 16 mm and 18 mm. The drills are most ordinarily available with either 90° or 120° included-angle points. References Woodworking Machining Hole making Mechanical standards
Drill bit sizes
[ "Engineering" ]
1,982
[ "Mechanical standards", "Mechanical engineering" ]
2,213,942
https://en.wikipedia.org/wiki/Anderson%20localization
In condensed matter physics, Anderson localization (also known as strong localization) is the absence of diffusion of waves in a disordered medium. This phenomenon is named after the American physicist P. W. Anderson, who was the first to suggest that electron localization is possible in a lattice potential, provided that the degree of randomness (disorder) in the lattice is sufficiently large, as can be realized for example in a semiconductor with impurities or defects. Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished from weak localization, which is the precursor effect of Anderson localization (see below), and from Mott localization, named after Sir Nevill Mott, where the transition from metallic to insulating behaviour is not due to disorder, but to a strong mutual Coulomb repulsion of electrons. Introduction In the original Anderson tight-binding model, the evolution of the wave function ψ on the d-dimensional lattice Z^d is given by the Schrödinger equation
iħ ∂ψ/∂t = Hψ,
where the Hamiltonian H is given by
(Hψ)(j) = E_j ψ(j) + Σ_{k≠j} V(|j − k|) ψ(k),
where j and k are lattice locations. The self-energies E_j are taken as random and independently distributed. The interaction potential V(r) is required to fall off faster than r⁻³ at large r. For example, one may take the E_j uniformly distributed in [−W, +W] and nearest-neighbour hopping, V(|j − k|) = 1 for |j − k| = 1 and 0 otherwise. Starting with ψ localized at the origin, one is interested in how fast the probability distribution |ψ|² diffuses. Anderson's analysis shows the following: if d is 1 or 2 and the disorder W is arbitrary, or if d ≥ 3 and the disorder is sufficiently large, then the probability distribution remains localized near the origin, uniformly in time. This phenomenon is called Anderson localization. If d ≥ 3 and the disorder is small, the distribution instead spreads diffusively, with mean-square displacement growing like Dt, where D is the diffusion constant. Analysis The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in the wave interference between multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium. For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahams et al. This scaling hypothesis of localization suggests that a disorder-induced metal-insulator transition (MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandes et al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT, or only an apparent MIT. However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder, and a small spin-orbit coupling can lead to the existence of extended states and thus an MIT. Consequently, the localization lengths of a 2D system with potential disorder can be quite large, so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size. Most numerical approaches to the localization problem use the standard tight-binding Anderson Hamiltonian with onsite-potential disorder.
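As a concrete illustration of such numerics, the sketch below (Python with NumPy; the chain length, disorder strength and the inverse-participation-ratio diagnostic are illustrative choices, not parameters from any particular study) diagonalizes a small 1D Anderson Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)
L, W = 200, 3.0                      # chain length and disorder strength (illustrative)

# Nearest-neighbour 1D Anderson Hamiltonian: random on-site energies
# E_j drawn uniformly from [-W, W], unit hopping between neighbours.
onsite = rng.uniform(-W, W, size=L)
H = np.diag(onsite) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)

energies, states = np.linalg.eigh(H)

# Inverse participation ratio: roughly 1/L for extended states,
# of order 1/xi for states localized on about xi sites.
ipr = np.sum(np.abs(states) ** 4, axis=0)
print(f"mean IPR = {ipr.mean():.3f} (extended-state reference 1/L = {1 / L:.3f})")
```

Increasing W pushes the mean IPR well above 1/L, the numerical signature of localization; more careful studies use the transfer-matrix method mentioned above rather than raw diagonalization.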
Characteristics of the electronic eigenstates are then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. Especially fruitful is the transfer-matrix method (TMM), which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008). Recent work has shown that a non-interacting Anderson localized system can become many-body localized even in the presence of weak interactions. This result has been rigorously proven in 1D, while perturbative arguments exist even for two and three dimensions. Experimental evidence Anderson localization can be observed in a perturbed periodic potential where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartz et al., 2007) and a 1D lattice (Lahini et al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasi et al., 2012) and a biological medium (Choi et al., 2018), and has also been used to transport images through the fiber (Karbasi et al., 2014). It has also been observed by localization of a Bose–Einstein condensate in a 1D disordered optical potential (Billy et al., 2008; Roati et al., 2008). In 3D, observations are rarer. Anderson localization of elastic waves in a 3D disordered medium has been reported (Hu et al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabé et al., 2008). The MIT, associated with nonpropagative electron waves, has been reported in a cm-sized crystal (Ying et al., 2016). Random lasers can operate using this phenomenon. The existence of Anderson localization for light in 3D was debated for years (Skipetrov et al., 2016) and remains unresolved today. Reports of Anderson localization of light in 3D random media were complicated by the competing/masking effects of absorption (Wiersma et al., 1997; Storzer et al., 2006; Scheffold et al., 1999; see Further Reading) and/or fluorescence (Sperling et al., 2016). Recent experiments (Naraghi et al., 2016; Cobus et al., 2023) support theoretical predictions that the vector nature of light prohibits the transition to Anderson localization (John, 1992; Skipetrov et al., 2019).
The color scale denotes the position of the cubes along the axis into the plane Videos of multifractal electronic eigenstates at the MIT Anderson localization of elastic waves Popular scientific article on the first experimental observation of Anderson localization in matter waves Mesoscopic physics Condensed matter physics
Anderson localization
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,470
[ "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Mesoscopic physics", "Matter" ]
2,213,992
https://en.wikipedia.org/wiki/Thouless%20energy
The Thouless energy is a characteristic energy scale of diffusive disordered conductors. It was first introduced by the Scottish-American physicist David J. Thouless when studying Anderson localization, as a measure of the sensitivity of energy levels to a change in the boundary conditions of the system. Though a classical quantity, it has been shown to play an important role in the quantum-mechanical treatment of disordered systems. It is defined by E_T = ħD/L², where D is the diffusion constant and L the size of the system; it is thereby inversely proportional to the diffusion time L²/D through the system. References Mesoscopic physics Condensed matter physics
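A one-line numerical illustration of the definition (Python; the values of D and L are made up):

```python
# Thouless energy E_T = hbar * D / L**2 (values of D and L are made up).
hbar = 1.054571817e-34   # J*s, reduced Planck constant
D = 1.0e-3               # m^2/s, diffusion constant (illustrative)
L = 1.0e-6               # m, system size (illustrative)

t_diff = L**2 / D        # diffusion time through the system
E_T = hbar / t_diff      # equivalently hbar * D / L**2
print(f"diffusion time = {t_diff:.3e} s, E_T = {E_T:.3e} J")
```

Doubling L quadruples the diffusion time and so quarters E_T, which is the boundary-condition sensitivity the definition captures.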
Thouless energy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
127
[ "Materials science stubs", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Mesoscopic physics", "Matter" ]
2,214,023
https://en.wikipedia.org/wiki/Surrender%20%28religion%29
To surrender in spirituality and religion means that a believer completely gives up their own will and subjects their thoughts, ideas, and deeds to the will and teachings of a higher power. It may also be contrasted with submission. Surrender is willful acceptance and yielding to a dominating force and its will. Christianity In Christianity, the first main principle of surrender is "dying to self", or "the carrying of your cross", allowing Christ to reign and rule in the ordering of one's life, a principle illustrated in several passages of Scripture. Another principle central to the Christian concept of surrender is surrender to God's will. Surrendering to God's will entails the surrender of our will to His, in His sovereignty over all things, in which His ways of operating and thinking prevail over humanity's and Satan's. Secondarily, the surrender of one's will is evidenced by the acknowledgement of God's will for our personal lives in even the smallest decisions. This is done through putting personal desire aside in favor of God's perfect will for our lives, and it includes acceptance of a calling or purpose. The essence of this personal surrender is obedience, and obedience to God is an indication of bringing about His will, which, having lasting effects through generations and in kingdoms and nations, is often associated with earthly and heavenly blessings. The ultimate surrender, the surrender of Christ, is a will fully submitted to God's divine plan, seen in Christ's birth as well as in His final three prayers in Gethsemane before His crucifixion: the coming into the world as God incarnate, and then the surrender of His life on the Cross in the act of sacrificial atonement, breaking the curse of sin and death from the Fall. Surrender is also noted in Christian doctrine as one of the three columns of victorious living, or Christian victory: the blood of the Lamb [Christ], the testimony of the Word of God [Scriptures] and of believers' lives, and loving not their own lives to death, that Christ's life may be shown. The Christian Flag, which represents all of Christendom, has a white field with a red Latin cross inside a blue canton. In conventional vexillology, a white flag is linked to surrender, a reference to the Biblical description of Jesus' non-violence and surrender to God's will. Hinduism According to the Bhagavad Gita, Krishna instructed the warrior Arjuna, who became his disciple, in surrender to God. Several gurus teach their disciples the importance of surrender to God or to the gurus themselves, as part of the guru-disciple relationship. For example, the Sri Sai Satcharita, the biography of Sai Baba of Shirdi, says that surrender to the guru is the only sadhana. Prem Rawat, formerly called Guru Maharaj Ji, was quoted in 1978: "But there is nothing to understand! And if there is something to understand, there is only one thing to understand, and that is to surrender!" Contrary to the notion of surrendering to God, Krishna in the Bhagavad Gita also advises his followers to question everything in pursuit of absolute truth. Islam The concept of surrender in Islam is embodied in a person abiding by the five main Pillars of Islam. Following the faith means surrendering or submitting one's will to God. This means that Muslims in their daily life should strive for excellence under the banner of God's will. Every single action in a Muslim's life, whether marriage or building one's career, should theoretically be for the sake of God.
See also Saranagati Ibadah References and notes Religious practices
Surrender (religion)
[ "Biology" ]
774
[ "Behavior", "Religious practices", "Human behavior" ]
2,214,041
https://en.wikipedia.org/wiki/Sector%20mass%20spectrometer
A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector, or some combination of the two (separately in space), as a mass analyzer. Popular combinations of these sectors have been the EB, BE (so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston, Arthur Jeffrey Dempster, Kenneth Bainbridge and Josef Mattauch in 1936) in that they focus the ion beams both in direction and velocity. Theory The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately), as found in a sector instrument, is simple. The physics are described by a single equation called the Lorentz force law:
F = q(E + v × B),
where E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product. This is the fundamental equation of all mass spectrometric techniques; it applies in non-linear, non-homogeneous cases too, and is an important equation in the field of electrodynamics in general. So the force on an ion in a linear homogeneous electric field (an electric sector) is
F = qE,
in the direction of the electric field for positive ions and opposite to it for negative ions. The force depends only on the charge and the electric field strength. The lighter ions will be deflected more and the heavier ions less, due to the difference in inertia, and the ions will physically separate from each other in space into distinct beams as they exit the electric sector. And the force on an ion in a linear homogeneous magnetic field (a magnetic sector) is
F = qv × B,
perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge. The force in the magnetic sector is complicated by the velocity dependence, but with the right conditions (uniform velocity, for example) ions of different masses will separate physically in space into different beams, as with the electric sector. Classic geometries These are some of the classic geometries from mass spectrographs, which are often used to distinguish different types of sector arrangements, although most current instruments do not fit precisely into any of these categories, as the designs have evolved further. Bainbridge–Jordan The sector instrument geometry consists of a 127.30° electric sector without an initial drift length, followed by a 60° magnetic sector with the same direction of curvature. Sometimes called a "Bainbridge mass spectrometer," this configuration is often used to determine isotopic masses. A beam of positive particles is produced from the isotope under study. The beam is subject to the combined action of perpendicular electric and magnetic fields. Since the forces due to these two fields are equal and opposite when the particles have a velocity given by
v = E/B,
they do not experience a resultant force; they pass freely through a slit and are then subject to another magnetic field, traversing a semi-circular path and striking a photographic plate. The mass of the isotope is determined through subsequent calculation. Mattauch–Herzog The Mattauch–Herzog geometry consists of a 31.82° (π/(4√2) radians) electric sector, a drift length which is followed by a 90° magnetic sector of opposite curvature direction.
The entry of the ions, sorted primarily by charge, into the magnetic field produces an energy-focussing effect and much higher transmission than a standard energy filter. This geometry is often used in applications with a high energy spread in the ions produced where sensitivity is nonetheless required, such as spark source mass spectrometry (SSMS) and secondary ion mass spectrometry (SIMS). The advantage of this geometry over the Nier–Johnson geometry is that the ions of different masses are all focused onto the same flat plane. This allows the use of a photographic plate or other flat detector array. Nier–Johnson The Nier–Johnson geometry consists of a 90° electric sector, a long intermediate drift length and a 60° magnetic sector of the same curvature direction. Hinterberger–Konig The Hinterberger–Konig geometry consists of a 42.43° electric sector, a long intermediate drift length and a 130° magnetic sector of the same curvature direction. Takeshita The Takeshita geometry consists of a 54.43° electric sector, a short drift length, a second electric sector of the same curvature direction, followed by another drift length before a 180° magnetic sector of opposite curvature direction. Matsuda The Matsuda geometry consists of an 85° electric sector, a quadrupole lens and a 72.5° magnetic sector of the same curvature direction. This geometry is used in the SHRIMP and in Panorama (a gas-source, high-resolution, multicollector instrument used to measure isotopologues in geochemistry). See also Mass-analyzed ion kinetic energy spectrometry Charge remote fragmentation Kenneth Bainbridge Alfred O. C. Nier References Further reading Thomson, J. J.: Rays of Positive Electricity and their Application to Chemical Analyses; Longmans Green: London, 1913. Mass spectrometry Measuring instruments
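A small numerical sketch of the magnetic-sector physics above (Python; the field strength, acceleration potential and ion masses are illustrative). Balancing the magnetic Lorentz force against the centripetal force gives a radius r = mv/(qB), so ions of different mass follow different radii:

```python
import math

# Radius of curvature r = m*v/(q*B) in a magnetic sector for singly charged
# ions accelerated through a potential difference V (illustrative numbers).
e = 1.602176634e-19       # C, elementary charge
amu = 1.66053906660e-27   # kg, atomic mass unit

B = 0.5                   # T, magnetic sector field (illustrative)
V = 2000.0                # V, acceleration potential (illustrative)

for mass_amu in (28.0, 32.0, 44.0):          # e.g. N2+, O2+, CO2+
    m = mass_amu * amu
    v = math.sqrt(2 * e * V / m)             # speed after acceleration, eV = mv^2/2
    r = m * v / (e * B)                      # qvB = mv^2/r
    print(f"m = {mass_amu:5.1f} u  ->  r = {100 * r:.2f} cm")
```

Heavier ions exit on larger radii, which is the spatial mass dispersion the sector exploits.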
Sector mass spectrometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,081
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Mass spectrometry", "Matter" ]
2,214,122
https://en.wikipedia.org/wiki/Hirudin
Hirudin is a naturally occurring peptide in the salivary glands of blood-sucking leeches (such as Hirudo medicinalis) that has a blood anticoagulant property. This is essential for the leeches' habit of feeding on blood, since it keeps a host's blood flowing after the worm's initial puncture of the skin. Hirudin (MEROPS I14.001) belongs to a superfamily (MEROPS IM) of protease inhibitors that also includes haemadin (MEROPS I14.002) and antistasin (MEROPS I15). Structure During his years in Birmingham and Edinburgh, John Berry Haycraft had been actively engaged in research and published papers on the coagulation of blood, and in 1884, he discovered that the leech secreted a powerful anticoagulant, which he named hirudin, although it was not isolated until the 1950s, nor its structure fully determined until 1976. Full length hirudin is made up of 65 amino acids. These amino acids are organized into a compact N-terminal domain containing three disulfide bonds and a C-terminal domain that is completely disordered when the protein is un-complexed in solution. Amino acid residues 1-3 form a parallel beta-strand with residues 214-217 of thrombin, the nitrogen atom of residue 1 making a hydrogen bond with the Ser-195 O gamma atom of the catalytic site. The C-terminal domain makes numerous electrostatic interactions with an anion-binding exosite of thrombin, while the last five residues are in a helical loop that forms many hydrophobic contacts. Natural hirudin contains a mixture of various isoforms of the protein. However, recombinant techniques can be used to produce homogeneous preparations of hirudin. Biological activity A key event in the final stages of blood coagulation is the conversion of fibrinogen into fibrin by the serine protease enzyme thrombin. Thrombin is produced from prothrombin, by the action of an enzyme, prothrombinase (Factor Xa along with Factor Va as a cofactor), in the final states of coagulation. Fibrin is then cross linked by factor XIII (Fibrin Stabilizing Factor) to form a blood clot. The principal inhibitor of thrombin in normal blood circulation is antithrombin. Similar to antithrombin, the anticoagulant activity of hirudin is based on its ability to inhibit the procoagulant activity of thrombin. Hirudin is the most potent natural inhibitor of thrombin. Unlike antithrombin, hirudin binds to and inhibits only the activated thrombin, with a specific activity on fibrinogen. Therefore, hirudin prevents or dissolves the formation of clots and thrombi (i.e., it has a thrombolytic activity), and has therapeutic value in blood coagulation disorders, in the treatment of skin hematomas and of superficial varicose veins, either as an injectable or a topical application cream. In some aspects, hirudin has advantages over more commonly used anticoagulants and thrombolytics, such as heparin, as it does not interfere with the biological activity of other serum proteins, and can also act on complexed thrombin. Medical use It is difficult to extract large amounts of hirudin from natural sources, so a method for producing and purifying this protein (specifically P01050 in the infobox) using recombinant biotechnology has been developed. 
This has led to the development and marketing of a number of hirudin-based anticoagulant pharmaceutical products, including: recombinant hirudin derived from Hansenula (Thrombexx, Extrauma) lepirudin (Refludan) – differs by one amino acid substitution and removal of sulfate group on Tyr63 desirudin (Revasc/Iprivask) – differs by removal of sulfate group on Tyr63 bivalirudin – peptide fragment Several other direct thrombin inhibitors are derived chemically from hirudin. See also Hirudotherapy Discovery and development of direct thrombin inhibitors References External links AgroMedic - Leech Farming, Medicinal Leeches, Malaysia Leeches (Hirudinaria Manillensis) Direct thrombin inhibitors Peptides
Hirudin
[ "Chemistry" ]
938
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
2,214,225
https://en.wikipedia.org/wiki/Adoration
Adoration is respect, reverence, strong admiration, and love for a certain person, place, or thing. The term comes from the Latin adōrātiō, meaning "to give homage or worship to someone or something". Ancient Rome In classical Rome, adoration was primarily an act of homage or worship, which, among the Romans, was performed by raising the hand to the mouth, kissing it and then waving it in the direction of the adored object. This act was called adoratio and was performed during rites. The devotee had his head covered, and after the act turned himself round from left to right. Sometimes he kissed the feet or knees of the images of the gods themselves, and Saturn and Hercules were adored with the head bare. By a natural transition the homage, at first paid to divine beings alone, came to be paid to monarchs. Thus the Greek and Roman emperors were adored by bowing or kneeling, laying hold of the imperial robe, and presently withdrawing the hand and pressing it to the lips, or by putting the royal robe itself to the lips. Ancient Middle East In Eastern countries, adoration has been performed in an attitude still more lowly. The Persian method, introduced by Cyrus the Great, was to kiss the knee and fall on the face at the prince's feet, striking the earth with the forehead and kissing the ground. This striking of the earth with the forehead, usually a fixed number of times, was a form of adoration sometimes paid to Eastern potentates. The Jews kissed in homage, as did other groups mentioned in the Old Testament: thus in 1 Kings 19:18, God is made to say, "Yet I have left me seven thousand in Israel, all the knees which have not bowed unto Baal, and every mouth which hath not kissed him", and in Psalm 2:12, "Kiss the Son, lest he be angry, and ye perish from the way". (See also Hosea 13:2.) Christian adoration The Christian faith has historically taught that believers are to "adore one and the same Christ, the Son of God and of man, consisting of and in two inseparable and undivided natures". Such adoration may take the form of Eucharistic adoration. Within the Catholic Church, Pope Benedict XVI reflected on this: "Only in adoration can profound and true acceptance develop. And it is precisely this personal act of encounter with the Lord that develops the social mission which is contained in the Eucharist and desires to break down barriers, not only the barriers between the Lord and us but also and above all those that separate us from one another". In a similar vein Pope Francis wrote: "The perpetual adoration of the Eucharist [is] growing at every level of ecclesial life. Even so, we must reject the temptation to offer a privatized and individualistic spirituality which ill accords with the demands of charity" (Evangelii gaudium 262). Some churches contain "adoration chapels" in which the Eucharist is exposed for continuous adoration so that the faithful may observe their faith through it. "The Curé of Ars would spend hours in front of the Blessed Sacrament. When people would ask him what he would do or say during those hours, he would say: 'He looks at me, and I look at him.'" Other forms In the United Kingdom, the ceremony of kissing the sovereign's hand, and some other acts which are performed while kneeling, may be described as forms of adoration. See also Hand-kissing Kowtow Proskynesis Prostration Notes Prayer Interpersonal relationships Christian practices Emotions
Adoration
[ "Biology" ]
755
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
2,214,575
https://en.wikipedia.org/wiki/10%20Gigabit%20Ethernet%20Alliance
The 10 Gigabit Ethernet Alliance (10GEA) was an independent organization (not directly related to the Institute of Electrical and Electronics Engineers (IEEE), although working in collaboration with it) which aimed to further 10 Gigabit Ethernet development and market acceptance. Founded in February 2000 by a consortium of companies, the organization provided IEEE with technology demonstrations and specifications, including, for instance, a May 7, 2002 demonstration in Las Vegas in which a 10 Gigabit Ethernet network of more than 200 kilometres was deployed, using 10GBASE-LR, 10GBASE-ER, 10GBASE-SR and 10GBASE-LW ports, as well as presenting communication over the IEEE 802.3ae XAUI interface. Its efforts bore fruit with the IEEE Standards Association (IEEE-SA) Standards Board's approval in June 2002 of the IEEE 802.3ae standard (formulated by the IEEE P802.3ae 10 Gbit/s Ethernet Task Force). The 10GEA was founded by 3Com, Cisco Systems, Extreme Networks, Intel Corporation, Nortel Networks, Sun Microsystems, and World Wide Packets. Other companies at various times supporting the consortium included: Agilent Technologies Inc., Blaze Network Products, Cable Design Technologies, Corning Inc., Enterasys Networks, Force10 Networks Inc., Foundry Networks Inc., Hitachi Cable Ltd, Infineon Technologies, Ixia, JDS Uniphase, Marvell Technology Group Ltd., Mindspeed, Molex Inc., OFS (part of Lucent), ONI Systems/CIENA, Optillion, PMC-Sierra, Primarion, Quake Technologies (acquired by Applied Micro Circuits Corporation), Spirent Communications, and Velio Communications (later acquired by LSI Corporation). See also Ethernet Alliance References External links IEEE P802.3ae 10Gb/s Ethernet Task Force Collection of archived 10GEA whitepapers Ethernet
10 Gigabit Ethernet Alliance
[ "Technology" ]
397
[ "Computing stubs", "Computer network stubs" ]
2,214,583
https://en.wikipedia.org/wiki/Poisson%20kernel
In mathematics, and specifically in potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disk. The kernel can be understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson. Poisson kernels commonly find applications in control theory and two-dimensional problems in electrostatics. In practice, the definition of Poisson kernels is often extended to n-dimensional problems. Two-dimensional Poisson kernels On the unit disc In the complex plane, the Poisson kernel for the unit disc is given by
P_r(θ) = Σ_{n=−∞}^{∞} r^{|n|} e^{inθ} = (1 − r²) / (1 − 2r cos θ + r²), 0 ≤ r < 1.
This can be thought of in two ways: either as a function of r and θ, or as a family of functions of θ indexed by r. If D = {z : |z| < 1} is the open unit disc in C, T is the boundary of the disc, and f a function on T that lies in L1(T), then the function u given by
u(re^{iθ}) = (1/2π) ∫_{−π}^{π} P_r(θ − t) f(e^{it}) dt
is harmonic in D and has a radial limit that agrees with f almost everywhere on the boundary T of the disc. That the boundary value of u is f can be argued using the fact that as r → 1, the functions P_r(θ) form an approximate unit in the convolution algebra L1(T). As linear operators, they tend to the Dirac delta function pointwise on Lp(T). By the maximum principle, u is the only such harmonic function on D. Convolutions with this approximate unit give an example of a summability kernel for the Fourier series of a function in L1(T). Let f ∈ L1(T) have Fourier series {fk}. After the Fourier transform, convolution with P_r(θ) becomes multiplication by the sequence {r^{|k|}} ∈ ℓ1(Z). Taking the inverse Fourier transform of the resulting product {r^{|k|} fk} gives the Abel means A_r f of f:
A_r f(θ) = Σ_{k∈Z} f_k r^{|k|} e^{ikθ}.
Rearranging this absolutely convergent series shows that f is the boundary value of g + h, where g (resp. h) is a holomorphic (resp. antiholomorphic) function on D. When one also asks for the harmonic extension to be holomorphic, then the solutions are elements of a Hardy space. This is true when the negative Fourier coefficients of f all vanish. In particular, the Poisson kernel is commonly used to demonstrate the equivalence of the Hardy spaces on the unit disk and the unit circle. The space of functions that are the limits on T of functions in Hp(z) may be called Hp(T). It is a closed subspace of Lp(T) (at least for p ≥ 1). Since Lp(T) is a Banach space (for 1 ≤ p ≤ ∞), so is Hp(T). On the upper half-plane The unit disk may be conformally mapped to the upper half-plane by means of certain Möbius transformations. Since the conformal map of a harmonic function is also harmonic, the Poisson kernel carries over to the upper half-plane. In this case, the Poisson integral equation takes the form
u(x + iy) = ∫_{−∞}^{∞} P_y(x − t) f(t) dt, y > 0.
The kernel itself is given by
P_y(x) = y / (π(x² + y²)).
Given a function f ∈ Lp(R), the Lp space of integrable functions on the real line, u can be understood as a harmonic extension of f into the upper half-plane. In analogy to the situation for the disk, when u is holomorphic in the upper half-plane, then u is an element of the Hardy space Hp, and in particular its Hp norm agrees with the Lp norm of f. Thus, again, the Hardy space Hp on the upper half-plane is a Banach space, and, in particular, its restriction to the real axis is a closed subspace of Lp(R). The situation is only analogous to the case for the unit disk; the Lebesgue measure for the unit circle is finite, whereas that for the real line is not. On the ball For the ball of radius r in Rn, the Poisson kernel takes the form
P(x, ζ) = (r² − |x|²) / (r ω_{n−1} |x − ζ|^n),
where x is a point of the open ball, ζ ∈ S (the surface of the ball), and ω_{n−1} is the surface area of the unit (n − 1)-sphere.
Then, if u(x) is a continuous function defined on S, the corresponding Poisson integral is the function P[u](x) defined by
P[u](x) = ∫_S u(ζ) P(x, ζ) dσ(ζ).
It can be shown that P[u](x) is harmonic on the ball and that P[u](x) extends to a continuous function on the closed ball of radius r whose boundary function coincides with the original function u. On the upper half-space An expression for the Poisson kernel of an upper half-space can also be obtained. Denote the standard Cartesian coordinates of R^{n+1} by (t, x₁, …, xₙ). The upper half-space is the set defined by
H^{n+1} = {(t, x₁, …, xₙ) : t > 0}.
The Poisson kernel for H^{n+1} is given by
P(x, t) = c_n t / (|x|² + t²)^{(n+1)/2}, where c_n = Γ((n + 1)/2) / π^{(n+1)/2}.
The Poisson kernel for the upper half-space appears naturally as the Fourier transform of the Abel transform, in which t assumes the role of an auxiliary parameter. To wit,
P(x, t) = ∫_{Rn} e^{−2πt|ξ|} e^{2πi ξ·x} dξ.
In particular, it is clear from the properties of the Fourier transform that, at least formally, the convolution
u(x, t) = (P(·, t) ∗ f)(x)
is a solution of Laplace's equation in the upper half-space. One can also show that u(x, t) → f(x) as t → 0, in a suitable sense. See also Schwarz integral formula References Fourier analysis Harmonic functions Potential theory
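A quick numerical sanity check of the disc formulas above (Python with NumPy; the boundary datum cos 2θ is chosen because its harmonic extension is known in closed form, r² cos 2θ):

```python
import numpy as np

# Check the unit-disc Poisson integral against a known harmonic extension:
# boundary data f(t) = cos(2t) should extend to u(r, theta) = r^2 cos(2*theta).
def poisson_kernel(r, theta):
    return (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)

t = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
f = np.cos(2 * t)

r, theta = 0.7, 0.3
u = np.mean(poisson_kernel(r, theta - t) * f)   # (1/2pi) * integral over [-pi, pi]
print(u, (r**2) * np.cos(2 * theta))            # both approximately 0.4045
```

The agreement reflects the Abel-means identity above: convolution with P_r multiplies the Fourier coefficient of e^{2iθ} by r².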
Poisson kernel
[ "Mathematics" ]
1,090
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Potential theory" ]
2,214,609
https://en.wikipedia.org/wiki/Ullmann%20reaction
The Ullmann reaction or Ullmann coupling, named after Fritz Ullmann, couples two aryl or alkyl groups with the help of copper. The reaction was first reported by Ullmann and his student Bielecki in 1901. It was later shown that palladium and nickel can also be used effectively. Aryl-aryl bond formation is a fundamental tool in modern organic synthesis, with applications spanning natural product synthesis, pharmaceuticals, agrochemicals, and the development of commercial dyes and polyaromatics. With over a century of history, the Ullmann reaction was one of the first to use a transition metal, primarily copper, in its higher oxidation states. Despite the significant industrial implications of biaryl coupling, the Ullmann reaction was plagued by a number of problems in its early development. In modern times, however, interest in the Ullmann reaction has revived because of several advantages of copper over other catalytic metals. Mechanism The reaction mechanism of the Ullmann reaction has been extensively studied. Electron spin resonance rules out a radical intermediate. This was confirmed in a set of experiments performed in 2008 by Hartwig and co-workers. The oxidative addition / reductive elimination sequence observed with palladium catalysts is unlikely for copper, because copper(III) is rarely observed. The reaction likely involves the formation of an organocopper compound (RCuX) which reacts with the other aryl reactant in a nucleophilic aromatic substitution. Alternative mechanisms have been proposed, such as σ-bond metathesis. The simplified mechanism shown below is generally accepted. Scope Fritz Ullmann and his student Bielecki were the first to report the reaction. This groundbreaking result was the first to show that a transition metal could help perform aryl carbon-carbon bond formation. A typical example of classic Ullmann biaryl coupling is the conversion of ortho-chloronitrobenzene into 2,2'-dinitrobiphenyl with a copper-bronze alloy. The reaction has been applied to fairly elaborate substrates. The traditional version of the Ullmann reaction requires stoichiometric amounts of copper and harsh reaction conditions, and the reaction has a reputation for erratic yields. The traditional Ullmann reaction thus had poor atom economy and produced toxic CuI. Because of these problems many improvements and alternative procedures have been introduced. The classical Ullmann reaction is limited to electron-deficient aryl halides (hence the example of 2-nitrophenyl chloride above) and requires harsh reaction conditions. Modern variants of the Ullmann reaction employing palladium and nickel have widened the substrate scope of the reaction and rendered reaction conditions milder. Yields are generally still moderate, however. In organic synthesis this reaction is often replaced by palladium coupling reactions such as the Heck reaction, the Hiyama coupling, and the Sonogashira coupling. Biphenylenes had been obtained before in reasonable yields using 2,2'-diiodobiphenyl or the 2,2'-diiodobiphenylonium ion as starting material. Closure of 5-membered rings is more facile, but larger rings have also been made using this approach. Modern developments also include the use of heterogeneous copper catalysts and nanoparticles. These are highly desirable as the catalyst can be easily separated from the products, reducing waste and cost. In the case of copper nanoparticles, the catalytic activity depends on their size and the formation of aggregates.
Bidentate ligands for Ullmann coupling Around the year 2000, various bidentate ligands were found to improve the efficiency of the Ullmann reaction. Bidentate ligands allow for milder reaction conditions and higher functional group tolerance. They included amino acids, oxines, Schiff bases, and many other O-O or N-N bidentates. These initial bidentate systems elevated the practicality of Ullmann reactions, but they still had drawbacks: high loadings of copper and ligand were required, and activation of the notoriously difficult aryl chloride was still not possible. These problems were addressed in 2015 with the design of special oxalic diamide ligands, making the Ullmann reaction viable for industrial application. Unsymmetric and asymmetric couplings Ullmann synthesis of biaryl compounds can be used to generate chiral products from chiral reactants. Nelson and collaborators worked on the synthesis of asymmetric biaryl compounds and obtained the thermodynamically controlled product. The diastereomeric ratio of the products is enhanced with bulkier R groups in the auxiliary oxazoline group. Unsymmetrical Ullmann reactions are rarely pursued but have been achieved when one of the two coupling components is in excess. Imidazole Ullmann reaction In a variation of the Ullmann reaction, β-bromostyrene is reacted with imidazole in an ionic liquid such as 1-butyl-3-methylimidazolium tetrafluoroborate to give an N-styrylimidazole. The reaction requires L-proline in addition to copper iodide as catalyst. Industrial applications Aqueous Ullmann reactions have been used on the pilot plant scale. See also Ullmann condensation - copper-promoted conversion of aryl halides to ethers, also developed by Fritz Ullmann Copper(I) thiophene-2-carboxylate, a copper reagent used in the Ullmann reaction Wurtz–Fittig reaction, a similar reaction useful for alkylbenzene synthesis References Carbon-carbon bond forming reactions Name reactions
Ullmann reaction
[ "Chemistry" ]
1,243
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
2,214,632
https://en.wikipedia.org/wiki/Koha%20%28software%29
Koha is an open-source integrated library system (ILS), used worldwide by public, school and special libraries, but also in some larger academic libraries. The name comes from a Māori term for a gift or donation. Features Koha is a web-based ILS, with a SQL database (MariaDB or MySQL preferred) back end, with cataloguing data stored in MARC and accessible via Z39.50 or SRU. The user interface is very configurable and adaptable and has been translated into many languages. Koha has most of the features that would be expected in an ILS, including:
Various Web 2.0 facilities like tagging, commenting, social sharing and RSS feeds
Union catalog facility
Customizable search
Online circulation
Bar code printing
Patron card creation
Report generation
Patron self-registration form through the OPAC
History Koha was created in 1999 by Katipo Communications for the Horowhenua Library Trust in New Zealand, and the first installation went live in January 2000. From 2000, companies started providing commercial support for Koha, building to more than 50 today. In 2001, Paul Poulain (of Marseille, France) began adding many new features to Koha, most significantly support for multiple languages. By 2010, Koha had been translated from its original English into French, Chinese, Arabic and several other languages. Support for the cataloguing and search standards MARC and Z39.50 was added in 2002 and later sponsored by the Athens County Public Libraries. Poulain co-founded BibLibre in 2007. In 2005, an Ohio-based company, Metavore, Inc., trading as LibLime, was established to support Koha and added many new features, including support for Zebra, sponsored by the Crawford County Federated Library System. Zebra support increased the speed of searches as well as improving scalability to support tens of millions of bibliographic records. In 2007 a group of libraries in Vermont began testing the use of Koha for Vermont libraries. At first a separate implementation was created for each library. Then the Vermont Organization of Koha Automated Libraries (VOKAL) was organized to create one database to be used by libraries. This database was rolled out in 2011. Fifty-seven libraries have chosen to adopt Koha and moved to the shared production environment hosted and supported by ByWater Solutions. Another consortium of libraries in Vermont, the Catamount Library Network, has also adopted Koha (also hosted by ByWater Solutions). Previously automated Vermont libraries used software from Follett or other commercial software vendors. In 2010 the King's Fund, supported by PTFS Europe, completed their migration to Koha after an extensive feasibility study. In 2011 the Spanish Ministry of Culture began maintenance of KOBLI, a tailored version of Koha based on an earlier report. The project was concluded in 2018. In 2014 the Ministry of Culture (Turkey) started to use Koha–Devinim in 1,136 public libraries with more than 17 million items and around 2 million active users. Specialized libraries such as music libraries have adopted Koha because its open-source nature offers easier customization for their particular use cases. A 2017 Library Technology Reports article claimed that Koha "holds the position as the most widely implemented open source integrated library system (ILS) in the world". According to Ohloh (now OpenHub), in 2019 Koha had a "[v]ery large, active development team" and a "[m]ature, well-established codebase", with hundreds of contributors and over 20 monthly contributors each month from 2011 to 2019.
Dispute with LibLime / PTFS In 2009 a dispute arose between LibLime and other members of the Koha community. The dispute centred on LibLime's apparent reluctance to be inclusive with the content of the sites and its non-contribution of software patches back to the community. A number of participants declared that they believed LibLime had forked the software and the community. A separate web presence, source code repository and community was established. The fork continued after March 2010, when LibLime was purchased by PTFS. In November 2011, LibLime announced they had been granted a provisional trademark on the use of the name koha in New Zealand by the Intellectual Property Office of New Zealand. The Koha community and Catalyst IT Ltd (NZ) successfully appealed against the provisional trademark grant, with a decision handed down in December 2013 and LibLime ordered to pay costs. Releases Koha releases follow a regular, calendar-based pattern, with monthly maintenance releases and bi-annual feature releases. Each Koha release has a version number that consists of the year and month number of the release. Koha 22.11 was the first release with Long Term Support (LTS). Awards
2000 winner of the Not for Profit section of the 2000 Interactive New Zealand Awards
2000 winner of the LIANZA / 3M Award for Innovation in Libraries
2003 winner of the public organisation section of the Les Trophées du Libre
2004 winner, Use of IT in a Not-for-Profit Organisation, Computerworld Excellence Awards
2014 finalist, Open Source Software Project, New Zealand Open Source Awards
See also List of free and open-source software packages References External links Library automation Servers (computing) Free library and information science software Information technology in New Zealand Perl software Software forks
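Since the catalogue is exposed over the standard SRU protocol mentioned above, it can be searched with any HTTP client. Below is a minimal sketch (Python with the requests library); the host name is hypothetical, and the exact endpoint path and port vary by Koha installation, so treat this as an outline of a generic SRU searchRetrieve request rather than a documented Koha URL:

```python
import requests

# Query a Koha catalogue over SRU (searchRetrieve). The base URL is
# hypothetical; real endpoint paths and ports differ per installation.
BASE = "https://koha.example.org/biblios"   # hypothetical SRU endpoint

params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'title="open source"',   # CQL query
    "maximumRecords": "5",
}

response = requests.get(BASE, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])   # MARCXML records wrapped in an SRU envelope
```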
Koha (software)
[ "Engineering" ]
1,091
[ "Library automation", "Automation" ]
2,214,847
https://en.wikipedia.org/wiki/Geometry%20processing
Geometry processing is an area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. As the name implies, many of the concepts, data structures, and algorithms are directly analogous to signal processing and image processing. For example, where image smoothing might convolve an intensity signal with a blur kernel formed using the Laplace operator, geometric smoothing might be achieved by convolving a surface geometry with a blur kernel formed using the Laplace-Beltrami operator. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing. Geometry processing is a common research topic at SIGGRAPH, the premier computer graphics academic conference, and the main topic of the annual Symposium on Geometry Processing.

Geometry processing as a life cycle

Geometry processing involves working with a shape, usually in 2D or 3D, although the shape can live in a space of arbitrary dimensions. The processing of a shape involves three stages, known together as its life cycle. At its "birth," a shape can be instantiated through one of three methods: a model, a mathematical representation, or a scan. After a shape is born, it can be analyzed and edited repeatedly in a cycle. This usually involves acquiring different measurements, such as the distances between the points of the shape, the smoothness of the shape, or its Euler characteristic. Editing may involve denoising, deforming, or performing rigid transformations. At the final stage of the shape's "life," it is consumed. This can mean it is consumed by a viewer as a rendered asset in a game or movie, for instance. The end of a shape's life can also be defined by a decision about the shape, like whether or not it satisfies some criteria. Or it can even be fabricated in the real world, through a method such as 3D printing or laser cutting.

Discrete Representation of a Shape

Like any other shape, the shapes used in geometry processing have properties pertaining to their geometry and topology. The geometry of a shape concerns the position of the shape's points in space, tangents, normals, and curvature. It also includes the dimension in which the shape lives (e.g., $\mathbb{R}^2$ or $\mathbb{R}^3$). The topology of a shape is a collection of properties that do not change even after smooth transformations have been applied to the shape. It concerns properties such as the number of holes and boundaries, as well as the orientability of the shape. One example of a non-orientable shape is the Möbius strip. In computers, everything must be discretized. Shapes in geometry processing are usually represented as triangle meshes, which can be seen as a graph. Each node in the graph is a vertex (usually in $\mathbb{R}^3$), which has a position. This encodes the geometry of the shape. Directed edges connect these vertices into triangles, which, by the right-hand rule, then have a direction called the normal. Each triangle forms a face of the mesh. These are combinatoric in nature and encode the topology of the shape. In addition to triangles, a more general class of polygon meshes can also be used to represent a shape.
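The vertex-and-face representation just described maps directly onto index arrays. The following minimal sketch (illustrative only, not tied to any particular mesh library) stores a tetrahedron's geometry in a vertex array and its topology in a face array, and recovers each triangle's normal direction from the vertex ordering via the right-hand rule:

```python
import numpy as np

# Geometry: one 3D position per vertex.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Topology: vertex-index triples; the ordering is consistent, so each
# undirected edge is traversed once in each direction.
F = np.array([[0, 1, 2],
              [0, 3, 1],
              [0, 2, 3],
              [1, 3, 2]])

def face_normals(V, F):
    """Unit normal of each triangle, oriented by the right-hand rule."""
    e1 = V[F[:, 1]] - V[F[:, 0]]   # first edge vector of each triangle
    e2 = V[F[:, 2]] - V[F[:, 0]]   # second edge vector of each triangle
    n = np.cross(e1, e2)           # direction fixed by the vertex ordering
    return n / np.linalg.norm(n, axis=1, keepdims=True)

print(face_normals(V, F))
```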
More advanced representations like progressive meshes encode a coarse representation along with a sequence of transformations, which produce a fine or high-resolution representation of the shape once applied. These meshes are useful in a variety of applications, including geomorphs, progressive transmission, mesh compression, and selective refinement.

Properties of a shape

Euler Characteristic

One particularly important property of a 3D shape is its Euler characteristic, which can alternatively be defined in terms of its genus. The formula for this in the continuous sense is $\chi = 2c - 2h - b$, where $c$ is the number of connected components, $h$ is the number of holes (as in donut holes, see torus), and $b$ is the number of connected components of the boundary of the surface. A concrete example of this is a mesh of a pair of pants. There is one connected component, 0 holes, and 3 connected components of the boundary (the waist and two leg holes), so in this case the Euler characteristic is $2 \cdot 1 - 2 \cdot 0 - 3 = -1$. To bring this into the discrete world, the Euler characteristic of a mesh is computed in terms of its vertices, edges, and faces: $\chi = |V| - |E| + |F|$ (the reconstruction sketch at the end of this section verifies this formula on an extracted mesh).

Surface reconstruction

Poisson reconstruction from surface points to mesh

Depending on how a shape is initialized or "birthed," the shape might exist only as a nebula of sampled points that represent its surface in space. To transform the surface points into a mesh, the Poisson reconstruction strategy can be employed. This method states that the indicator function, a function that determines which points in space belong to the surface of the shape, can actually be computed from the sampled points. The key concept is that the gradient of the indicator function is 0 everywhere, except at the sampled points, where it is equal to the inward surface normal. More formally, suppose the collection of sampled points from the surface is denoted by $S$, each point in space by $\mathbf{x}$, and the corresponding normal at that point by $\mathbf{N}(\mathbf{x})$. Then the gradient of the indicator function $g$ is defined as

$\nabla g(\mathbf{x}) = \begin{cases} \mathbf{N}(\mathbf{x}) & \text{if } \mathbf{x} \in S \\ 0 & \text{otherwise.} \end{cases}$

The task of reconstruction then becomes a variational problem. To find the indicator function of the surface, we must find a function $g$ such that $\lVert \nabla g - \mathbf{V} \rVert^2$ is minimized, where $\mathbf{V}$ is the vector field defined by the samples. As a variational problem, one can view the minimizer $g$ as a solution of Poisson's equation, $\Delta g = \nabla \cdot \mathbf{V}$. After obtaining a good approximation for $g$ and a value $\sigma$ for which the points $\mathbf{x}$ with $g(\mathbf{x}) = \sigma$ lie on the surface to be reconstructed, the marching cubes algorithm can be used to construct a triangle mesh from the function $g$, which can then be applied in subsequent computer graphics applications.
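A minimal end-to-end sketch of the reconstruction step, assuming scikit-image is available: an analytic sphere function stands in here for the indicator function $g$ that a real Poisson solve would produce, marching cubes extracts its level set as a triangle mesh, and the discrete Euler characteristic from the section above serves as a sanity check on the result:

```python
import numpy as np
from skimage import measure  # assumes scikit-image is installed

# Stand-in for the indicator-like function g, sampled on a 25^3 grid:
# g < 1 inside a sphere of radius 8 voxels, g > 1 outside.
x, y, z = np.mgrid[-12:13, -12:13, -12:13]
g = np.sqrt(x**2 + y**2 + z**2) / 8.0

# Extract the g = 1 level set as a triangle mesh (vertices + faces).
verts, faces, normals, _ = measure.marching_cubes(g, level=1.0)

# Sanity check: chi = |V| - |E| + |F| should be 2 for a closed
# genus-0 surface such as this sphere.
edges = {tuple(sorted((int(a), int(b))))
         for tri in faces
         for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0]))}
print(len(verts) - len(edges) + len(faces))  # -> 2
```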
Registration

One common problem encountered in geometry processing is how to merge multiple views of a single object captured from different angles or positions. This problem is known as registration. In registration, we wish to find an optimal rigid transformation that will align surface $X$ with surface $Y$. More formally, if $f(\mathbf{x})$ is the projection of a point $\mathbf{x}$ from surface $X$ onto surface $Y$, we want to find the optimal rotation matrix $R$ and translation vector $\mathbf{t}$ that minimize the objective function

$E(R, \mathbf{t}) = \sum_{\mathbf{x} \in X} \lVert R\mathbf{x} + \mathbf{t} - f(R\mathbf{x} + \mathbf{t}) \rVert^2.$

While rotations are non-linear in general, small rotations can be linearized as skew-symmetric matrices. Moreover, the distance function is non-linear, but is amenable to linear approximations if the change in $(R, \mathbf{t})$ is small. An iterative solution such as Iterative Closest Point (ICP) is therefore employed to solve for small transformations iteratively, instead of solving for the potentially large transformation in one go. In ICP, $n$ random sample points from $X$ are chosen and projected onto $Y$. In order to sample points uniformly at random across the surface of the triangle mesh, the random sampling is broken into two stages: uniformly sampling points within a triangle; and non-uniformly sampling triangles, such that each triangle's associated probability is proportional to its surface area. Thereafter, the optimal transformation is calculated based on the difference between each sampled point and its projection. In the following iteration, the projections are calculated based on the result of applying the previous transformation to the samples. The process is repeated until convergence.

Smoothing

When shapes are defined or scanned, there may be accompanying noise, either in a signal acting upon the surface or in the actual surface geometry. Reducing noise in the former is known as data denoising, while noise reduction in the latter is known as surface fairing. The task of geometric smoothing is analogous to signal noise reduction, and consequently employs similar approaches. The pertinent energy functional to be minimized records the conformity to the initial signal $f$ and the smoothness of the resulting signal $u$, the latter approximated by the magnitude of the gradient, with a weight $\lambda$:

$E(u) = \int_\Omega \lVert u - f \rVert^2 + \lambda \lVert \nabla u \rVert^2 \, dA.$

Taking a variation of $E$ yields the necessary condition

$u - f = \lambda \Delta u.$

By discretizing this onto piecewise-constant elements with our signal $u$ on the vertices we obtain

$(M - \lambda L)\, u = M f,$

where $L$ is chosen to be the cotangent Laplacian and the mass matrix $M$ maps the image of the Laplacian from areas to points. Because the variation is free, this results in a self-adjoint linear problem to solve with a parameter $\lambda$ (a solver sketch appears after the deformation section below). When working with triangle meshes, one way to determine the values of the Laplacian matrix is through analyzing the geometry of connected triangles on the mesh:

$L_{ij} = \tfrac{1}{2} \left( \cot \alpha_{ij} + \cot \beta_{ij} \right), \qquad L_{ii} = -\sum_{j \neq i} L_{ij},$

where $\alpha_{ij}$ and $\beta_{ij}$ are the angles opposite the edge $(i, j)$. The mass matrix $M$ as an operator computes the local integral of a function's value and is often set, for a mesh with $m$ triangles, as the diagonal matrix

$M_{ii} = \tfrac{1}{3} \sum_{t \ni i} \operatorname{Area}(t),$

assigning each vertex one third of the area of its incident triangles.

Parameterization

Occasionally, we need to flatten a 3D surface onto a flat plane. This process is known as parameterization. The goal is to find coordinates $u$ and $v$ onto which we can map the surface so that distortions are minimized. In this manner, parameterization can be seen as an optimization problem. One of the major applications of mesh parameterization is texture mapping.

Mass springs method

One way to measure the distortion accrued in the mapping process is to measure how much the lengths of the edges in the 2D mapping differ from their lengths in the original 3D surface. In more formal terms, the objective function can be written as

$\min_{\mathbf{u}} \sum_{\{i,j\} \in E} \lVert \mathbf{u}_i - \mathbf{u}_j \rVert^2,$

where $E$ is the set of mesh edges and $V$ is the set of vertices, with $\mathbf{u}_i \in \mathbb{R}^2$ the planar position of vertex $i \in V$. However, optimizing this objective function would result in a solution that maps all of the vertices to a single vertex in the uv-coordinates. Borrowing an idea from graph theory, we apply the Tutte Mapping and restrict the boundary vertices of the mesh onto a unit circle or other convex polygon. Doing so prevents the vertices from collapsing into a single vertex when the mapping is applied. The non-boundary vertices are then positioned at the barycentric interpolation of their neighbours (see the sketch below). The Tutte Mapping, however, still suffers from severe distortions as it attempts to make the edge lengths equal, and hence does not correctly account for the triangle sizes on the actual surface mesh.
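A minimal sketch of the Tutte Mapping described above, using uniform (graph-Laplacian) weights and a dense solver for clarity; the function and argument names are illustrative, and a practical implementation would use sparse matrices:

```python
import numpy as np

def tutte_embedding(n, F, boundary):
    """Flatten a disk-topology triangle mesh to the plane (minimal sketch).

    n        -- number of vertices
    F        -- iterable of triangles (vertex index triples)
    boundary -- ordered vertex indices of the single boundary loop

    Boundary vertices are pinned to the unit circle; each interior vertex
    is placed at the average of its neighbours (uniform Tutte weights),
    giving one linear system shared by the u and v coordinates.
    """
    nbrs = [set() for _ in range(n)]
    for a, b, c in F:
        nbrs[a].update((b, c))
        nbrs[b].update((a, c))
        nbrs[c].update((a, b))

    A = np.zeros((n, n))
    rhs = np.zeros((n, 2))
    bset = set(boundary)
    for k, v in enumerate(boundary):           # pin the boundary loop
        t = 2.0 * np.pi * k / len(boundary)
        A[v, v] = 1.0
        rhs[v] = (np.cos(t), np.sin(t))
    for v in range(n):
        if v in bset:
            continue
        A[v, v] = float(len(nbrs[v]))          # interior vertex sits at the
        for w in nbrs[v]:                      # mean of its neighbours
            A[v, w] = -1.0

    return np.linalg.solve(A, rhs)             # n-by-2 uv coordinates
```

Because every interior row makes a vertex the average of its neighbours while the boundary rows are pinned to a convex polygon, the system is non-singular for a disk-topology mesh, and both uv columns are solved in one call.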
Least-squares conformal mappings

Another way to measure the distortion is to consider the variations on the $u$ and $v$ coordinate functions. The wobbliness and distortion apparent in the mass springs method are due to high variations in the $u$ and $v$ coordinate functions. With this approach, the objective function becomes the Dirichlet energy on $u$ and $v$:

$E_D(u, v) = \int_\Omega \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \, dA.$

There are a few other things to consider. We would like to minimize the angle distortion to preserve orthogonality; that means we would like $\nabla u \cdot \nabla v = 0$. In addition, we would also like the mapping to have proportionally similar-sized regions as the original, which amounts to setting the Jacobian determinant of the $u$ and $v$ coordinate functions to 1. Putting these requirements together, we can augment the Dirichlet energy so that our objective function becomes the least-squares conformal energy

$E_{\mathrm{LSCM}}(u, v) = \int_\Omega \tfrac{1}{2} \lVert \nabla u - (\nabla v)^{\perp} \rVert^2 \, dA,$

where $(\cdot)^{\perp}$ denotes a 90° rotation in the plane. To avoid the problem of having all the vertices mapped to a single point, we also require that the solution to the optimization problem must have a non-zero norm and that it is orthogonal to the trivial solution.

Deformation

Deformation is concerned with transforming some rest shape into a new shape. Typically, these transformations are continuous and do not alter the topology of the shape. Modern mesh-based shape deformation methods satisfy user deformation constraints at handles (selected vertices or regions on the mesh) and propagate these handle deformations to the rest of the shape smoothly and without removing or distorting details. Some common forms of interactive deformation are point-based, skeleton-based, and cage-based. In point-based deformation, a user can apply transformations to a small set of points, called handles, on the shape. Skeleton-based deformation defines a skeleton for the shape, which allows a user to move the bones and rotate the joints. Cage-based deformation requires a cage to be drawn around all or part of a shape so that, when the user manipulates points on the cage, the volume it encloses changes accordingly.

Point-based deformation

Handles provide a sparse set of constraints for the deformation: as the user moves one point, the others must stay in place. A rest surface $\hat{S}$ immersed in $\mathbb{R}^3$ can be described with a mapping $\hat{\mathbf{x}}: \Omega \to \mathbb{R}^3$, where $\Omega$ is a 2D parametric domain. The same can be done with another mapping $\mathbf{x}$ for the transformed surface $S$. Ideally, the transformed shape adds as little distortion as possible to the original. One way to model this distortion is in terms of displacements with a Laplacian-based energy. Applying the Laplace operator to these mappings allows us to measure how the position of a point changes relative to its neighborhood, which keeps the handles smooth. Thus, the energy we would like to minimize can be written as

$E(\mathbf{x}) = \int_\Omega \lVert \Delta (\mathbf{x} - \hat{\mathbf{x}}) \rVert^2 \, dA.$

While this method is translation invariant, it is unable to account for rotations. The As-Rigid-As-Possible deformation scheme applies a rigid transformation $\mathbf{x} \mapsto R_i \mathbf{x} + \mathbf{t}_i$ to each handle $i$, where $R_i$ is a rotation matrix and $\mathbf{t}_i$ is a translation vector. Unfortunately, there is no way to know the rotations in advance, so instead we pick a "best" rotation that minimizes displacements. To achieve local rotation invariance, however, requires a function $R: \Omega \to SO(3)$ which outputs the best rotation for every point on the surface. The resulting energy, then, must be optimized over both $\mathbf{x}$ and $R$:

$E(\mathbf{x}, R) = \int_\Omega \lVert \nabla \mathbf{x} - R \, \nabla \hat{\mathbf{x}} \rVert^2 \, dA.$

Note that the translation vector is not present in the final objective function, because translations have constant gradient.
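Both the smoothing system above and the Laplacian-based deformation energies in this section reduce to sparse, self-adjoint linear solves. A minimal sketch, assuming the cotangent Laplacian $L$ and mass matrix $M$ have already been assembled with the sign convention of the smoothing section:

```python
import scipy.sparse.linalg as spla

def smooth(L, M, f, lam):
    """Solve (M - lam * L) u = M f for the smoothed per-vertex signal u.

    L   -- sparse cotangent Laplacian as defined above (negative
           semi-definite sign convention, so M - lam*L is positive definite)
    M   -- sparse diagonal mass matrix of per-vertex areas
    f   -- noisy signal sampled at the vertices (length-n array)
    lam -- weight trading fidelity to f against smoothness
    """
    return spla.spsolve((M - lam * L).tocsc(), M @ f)
```

The same pattern, with selected rows replaced by handle constraints, yields the linear solves used in point-based deformation.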
Inside-Outside Segmentation

While seemingly trivial, in many cases determining the inside from the outside of a triangle mesh is not an easy problem. In general, given a surface $S$, we pose this problem as determining a function $f(\mathbf{q})$ which will return $1$ if the point $\mathbf{q}$ is inside $S$, and $0$ otherwise. In the simplest case, the shape is closed. In this case, to determine if a point $\mathbf{q}$ is inside or outside the surface, we can cast a ray in any direction from the query point and count the number of times $k$ it passes through the surface. If $\mathbf{q}$ is outside, then the ray must either not pass through $S$ (in which case $k = 0$) or, each time it enters $S$, it must pass through twice, because $S$ is bounded, so any ray entering it must exit. So if $\mathbf{q}$ is outside, $k$ is even. Likewise if $\mathbf{q}$ is inside, the same logic applies as in the previous case, but the ray must intersect one extra time for the first time it leaves $S$. So:

$f(\mathbf{q}) = k \bmod 2.$

Now, oftentimes we cannot guarantee that $S$ is closed. Take the pair of pants example from the top of this article. This mesh clearly has a semantic inside and outside, despite there being holes at the waist and the legs. The naive attempt to solve this problem is to shoot many rays in random directions, and classify $\mathbf{q}$ as being inside if and only if most of the rays intersect the surface an odd number of times. To quantify this, let us say we cast $n$ rays $r_1, \dots, r_n$, the $i$-th of which intersects the surface $k_i$ times. We associate with $\mathbf{q}$ the average parity

$\bar{f}(\mathbf{q}) = \frac{1}{n} \sum_{i=1}^{n} (k_i \bmod 2),$

and classify $\mathbf{q}$ as inside when $\bar{f}(\mathbf{q}) > \tfrac{1}{2}$. In the limit of shooting many, many rays, this method handles open meshes; however, in order to become accurate, far too many rays are required for this method to be computationally practical. Instead, a more robust approach is the Generalized Winding Number. Inspired by the 2D winding number, this approach uses the solid angle subtended at $\mathbf{q}$ by each triangle in the mesh to determine if $\mathbf{q}$ is inside or outside. The value of the Generalized Winding Number at $\mathbf{q}$ is proportional to the sum of the solid angle contributions $\Omega_t(\mathbf{q})$ from each triangle $t$ in the mesh:

$w(\mathbf{q}) = \frac{1}{4\pi} \sum_{t} \Omega_t(\mathbf{q}).$

For a closed mesh, $w(\mathbf{q})$ is equivalent to the characteristic function for the volume represented by $S$. Therefore, we say:

$f(\mathbf{q}) = \begin{cases} 1 & \text{if } w(\mathbf{q}) > \tfrac{1}{2} \\ 0 & \text{otherwise.} \end{cases}$

Because $w$ is a harmonic function (away from the surface), it degrades gracefully, meaning the inside-outside segmentation would not change much if we poked holes in a closed mesh. For this reason, the Generalized Winding Number handles open meshes robustly. The boundary between inside and outside smoothly passes over holes in the mesh. In fact, in the limit, the Generalized Winding Number is equivalent to the ray-casting method as the number of rays goes to infinity.
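A direct (unoptimized) sketch of the generalized winding number, summing the signed solid angle of each triangle via the Van Oosterom–Strackee formula; production implementations such as libigl's accelerate this sum with spatial hierarchies:

```python
import numpy as np

def winding_number(q, V, F):
    """Generalized winding number of query point q w.r.t. mesh (V, F).

    Sums the signed solid angle subtended by each triangle and divides
    by 4*pi.  For a consistently outward-oriented closed mesh, interior
    points give values near 1 and exterior points near 0; open meshes
    give fractional values that vary smoothly across holes.
    """
    w = 0.0
    for f in F:
        a, b, c = V[f[0]] - q, V[f[1]] - q, V[f[2]] - q
        la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
        num = np.dot(a, np.cross(b, c))
        den = (la * lb * lc + np.dot(a, b) * lc
               + np.dot(a, c) * lb + np.dot(b, c) * la)
        w += 2.0 * np.arctan2(num, den)   # signed solid angle of one triangle
    return w / (4.0 * np.pi)

def is_inside(q, V, F, threshold=0.5):
    return winding_number(q, V, F) > threshold
```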
Applications Computer-aided design (CAD) 3D Surface Reconstruction, e.g. range scanners in airport security, autonomous vehicles, medical scanner data reconstruction Image-to-world Registration, e.g. Image-guided surgery Architecture, e.g. creating, reverse engineering Physics simulations Computer games e.g. collision detection Geologic modelling Visualization (graphics) e.g. Information visualizations, mathematical visualizations Texture mapping Modelling biological systems e.g. muscle and bone modelling, real-time hand tracking See also Calculus of variations Computer graphics 3D computer graphics Graphics processing unit (GPU) Computer-aided design (CAD) Digital image Digital image processing Discrete differential geometry Glossary of differential geometry and topology Industrial CT scanning List of interactive geometry software MeshLab Signal processing Digital signal processing Digital signal processor (DSP) Topology References External links Symposium on Geometry Processing Multi-Res Modeling Group, Caltech Mathematical Geometry Processing Group, Free University of Berlin Computer Graphics Group, RWTH Aachen University Polygon Mesh Processing Book Polygon Mesh Processing Library Discrete Differential Geometry: An Applied Introduction, course notes by Keenan Crane et al.
Video tutorials from SGP 2017 grad school libigl geometry processing library CGAL The Computational Geometry Algorithms Library (see section on Polygon Mesh Processing) 3D imaging 3D computer graphics Geometry Computational geometry Differential geometry
Geometry processing
[ "Mathematics" ]
3,457
[ "Computational geometry", "Computational mathematics", "Geometry" ]
2,214,885
https://en.wikipedia.org/wiki/Fosmidomycin
Fosmidomycin is an antibiotic that was originally isolated from culture broths of bacteria of the genus Streptomyces. It specifically inhibits DXP reductoisomerase, a key enzyme in the non-mevalonate pathway of isoprenoid biosynthesis. It is a structural analogue of 2-C-methyl-D-erythrose 4-phosphate. It inhibits the E. coli enzyme with a K_I value of 38 nM, the Mycobacterium tuberculosis enzyme at 80 nM, and the Francisella enzyme at 99 nM. Several mutations in the E. coli DXP reductoisomerase were found to confer resistance to fosmidomycin.

Use in malaria

The discovery of the non-mevalonate pathway in malaria parasites has indicated the use of fosmidomycin and other such inhibitors as antimalarial drugs. Indeed, fosmidomycin has been tested in combination treatment with clindamycin for the treatment of malaria, with favorable results. It has been shown that an increase in copy number of the target enzyme (DXP reductoisomerase) correlates with in vitro fosmidomycin resistance in the lethal malaria parasite Plasmodium falciparum.

References

Antibiotics Oxidoreductase inhibitors Phosphonic acids
Fosmidomycin
[ "Biology" ]
283
[ "Antibiotics", "Biocides", "Biotechnology products" ]
2,214,999
https://en.wikipedia.org/wiki/Plasma%20scaling
The parameters of plasmas, including their spatial and temporal extent, vary by many orders of magnitude. Nevertheless, there are significant similarities in the behaviors of apparently disparate plasmas. Understanding the scaling of plasma behavior is of more than theoretical value: it allows the results of laboratory experiments to be applied to larger natural or artificial plasmas of interest. The situation is similar to testing aircraft or studying natural turbulent flow in wind tunnels with smaller-scale models. Similarity transformations (also called similarity laws) help us work out how plasma properties change in order to retain the same characteristics. A necessary first step is to express the laws governing the system in a nondimensional form. The choice of nondimensional parameters is never unique, and it is usually only possible to achieve by choosing to ignore certain aspects of the system. One dimensionless parameter characterizing a plasma is the ratio of ion to electron mass. Since this number is large, at least 1836, it is commonly taken to be infinite in theoretical analyses; that is, either the electrons are assumed to be massless or the ions are assumed to be infinitely massive. In numerical studies the opposite problem often appears: the computation time would be intractably large if a realistic mass ratio were used, so an artificially small but still rather large value, for example 100, is substituted. To analyze some phenomena, such as lower hybrid oscillations, it is essential to use the proper value.

A commonly used similarity transformation

One commonly used similarity transformation was derived for gas discharges by James Dillon Cobine (1941), Alfred Hans von Engel and Max Steenbeck (1934). Denoting the linear scale factor by x, it can be summarised as follows:
- length and time: scale as x
- particle energy, velocity, and electric potential: invariant
- electric and magnetic fields: scale as x⁻¹
- plasma density and current density: scale as x⁻²
- current: invariant
- electrical conductivity: scales as x⁻¹
- neutral gas density and ionization fraction: scale as x⁻¹

This scaling applies best to plasmas with a relatively low degree of ionization. In such plasmas, the ionization energy of the neutral atoms is an important parameter and establishes an absolute energy scale, which explains many of the scalings in the table:
- Since the masses of electrons and ions cannot be varied, the velocities of the particles are also fixed, as is the speed of sound.
- If velocities are constant, then time scales must be directly proportional to distance scales.
- In order that charged particles falling through an electric potential gain the same energy, the potentials must be invariant, implying that the electric field scales inversely with the distance.
- Assuming that the magnitude of the E-cross-B drift is important and should be invariant, the magnetic field must scale like the electric field, namely inversely with the size. This is also the scaling required by Faraday's law of induction and Ampère's law.
- Assuming that the speed of the Alfvén wave is important and must remain invariant, the ion density (and with it the electron density) must scale with B², that is, inversely with the square of the size. Considering that the temperature is fixed, this also ensures that the ratio of thermal to magnetic energy, known as beta, remains constant. Furthermore, in regions where quasineutrality is violated, this scaling is required by Gauss's law.
- Ampère's law also requires that current density scales inversely with the square of the size, and therefore that current itself is invariant.
- The electrical conductivity is current density divided by electric field and thus scales inversely with the length.
- In a partially ionized plasma, the electrical conductivity is proportional to the electron density and inversely proportional to the neutral gas density, implying that the neutral density must scale inversely with the length, and likewise the ionization fraction scales inversely with the length.

A short sketch applying these scalings follows.
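As an illustration, the transformation can be written as a small helper that rescales a parameter set by a linear size factor k. The dictionary keys and the example values are illustrative, not a standard interface:

```python
def scale_plasma(params, k):
    """Rescale a low-ionization discharge by linear size factor k,
    following the von Engel-Steenbeck similarity transformation."""
    return {
        "length":          params["length"] * k,           # distances ~ k
        "time":            params["time"] * k,             # velocities invariant
        "potential":       params["potential"],            # invariant
        "E_field":         params["E_field"] / k,          # E ~ 1/k
        "B_field":         params["B_field"] / k,          # B ~ 1/k
        "density":         params["density"] / k**2,       # n ~ 1/k^2
        "current":         params["current"],              # invariant
        "conductivity":    params["conductivity"] / k,     # sigma ~ 1/k
        "neutral_density": params["neutral_density"] / k,  # n_0 ~ 1/k
    }

# Illustrative numbers only: scale a 10 cm laboratory discharge up by 10^6.
lab = {"length": 0.1, "time": 1e-6, "potential": 300.0, "E_field": 3000.0,
       "B_field": 0.05, "density": 1e18, "current": 1.0,
       "conductivity": 100.0, "neutral_density": 1e20}
print(scale_plasma(lab, 1e6))
```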
Limitations

While these similarity transformations capture some basic properties of plasmas, not all plasma phenomena scale in this way. Consider, for example, the degree of ionization, which is dimensionless and thus would ideally remain unchanged when the system is scaled. The number of charged particles per unit volume is proportional to the current density, which scales as x⁻², whereas the number of neutral particles per unit volume scales as x⁻¹ in this transformation, so the degree of ionization does not remain unchanged but scales as x⁻¹.

See also Similitude (model) References scaling
Plasma scaling
[ "Physics" ]
841
[ "Plasma theory and modeling", "Plasma physics" ]
2,215,109
https://en.wikipedia.org/wiki/Genome%20Research
Genome Research is a peer-reviewed scientific journal published by Cold Spring Harbor Laboratory Press. Disregarding review journals, Genome Research ranks 2nd in the category 'Genetics and Genomics', after Nature Genetics. The focus of the journal is on research that provides novel insights into the genome biology of all organisms, including advances in genomic medicine. This scope includes genome structure and function, comparative genomics, molecular evolution, genome-scale quantitative and population genetics, proteomics, epigenomics, and systems biology. The journal also features notable gene discoveries and reports of cutting-edge computational biology and high-throughput biology methodologies. New data in these areas are published as research papers, or as methods and resource reports that provide novel information on technologies or tools of interest to a broad readership. The journal was established in 1991 as PCR Methods and Applications and obtained its current title in 1995. According to the Journal Citation Reports, the journal has a 2023 impact factor of 6.4; its impact factor peaked at 14.630 in 2014.

References

External links

Genetic engineering journals Delayed open access journals Academic journals established in 1991 Cold Spring Harbor Laboratory Press academic journals Monthly journals English-language journals 1991 establishments in New York (state) Genomics journals
Genome Research
[ "Engineering", "Biology" ]
250
[ "Genetic engineering journals", "Genetic engineering" ]
2,215,167
https://en.wikipedia.org/wiki/The%20Sex%20Files
The Sex Files was a television program broadcast on Discovery Channel Canada, and on CTV network stations after the watershed owing to its highly explicit discussion of sexuality issues and behavior, covering genetics, reproduction, sexual orientation, puberty, and related topics. As one would expect of a show of its nature, it frequently featured nudity, portrayed in a scientific manner as a visual aid to learning about sexuality and the biology behind it. In Europe the show was called Sex Sense and featured a male narrator. The number of episodes and their titles were the same, but the episodes themselves were slightly different, as the more explicit scenes were replaced. It aired on Discovery Channel Europe. Starting with episode 41 (season 4), it was broadcast in high-definition.

The Sex Files episode list

Season 1
The Erection
Breasts
Orgasm
The Birds and the Bees
Aphrodisiacs
Fantasy
The Affair
Fetish
Gender
Hair
What is Sexy?
Girl Power
Birth Control

Season 2
Sex Drive
The Act
The Vagina
Sexual Signals
Sexual Senses
Sex for One
Puberty
Homosexuality
The Rear End
Sexual Cycle
Love Juices
Circumcision
Intersexed People
Myths

Season 3
Testicles
Celibacy
Behavioral addiction
Better Sex
Sexual Reconstruction
Menopause
Future Sex
Sex & Culture
Sex & Disabilities
Sex vs. Love
Healing Sex
Pregnancy
Sexpertise

Season 4
Sex Toys
Kinky Sex
Rated X
Beyond Monogamy
Pleasure and Pain
The Kiss
The Strip
The Bi Way
Baring it All
No Sex Please
Dirty Jokes
Makin' it Work
More Kinky Sex

Season 5
First Date
Sun, Sand and Sex
The Boob Tube
Top Ten Sexiest Clothes
The Love Glove
Top Ten Sexy Things
Sexercise
Top Ten Sexual Fantasies
Erotic Origins
His Sexy Makeover
Her Sexy Makeover
The Wedding
Making of the Sex Files

Season 6
Virginity
Marriage Makeover
The Brothel
Breasts
Sex and Beauty
Girls on Top
Sex and Aging
Top Ten Myths
Touch
Sex and Rock n' Roll
Satisfaction
Top Ten Archetypes
Sextracurricular Activities
Sexual Secrets

Sexual Secrets uses footage from The Sex Files, with contents rearranged into 1-hour episodes. Sexual Secrets premiered on Life Network and Discovery Health. Episodes 1-18 use footage from the first three seasons of The Sex Files. High-definition episodes are available starting with episode 19, which corresponds to the fourth and later seasons of The Sex Files.

Sexual Secrets episode list
Pleasure Zones (09/27/2002)
Love Potions (10/04/2002)
Sex Appeal (09/20/2002)
Sex Drive (09/27/2002)
Forbidden Fruit (10/04/2002)
The Mating Game (11/21/2003)
Love Juices (10/18/2002)
Doing It Right (11/01/2002)
Love Triangles (11/08/2002)
The Body Beautiful (11/08/2002)
Sex 101 (04/25/2003)
Cheeky Secrets (04/25/2003)
Ultimate Sex (04/11/2003)
Sex, Lies and Abstinence
Constant Cravings (05/02/2003)
Designer Sex (05/16/2003)
Sexpecting (05/23/2003)
Prescription: Sex (05/30/2003)
Bi Now, Tri Later
Dating Secrets (03/05/2005)
Wild and Whipped
Kinky Secrets
Ultimate Makeover
Adults Only
Beach Bums
Ultimate Sex List (12/31/2004)
Under the Hood, and down the drain.
Put It On, Turn It On (02/26/2005)
Sex Ed for Grownups (04/14/2006)
Sex, Lies and Myth (04/21/2006)
Aged to Satisfaction (04/28/2006)
Rock n’ Roll in the Hay (5/05/2006)
Alpha Dames: Sex, Sass and Secrets (05/12/2006)
Beauty and the Breast (5/19/2006)
Pure and not so Simple (05/26/2006)
Sexy Marriage Makeover: From Hot to Not (06/02/2006)

External links Sex Files, The Discovery Channel (Canadian TV channel) original programming Works about sex
The Sex Files
[ "Biology" ]
823
[ "Works about sex", "Behavior", "Sexuality" ]
2,215,177
https://en.wikipedia.org/wiki/Stagnation%20temperature
In thermodynamics and fluid mechanics, stagnation temperature is the temperature at a stagnation point in a fluid flow. At a stagnation point, the speed of the fluid is zero and all of the kinetic energy has been converted to internal energy and is added to the local static enthalpy. In both compressible and incompressible fluid flow, the stagnation temperature equals the total temperature at all points on the streamline leading to the stagnation point. See gas dynamics.

Derivation

Adiabatic

Stagnation temperature can be derived from the first law of thermodynamics. Applying the steady flow energy equation and ignoring the work, heat and gravitational potential energy terms, we have

$h_0 = h + \frac{V^2}{2},$

where $h_0$ is the mass-specific stagnation (or total) enthalpy at a stagnation point, $h$ is the mass-specific static enthalpy and $V$ is the velocity at the point of interest along the stagnation streamline. Substituting for enthalpy by assuming a constant specific heat capacity at constant pressure ($c_p$), we have

$T_0 = T + \frac{V^2}{2 c_p}$  or, equivalently,  $\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2,$

where $c_p$ is the specific heat capacity at constant pressure, $T_0$ is the stagnation (or total) temperature, $T$ is the temperature (or static temperature), $V$ is the velocity and $M$ is the Mach number at the point of interest along the stagnation streamline, and $\gamma$ is the ratio of specific heats ($c_p / c_v$), approximately 1.4 for air at around 300 K (a short numerical sketch follows at the end of this entry).

Flow with heat addition

With heat added to the flow, the relation becomes

$T_0 = T + \frac{V^2}{2 c_p} + \frac{q}{c_p},$

where $q$ is the heat per unit mass added into the system. Strictly speaking, enthalpy is a function of both temperature and density. However, invoking the common assumption of a calorically perfect gas, enthalpy can be converted directly into temperature as given above, which enables one to define a stagnation temperature in terms of the more fundamental property, stagnation enthalpy. Stagnation properties (e.g., stagnation temperature, stagnation pressure) are useful in jet engine performance calculations. In engine operations, stagnation temperature is often called total air temperature. A bimetallic thermocouple is frequently used to measure stagnation temperature, but allowances for thermal radiation must be made.

Solar thermal collectors

Performance testing of solar thermal collectors utilizes the term stagnation temperature to indicate the maximum achievable collector temperature with a stagnant fluid (no motion), an ambient temperature of 30 °C, and incident solar radiation of 1000 W/m². The aforementioned figures are worst-case values that allow collector designers to plan for potential overheat scenarios in the event of collector system malfunctions.
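A small numerical sketch of the adiabatic relation above for a calorically perfect gas (the function name and default value are illustrative):

```python
def stagnation_temperature(T, M, gamma=1.4):
    """Total temperature T0 from static temperature T (kelvin) and Mach
    number M, assuming a calorically perfect gas:
    T0 / T = 1 + (gamma - 1)/2 * M**2."""
    return T * (1.0 + 0.5 * (gamma - 1.0) * M**2)

# Example: air (gamma ~ 1.4) at T = 220 K and Mach 2.
print(stagnation_temperature(220.0, 2.0))  # -> 396.0 K
```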
See also Stagnation point Stagnation pressure Total air temperature References Fluid dynamics
Stagnation temperature
[ "Chemistry", "Engineering" ]
568
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,215,288
https://en.wikipedia.org/wiki/Stagnation%20point
In fluid dynamics, a stagnation point is a point in a flow field where the local velocity of the fluid is zero. The Bernoulli equation shows that the static pressure is highest when the velocity is zero; hence static pressure is at its maximum value at stagnation points, where it equals the stagnation pressure. The Bernoulli equation applicable to incompressible flow shows that the stagnation pressure is equal to the dynamic pressure and static pressure combined. In compressible flows, stagnation pressure is also equal to total pressure, provided that the fluid entering the stagnation point is brought to rest isentropically. A plentiful, albeit surprising, example of such points appears in all but the most extreme cases of fluid dynamics in the form of the "no-slip condition": the assumption that any portion of a flow field lying along some boundary consists of nothing but stagnation points (the question as to whether this assumption reflects reality or is simply a mathematical convenience has been a continuous subject of debate since the principle was first established).

Pressure coefficient

This information can be used to show that the pressure coefficient at a stagnation point is unity (positive one). The pressure coefficient is

$C_p = \frac{p - p_\infty}{q_\infty},$

where $C_p$ is the pressure coefficient, $p$ is the static pressure at the point at which the pressure coefficient is being evaluated, $p_\infty$ is the static pressure at points remote from the body (freestream static pressure), and $q_\infty$ is the dynamic pressure at points remote from the body (freestream dynamic pressure). Stagnation pressure minus freestream static pressure is equal to freestream dynamic pressure; therefore the pressure coefficient at stagnation points is +1 (a short numerical sketch follows at the end of this entry).

Kutta condition

On a streamlined body fully immersed in a potential flow, there are two stagnation points: one near the leading edge and one near the trailing edge. On a body with a sharp point such as the trailing edge of a wing, the Kutta condition specifies that a stagnation point is located at that point. The streamline at a stagnation point is perpendicular to the surface of the body.
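A short sketch verifying the result numerically for incompressible flow; the sea-level input values are illustrative:

```python
def pressure_coefficient(p, p_inf, rho_inf, v_inf):
    """Cp = (p - p_inf) / q_inf, with q_inf = 0.5 * rho_inf * v_inf**2."""
    q_inf = 0.5 * rho_inf * v_inf**2
    return (p - p_inf) / q_inf

# At a stagnation point in incompressible flow, Bernoulli gives
# p = p_inf + q_inf, so Cp evaluates to exactly +1.
p_inf, rho, v = 101325.0, 1.225, 50.0          # illustrative sea-level values
p_stag = p_inf + 0.5 * rho * v**2
print(pressure_coefficient(p_stag, p_inf, rho, v))  # -> 1.0
```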
See also Stagnation point flow Notes Fluid dynamics
Stagnation point
[ "Chemistry", "Engineering" ]
427
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,215,760
https://en.wikipedia.org/wiki/Baby%20oil
Baby oil is an inert oil used to keep skin soft and supple, named for its use on babies and also often used on adults for skincare and massage. The skin of an infant, especially a premature one, is sensitive, thin, and fragile. The near-neutral pH of its surface significantly reduces protection against excessive bacterial growth. The epidermis and dermis are thinner than those of adults and the epidermal barrier is not yet fully developed. Possible consequences include dry skin, infections, peeling, blister formation and poor thermoregulation. The application of different oils to the skin of the newborn is routinely practiced in many countries. In general, these oils are used for cleansing, to maintain the skin's moisture and to protect its surface. Additionally, baby oil is used for the massage of newborns and as an additive in lotions and creams.

Ingredients

Some baby oils are based on mineral oil; others are based on vegetable oils.

Products based on mineral oil

Typical components of baby oils are the highly purified mineral oil products, such as liquid paraffin (INCI name: paraffinum liquidum) and petroleum jelly (INCI name: petrolatum). These compounds are odorless and tasteless, dermatologically tested and approved, non-allergenic, and hydrophobic; they contain no pesticides or herbicides. Preservatives and antioxidants are not necessary because, unlike vegetable oils, paraffins cannot become rancid. Nevertheless, the use of mineral oil in cosmetics has been criticized: natural-cosmetic companies claim that mineral oil causes skin occlusion. Conventional cosmetic manufacturers, dermatologists and cosmetic chemists argue against that, and studies have shown no statistical difference between paraffin oil and vegetable oils in skin penetration and skin occlusion. Moreover, petrolatum-based preparations have been shown to benefit the skin barrier function, even in premature infants.

Products based on vegetable oils

Vegetable oils are produced by plants, with the highest concentrations present in seeds and fruits. About 95% of each vegetable oil is composed of triglycerides. Coconut oil and palm oil contain mainly saturated fatty acids, while other oils largely contain unsaturated fatty acids, for example oleic acid and linoleic acid. Accompanying substances in vegetable oils are, inter alia, phospholipids, glycolipids, sulfolipids, squalene, carotenoids, vitamin E, polyphenols and triterpene alcohols. To avoid rancidity, preservatives or antioxidants are added to baby oils based on vegetable oils. On cosmetic products, these oils are listed according to the International Nomenclature of Cosmetic Ingredients (INCI), e.g.:
- Cocos Nucifera Oil (coconut oil)
- Elaeis Guineensis Oil (palm oil)
- Glycine Soja Oil (soy oil)
- Olea Europaea Oil (olive oil)
- Persea Gratissima Oil (avocado oil)
- Prunus Amygdalus Dulcis Oil (almond oil)
- Shea Butter Glycerides (shea butter)
- Simmondsia chinensis Oil (jojoba oil)
- Helianthus Annuus Seed Oil (sunflower oil)

Vegetable oils are not to be confused with essential oils, although both are sourced from plants.

Usage

Baby oils are largely used as skin care products, and their principal use remains as skin moisturizers. In particular, baby oils find application in the treatment of various skin diseases like atopic dermatitis, xerosis, psoriasis and other eczematous conditions. Another area of use is the oil massage of the newborn, which has been a tradition in India and other Asian countries since time immemorial.
In addition to its principal usage, liquid paraffin-based baby oil is commonly used in the automotive maintenance industry as a fuel for diagnostic smoke test machines, which generate smoke used to detect leaks in engine induction systems, brake systems, manifolds, gaskets and similar sealed systems. When heated to approximately 300 °C in a low-oxygen environment, liquid paraffin creates a thick, visible smoke, which is injected into the sealed system. Leaks in a system can easily be found by observing the place at which smoke escapes.

References

Babycare Oils Skin care
Baby oil
[ "Chemistry" ]
898
[ "Oils", "Carbohydrates" ]
2,215,829
https://en.wikipedia.org/wiki/Key%20Tronic
Key Tronic Corporation (branded Keytronic) is a technology company founded in 1969 by Lewis G. Zirkle. Its core products initially included keyboards, mice and other input devices. Key Tronic currently specializes in PCBA and full product assembly. The company is among the ten largest contract manufacturers providing electronic manufacturing services in the US. The company offers full product design or assembly of a wide variety of household goods and electronic products such as keyboards, printed circuit board assemblies, plastic moldings, thermometers, toilet bowl cleaners, satellite tracking systems, etc.

Keyboards

After the introduction of the IBM PC, Key Tronic began manufacturing keyboards compatible with those computer system units. Most of their keyboards used the 8048 microcontroller to communicate with the computer. Their early keyboards used an Intel 8048 MCU; as the company evolved, they began to use their own 8048-based and 83C51KB-based MCUs. In 1978, Key Tronic Corporation introduced keyboards with capacitive-based switches, one of the first keyboard technologies not to use self-contained switches. There was simply a sponge pad with a conductive-coated Mylar plastic sheet on the switch plunger, and two half-moon trace patterns on the printed circuit board below. As the key was depressed, the capacitance between the plunger pad and the patterns on the PCB below changed, which was detected by integrated circuits (ICs). These keyboards were claimed to have the same reliability as other "solid-state switch" keyboards, such as inductive and Hall-effect designs, while remaining cost-competitive with direct-contact keyboards.

Natural Keyboard

Microsoft ergonomic keyboards, starting from 1994, were originally designed for Microsoft by Ziba Design with assistance and manufacturing by Key Tronic. The Microswitch division of Honeywell, which was responsible for that company's keyboards and was acquired by Key Tronic in early 1994, is also credited with design input. This keyboard also introduced three new keys purposed for Microsoft's upcoming operating system: two Windows logo keys between the Ctrl and Alt keys on each side, and a Menu key between the right Windows and Ctrl keys. Although it was not the first ergonomic keyboard, it was the first widely available sub-$100 offering. The Natural Keyboard sold over 600,000 units per month at its peak. Over 3 million units had been sold by February 1998, when its successor, the Natural Keyboard Elite, was introduced. Like the original Natural Keyboard, the Elite was manufactured by Key Tronic, which also assisted in its development.

ErgoForce

Among modern keyboard enthusiasts, Keytronic is known mostly for its "ErgoForce" technology, in which different keys have rubber domes with different stiffness. The alphabetic keys intended to be struck with the little finger need only 35 grams of force to actuate, while other alphabetic keys need 45 grams. Other keys can be as stiff as 80 grams.

Corporate information

The company, which has been described as a contract manufacturer, was founded in 1969, went public in 1983, and has an estimated 5,000 employees. During 2016–2017, statements and press releases about Cemtrex's proposed acquisition of Key Tronic were released.
References External links Circuitsassembly.com: "Key Tronic named the CIRCUITS ASSEMBLY EMS Company of the Year in 2009" Computer peripheral companies Computer companies of the United States Computer hardware companies Companies based in Spokane, Washington Computer companies established in 1969 Electronics companies established in 1969 Companies listed on the Nasdaq Computer keyboard companies Electronics manufacturing companies
Key Tronic
[ "Technology" ]
730
[ "Computer hardware companies", "Computers" ]
2,215,843
https://en.wikipedia.org/wiki/IEEE%201675-2008
IEEE 1675-2008 was a standard for broadband over power lines developed by the IEEE Standards Association. It provided electric utility companies with a comprehensive standard for safely installing hardware required for Internet access capabilities over their power lines. The standard was published 7 January 2008. The IEEE 1901 standard was another related attempt published in 2011. See also Power-line communication External links IEEE P1675 Official site IEEE standards
IEEE 1675-2008
[ "Technology" ]
80
[ "Computer standards", "IEEE standards", "Computing stubs", "Computer network stubs" ]
2,215,849
https://en.wikipedia.org/wiki/Magnetogravity%20wave
A magnetogravity wave is a type of plasma wave. A magnetogravity wave is an acoustic gravity wave which is associated with fluctuations in the background magnetic field. In this context, gravity wave refers to a classical fluid wave, and is completely unrelated to the relativistic gravitational wave. Examples Magnetogravity waves are found in the corona of the Sun. See also Wave Plasma Magnetosonic wave Helioseismology References Astrophysics Waves in plasmas Waves
Magnetogravity wave
[ "Physics", "Astronomy" ]
99
[ "Waves in plasmas", "Physical phenomena", "Plasma physics", "Plasma phenomena", "Astronomy stubs", "Astrophysics", "Waves", "Astrophysics stubs", "Motion (physics)", "Plasma physics stubs", "Astronomical sub-disciplines" ]
2,215,872
https://en.wikipedia.org/wiki/Z2%20%28computer%29
The Z2 was an electromechanical (mechanical and relay-based) digital computer, completed by Konrad Zuse in 1940. It was an improvement on the Z1, which Zuse had built in his parents' home and whose mechanical memory it reused. In the Z2, he replaced the arithmetic and control logic with 600 electrical relay circuits, weighing over 600 pounds. The Z2 could read 64 words from punch cards. Photographs and plans for the Z2 were destroyed by Allied bombing during World War II. In contrast to the Z1, the Z2 used 16-bit fixed-point arithmetic instead of 22-bit floating point. Zuse presented the Z2 in 1940 to members of the DVL (today DLR), one of whom lent support that helped fund the successor model Z3.

See also Z1 Z3 Z4 References Further reading External links Z2 via Horst Zuse (son) web page Electro-mechanical computers Z02 Mechanical computers Computer-related introductions in 1940 Konrad Zuse German inventions of the Nazi period 1940s computers Computers designed in Germany
Z2 (computer)
[ "Physics", "Technology" ]
220
[ "Machines", "Computer hardware stubs", "Physical systems", "Mechanical computers", "Computing stubs" ]
2,215,876
https://en.wikipedia.org/wiki/Burrow
A burrow is a hole or tunnel excavated into the ground by an animal to construct a space suitable for habitation or temporary refuge, or as a byproduct of locomotion. Burrows provide a form of shelter against predation and exposure to the elements, and can be found in nearly every biome and among various biological interactions. Many animal species are known to form burrows. These species range from small amphipods to very large vertebrate species such as the polar bear. Burrows can be constructed into a wide variety of substrates and can range in complexity from a simple tube a few centimeters long to a complex network of interconnecting tunnels and chambers hundreds or thousands of meters in total length; an example of the latter level of complexity, a well-developed burrow, would be a rabbit warren.

Vertebrate burrows

A large variety of vertebrates construct or use burrows in many types of substrate; burrows can range widely in complexity. Some examples of vertebrate burrowing animals include a number of mammals, amphibians, fish (dragonet and lungfish), reptiles, and birds (including small dinosaurs). Mammals are perhaps best known for burrowing. Mammal species such as insectivores like the mole, and rodents like the gopher, great gerbil and groundhog, are often found to form burrows. Some other mammals that are known to burrow are the platypus, pangolin, pygmy rabbit, armadillo, rat and weasel. Some rabbits, members of the family Leporidae, are well-known burrowers. Some species, such as the groundhog, can construct burrows that occupy a full cubic metre, displacing about of dirt. There is evidence that rodents may construct the most complex burrows of all vertebrate burrowing species. For example, great gerbils live in family groups in extensive burrows, which can be seen on satellite images; even unoccupied burrows can remain visible in the landscape for years. The burrows are distributed regularly, although the occupied burrows appear to be clustered in space. Even carnivorans like the meerkat, and marsupials such as wombats, are burrowers. Wombat burrows are large, and some have been mapped using a drone. The largest burrowing animal is probably the polar bear when it makes its maternity den in snow or earth. Lizards are also known to construct and live in burrows, and may exhibit territorial behaviour over the burrows as well. There is also evidence that a burrow provides protection for the Adelaide pygmy blue-tongue skink (Tiliqua adelaidensis) when fighting, as they may fight from inside their burrows. Burrows by birds are usually made in soft soils; some penguins and other pelagic seabirds are noted for such burrows. The Magellanic penguin is an example, constructing burrows along coastal Patagonian regions of Chile and Argentina. Other burrowing birds are puffins, kingfishers, and bee-eaters. Kangaroo mice construct burrows in fine sand.

Invertebrate burrows

Scabies mites construct their burrows in the skin of the infested animal or human. Termites and some wasps construct burrows in the soil and wood. Ants construct burrows in the soil. Some sea urchins and clams can burrow into rock. The burrows produced by invertebrate animals can be filled actively or passively. Dwelling burrows which remain open during occupation by an organism are filled passively, by gravity rather than by the organism. Actively filled burrows, on the other hand, are filled with material by the burrowing organism itself.
The establishment of an invertebrate burrow often involves the soaking of surrounding sediment in mucus to prevent collapse and to seal off water flow. Examples of burrowing invertebrates are insects, spiders, sea urchins, crustaceans, clams and worms.

Excavators, modifiers, and occupants

Burrowing animals can be divided into three categories: primary excavators, secondary modifiers and simple occupants. Primary excavators are the animals that originally dig and construct the burrow, and are generally very strong. Some animals considered to be primary excavators are the prairie dog, aardvark and wombat. Pygmy gerbils are an example of secondary modifiers, as they do not build an original burrow, but will live inside a burrow made by other animals and improve or change some aspects of the burrow for their own purpose. The third category, simple occupants, neither build nor modify the burrow but simply live inside or use it for their own purpose. Some species of bird make use of burrows built by tortoises, which is an example of simple occupancy. These animals can also be referred to as commensals.

Protection

Some species may spend the majority of their days inside a burrow, indicating it must offer good conditions and provide some benefit to the animal. Burrows may be used by certain species as protection from harsh conditions or from predators. Burrows may be found facing the direction of sunlight or away from the direction of cold wind; this could help with heat retention and insulation, providing protection from the temperatures and conditions outside. Insects such as the earwig may construct burrows to live in during winter and use them for physical protection. Some species will also use burrows to store and protect food. This benefits the animal, as it can keep food away from competitors, and it allows the animal to keep a good stock of food inside the burrow to ride out extreme weather conditions or seasons when certain food sources may be unavailable. Additionally, burrows can protect animals that have just had their young, providing good conditions and safety for vulnerable newborn animals. Burrows may also provide shelter to animals residing in areas frequently destroyed by fire, as animals deep underground in a burrow may be kept dry, safe and at a stable temperature.

Fossil burrows

Burrows are also commonly preserved in the fossil record as burrow fossils, a type of trace fossil.

See also Holt Maternity den Sett - a network of badger tunnels. Spreite Subterranean fauna References Ethology Shelters built or used by animals
Burrow
[ "Biology" ]
1,278
[ "Behavioural sciences", "Ethology", "Behavior", "Shelters built or used by animals" ]
2,216,044
https://en.wikipedia.org/wiki/Nuclear%20reaction%20analysis
Nuclear reaction analysis (NRA) is a nuclear method of nuclear spectroscopy in materials science used to obtain concentration-versus-depth distributions for certain target chemical elements in a solid thin film.

Mechanism of NRA

If irradiated with select projectile nuclei at kinetic energies $E_{\mathrm{kin}}$, target solid thin-film chemical elements can undergo a nuclear reaction under resonance conditions at a sharply defined resonance energy. The reaction product is usually a nucleus in an excited state which immediately decays, emitting ionizing radiation. To obtain depth information, the initial kinetic energy of the projectile nucleus (which has to exceed the resonance energy) and its stopping power (energy loss per distance traveled) in the sample must be known. To contribute to the nuclear reaction, the projectile nuclei have to slow down in the sample to reach the resonance energy. Thus each initial kinetic energy corresponds to a depth in the sample where the reaction occurs (the higher the energy, the deeper the reaction).

NRA profiling of hydrogen

For example, a commonly used reaction to profile hydrogen with an energetic 15N ion beam is 15N + 1H → 12C + α + γ (4.43 MeV), with a sharp resonance in the reaction cross section at 6.385 MeV that is only 1.8 keV wide. Since the incident 15N ion loses energy along its trajectory in the material, it must have an energy higher than the resonance energy to induce the nuclear reaction with hydrogen nuclei deeper in the target. This reaction is usually written 1H(15N,αγ)12C. It is inelastic because the Q-value is not zero (in this case it is 4.965 MeV). Rutherford backscattering (RBS) reactions are elastic (Q = 0), with the interaction (scattering) cross-section σ given by the famous formula derived by Lord Rutherford in 1911. But non-Rutherford cross-sections (so-called EBS, elastic backscattering spectrometry) can also be resonant: for example, the 16O(α,α)16O reaction has a strong and very useful resonance at 3038.1 ± 1.3 keV. In the 1H(15N,αγ)12C reaction (or indeed the 15N(p,αγ)12C inverse reaction), the energetic emitted γ ray is characteristic of the reaction, and the number detected at any incident energy is proportional to the hydrogen concentration at the respective depth in the sample. Due to the narrow peak in the reaction cross section, primarily ions of the resonance energy undergo a nuclear reaction. Thus, information on the hydrogen distribution can be obtained straightforwardly by varying the 15N incident beam energy (a small numerical sketch follows at the end of this entry). Hydrogen is an element inaccessible to Rutherford backscattering spectrometry, since nothing can backscatter from H (all other atoms are heavier than hydrogen). It is instead often analysed by elastic recoil detection.

Non-resonant NRA

NRA can also be used non-resonantly (of course, RBS is non-resonant). For example, deuterium can easily be profiled with a 3He beam without changing the incident energy, by using the 3He + D = α + p + 18.353 MeV reaction, usually written 2H(3He,p)α. The energy of the detected fast proton depends on the depth of the deuterium atom in the sample.
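A toy sketch of the energy-to-depth conversion described above, assuming a constant stopping power; a real analysis would use tabulated, energy-dependent stopping powers (e.g., from SRIM), and the numeric default here is a placeholder rather than a measured value:

```python
def probe_depth_nm(E_beam_keV, E_res_keV=6385.0, stopping_keV_per_nm=1.5):
    """Depth (nm) at which a 15N beam slows to the 1H(15N,ag)12C resonance.

    E_beam_keV          -- incident 15N kinetic energy
    E_res_keV           -- resonance energy (6.385 MeV for this reaction)
    stopping_keV_per_nm -- assumed constant stopping power of the sample
                           (placeholder value, not a measured constant)
    """
    if E_beam_keV < E_res_keV:
        return None  # beam never reaches the resonance energy in the sample
    return (E_beam_keV - E_res_keV) / stopping_keV_per_nm

# Raising the beam energy above the resonance probes deeper layers:
print(probe_depth_nm(6500.0))  # -> ~76.7 nm below the surface
```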
See also Rutherford backscattering spectrometry (RBS) References External links Details of many known reactions are hosted by the IAEA at http://www-nds.iaea.org/ibandl/. The energy released in nuclear reactions (the "Q value") can easily be calculated (from E=mc²): see http://nucleardata.nuclear.lu.se/database/masses/. NRA at JSI Microanalytical center in Ljubljana, Slovenia Materials science Surface science
Nuclear reaction analysis
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
808
[ "Ion beam methods", "Applied and interdisciplinary physics", "Materials science", "Surface science", "Condensed matter physics", "nan" ]
2,216,121
https://en.wikipedia.org/wiki/Asp%20%28rocket%29
ASP (Atmospheric Sounding Projectile) is the designation of an American sounding rocket family. ASP served a variety of purposes, including research into hypersonic speed and propelling rocket sleds. In NASA service it was flown from a number of locations as a sounding rocket. NASA's selection of the Apache and Javelin rockets for the jobs performed by ASP led to its retirement.

Versions

ASP-I

ASP-I was used to sample nuclear explosions and the resultant clouds. The ASP was the fastest single-stage sounding rocket when developed. The Asp was manufactured by the Cooper Development Corporation of California; the solid propellant motor was made by the Grand Central Rocket Company. The ASP-I had a payload capacity of 11 kg, a maximum flight altitude of 110 km, a takeoff thrust of 42.00 kN, a mass of 111 kg, a diameter of 0.17 m, a length of 3.68 m and a fin span of 0.51 m. ASP-I was launched 30 times from December 1, 1955 to June 14, 1962, from White Sands, Cape Canaveral, Point Mugu, Bikini, China Lake, the Mercury site and Tonopah.

ASP-II

ASP-II (Cleansweep I) had a slightly lower total impulse and a significantly shorter burn time (3.6 seconds vs. 5.6). Cleansweep I was used to collect particulate air samples from nuclear explosions at the Nevada Test Range. It was launched once, in 1959, from Tonopah, with an apogee of 30 km.

ASP-III

ASP-III (Cleansweep II) had slightly lower specifications. It was also modified for use in the South Pacific, with two or four LOKI rockets strapped onto the basic ASP. Results were less than expected and ASP-III was a failure. It was launched four times from White Sands between 1957 and 1958.

ASP-IV

ASP-IV used an ASP motor case with B.F. Goodrich E-107M propellant. It was launched twice, on May 18 and 19, 1960, from Wallops Island to an apogee of 80 km.

ASP-V

ASP-V was to utilize a polysulfide propellant, but erratic burning and the resultant burn-through proved insoluble, and ASP-V was canceled.

ASPAN

ASP was combined with a Nike booster to create the ASPAN, which exceeded the performance of the Nike-Cajun and Nike Deacon.

Pogo-Hi-III

This is a single-stage vehicle using an ASP motor, intended as a high-altitude radar target. It was launched three times from White Sands in 1959 to an apogee of 60 km.

ASCAMP

When ASP-I was combined with a one-fifth scale Sergeant, it was designated ASCAMP (also known as Nike-ASP). ASCAMP had to be launched from a remotely controlled launcher due to the necessary closeness to the nuclear blast. It was launched 27 times in August 1958 from Johnston Island to an apogee of 100 km.

References Books Sounding rockets of the United States
Asp (rocket)
[ "Astronomy" ]
656
[ "Rocketry stubs", "Astronomy stubs" ]
2,216,289
https://en.wikipedia.org/wiki/Salami%20slicing%20tactics
Salami slicing tactics, also known as salami slicing, salami tactics, the salami-slice strategy, or salami attacks, is the practice of using a series of many small actions to produce a much larger action or result that would be difficult or unlawful to perform all at once. Salami tactics are used extensively in geopolitics and war games as a method of achieving goals gradually without provoking significant escalation. In finance, the term "salami attack" describes schemes in which large sums are fraudulently accumulated through repeated transfers of imperceptibly small sums of money.

Financial schemes

Computerized banking systems make it possible to repeatedly divert tiny amounts of money, typically arising from rounding, to a beneficiary's account. This general concept is used in popular automatic-savings apps. It has also been said to underlie fraudulent schemes whereby bank transactions calculated to the nearest smallest unit of currency leave unaccounted-for fractions of a unit, which fraudsters divert into other accounts. Snopes in 2001 dismissed a popular account of such an embezzlement scheme as a legend.

In Los Angeles, in October 1998, district attorneys charged four men with fraud for allegedly installing computer chips in gasoline pumps that cheated consumers by slightly overstating the amounts pumped. The fraud was noticed by consumers who found that they had been charged for volumes of gasoline greater than their cars' gas tank capacities.

In 2008, a man was arrested for fraudulently creating 58,000 accounts, which he used to collect money through verification deposits from online brokerage firms, a few cents at a time.

In 1996, a fare box serviceman in Edmonton, Canada, was sentenced to four years' imprisonment for stealing coins from the city's transit agency fare boxes. Over 13 years, he stole 37 tonnes of coins, with a face value of nearly million, using a magnet to lift the coins (made primarily of steel or nickel at the time) out of the fare boxes one at a time. In Buffalo, New York, a fare box serviceman stole more than US$200,000 in quarters from the local transit agency over an eight-year period stretching from 2003 to 2011, and was sentenced to thirty months in prison.

Politics

The first use of salami slicing in politics, and the original Hungarian term (szalámitaktika), is commonly attributed to the Stalinist dictator Mátyás Rákosi, who used it to describe the actions of the Hungarian Communist Party in its drive for power in post-war Hungary.

China's salami slice strategy

The European Parliamentary Research Service has accused China of using the salami slice strategy to gradually increase its presence in the South China Sea.

Salami slicing in scientific publishing

Scientists are often evaluated by the number of papers they publish and similar criteria. In this context, salami slicing refers to "fragmenting single coherent bodies of research into as many publications as possible". Because a fragment that is too small may be hard to publish, this includes forming minimal publishable items. Research scattered across many sources is harder to collect, digest, understand and evaluate, and the practice leads to repetitive descriptions of context, bibliographies, and so on. Because it imposes these costs on the scientific dissemination process, it is often considered bad practice or even unethical. Some authors have divided their research to extreme proportions. Salami slicing "can result in a distortion of the literature by leading unsuspecting readers to believe that data presented in each salami slice (i.e., journal article) is derived from a different subject sample". Salami slicing is considered a type of scientific misconduct.

Cultural references

Film

In the 2016 film Arrival, Agent Halpern mentions a Hungarian word meaning to eliminate your enemies one by one; this is thought to allude to szalámitaktika. Salami slicing has also played a key role in the plots of several films, including Hackers, Superman III, and Office Space.

Television

In a 1972 episode of the TV series M*A*S*H, Radar attempts to ship an entire Jeep home from Korea one piece at a time. Hawkeye comments that his mailman "would have a retroactive hernia" if he found out.

Music

Johnny Cash's "One Piece at a Time" has a similar plot to the aforementioned M*A*S*H episode, but with a Cadillac made up of parts spanning model years 1949 through 1973.

See also

Creeping normality
Death by a thousand cuts
Defeat in detail
Gradualism
Goodhart's law
Slippery slope
Structuring
Deterrence theory

References

Management
Finance fraud
Negotiation
Metaphors referring to food and drink
Scientific misconduct
Salami slicing tactics
[ "Technology" ]
949
[ "Scientific misconduct", "Ethics of science and technology" ]
2,216,518
https://en.wikipedia.org/wiki/Phosgene%20oxime
Phosgene oxime, or CX, is an organic compound with the formula Cl2C=NOH. It is a potent chemical weapon, specifically a nettle agent. The compound itself is a colorless solid, but impure samples are often yellowish liquids. It has a strong, disagreeable and irritating odor. It is used as a reagent in organic chemistry.

Preparation and reactions

Phosgene oxime can be prepared by reduction of chloropicrin using a combination of tin metal and hydrochloric acid as the source of the active hydrogen reducing agent:

Cl3CNO2 + 4 [H] → Cl2C=NOH + HCl + H2O

The observation of a transient violet color in the reaction suggests intermediate formation of trichloronitrosomethane (Cl3CNO). Early preparations, using stannous chloride as the reductant, also started with chloropicrin.

The compound is electrophilic and thus sensitive to nucleophiles, including bases, which destroy it. Phosgene oxime has been used to prepare heterocycles that contain N-O bonds, such as isoxazoles. Dehydrohalogenation upon contact with mercuric oxide generates chlorine fulminate, a reactive nitrile oxide.

Toxicity

Phosgene oxime is classified as a vesicant even though it does not produce blisters. It is toxic by inhalation, ingestion, or skin contact. The effects of the poisoning occur almost immediately. No antidote for phosgene oxime poisoning is known; treatment is generally supportive. Typical physical symptoms of CX exposure are as follows:

Skin: Blanching surrounded by an erythematous ring can be observed within 30 seconds of exposure. A wheal develops on exposed skin within 30 minutes. The original blanched area acquires a brown pigmentation by 24 hours. An eschar forms in the pigmented area by 1 week and sloughs after approximately 3 weeks. Initially, the effects of CX can easily be misidentified as mustard gas exposure; however, skin irritation from CX sets in a great deal faster than from mustard gas, which typically takes several hours or more to cause skin irritation.

Eyes: Eye examination typically demonstrates conjunctivitis, lacrimation, lid edema, and blepharospasm after even minute exposures. More severe exposures can result in keratitis, iritis, corneal perforation, and blindness.

Respiratory: Irritation of the mucous membranes may be observed on examination of the oropharynx and nose. Evidence of pulmonary edema, including rales and wheezes, may be noted on auscultation. Pulmonary thromboses are prominent features of severe CX exposure.

Gastrointestinal: Some animal data suggest that CX may cause hemorrhagic inflammatory changes in the GI tract.

References

External links

EMedicine: Urticants, Phosgene Oxime
Center for the Study of Bioterrorism: Phosgene Oxime
Centers for Disease Control: Facts About Phosgene Oxime
Virtual Naval Hospital: Phosgene Oxime

Chemical weapons
Organochlorides
Oximes
Phosgene oxime
[ "Chemistry", "Biology" ]
659
[ "Chemical accident", "Chemical weapons", "Functional groups", "Oximes", "Biochemistry" ]
2,216,649
https://en.wikipedia.org/wiki/SRM%20firmware
The SRM firmware (also called the SRM console) is the boot firmware written by Digital Equipment Corporation (DEC) for computer systems based on the DEC Alpha microprocessor. SRM are the initials of (Alpha) System Reference Manual, the publication detailing the Alpha AXP architecture, which specified various features of the SRM firmware.

The SRM console was initially designed to boot DEC's OSF/1 AXP (later called Digital UNIX and finally Tru64 UNIX) and OpenVMS operating systems, although various other operating systems (such as Linux, NetBSD, OpenBSD, and FreeBSD) were also written to boot from the SRM console. The third proprietary operating system published for the Alpha AXP architecture, Microsoft Windows NT, did not boot from SRM; instead, Windows booted from the ARC (multi-platform "Advanced RISC Computing") boot firmware. (ARC is also known as AlphaBIOS.)

On many Alpha computer systems, for example the Digital Personal Workstation, both SRM and ARC could be loaded onto the EEPROM which held the boot firmware. However, on some smaller systems (or large systems which were never intended to boot Windows), only one of the two boot firmware variants could fit onto the EEPROM at a time. For example, the flash EEPROM of certain models of the DEC Multia, a small personal Alpha AXP workstation designed to run Windows NT, was only large enough to hold a single firmware.

The SRM console can display either on a graphical adapter (such as a PCI VGA card) or, if no graphical console and/or local keyboard is detected, on a serial connection to a VT100-compatible terminal. In this way the SRM console is similar to the Open Firmware used in SPARC and Apple PowerMac computers, for example.

Upon system initialization, an Alpha AXP computer set to boot from the SRM console displays a short report of the firmware's software version and presents the "three chevron prompt" consisting of three greater-than signs:

Digital Personal WorkStation 433au Console V7.2-1 Mar 6 2000 14:47:02
>>>

Several commands are available by typing them at the prompt, and a list of possible commands is available by entering the command help or man at the prompt. Various system variables, such as automatic boot settings and parameter strings to be passed to an operating system, may also be set from the SRM prompt.

The SRM firmware contains drivers for booting from media including SCSI hard disks and CD-ROM drives attached to a supported SCSI adapter and various IDE ATA and ATAPI devices; network booting via BOOTP or DHCP is possible with supported network adapters. When an appropriate disk boot device is available, the SRM console locates and loads the target primary bootstrap image using information written in the target disk's boot block, in logical block zero. The boot block contains the disk location and block size of the target primary bootstrap image file; SRM loads that image into memory and then transfers control to it.

External links

Red Hat Documentation: The SRM Firmware Console
SRM Console Reference
Alpha SRM Console for Alpha Microprocessor Motherboards User's Guide
What is SRM?
SRM Firmware Howto
SRM Firmware

OpenVMS
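As an illustration, a typical sequence for making an Alpha boot an installed operating system from its first SCSI disk might look like the following. This is a hedged sketch: the device name dka0 and the flags value are machine- and OS-specific assumptions, not universal settings.

>>> show device            (list the boot devices SRM has detected)
>>> set bootdef_dev dka0   (make the first SCSI disk the default boot device)
>>> set auto_action boot   (boot automatically at power-up instead of halting)
>>> boot dka0 -flags 0     (boot immediately from dka0, passing flags to the loader)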
SRM firmware
[ "Technology" ]
719
[ "OpenVMS", "Computing platforms" ]
2,216,678
https://en.wikipedia.org/wiki/Term%20algebra
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra.

From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category.

A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them. An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand.

Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.

Universal algebra

A type τ is a set of function symbols, each having an associated arity (i.e. number of inputs). For any non-negative integer n, let τ_n denote the function symbols in τ of arity n. A constant is a function symbol of arity 0.

Let τ be a type, and let X be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume X and τ are disjoint.) Then the set T of terms of type τ over X is the set of all well-formed strings that can be constructed using the variable symbols of X and the constants and operations of τ. Formally, T is the smallest set such that:

X ∪ τ_0 ⊆ T (each variable symbol from X is a term in T, and so is each constant symbol from τ_0).
For all n ≥ 1, all function symbols f ∈ τ_n, and all terms t_1, …, t_n ∈ T, we have the string f t_1 … t_n ∈ T (given n terms, the application of an n-ary function symbol to them represents again a term).

The term algebra 𝒯 of type τ over X is, in summary, the algebra of type τ that maps each expression to its string representation. Formally, 𝒯 is defined as follows:

The domain of 𝒯 is T.
For each nullary function symbol c ∈ τ_0, c^𝒯 is defined as the string c.
For all n ≥ 1, each n-ary function symbol f ∈ τ_n, and elements t_1, …, t_n of the domain, f^𝒯(t_1, …, t_n) is defined as the string f t_1 … t_n.

A term algebra is called absolutely free because for any algebra A of type τ, and for any function h: X → A, h extends to a unique homomorphism h*: 𝒯 → A, which simply evaluates each term t ∈ T to its corresponding value h*(t) ∈ A. Formally, for each t ∈ T:

If t = x ∈ X, then h*(t) = h(x).
If t = c ∈ τ_0, then h*(t) = c^A.
If t = f t_1 … t_n where f ∈ τ_n and n ≥ 1, then h*(t) = f^A(h*(t_1), …, h*(t_n)).

Example

As an example, a type τ inspired by integer arithmetic can be defined by τ_0 = {0, 1}, τ_2 = {+, ⋅}, and τ_n = ∅ for each n ∉ {0, 2}. The best-known algebra of type τ has the natural numbers as its domain and interprets 0, 1, +, and ⋅ in the usual way; we refer to it as N.

For the example variable set X = {x, y}, we are going to investigate the term algebra 𝒯 of type τ over X.

First, the set T of terms of type τ over X is considered; its members may be hard to recognize due to their uncommon syntactic form. We have e.g.

x ∈ T, since x is a variable symbol;
1 ∈ T, since 1 is a constant symbol;
hence +x1 ∈ T, since + is a 2-ary function symbol;
hence, in turn, ⋅+x1y ∈ T, since ⋅ is a 2-ary function symbol.

More generally, each string in T corresponds to a mathematical expression built from the admitted symbols and written in Polish prefix notation; for example, the term ⋅+x1y corresponds to the expression (x+1)⋅y in usual infix notation. No parentheses are needed to avoid ambiguities in Polish notation; e.g. the infix expression x+(1⋅y) corresponds to the term +x⋅1y. To give some counter-examples, we have e.g.

2 ∉ T, since 2 is neither an admitted variable symbol nor an admitted constant symbol;
z ∉ T, for the same reason;
+x ∉ T, since + is a 2-ary function symbol, but is used here with only one argument term (viz. x).

Now that the term set T is established, we consider the term algebra 𝒯 of type τ over X. This algebra uses T as its domain, on which addition and multiplication need to be defined. The addition function +^𝒯 takes two terms p and q and returns the term +pq; similarly, the multiplication function ⋅^𝒯 maps given terms p and q to the term ⋅pq. For example, ⋅^𝒯(+x1, y) evaluates to the term ⋅+x1y. Informally, the operations +^𝒯 and ⋅^𝒯 are both "sluggards" in that they just record what computation should be done, rather than doing it.

As an example of the unique extendability of a homomorphism, consider h: X → N defined by h(x) = 2 and h(y) = 3 (any choice of values works equally well). Informally, h defines an assignment of values to variable symbols, and once this is done, every term from T can be evaluated in a unique way in N. For example,

h*(⋅+x1y) = h*(+x1) ⋅ h*(y) = (h*(x) + h*(1)) ⋅ h*(y) = (2 + 1) ⋅ 3 = 9.

In a similar way, one obtains h*(+x⋅1y) = 2 + (1 ⋅ 3) = 5.

Herbrand base

The signature σ of a language is a triple <O, F, P> consisting of the alphabet of constants O, the function symbols F, and the predicates P. The Herbrand base of a signature σ consists of all ground atoms of σ: of all formulas of the form R(t_1, ..., t_n), where t_1, ..., t_n are terms containing no variables (i.e. elements of the Herbrand universe) and R is an n-ary relation symbol (i.e. predicate). In the case of logic with equality, it also contains all equations of the form t_1 = t_2, where t_1 and t_2 contain no variables.

Decidability

Term algebras can be shown decidable using quantifier elimination. The complexity of the decision problem is in NONELEMENTARY because binary constructors are injective and thus pairing functions.

See also

Answer-set programming
Clone (algebra)
Domain of discourse / Universe (mathematics)
Rabin's tree theorem (the monadic theory of the infinite complete binary tree is decidable)
Initial algebra
Abstract data type
Term rewriting system

References

Further reading

Joel Berman (2005). "The structure of free algebras". In Structural Theory of Automata, Semigroups, and Universal Algebra. Springer. pp. 47–76.

External links

Universal algebra
Mathematical logic
Free algebraic structures
Unification (computer science)
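The "sluggard" term operations and the evaluating homomorphism translate directly into code. The following is a minimal sketch under the example type above (with * standing for ⋅); the prefix-string representation and the helper names are choices of this example, not standard library functions.

ARITY = {"0": 0, "1": 0, "+": 2, "*": 2}   # the type: tau_0 = {0,1}, tau_2 = {+,*}
VARS = {"x", "y"}                          # the variable set X

def is_term(s):
    """Membership test for T via a single pass of Polish-prefix parsing."""
    need = 1                               # number of terms still to be read
    for c in s:
        if need == 0:
            return False                   # symbols left over after a full term
        if c in VARS:
            need -= 1                      # a variable is a complete term
        elif c in ARITY:
            need += ARITY[c] - 1           # f fills one slot, opens ARITY[f] more
        else:
            return False                   # symbol not admitted by the signature
    return need == 0

def evaluate(s, h):
    """The unique homomorphism h* extending h, evaluating a prefix term in N."""
    def go(i):
        c = s[i]
        if c in VARS:
            return h[c], i + 1
        if ARITY[c] == 0:
            return int(c), i + 1           # constants 0 and 1 denote themselves
        a, j = go(i + 1)                   # first argument subterm
        b, k = go(j)                       # second argument subterm
        return (a + b if c == "+" else a * b), k
    value, _ = go(0)
    return value

print(is_term("*+x1y"), is_term("+x"))     # True False
print(evaluate("*+x1y", {"x": 2, "y": 3})) # (2 + 1) * 3 = 9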
Term algebra
[ "Mathematics" ]
1,356
[ "Mathematical structures", "Automated theorem proving", "Unification (computer science)", "Mathematical logic", "Mathematical objects", "Universal algebra", "Equations", "Fields of abstract algebra", "Category theory", "Algebraic structures", "Free algebraic structures" ]
2,216,689
https://en.wikipedia.org/wiki/Controlled%20explosion
A controlled explosion is the deliberate detonation of an explosive, generally as a means of demolishing a building or destroying a second improvised or manufactured explosive device.

Demolition

During demolition, controlled explosions can be used to collapse the exterior walls of a building inward, limiting damage to neighboring buildings. This requires careful placement of explosive charges on supports and load-bearing walls.

Bomb disposal

Controlled explosions are used by bomb disposal teams to detonate a bomb at a specific time in order to limit damage to nearby structures, vehicles, and people. The bomb may be moved to a clear location away from bystanders or buildings before detonation unless it has (or is suspected of having) an anti-handling mechanism. If an anti-handling mechanism is found, the bomb may be left in place while the site around it is cleared. If it is not possible to clear the area or move the bomb, a containment vessel may be used to limit the damage from the explosion or to transport the device to a cleared area. Once the area is clear, a second explosive or shaped charge is placed on the device by explosive ordnance disposal (EOD or police bomb disposal) personnel, either manually or with a bomb disposal robot, and detonated remotely. The controlled explosion should also detonate or disable the suspected bomb.

See also

Bomb disposal
Demolition
DEMIRA
EOD

References

External links

BBC News summary of "controlled explosions"

Law enforcement techniques
Bomb disposal
Controlled explosion
[ "Chemistry" ]
286
[ "Explosion protection", "Bomb disposal" ]
2,216,711
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20constant
In mathematics, Apéry's constant is the infinite sum of the reciprocals of the positive integers, cubed. That is, it is defined as the number

ζ(3) = Σ_{n=1}^∞ 1/n³ = 1/1³ + 1/2³ + 1/3³ + ⋯,

where ζ is the Riemann zeta function. It has an approximate value of 1.2020569. It is named after Roger Apéry, who proved that it is an irrational number.

Uses

Apéry's constant arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio computed using quantum electrodynamics. It also arises in the analysis of random minimum spanning trees and, in conjunction with the gamma function, when solving certain integrals involving exponential functions in a quotient, which appear occasionally in physics, for instance when evaluating the two-dimensional case of the Debye model and the Stefan–Boltzmann law.

The reciprocal of ζ(3) (0.8319073725807...) is the probability that any three positive integers, chosen at random, will be relatively prime, in the sense that as N approaches infinity, the probability that three positive integers less than N chosen uniformly at random will not share a common prime factor approaches this value. (The probability for n positive integers is 1/ζ(n).) In the same sense, it is the probability that a positive integer chosen at random will not be evenly divisible by the cube of an integer greater than one. (The probability of not being divisible by an n-th power is 1/ζ(n).)

Properties

ζ(3) was named Apéry's constant after the French mathematician Roger Apéry, who proved in 1978 that it is an irrational number. This result is known as Apéry's theorem. The original proof is complex and hard to grasp, and simpler proofs were found later. Beukers's simplified irrationality proof involves approximating the integrand of the known triple integral for ζ(3),

ζ(3) = ∫₀¹ ∫₀¹ ∫₀¹ 1/(1 − xyz) dx dy dz,

by the Legendre polynomials. In particular, van der Poorten's article chronicles this approach, noting that the integrals against the Legendre polynomials P_n produce linear combinations of 1 and ζ(3) whose coefficient sequences are integers or almost integers.

Many people have tried to extend Apéry's proof that ζ(3) is irrational to other values of the Riemann zeta function with odd arguments. Although this has so far not produced any results on specific numbers, it is known that infinitely many of the odd zeta constants are irrational. In particular, at least one of ζ(5), ζ(7), ζ(9), and ζ(11) must be irrational.

Apéry's constant has not yet been proved transcendental, but it is known to be an algebraic period. This follows immediately from the form of its triple integral.

Series representations

Classical

In addition to the fundamental series

ζ(3) = Σ_{k=1}^∞ 1/k³,

Leonhard Euler gave a series representation in 1772, which was subsequently rediscovered several times.

Fast convergence

Since the 19th century, a number of mathematicians have found convergence acceleration series for calculating decimal places of ζ(3). Since the 1990s, this search has focused on computationally efficient series with fast convergence rates (see section "Known digits").

The following series representation was found by A. A. Markov in 1890, rediscovered by Hjortnaes in 1953, and rediscovered once more and widely advertised by Apéry in 1979:

ζ(3) = (5/2) Σ_{n=1}^∞ (−1)^(n−1) / (n³ C(2n,n)),

where C(2n,n) denotes the central binomial coefficient. Subsequent series representations give (asymptotically) 1.43, 3.01, and even 5.04 new correct decimal places per term; the last of these has been used to calculate Apéry's constant with several million correct decimal places.
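As a quick numerical check (a sketch, not part of the article's sources; the term count is chosen ad hoc), the Markov/Apéry series above can be summed in exact rational arithmetic, and a few dozen terms already determine more digits than a float can hold:

from fractions import Fraction
from math import comb

def apery_zeta3(terms):
    """Partial sum of zeta(3) = (5/2) * sum (-1)^(n-1) / (n^3 * C(2n, n))."""
    s = Fraction(0)
    for n in range(1, terms + 1):
        s += Fraction((-1) ** (n - 1), n**3 * comb(2 * n, n))
    return Fraction(5, 2) * s

print(float(apery_zeta3(40)))   # 1.2020569031595942..., Apery's constant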
A further series representation of this family gives (asymptotically) 3.92 new correct decimal places per term.

Digit by digit

In 1998, Broadhurst gave a series representation that allows arbitrary binary digits to be computed, and thus allows the constant to be obtained by a spigot algorithm in nearly linear time and logarithmic space.

Thue-Morse sequence

A representation as a sum over the Thue–Morse sequence (with t_n denoting its nth term) was found by Tóth in 2022. In fact, it is a special case of a formula valid for all arguments with sufficiently large real part.

Others

Further series representations were found by Ramanujan and by Simon Plouffe in 1998, and the literature collects many series that converge to Apéry's constant.

Integral representations

There are numerous integral representations for Apéry's constant. Some of them are simple, others are more complicated.

Simple formulas

The following formula follows directly from the integral definition of the zeta function:

ζ(3) = (1/2) ∫₀^∞ x² / (eˣ − 1) dx.

More complicated formulas

Other, more complicated integral formulas are also known. A connection to the derivatives of the gamma function is very useful for the derivation of various integral representations via the known integral formulas for the gamma and polygamma functions.

Continued fraction

Apéry's constant is related to the continued fraction underlying Apéry's proof, whose partial numerators are −n⁶ and whose partial denominators are 34n³ + 51n² + 27n + 5, so that

ζ(3) = 6 / (5 − 1/(117 − 64/(535 − 729/(1463 − ⋯)))).

Its simple continued fraction begins [1; 4, 1, 18, 1, 1, 1, 4, ...].

Known digits

The number of known digits of Apéry's constant has increased dramatically during the last decades and now exceeds a trillion. This is due both to the increasing performance of computers and to algorithmic improvements.

{| class="wikitable"
|+ Number of known decimal digits of Apéry's constant
! Date || Decimal digits || Computation performed by
|-
| 1735 ||align="right"| 16 || Leonhard Euler
|-
| Unknown ||align="right"| 16 || Adrien-Marie Legendre
|-
| 1887 ||align="right"| 32 || Thomas Joannes Stieltjes
|-
| 1996 ||align="right"| || Greg J. Fee & Simon Plouffe
|-
| 1997 ||align="right"| || Bruno Haible & Thomas Papanikolaou
|-
| May 1997 ||align="right"| || Patrick Demichel
|-
| February 1998 ||align="right"| || Sebastian Wedeniwski
|-
| March 1998 ||align="right"| || Sebastian Wedeniwski
|-
| July 1998 ||align="right"| || Sebastian Wedeniwski
|-
| December 1998 ||align="right"| || Sebastian Wedeniwski
|-
| September 2001 ||align="right"| || Shigeru Kondo & Xavier Gourdon
|-
| February 2002 ||align="right"| || Shigeru Kondo & Xavier Gourdon
|-
| February 2003 ||align="right"| || Patrick Demichel & Xavier Gourdon
|-
| April 2006 ||align="right"| || Shigeru Kondo & Steve Pagliarulo
|-
| January 21, 2009 ||align="right"| || Alexander J. Yee & Raymond Chan
|-
| February 15, 2009 ||align="right"| || Alexander J. Yee & Raymond Chan
|-
| September 17, 2010 ||align="right"| || Alexander J. Yee
|-
| September 23, 2013 ||align="right"| || Robert J. Setti
|-
| August 7, 2015 ||align="right"| || Ron Watkins
|-
| December 21, 2015 ||align="right"| || Dipanjan Nag
|-
| August 13, 2017 ||align="right"| || Ron Watkins
|-
| May 26, 2019 ||align="right"| || Ian Cutress
|-
| July 26, 2020 ||align="right"| || Seungmin Kim
|-
| December 22, 2023 ||align="right"| || Andrew Sun
|}

See also

Riemann zeta function
Basel problem (concerning ζ(2))
Catalan's constant
List of sums of reciprocals

Notes

References

Further reading
External links

Mathematical constants
Analytic number theory
Irrational numbers
Zeta and L-functions
Apéry's constant
[ "Mathematics" ]
1,697
[ "Analytic number theory", "Irrational numbers", "Mathematical objects", "nan", "Mathematical constants", "Numbers", "Number theory" ]