| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
916,157 | https://en.wikipedia.org/wiki/Vandermonde%27s%20identity | In combinatorics, Vandermonde's identity (or Vandermonde's convolution) is the following identity for binomial coefficients:
$\binom{m+n}{r} = \sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k}$
for any nonnegative integers r, m, n. The identity is named after Alexandre-Théophile Vandermonde (1772), although it was already known in 1303 by the Chinese mathematician Zhu Shijie.
There is a q-analog to this theorem called the q-Vandermonde identity.
Vandermonde's identity can be generalized in numerous ways, including to the identity
$\sum_{k_1+\cdots+k_p = m} \binom{n_1}{k_1} \binom{n_2}{k_2} \cdots \binom{n_p}{k_p} = \binom{n_1+\cdots+n_p}{m}.$
Proofs
Algebraic proof
In general, the product of two polynomials with degrees m and n, respectively, is given by
$\left(\sum_{i=0}^{m} a_i x^i\right) \left(\sum_{j=0}^{n} b_j x^j\right) = \sum_{r=0}^{m+n} \left(\sum_{k=0}^{r} a_k b_{r-k}\right) x^r,$
where we use the convention that $a_i = 0$ for all integers i > m and $b_j = 0$ for all integers j > n. By the binomial theorem,
$(1+x)^{m+n} = \sum_{r=0}^{m+n} \binom{m+n}{r} x^r.$
Using the binomial theorem also for the exponents m and n, and then the above formula for the product of polynomials, we obtain
$\sum_{r=0}^{m+n} \binom{m+n}{r} x^r = (1+x)^{m+n} = (1+x)^m (1+x)^n = \sum_{r=0}^{m+n} \left(\sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k}\right) x^r,$
where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for all i > m and j > n, respectively.
By comparing coefficients of $x^r$, Vandermonde's identity follows for all integers r with 0 ≤ r ≤ m + n. For larger integers r, both sides of Vandermonde's identity are zero due to the definition of binomial coefficients.
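The coefficient comparison is easy to check mechanically. The following Python sketch (an illustration added here, not part of the original article) multiplies the coefficient lists of (1+x)^m and (1+x)^n by convolution and compares the result with the coefficients of (1+x)^(m+n):

```python
from math import comb

def poly_mul(a, b):
    """Coefficient list of the product of two polynomials (discrete convolution)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

m, n = 5, 7
coeffs_m = [comb(m, i) for i in range(m + 1)]   # coefficients of (1+x)^m
coeffs_n = [comb(n, j) for j in range(n + 1)]   # coefficients of (1+x)^n
product = poly_mul(coeffs_m, coeffs_n)          # coefficients of (1+x)^(m+n)
assert product == [comb(m + n, r) for r in range(m + n + 1)]
print("coefficient comparison confirms Vandermonde's identity for m=5, n=7")
```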
Combinatorial proof
Vandermonde's identity also admits a combinatorial double counting proof, as follows. Suppose a committee consists of m men and n women. In how many ways can a subcommittee of r members be formed? The answer is
$\binom{m+n}{r}.$
The answer is also the sum over all possible values of k of the number of subcommittees consisting of k men and r − k women:
$\sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k}.$
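The double counting can also be confirmed by direct enumeration. Here is a minimal Python sketch (illustrative, not from the article) that builds the committee explicitly and checks both counts:

```python
from itertools import combinations
from math import comb

m, n, r = 4, 5, 3
committee = [("man", i) for i in range(m)] + [("woman", j) for j in range(n)]

# Count all r-member subcommittees directly.
total = sum(1 for _ in combinations(committee, r))
assert total == comb(m + n, r)

# Count them again, grouped by the number k of men they contain.
by_k = sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))
assert total == by_k
print(f"both counts give {total}")
```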
Geometrical proof
Take a rectangular grid of r × (m + n − r) squares. There are
$\binom{m+n}{r}$
paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is because r right moves and m+n-r up moves must be made (or vice versa) in any order, and the total path length is m + n). Call the bottom left vertex (0, 0).
There are $\binom{m}{k}$ paths starting at (0, 0) that end at (k, m−k), as k right moves and m−k upward moves must be made (and the path length is m). Similarly, there are $\binom{n}{r-k}$ paths starting at (k, m−k) that end at (r, m+n−r), as a total of r−k right moves and (m+n−r) − (m−k) upward moves must be made and the path length must be r−k + (m+n−r) − (m−k) = n. Thus there are
$\binom{m}{k} \binom{n}{r-k}$
paths that start at (0, 0), end at (r, m+n−r), and go through (k, m−k). This is a subset of all paths that start at (0, 0) and end at (r, m+n−r), so sum from k = 0 to k = r (as the point (k, m−k) is confined to be within the square) to obtain the total number of paths that start at (0, 0) and end at (r, m+n−r).
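The path counts in this argument can be reproduced with a short dynamic program. A Python sketch (illustrative, not from the article):

```python
from math import comb

def count_paths(width, height):
    """Number of monotone lattice paths from (0, 0) to (width, height)."""
    ways = [[1] * (height + 1) for _ in range(width + 1)]  # 1 along the axes
    for x in range(1, width + 1):
        for y in range(1, height + 1):
            ways[x][y] = ways[x - 1][y] + ways[x][y - 1]   # last move: right or up
    return ways[width][height]

m, n, r = 5, 4, 3
total = count_paths(r, m + n - r)
assert total == comb(m + n, r)
# Splitting every path at the point (k, m - k) where it crosses the diagonal
# reproduces the sum on the other side of the identity.
assert total == sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))
print(f"{total} paths, counted both ways")
```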
Generalizations
Generalized Vandermonde's identity
One can generalize Vandermonde's identity as follows:
$\sum_{k_1+\cdots+k_p = m} \binom{n_1}{k_1} \binom{n_2}{k_2} \cdots \binom{n_p}{k_p} = \binom{n_1+\cdots+n_p}{m}.$
This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simple double counting argument.
On the one hand, one chooses $k_1$ elements out of a first set of $n_1$ elements; then $k_2$ out of another set, and so on, through $p$ such sets, until a total of $m$ elements have been chosen from the $p$ sets. One therefore chooses $m$ elements out of $n_1+\cdots+n_p$ in the left-hand side, which is also exactly what is done in the right-hand side.
Chu–Vandermonde identity
The identity generalizes to non-integer arguments. In this case, it is known as the Chu–Vandermonde identity (see Askey 1975, pp. 59–60) and takes the form
$\binom{s+t}{n} = \sum_{k=0}^{n} \binom{s}{k} \binom{t}{n-k}$
for general complex-valued s and t and any non-negative integer n. It can be proved along the lines of the algebraic proof above by multiplying the binomial series for $(1+x)^s$ and $(1+x)^t$ and comparing terms with the binomial series for $(1+x)^{s+t}$.
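For non-integer arguments the binomial coefficient is understood via falling factorials, $\binom{s}{k} = s(s-1)\cdots(s-k+1)/k!$. A quick numerical spot-check of the Chu–Vandermonde identity on this definition (an illustrative sketch, not from the article; the values of s, t and n are arbitrary):

```python
from math import factorial

def gbinom(s, k):
    """Generalized binomial coefficient C(s, k) for real s and integer k >= 0."""
    prod = 1.0
    for i in range(k):
        prod *= s - i          # falling factorial s(s-1)...(s-k+1)
    return prod / factorial(k)

s, t, n = 2.5, -1.3, 6
lhs = gbinom(s + t, n)
rhs = sum(gbinom(s, k) * gbinom(t, n - k) for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9, (lhs, rhs)
print(f"both sides equal {lhs:.6f}")
```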
This identity may be rewritten in terms of the falling Pochhammer symbols as
$(s+t)_n = \sum_{k=0}^{n} \binom{n}{k} (s)_k (t)_{n-k},$
in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type). The Chu–Vandermonde identity can also be seen to be a special case of Gauss's hypergeometric theorem, which states that
$\,_2F_1(a, b; c; 1) = \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)},$
where $\,_2F_1$ is the hypergeometric function and $\Gamma$ is the gamma function. One regains the Chu–Vandermonde identity by taking a = −n and applying the identity
$\binom{n}{k} = (-1)^k \binom{k-n-1}{k}$
liberally.
The Rothe–Hagen identity is a further generalization of this identity.
The hypergeometric probability distribution
When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resulting probability distribution is the hypergeometric distribution. That is the probability distribution of the number of red marbles in r draws without replacement from an urn containing n red and m blue marbles.
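In code, that normalization reads as follows; a minimal Python sketch (illustrative, not from the article), with n red and m blue marbles as in the text:

```python
from math import comb

def hypergeom_pmf(k, n_red, m_blue, r_draws):
    """P(exactly k red marbles among r draws without replacement)."""
    return comb(n_red, k) * comb(m_blue, r_draws - k) / comb(n_red + m_blue, r_draws)

n_red, m_blue, r_draws = 7, 5, 4
probs = [hypergeom_pmf(k, n_red, m_blue, r_draws) for k in range(r_draws + 1)]
# The probabilities sum to 1 precisely because of Vandermonde's identity.
assert abs(sum(probs) - 1.0) < 1e-12
```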
See also
Pascal's identity
Hockey-stick identity
Rothe–Hagen identity
References
Factorial and binomial topics
Algebraic identities
Articles containing proofs | Vandermonde's identity | Mathematics | 1,184 |
53,935,097 | https://en.wikipedia.org/wiki/Sindoh | Sindoh, formerly Sindoricoh, is a South Korean company that makes multi-function printers, fax machines, thermal paper and 3D printers. Headquartered in Seoul, South Korea, Sindoh's main market for its 2D printers is Korea, while the United States and Europe are its main markets for its 3D printers.
The company was founded in 1960 under the title of Sindoh Trading Co., Ltd. The name was changed to Sindoh Co., Ltd. in 1969 after the company entered into a partnership with Japanese corporation Ricoh.
History
1960: Founded on July 7.
1964: Production of RICOPY555, Korea's first copier
1969: Signed a partnership with Ricoh, Japan. Produced BS-1, the first electronic copying machine in Korea.
1971: Head office construction was completed in Seongsu-dong, Seoul.
1975: Produced DT1200, the first plain-paper copier in Korea
1981: Produced FAX3300H, Korea's first facsimile
1982: Produced Thermal paper for fax
1983: Completed the construction of Asan factory
1986: Inauguration of CEO and Chairman Woo Sang-gi and CEO and President Woo Suk-hyung
1990: Production of High Sensitive Thermal Paper for fax
1991: Launched copier FT-1000 with its own technology
2003: Launched the operation of Qingdao 1st factory in China
2006: Launched the operation of Qingdao 2nd factory in China
2014: Launched Sindoh VINA Headquarters (SVH) operation for the Southeast Asian production base
2015: Launched Sindoh VINA Marketing (SVM) operation for the Southeast Asian sales base
2016: Launched 3DWOX DP200, its first 3D printer.
2017: Launched 3DWOX 2X, its first prosumer 3D printer.
2017: Launched the operation of Sindoh VINA 2nd factory in Vietnam
2017: Launched the official international Sindoh YouTube Channel
2018: Launched 3DWOX 1, with an improved solution
2019: Launched the 3DWOX 1X 3D printer
2019: Launched Sindoh A1/A1+, its first and only SLA 3D Printer.
2019: Launched 3DWOX 7X, a 3D printer that supports large-format output.
2020: Launched Sindoh S100, its first and only SLS 3D Printer.
2021: Launched Sindoh A1SD, its first and only MSLA (Masked Stereolithography) & LCD 3D printer.
2022: Launched Sindoh fabWeaver type A530, its new FFF 3D Printer
Logo
Sindoh renewed its corporate identity in 2013, as part of efforts to strengthen its brand.
Printer / Multi-function printer
Sindoh produces and sells mono and color multi-function printers across its various A3/A4 lineups.
3D printers
Sindoh entered the 3D printer market with its own brand, 3DWOX. In 2016 the company launched DP200 and DP201, two models under the 3DWOX brand name. The DP200 printer was designed to introduce inexperienced users to 3D printing technology.
In 2016 Sindoh, in conjunction with SolidWorks, introduced 3D-printing software which enables users to 3D print using a CAD program without slicer software. The company entered into a partnership with SolidWorks, developing an "Apps for Kids" program in 2017 which allows children to 3D print easily and from the cloud. Sindoh also launched the 3DWOX 2X. The 3D printer market can be divided into a personal consumer market and a professional market; the 3DWOX 2X was developed for the "prosumer" segment in the middle of that market.
Printing solutions
Sindoh offers solutions to optimize corporate printing environments. In 2008, Sindoh introduced their managed printing service (MPS), which reduces maintenance costs of document management.
See also
Multi-function printer
List of 3D printer manufacturers
References
Manufacturing companies based in Seoul
Electronics companies of South Korea
South Korean brands
3D printers
3D printer companies
Computer companies of South Korea
Computer hardware companies
Computer printer companies | Sindoh | Technology | 831 |
2,647,638 | https://en.wikipedia.org/wiki/Open%20Rights%20Group | The Open Rights Group (ORG) is a UK-based organisation that works to preserve digital rights and freedoms by campaigning on digital rights issues and by fostering a community of grassroots activists. It campaigns on numerous issues including mass surveillance, internet filtering and censorship, and intellectual property rights.
History
The organisation was started by Danny O'Brien, Cory Doctorow, Ian Brown, Rufus Pollock, James Cronin, Stefan Magdalinski, Louise Ferguson and Suw Charman after a panel discussion at Open Tech 2005. O'Brien created a pledge on PledgeBank, placed on 23 July 2005, with a deadline of 25 December 2005: "I will create a standing order of 5 pounds per month to support an organisation that will campaign for digital rights in the UK but only if 1,000 other people will too." The pledge reached 1000 people on 29 November 2005. The Open Rights Group was launched at a "sell-out" meeting in Soho, London.
Work
The group has made submissions to the All Party Internet Group (APIG) inquiry into digital rights management and the Gowers Review of Intellectual Property.
The group was honoured in the 2008 Privacy International Big Brother Awards alongside No2ID, Liberty, Genewatch UK and others, as a recognition of their efforts to keep state and corporate mass surveillance at bay.
In 2010 the group worked with 38 Degrees to oppose the introduction of the Digital Economy Act, which was passed in April 2010.
The group opposes measures in the draft Online Safety Bill introduced in 2021, that it sees as infringing free speech rights and online anonymity.
The group campaigns against the Department for Digital, Culture, Media and Sport's plan to switch to an opt-out model for cookies. The group spokesperson stated that "[t]he UK government propose to make online spying the default option" in response to the proposed switch.
Goals
To collaborate with other digital rights and related organisations.
To nurture a community of campaigning volunteers, from grassroots activists to technical and legal experts.
To preserve and extend traditional civil liberties in the digital world.
To provide a media clearinghouse, connecting journalists with experts and activists.
To raise awareness in the media of digital rights abuses.
Areas of interest
The organisation, though focused on the impact of digital technology on the liberty of UK citizens, operates with an apparently wide range of interests within that category. Its interests include:
Access to knowledge
Copyright
Creative Commons
Free and open source software
The public domain
Crown copyright
Digital Restrictions Management
Software patents
Free speech and censorship
Internet filtering
Right to parody
s. 127 Communications Act 2003
Government and democracy
Electronic voting
Freedom of information legislation
Privacy, surveillance and censorship
Automatic Vehicle Tracking
Communications data retention
Identity management
Net Neutrality
NHS patients' medical database
Police DNA Records
RFID
Structure
ORG has a paid staff, whose members include:
Jim Killock (executive director)
Javier Ruiz Diaz (Campaigner)
Former staff include Suw Charman-Anderson and Becky Hogge, both executive directors, e-voting coordinator Jason Kitcat, campaigner Peter Bradwell, grassroots campaigner Katie Sutton and administrator Katerina Maniadaki. The group's patron is Neil Gaiman. As of October 2019 the group had over 3,000 paying supporters.
Advisory council and board of directors
In addition to staff members and volunteers, there is an advisory panel of over thirty members, and a board of directors, which oversees the group's work, staff, fundraising and policy. The current board members are:
In January 2015, the Open Rights Group announced the formation of a Scottish Advisory Council which will be handling matters relating to Scottish digital rights and campaigns. The Advisory Council is made up of:
From the existing UK Advisory Council:
Judith Rauhofer
Keith Mitchell
Lilian Edwards
Wendy Grossman
And from the Open Rights Group Board:
Milena Popova
Owen Blacker
Simon Phipps
One of the first projects is to raise awareness and opposition to the Scottish Identity Database.
ORGCON
ORGCON was the first ever conference dedicated to digital rights in the UK, marketed as "a crash course in digital rights". It was held for the first time in 2010 at City University in London and included keynote talks from Cory Doctorow, politicians and similar pressure groups including Liberty, NO2ID and Big Brother Watch. ORGCON has since been held in 2012, 2013, 2014, 2017, and 2019 where the keynote was given by Edward Snowden.
See also
Campaign Against Censorship
Censorship in the United Kingdom
Internet censorship
Open Genealogy Alliance
References
External links
Access to Knowledge movement
Articles containing video clips
Civil liberties advocacy groups
Computer law organizations
Copyright law organizations
Digital media
Digital rights management
Digital rights organizations
Election and voting-related organizations
Intellectual property activism
Intellectual property organizations
Internet in the United Kingdom
Internet privacy organizations
Internet-related activism
Organizations established in 2005
Political advocacy groups in the United Kingdom
Politics and technology
Politics of the United Kingdom
Public domain
Radio-frequency identification | Open Rights Group | Technology,Engineering | 974 |
13,411,300 | https://en.wikipedia.org/wiki/Astellas%20Institute%20for%20Regenerative%20Medicine | Astellas Institute for Regenerative Medicine is a subsidiary of Astellas Pharma located in Marlborough, Massachusetts, US, developing stem cell therapies with a focus on diseases that cause blindness. It was formed in 1994 as a company named Advanced Cell Technology, Incorporated (ACT), which was renamed to Ocata Therapeutics in November 2014. In February 2016 Ocata was acquired by Astellas for US$379 million.
History
Advanced Cell Technology was formed in 1994 and was led from 2005 to late 2010 by William M. Caldwell IV, Chairman and Chief Executive Officer. Upon Mr. Caldwell's death on December 13, 2010, Gary Rabin, a member of ACT's board of directors with experience in investment and capital raising, assumed the role of Chairman and CEO.
In 2007 the company's Chief Scientific Officer (CSO), Michael D. West, PhD, who was also the founder of Geron, left the company to join the regenerative medicine firm BioTime as CEO. In 2008, for $250,000 plus royalties up to a total of $1 million, the company licensed its "ACTCellerate" technology to BioTime. Robert Lanza was appointed CSO.
On November 22, 2010, the company announced that it had received approval from the U.S. Food and Drug Administration (FDA) to initiate the first human clinical trial using embryonic stem cells to treat retinal diseases. A preliminary report of the trial was published in 2012, and a follow-up article was published in February 2015.
In July 2014, Ocata announced that Paul K. Wotton, previously of Antares Pharma Inc (ATRS:NASDAQ CM), became President and Chief Executive Officer.
On August 27, 2014, Ocata announced a 1-for-100 reverse stock split of its common stock. Ocata was listed on NASDAQ in February 2015.
Research
Macular degeneration
On November 30, 2010, Ocata filed an Investigational New Drug application with the U.S. FDA for the first clinical trial using embryonic stem cells to regenerate retinal pigment epithelium to treat Dry Age-Related Macular Degeneration (Dry AMD). Dry AMD is the most common form of macular degeneration and represents a market size of $25–30 billion in the U.S. and Europe.
Stargardt's disease
In November 2010 the FDA allowed Ocata to begin a Phase I/II human clinical trial to use its retinal pigment epithelium cell therapy to treat Stargardt disease, a form of inherited juvenile macular degeneration.
See also
Key stem cell research events
Somatic cell nuclear transfer
Stem cells without embryonic destruction
References
Astellas Pharma
Biotechnology companies of the United States
Stem cells
Biotechnology companies established in 1994
Life sciences industry | Astellas Institute for Regenerative Medicine | Biology | 593 |
57,147,096 | https://en.wikipedia.org/wiki/Electronarcosis | Electronarcosis, also called electric stunning or electrostunning, is a profound stupor produced by passing an electric current through the brain. Electronarcosis may be used as a form of electrotherapy in treating certain mental illnesses in humans, or may be used to render livestock unconscious prior to slaughter.
History
In 1902, Stephen Leduc discovered he could produce a narcotic-like state in animals, and eventually, he tried it on himself, where he remained conscious but unable to move in a dream-like state.
In 1951, the American psychiatrist Hervey M. Cleckley published a paper on the results of treating 110 patients with anxiety neuroses using electronarcosis therapy. He argued that patients may benefit from electronarcosis after other treatments have failed.
A 1974 paper discussed the advantage of using electronarcosis for short-term general anesthesia. Researchers achieved electronarcosis by applying 180 mA at a frequency of 500 hertz to the mastoid part of the temporal bone.
Phases
Electronarcosis results in a condition similar to an epileptic seizure, with the three phases called tonic, clonic, and recovery.
During the tonic phase, the patient or animal collapses and becomes rigid.
During the clonic, muscles relax and some movement occurs.
During recovery, the patient or animal becomes aware.
Livestock
Electronarcosis is one of the methods used to render animals unconscious before slaughter and unable to feel pain. Electronarcosis may be followed immediately by electrocution or by bleeding.
Modern electronarcosis is typically performed by applying 200 volts of high frequency alternating current of about 1500 hertz for 3 seconds to the animal's head. A high-frequency current is alleged to not be felt as an electric shock or cause skeletal muscle contractions. A wet animal will pass a current of over an ampere. If other procedures do not follow electronarcosis, the animal will usually recover.
Studies have been used to determine optimal parameters for effective electronarcosis.
See also
Electrical stunning
Louise G. Rabinovitch, who used electricity on patients as an analgesic.
Electro-immobilisation
References
Electrotherapy
Neuroscience
Physical psychiatric treatments
Treatment of depression
Analgesics
Animal killing | Electronarcosis | Biology | 456 |
990,491 | https://en.wikipedia.org/wiki/Anti-fouling%20paint | Anti-fouling paint is a specialized category of coatings applied as the outer (outboard) layer to the hull of a ship or boat, to slow the growth of and facilitate detachment of subaquatic organisms that attach to the hull and can affect a vessel's performance and durability. It falls into a category of commercially available underwater hull paints, also known as bottom paints.
Anti-fouling paints are often applied as one component of multi-layer coating systems which may have other functions in addition to their antifouling properties, such as acting as a barrier against corrosion on metal hulls that will degrade and weaken the metal, or improving the flow of water past the hull of a fishing vessel or high-performance racing yachts. Although commonly discussed as being applied to ships, antifouling paints are also of benefit in many other sectors such as off-shore structures and fish farms.
History
In the Age of Sail, sailing vessels suffered severely from the growth of barnacles and weeds on the hull, called "fouling". Starting in the mid-1700s, thin sheets of copper (and, approximately 100 years later, Muntz metal) were nailed onto the hull in an attempt to prevent marine growth. One famous example of the traditional use of metal sheathing is the clipper Cutty Sark, which is preserved as a museum ship in dry-dock at Greenwich in England. Marine growth affected performance (and profitability) in many ways:
The maximum speed of a ship decreases as its hull becomes fouled with marine growth, and its displacement increases.
Fouling hampers a ship's ability to sail upwind.
Some marine growth, such as shipworms, would bore into the hull causing severe damage over time.
The ship may transport harmful marine organisms to other areas.
While anti-fouling coatings began to be developed from 1840 onwards, the first practical commercial anti-fouling coatings were established around 1860. One of the first successful commercial patents was for 'McIness', a metallic soap compound with copper sulphate that was applied heated over a quick-drying rosin varnish primer with an iron oxide pigment. The Bonnington Chemical Works began marketing copper sulphide anti-fouling paint around 1850. Other widely used anti-fouling paints were developed in the late 19th century, with some 213 anti-fouling patents being recorded by 1872. Among the most widely used in the 1880s and 1890s was a hot plastic composition known as Italian Moravian.
In an official 1900 Letter from the U.S. Navy to the U.S. Senate Committee on Naval Affairs, it was noted that the (British) Admiralty had considered a proposal in 1847 to limit the number of iron ships (only recently introduced into naval service) and even to consider the sale of all iron ships in its possession, due to significant problems with biofouling. However, once an antifouling paint "with very fair results" was found, the iron ships were instead retained and continued to be built.
During World War II, which included a substantial naval component, the U.S. Navy provided significant funding to the Woods Hole Oceanographic Institution to gather information and conduct research on marine biofouling and technologies for its prevention. This work was published as a book in 1952, the contents of which are available online as individual chapters. The third and final part of this book includes a number of chapters that go into the state of the art at that time for the formulation of anti-fouling paints. Lunn (1974) provides further history.
Modern antifouling paints
In modern times, antifouling paints are formulated with cuprous oxide (or other copper compounds) and/or other biocides—special chemicals which impede growth of barnacles, algae, and marine organisms. Historically, copper paints were red, leading to ship bottoms still being painted red today.
"Soft", or ablative bottom paints slowly slough off in the water, releasing a copper or zinc based biocide into the water column. The movement of water increases the rate of this action. Ablative paints are widely used on the hulls of recreational vessels and typically are reapplied every 1–3 years.
"Contact leaching" paints "create a porous film on the surface. Biocides are held in the pores, and released slowly." Another type of hard bottom paint includes Teflon and silicone coatings which are too slippery for growth to stick. SealCoat systems, which must be professionally applied, dry with small fibers sticking out from the coating surface. These small fibers move in the water, preventing bottom growth from adhering.
Environmental concerns
In the 1960s and 1970s, commercial vessels commonly used bottom paints containing tributyltin, which has been banned in the International Convention on the Control of Harmful Anti-fouling Systems on Ships of the International Maritime Organization due to its serious toxic effects on marine life (such as the collapse of a French shellfish fishery). Now that tributyltin has been banned, the most commonly used anti-fouling bottom paints are copper-based. Copper-based antifouling paints can also have adverse effects on marine organisms. Copper occurs naturally in aquatic systems but can build up in ports or marinas where there are lots of boats. Copper can leach out of anti-fouling paint from the hulls of the boats or fall off the hulls in different sized paint particles. This can lead to higher-than-normal concentrations of copper in the ports or bays.
This excess of copper in the marine ecosystem can have adverse effects on the marine environment and its organisms. In marinas, the river nerite, a brackish water snail, was found to have higher mortality, negative growth, and a large decrease in reproduction compared to areas with no boating. The snails in marinas had more tissue (histopathological) issues and alterations in areas like their gills and gonads as well. Increased exposure to copper from antifouling paint has also been found to decrease enzyme activity in brine shrimp.
Antifouling paint particles can be eaten by zooplankton or other marine species and move up the food chain, bioaccumulating in fish. This accumulation of copper through the food web can cause damage to not only the species eating the particle, but those that are accumulating it in their tissues from their diet. Antifouling paint particles can also end up in the sediment of harbors or bays and damage the benthic environment or the organisms that live in them. These are the known effects of copper based antifouling paint; however, it has not been a large focus of study so the extent of the effects is not fully known. More research is needed to fully understand how these paints and the metals in them affect their environments.
The Port of San Diego is investigating how to reduce copper input from copper-based antifouling coatings, and Washington State has passed a law which may phase in a ban on copper antifouling coatings on recreational vessels beginning in January 2018. However, despite the toxic chemistry of bottom paint and its accumulation in water ways across the globe, a similar ban was rescinded in the Netherlands after the European Union's Scientific Committee on Health and Environmental Risks concluded The Hague had insufficiently justified the law. In an expert opinion, the committee concluded the Netherlands government's explanation "does not provide sufficient sound scientific evidence to show that the use of copper-based antifouling paints in leisure boats presents significant environmental risk."
"Sloughing bottom paints", or "ablative" paints, are an older type of paint designed to create a hull coating which ablates (wears off) slowly, exposing a fresh layer of biocides. Scrubbing a hull with sloughing bottom paint while it is in the water releases its biocides into the environment. One way to reduce the environmental impact from hulls with sloughing bottom paint is to have them hauled out and cleaned at boatyards with a "closed loop" system.
Some innovative bottom paints that do not rely on copper or tin have been developed in response to the increasing scrutiny that copper-based ablative bottom paints have received as environmental pollutants.
A possible future replacement for antifouling paint may be slime. A mesh would cover a ship's hull beneath which a series of pores would supply the slime compound. The compound would turn into a viscous slime on contact with water and coat the mesh. The slime would constantly slough off, carrying away micro-organisms and barnacle larvae.
See also
Biofouling
Biomimetic antifouling coating
Environmental impact of paint
References
External links
Selecting an anti-fouling paint, West Marine
Clean Boating Tip Sheet, Selecting a Bottom Paint, .pdf chart, Maryland Dept. of Natural Resources
Bottom Paint for Racing Boats, Sailing World, 2007
Are foul-release paints for you? Coating calculator, National Fisherman
Using Antifouling paint against the Gribble Menace, Teamac Marine Coatings
Paints
Shipbuilding
Fouling | Anti-fouling paint | Chemistry,Materials_science,Engineering | 1,873 |
1,271 | https://en.wikipedia.org/wiki/Analytical%20engine | The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's Difference Engine, which was a design for a simpler mechanical calculator.
The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, the Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837.
Design
Babbage's first attempt at a mechanical computing device, the Difference Engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project.
During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833.
The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.6 kB). An arithmetic unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially (1838) it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. Later drawings (1858) depict a regularised grid layout. Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. Babbage developed some two dozen programs for the analytical engine between 1837 and 1840, and one program later. These programs treat polynomials, iterative formulas, Gaussian elimination, and Bernoulli numbers.
In 1842, the Italian mathematician Luigi Federico Menabrea published a description of the engine in French, based on lectures Babbage gave when he visited Turin in 1840. In 1843, the description was translated into English and extensively annotated by Ada Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine (widely considered to be the first complete computer program), she has been described as the first computer programmer.
Construction
Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871.
In 1878, a committee of the British Association for the Advancement of Science described the analytical engine as "a marvel of mechanical ingenuity", but recommended against constructing it. The committee acknowledged the usefulness and value of the machine, but could not estimate the cost of building it, and were unsure whether the machine would function correctly after being built.
Intermittently from 1880 to 1910, Babbage's son Henry Prevost Babbage was constructing a part of the mill and the printing apparatus. In 1910, it was able to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's "analytical engine mill" is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: "perhaps for a first machine ten (columns) would do, with fifteen wheels in each". Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. "It is only a question of cards and time", wrote Henry Babbage in 1888, "... and there is no reason why (twenty thousand) cards should not be used if necessary, in an analytical engine for the purposes of the mathematician".
In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the analytical engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time.
In October 2010, John Graham-Cumming started a "Plan 28" campaign to raise funds by "public subscription" to enable serious historical and academic study of Babbage's plans, with a view to then build and test a fully working virtual design which will then in turn enable construction of the physical analytical engine. As of May 2016, actual construction had not been attempted, since no consistent understanding could yet be obtained from Babbage's original design drawings. In particular it was unclear whether it could handle the indexed variables which were required for Lovelace's Bernoulli program. By 2017, the "Plan 28" effort reported that a searchable database of all catalogued material was available, and an initial review of Babbage's voluminous Scribbling Books had been completed.
Many of Babbage's original drawings have been digitised and are publicly available online.
Instruction set
Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided.
Allan G. Bromley has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete:
...the cards could be ordered to move forward and reverse (and hence to loop)...
The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs.
There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage's sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user.
In their emulator of the engine, Fourmilab say:
The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again.
This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as:
N0 6
N1 1
N2 1
×
L1
L0
S1
–
L0
L2
S0
L2
L0
CB?11
where the CB is the conditional branch instruction or "combination card" used to make the control flow jump, in this case backward by 11 cards.
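To make the control flow concrete, here is a rough Python transliteration of that card chain (our reading of the emulator's notation, added for illustration and not part of the original article): three variables stand in for store columns N0–N2, and the backward branch of 11 cards becomes a while-loop whose exit condition models the Mill's run-up lever.

```python
# Hypothetical transliteration of the Fourmilab factorial card chain above.
n0 = 6   # N0 6 : the number whose factorial is wanted
n1 = 1   # N1 1 : accumulates the running product
n2 = 1   # N2 1 : constant used as the decrement

while True:
    n1 = n1 * n0            # x, L1, L0, S1 : multiply columns 1 and 0, store in 1
    n0 = n0 - n2            # -, L0, L2, S0 : decrement column 0 by 1
    run_up = (n2 - n0) < 0  # L2, L0 : this subtraction sets the run-up lever
                            # (a sign change) exactly while n0 > 1
    if not run_up:          # CB?11 : while the lever is set, branch back 11 cards
        break

print(n1)  # 720, i.e. 6!
```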
Influence
Predicted influence
Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his Passages from the Life of a Philosopher, "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the shortest time?"
Computer science
From 1872, Henry continued diligently with his father's work and then intermittently in retirement in 1875.
Percy Ludgate wrote about the engine in 1914 and published his own design for an analytical engine in 1909. It was drawn up in detail, but never built, and the drawings have never been found. Ludgate's engine would be much smaller than Babbage's, and hypothetically would be capable of multiplying two 20-decimal-digit numbers in about six seconds.
In his work Essays on Automatics (1914) Leonardo Torres Quevedo, inspired by Babbage, designed a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also contains the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed and the results printed automatically.
Vannevar Bush's paper Instrumental Analysis (1936) included several references to Babbage's work. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Despite this groundwork, Babbage's work fell into historical obscurity, and the analytical engine was unknown to builders of electromechanical and electronic computing machines in the 1930s and 1940s when they began their work, resulting in the need to re-invent many of the architectural innovations Babbage had proposed. Howard Aiken, who built the quickly-obsoleted electromechanical calculator, the Harvard Mark I, between 1937 and 1945, praised Babbage's work likely as a way of enhancing his own stature, but knew nothing of the analytical engine's architecture during the construction of the Mark I, and considered his visit to the constructed portion of the analytical engine "the greatest disappointment of my life". The Mark I showed no influence from the analytical engine and lacked the analytical engine's most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly similarly were not aware of the details of Babbage's analytical engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC.
Comparison to other early computers
If the analytical engine had been built, it would have been digital, programmable and Turing-complete. It would, however, have been very slow. Luigi Federico Menabrea reported in Sketch of the Analytical Engine: "Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes".
By comparison the Harvard Mark I could perform the same task in just six seconds (though it is debatable whether that computer is Turing-complete; the ENIAC, which is, would also have been faster). A modern CPU could do the same thing in under a billionth of a second.
In popular culture
The cyberpunk novelists William Gibson and Bruce Sterling co-authored a steampunk novel of alternative history titled The Difference Engine in which Babbage's difference and analytical engines became available to Victorian society. The novel explores the consequences and implications of the early introduction of computational technology.
Moriarty by Modem, a short story by Jack Nimersheim, describes an alternative history where Babbage's analytical engine was indeed completed and had been deemed highly classified by the British government. The characters of Sherlock Holmes and Moriarty had in reality been a set of prototype programs written for the analytical engine. This short story follows Holmes as his program is implemented on modern computers and he is forced to compete against his nemesis yet again in the modern counterparts of Babbage's analytical engine.
A similar setting to The Difference Engine is used by Sydney Padua in the webcomic The Thrilling Adventures of Lovelace and Babbage. It features an alternative history where Ada Lovelace and Babbage have built the analytical engine and use it to fight crime at Queen Victoria's request. The comic is based on thorough research on the biographies of and correspondence between Babbage and Lovelace, which is then twisted for humorous effect.
The Orion's Arm online project features the Machina Babbagenseii, fully sentient Babbage-inspired mechanical computers. Each is the size of a large asteroid, only capable of surviving in microgravity conditions, and processes data at 0.5% the speed of a human brain.
Charles Babbage and Ada Lovelace appear in an episode of Doctor Who, "Spyfall Part 2", where the engine is displayed and referenced.
References
Bibliography
External links
The Babbage Papers, Science Museum archive
The Analytical Engine at Fourmilab, includes historical documents and online simulations
Image of a later Plan of Analytical Engine with grid layout (1858)
First working Babbage "barrel" actually assembled, circa 2005
Special issue, IEEE Annals of the History of Computing, Volume 22, Number 4, October–December 2000
Babbage, Science Museum, London (archived)
Plan 28: Building Charles Babbage's Analytical Engine
Charles Babbage
Computer-related introductions in 1837
English inventions
Mechanical calculators
Mechanical computers
One-of-a-kind computers
Ada Lovelace | Analytical engine | Physics,Technology | 3,101 |
56,870,719 | https://en.wikipedia.org/wiki/Ritualized%20aggression | Ritualized aggression or ritualized fighting is when animals use a range of behaviours as posture or warning but without engaging in serious aggression or fighting, which would be expensive in terms of energy and the risk of injury. Ritualized aggression involves a graded series of behaviours or displays that include threatening gestures (such as vocalizations, spreading of wings or gill covers, lifting and presentation of claws, head bobbing, tail beating, lunging, etc.) and occasionally posturing physical actions such as inhibited (non-injurious) bites.
This behavior is explained by evolutionary game theory.
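The classic game-theoretic treatment is the hawk–dove model of Maynard Smith and Price, in which ritualized display ("dove" behavior) coexists with escalation ("hawk" behavior) whenever the cost of injury exceeds the value of the contested resource. The following Python sketch of that textbook model (an illustration added here, with arbitrary payoff values; it is not taken from this article) computes the mixed evolutionarily stable strategy:

```python
# Hawk-dove payoff model: V = value of the resource, C = cost of injury (C > V).
V, C = 2.0, 10.0

PAYOFF = {
    ("hawk", "hawk"): (V - C) / 2,  # escalated fight: win half the time, risk injury
    ("hawk", "dove"): V,            # dove retreats after ritual display
    ("dove", "hawk"): 0.0,          # hawk takes the resource
    ("dove", "dove"): V / 2,        # contest settled by display alone; resource shared
}

p = V / C  # evolutionarily stable fraction of hawks when C > V
hawk_fit = p * PAYOFF[("hawk", "hawk")] + (1 - p) * PAYOFF[("hawk", "dove")]
dove_fit = p * PAYOFF[("dove", "hawk")] + (1 - p) * PAYOFF[("dove", "dove")]
assert abs(hawk_fit - dove_fit) < 1e-12  # equal fitness: neither strategy can invade
print(f"ESS: {p:.0%} hawks; most contests are settled by display")
```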
Examples
Cats
Domestic cats (Felis catus) are very territorial and defend their territories with ritualized body posturing, stalking, staring, spitting, yowling and howling.
Spider monkeys
Spider monkeys (genus Ateles) defend their territory by screams, barks, rattling or dropping branches, and urinating and defecating on intruders below.
Oscar cichlids
Oscar cichlids (Astronotus ocellatus) are able to rapidly alter their colouration, a trait which facilitates ritualised territorial and combat behaviours amongst conspecifics. Individuals of another cichlid species, the blunthead cichlid (Tropheus moorii), defend their feeding territory with a display, quivering the tail and fins to intimidate, or an attack, darting at the intruder and chasing them away. Astatotilapia burtoni cichlids have similar displays of aggressive behaviour if they are territorial, which include threat displays and chasing.
Ring-tailed lemur
Male ring-tailed lemurs have scent glands on their wrists, chests, and in the genital area. During encounters with rival males they may perform ritualized aggression by having a "stink fight". The males anoint their tails by rubbing the ends of their tails on the inside of their wrists and on their chests. They then arch their tails over their bodies and wave them at their opponent. The male toward which this is directed either responds with a display of his own, physical aggression, or flees. "Stink fights" can last from 10 minutes to one hour.
Creek chub
The creek chub (Semotilus atromaculatus) engages in ritualized aggression when others of the species invade its territory. Engaging in parallel swimming, the fish widens its fins and mouth and swims with rapid caudal-fin beats. Intimidating opponent fish throughout these rituals, the forward fish stops and directs blows to the head of the other fish to ensure territorial dominance.
See also
Ritual warfare
Ritualized combat
Trading blows
Agonistic behavior and courtship
Ritualised fighting in meat ants
References
Ethology | Ritualized aggression | Biology | 550 |
3,809,560 | https://en.wikipedia.org/wiki/Suicidology | Suicidology is the scientific study of suicidal behaviour, the causes of suicidality and suicide prevention. Every year, about one million people die by suicide, which is a mortality rate of sixteen per 100,000, or one death every forty seconds. Suicidologists believe that suicide is largely preventable with the right actions, knowledge about suicide, and a change in society's view of suicide to make it more acceptable to talk about. There are many different fields and disciplines involved with suicidology, the two primary ones being psychology and sociology.
Short history
Most suicidologists think about the history of suicide in terms of courts, church, press, morals, and society. In Ancient Greece, there were several opinions about suicide. It was tolerated and even lauded when committed by patricians (generals and philosophers) but condemned if committed by plebeians (common people) or slaves. In Rome, suicide was viewed rather neutrally, even positively, because life was held cheaply. During early Christianity, excessive martyrdom and a penchant toward suicide frightened church elders sufficiently for them to introduce a serious deterrent. Suicide was thought of as a crime because it precluded the possibility of repentance and violated the sixth commandment, "Thou shalt not kill." During this time, St. Thomas Aquinas emphasized that suicide was a mortal sin because it disrupted God's power over man's life and death. This belief took hold and for hundreds of years thereafter played an important part in the Western view of suicide.
Over the last 200 years, the main focus of interventions to prevent suicide has moved from appeals to religious beliefs (which do not always motivate people in contemporary society, which is more secular) to effort at understanding, and preventing the psychological and social influences that lead to suicide.
Parts of study
There are many points of study within suicidology. Suicidology studies not only death by suicide and attempted suicide but also partial self-destruction, suicidal ideation, parasuicide, and self-destructive behaviors and attitudes. Suicidal ideation is when someone is having thoughts of suicide or showing suicidal gestures. For example, it could be as simple as someone saying that "life is not worth living any more", or as extreme as "I'm going to kill myself by jumping off a bridge." Parasuicide is when someone deliberately harms themselves, for example by taking an overdose of medicine and surviving. Self-destructive behaviors are anything that causes harm to oneself, whether intentional or unintentional; examples are alcoholism, risky sports, some sexual disorders, and eating disorders. By way of a suicide note, the person who dies by suicide has the last word. It is also a way for the person to explain, to bring closure (or not), to assign guilt, to dictate wishes, to control, to forgive or to blame.
Contributors
One of the first to contribute to the study of suicidology was Edwin S. Shneidman. Shneidman is considered to be the father of suicidology. Shneidman's definition of suicide is a conscious act of self-induced annihilation, best understood as a multidimensional malaise in a needful individual who defines an issue for which suicide is perceived as the best solution. He thought of suicide as "psychache", or intolerable psychological pain. Another notable person in the field of suicidology is Emile Durkheim. To Durkheim, the word suicide is applied to all cases of death resulting directly or indirectly from a positive or negative act of the victim himself, which he knows will produce this result. Basically, he saw suicide as an external and constraining social fact independent of individual psychopathology.
In David J. Mayo's definition there were four elements to suicide:
A suicide has taken place only if a death has occurred.
The death must be of one's own doing.
The agency of suicide can be active or passive.
It implies intentionally ending one's own life.
Sigmund Freud and Karl Menninger had similar views on suicide. Their definition of suicide had three aspects: a murder involving hatred or the wish to kill; a murder by the self, often involving guilt or the wish to be killed; and the wish to die. They thought of suicide as a murderous death wish that was turned back upon one's own self. Freud also believed that we have two opposing basic instincts, life (eros) and death (thanatos), and that all instincts seek tension reduction. He also believed that suicide is more likely in advanced civilizations requiring greater repression of sexual and aggressive energy. Jean Baechler's definition of suicide was that suicide denotes all behavior that seeks and finds the solution to an existential problem by making an attempt on the life of the subject. Another worker in the field of suicidology was Joseph H. Davis. The definition he gave for suicide is a fatal, willful, self-inflicted life-threatening act without apparent desire to live; implicit are two basic components, lethality and intent. Albert Camus also did some work in this field. He believed that whether one can live or chooses to live is the only truly serious philosophical problem. He also claimed that man created a god in order to be able to live without a wish to kill himself and that the only human liberty is to come to terms with death. He introduced Darwinian thought into his teachings.
See also
American Association of Suicidology
Depression (mood)
Epidemiology of depression
Epidemiology of suicide
Euthanasia
Gender differences in suicide
List of countries by suicide rate
List of suicides
Major depressive disorder
Mental health first aid
Mood disorder
Philosophy of suicide
Prevalence of mental disorders
Psychiatry
Psychology
Suicide attempt
Suicide crisis
Suicide prevention
References
External links
Samaritans Suicide info
The Irish Association of Suicidology
The National Suicide Research Foundation of Ireland
The American Association of Suicidology
Journals related to suicidology
Crisis: The Journal of Crisis Intervention and Suicide Prevention
Psychiatric research
Interdisciplinary subfields of sociology | Suicidology | Biology | 1,271 |
5,795,043 | https://en.wikipedia.org/wiki/Implicational%20propositional%20calculus | In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus that uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", etc.
Functional (in)completeness
Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it.
For example, the two-place truth function that always returns false is not definable from → and arbitrary propositional variables: any formula constructed from → and propositional variables must receive the value true when all of its variables are evaluated to true.
It follows that {→} is not functionally complete.
However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational. If P and Q are propositions, then:
¬P is equivalent to P → ⊥
P ∧ Q is equivalent to (P → (Q → ⊥)) → ⊥
P ∨ Q is equivalent to (P → Q) → Q
P ↔ Q is equivalent to ((P → Q) → ((Q → P) → ⊥)) → ⊥
Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥.
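These equivalences, and hence the functional completeness of {→, ⊥}, can be checked exhaustively. A minimal Python sketch (an illustration added here, not part of the article):

```python
from itertools import product

def imp(p: bool, q: bool) -> bool:
    """The only primitive binary connective: material implication."""
    return (not p) or q

BOT = False  # the nullary connective for falsity

for p, q in product([False, True], repeat=2):
    assert (not p) == imp(p, BOT)                                     # negation
    assert (p and q) == imp(imp(p, imp(q, BOT)), BOT)                 # conjunction
    assert (p or q) == imp(imp(p, q), q)                              # disjunction
    assert (p == q) == imp(imp(imp(p, q), imp(imp(q, p), BOT)), BOT)  # biconditional
print("all four encodings agree with the standard connectives")
```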
Axiom system
The following statements are considered tautologies (irreducible and intuitively true, by definition).
Axiom schema 1 is P → (Q → P).
Axiom schema 2 is (P → (Q → R)) → ((P → Q) → (P → R)).
Axiom schema 3 (Peirce's law) is ((P → Q) → P) → P.
The one non-nullary rule of inference (modus ponens) is: from P and P → Q infer Q.
Where in each case, P, Q, and R may be replaced by any formulas that contain only "→" as a connective. If Γ is a set of formulas and A a formula, then Γ ⊢ A means that A is derivable using the axioms and rules above and formulas from Γ as additional hypotheses.
Łukasiewicz (1948) found an axiom system for the implicational calculus that replaces the schemas 1–3 above with a single schema
((P → Q) → R) → ((R → P) → (S → P)).
He also argued that there is no shorter axiom system.
Basic properties of derivation
Since all axioms and rules of the calculus are schemata, derivation is closed under substitution:
If Γ ⊢ A, then σ(Γ) ⊢ σ(A),
where σ is any substitution (of formulas using only implication).
The implicational propositional calculus also satisfies the deduction theorem:
If Γ ∪ {A} ⊢ B, then Γ ⊢ A → B.
As explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom schemas 1 and 2 above and modus ponens.
Completeness
The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of classical propositional logic. That is, if Γ is a set of implicational formulas, and A is an implicational formula entailed by Γ, then Γ ⊢ A.
Proof
A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction theorem, we may reduce the completeness theorem to its special case with empty Γ, i.e., we only need to show that every tautology is derivable in the system.
The proof is similar to completeness of full propositional logic, but it also uses the following idea to overcome the functional incompleteness of implication. If A and F are formulas, then A → F is equivalent to ¬A* ∨ F, where A* is the result of replacing in A all, some, or none of the occurrences of F by falsity. Similarly, (A → F) → F is equivalent to A* ∨ F. So under some conditions, one can use them as substitutes for saying A* is false or A* is true respectively.
We first observe some basic facts about derivability:
(1) A → B, B → C ⊢ A → C
Indeed, we can derive A → (B → C) using Axiom 1, and then derive A → C by modus ponens (twice) from Ax. 2.
(2) A → B ⊢ (B → C) → (A → C)
This follows from (1) by the deduction theorem.
(3) A → C, (A → B) → C ⊢ C
If we further assume C → B, we can derive A → B using (1), then we derive C by modus ponens. This shows A → C, (A → B) → C, C → B ⊢ C, and the deduction theorem gives A → C, (A → B) → C ⊢ (C → B) → C. We apply Ax. 3 to obtain (3).
Let F be an arbitrary fixed formula. For any formula A, we define A^0 = (A → F) and A^1 = ((A → F) → F). Consider only formulas in propositional variables p1, ..., pn. We claim that for every formula A in these variables and every truth assignment e,

(4): p1^e(p1), ..., pn^e(pn) ⊢ A^e(A).

We prove (4) by induction on A. The base case A = pi is trivial. Let A = (B → C). We distinguish three cases:
e(C) = 1. Then also e(A) = 1. We have

((C → F) → F) → (((B → C) → F) → F)

by applying (2) twice to the axiom C → (B → C). Since we have derived (C → F) → F by the induction hypothesis, we can infer ((B → C) → F) → F.
e(B) = 0. Then again e(A) = 1. The deduction theorem applied to (3) gives

(B → F) → (((B → C) → F) → F).

Since we have derived B → F by the induction hypothesis, we can infer ((B → C) → F) → F.
e(B) = 1 and e(C) = 0. Then e(A) = 0. We have

(B → F) → F, C → F, B → C ⊢ F

(using (1) and modus ponens), thus

(B → F) → F, C → F ⊢ (B → C) → F

by the deduction theorem. We have derived (B → F) → F and C → F by the induction hypothesis, hence we can infer (B → C) → F. This completes the proof of (4).
Now let F be a tautology in variables p1, ..., pn. We will prove by reverse induction on k = n,...,0 that for every assignment e,

(5): p1^e(p1), ..., pk^e(pk) ⊢ F.

The base case k = n follows from a special case of (4), using F^e(F) = F^1 = ((F → F) → F) and the fact that F → F is a theorem by the deduction theorem.
Assume that (5) holds for k + 1; we will show it for k. By applying the deduction theorem to the induction hypothesis, we obtain

p1^e(p1), ..., pk^e(pk) ⊢ (p_(k+1) → F) → F, and
p1^e(p1), ..., pk^e(pk) ⊢ ((p_(k+1) → F) → F) → F,

by first setting e(p_(k+1)) = 0 and second setting e(p_(k+1)) = 1. From this we derive (5) using modus ponens.
For k = 0 we obtain that the tautology F is provable without assumptions. This is what was to be proved.
This proof is constructive. That is, given a tautology, one could actually follow the instructions and create a proof of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional variables in the tautology, hence it is not a practical method for any but the very shortest tautologies.
The Bernays–Tarski axiom system
The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms from Łukasiewicz's sole axiom as a means of showing its completeness.
It differs from the axiom schemas above by replacing axiom schema 2, (P→(Q→R))→((P→Q)→(P→R)), with
Axiom schema 2': (P→Q)→((Q→R)→(P→R)),
which is called hypothetical syllogism.
This makes derivation of the deduction meta-theorem a little more difficult, but it can still be done.
We show that from P→(Q→R) and P→Q one can derive P→R. This fact can be used in lieu of axiom schema 2 to get the meta-theorem.
1. P→(Q→R) given
2. P→Q given
3. (P→Q)→((Q→R)→(P→R)) ax 2'
4. (Q→R)→(P→R) mp 2,3
5. (P→(Q→R))→(((Q→R)→(P→R))→(P→(P→R))) ax 2'
6. ((Q→R)→(P→R))→(P→(P→R)) mp 1,5
7. P→(P→R) mp 4,6
8. (P→(P→R))→(((P→R)→R)→(P→R)) ax 2'
9. ((P→R)→R)→(P→R) mp 7,8
10. (((P→R)→R)→(P→R))→(P→R) ax 3
11. P→R mp 9,10 qed
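The derivation above is mechanical enough to be checked by a short program. Below is a minimal sketch (not from the source): formulas are nested tuples ('->', A, B), the premise and axiom lines are taken as given, and each modus ponens step is verified against the two steps it cites:

```python
def I(a, b):
    return ('->', a, b)

P, Q, R = 'P', 'Q', 'R'

steps = [
    ('given', I(P, I(Q, R))),                                  # 1
    ('given', I(P, Q)),                                        # 2
    ('ax', I(I(P, Q), I(I(Q, R), I(P, R)))),                   # 3: ax 2'
    ('mp', 2, 3),                                              # 4
    ('ax', I(I(P, I(Q, R)),
             I(I(I(Q, R), I(P, R)), I(P, I(P, R))))),          # 5: ax 2'
    ('mp', 1, 5),                                              # 6
    ('mp', 4, 6),                                              # 7
    ('ax', I(I(P, I(P, R)), I(I(I(P, R), R), I(P, R)))),       # 8: ax 2'
    ('mp', 7, 8),                                              # 9
    ('ax', I(I(I(I(P, R), R), I(P, R)), I(P, R))),             # 10: ax 3 (Peirce)
    ('mp', 9, 10),                                             # 11
]

derived = []
for step in steps:
    if step[0] == 'mp':
        _, i, j = step
        minor, major = derived[i - 1], derived[j - 1]
        assert major[0] == '->' and major[1] == minor, "modus ponens does not apply"
        derived.append(major[2])
    else:  # 'given' and 'ax' lines are trusted as stated
        derived.append(step[1])

assert derived[-1] == I(P, R)
print("all modus ponens steps check; final formula is P→R")
```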
Satisfiability and validity
Satisfiability in the implicational propositional calculus is trivial, because every formula is satisfiable: just set all variables to true.
Falsifiability in the implicational propositional calculus is NP-complete, meaning that validity (tautology) is co-NP-complete.
In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation that makes it false. If one succeeds, then it is indeed not a tautology. If one fails, then it is a tautology.
Example of a non-tautology:
Suppose [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) is false.
Then (A→B)→((C→A)→E) is true; F→((C→D)→E) is true; A→F is true; D is true; and E is false.
Since D is true, C→D is true. So the truth of F→((C→D)→E) is equivalent to the truth of F→E.
Then since E is false and F→E is true, we get that F is false.
Since A→F is true, A is false. Thus A→B is true and (C→A)→E is true.
C→A is false, so C is true.
The value of B does not matter, so we can arbitrarily choose it to be true.
Summing up, the valuation that sets B, C and D to be true and A, E and F to be false will make [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) false. So it is not a tautology.
Example of a tautology:
Suppose ((A→B)→C)→((C→A)→(D→A)) is false.
Then (A→B)→C is true; C→A is true; D is true; and A is false.
Since A is false, A→B is true. So C is true. Thus A must be true, contradicting the fact that it is false.
Thus there is no valuation that makes ((A→B)→C)→((C→A)→(D→A)) false. Consequently, it is a tautology.
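Both examples can be reproduced by the brute-force version of this technique: enumerate valuations and look for one that falsifies the formula. The following sketch (not part of the article) encodes the two formulas above, with variable names matching the examples:

```python
from itertools import product

def I(a, b):
    return ('->', a, b)

def variables(f):
    return {f} if isinstance(f, str) else variables(f[1]) | variables(f[2])

def value(f, env):
    if isinstance(f, str):
        return env[f]
    return (not value(f[1], env)) or value(f[2], env)

def falsifying_valuation(f):
    names = sorted(variables(f))
    for bits in product([False, True], repeat=len(names)):
        env = dict(zip(names, bits))
        if not value(f, env):
            return env
    return None  # no falsifying valuation exists: f is a tautology

# The non-tautology from the first example:
f1 = I(I(I('A', 'B'), I(I('C', 'A'), 'E')),
       I(I('F', I(I('C', 'D'), 'E')), I(I('A', 'F'), I('D', 'E'))))
print(falsifying_valuation(f1))  # prints a falsifying valuation

# The tautology from the second example:
f2 = I(I(I('A', 'B'), 'C'), I(I('C', 'A'), I('D', 'A')))
print(falsifying_valuation(f2))  # prints None
```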
Adding an axiom schema
What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a tautology; or (2) it is not a tautology.
If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems will remain unbounded, that is, for any natural number n there will still be theorems that cannot be proved in n or fewer steps.
If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of every formula, because there is a common method for proving every formula. For example, suppose the new axiom schema were ((B→C)→C)→B. Then ((A→(A→A))→(A→A))→A is an instance (one of the new axioms) and also not a tautology. But [((A→(A→A))→(A→A))→A]→A is a tautology and thus a theorem due to the old axioms (using the completeness result above). Applying modus ponens, we get that A is a theorem of the extended system. Then all one has to do to prove any formula is to replace A by the desired formula throughout the proof of A. This proof will have the same number of steps as the proof of A.
An alternative axiomatization
The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another axiom system that aims directly at completeness without going through the deduction metatheorem.
First we have axiom schemas that are designed to efficiently prove the subset of tautologies that contain only one propositional variable.
aa 1: ꞈA→A
aa 2: (A→B)→ꞈ(A→(C→B))
aa 3: A→((B→C)→ꞈ((A→B)→C))
aa 4: A→ꞈ(B→A)
The proof of each such tautology would begin with two parts (hypothesis and conclusion) that are the same. Then insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). This procedure will quickly give every tautology containing only one variable. (The symbol "ꞈ" in each axiom schema indicates where the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.)
Consider any formula Φ that may contain A, B, C1, ..., Cn and ends with A as its final conclusion. Then we take
aa 5: Φ−→(Φ+→ꞈΦ)
as an axiom schema, where Φ− is the result of replacing B by A throughout Φ and Φ+ is the result of replacing B by (A→A) throughout Φ. This is a schema for axiom schemas, since there are two levels of substitution: in the first, Φ is substituted (with variations); in the second, any of the variables (including both A and B) may be replaced by arbitrary formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one variable by considering the case when B is false (Φ−) and the case when B is true (Φ+).
If the variable that is the final conclusion of a formula takes the value true, then the whole formula takes the value true regardless of the values of the other variables. Consequently if A is true, then Φ, Φ−, Φ+ and Φ−→(Φ+→Φ) are all true. So without loss of generality, we may assume that A is false. Notice that Φ is a tautology if and only if both Φ− and Φ+ are tautologies. But while Φ has n+2 distinct variables, Φ− and Φ+ both have n+1. So the question of whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each are all tautologies. Also notice that Φ−→(Φ+→Φ) is a tautology regardless of whether Φ is, because if Φ is false then either Φ− or Φ+ will be false depending on whether B is false or true.
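The reduction described here is easy to make concrete. The sketch below (an illustration, not the article's own machinery) builds Φ− and Φ+ by substitution and confirms, on Peirce's law, whose final conclusion is the variable A, that Φ is a tautology exactly when both of the smaller formulas are:

```python
from itertools import product

def I(a, b):
    return ('->', a, b)

def subst(f, var, repl):
    if isinstance(f, str):
        return repl if f == var else f
    return ('->', subst(f[1], var, repl), subst(f[2], var, repl))

def variables(f):
    return {f} if isinstance(f, str) else variables(f[1]) | variables(f[2])

def value(f, env):
    if isinstance(f, str):
        return env[f]
    return (not value(f[1], env)) or value(f[2], env)

def is_tautology(f):
    names = sorted(variables(f))
    return all(value(f, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

phi = I(I(I('A', 'B'), 'A'), 'A')        # Peirce's law; final conclusion is A
phi_minus = subst(phi, 'B', 'A')         # Φ−: B replaced by A
phi_plus = subst(phi, 'B', I('A', 'A'))  # Φ+: B replaced by (A→A)

# For a formula whose final conclusion is A, Φ is a tautology exactly
# when both one-variable-fewer formulas are:
assert is_tautology(phi) == (is_tautology(phi_minus) and is_tautology(phi_plus))
print(is_tautology(phi_minus), is_tautology(phi_plus), is_tautology(phi))
```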
Examples:
Deriving Peirce's law
1. [((P→P)→P)→P]→([((P→(P→P))→P)→P]→[((P→Q)→P)→P]) aa 5
2. P→P aa 1
3. (P→P)→((P→P)→(((P→P)→P)→P)) aa 3
4. (P→P)→(((P→P)→P)→P) mp 2,3
5. ((P→P)→P)→P mp 2,4
6. [((P→(P→P))→P)→P]→[((P→Q)→P)→P] mp 5,1
7. P→(P→P) aa 4
8. (P→(P→P))→((P→P)→(((P→(P→P))→P)→P)) aa 3
9. (P→P)→(((P→(P→P))→P)→P) mp 7,8
10. ((P→(P→P))→P)→P mp 2,9
11. ((P→Q)→P)→P mp 10,6 qed
Deriving Łukasiewicz' sole axiom
1. [((P→Q)→P)→((P→P)→(S→P))]→([((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))]) aa 5
2. [((P→P)→P)→((P→P)→(S→P))]→([((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))]) aa 5
3. P→(S→P) aa 4
4. (P→(S→P))→(P→((P→P)→(S→P))) aa 2
5. P→((P→P)→(S→P)) mp 3,4
6. P→P aa 1
7. (P→P)→((P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))]) aa 3
8. (P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))] mp 6,7
9. ((P→P)→P)→((P→P)→(S→P)) mp 5,8
10. [((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))] mp 9,2
11. P→(P→P) aa 4
12. (P→(P→P))→((P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))]) aa 3
13. (P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))] mp 11,12
14. ((P→(P→P))→P)→((P→P)→(S→P)) mp 5,13
15. ((P→Q)→P)→((P→P)→(S→P)) mp 14,10
16. [((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))] mp 15,1
17. (P→P)→((P→(S→P))→[((P→P)→P)→(S→P)]) aa 3
18. (P→(S→P))→[((P→P)→P)→(S→P)] mp 6,17
19. ((P→P)→P)→(S→P) mp 3,18
20. (((P→P)→P)→(S→P))→[((P→Q)→(P→P))→(((P→P)→P)→(S→P))] aa 4
21. ((P→Q)→(P→P))→(((P→P)→P)→(S→P)) mp 19,20
22. ((P→Q)→R)→((R→P)→(S→P)) mp 21,16 qed
Using a truth table to verify Łukasiewicz' sole axiom would require consideration of 16 = 2⁴ cases, since it contains 4 distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: R is false and Q is false, R is false and Q is true, and R is true. However, because we are working within the formal system of logic (instead of outside it, informally), each case required much more effort.
See also
Deduction theorem
Peirce's law
Propositional calculus
Tautology (logic)
Truth table
Valuation (logic)
References
Further reading
Mendelson, Elliott (1997) Introduction to Mathematical Logic, 4th ed. London: Chapman & Hall.
Systems of formal logic
Propositional calculus
Articles containing proofs
Conditionals | Implicational propositional calculus | Mathematics | 4,661 |
26,610,144 | https://en.wikipedia.org/wiki/D-Shape | D-Shape is a large 3-dimensional printer that uses binder-jetting, a layer-by-layer printing process, to bind sand with inorganic seawater and magnesium-based binder in order to create stone-like objects. Invented by Enrico Dini, founder of Monolite UK Ltd, the first model of the D-Shape printer used epoxy resin, commonly used as an adhesive in the construction of skis, cars, and airplanes, as a binder. Dini patented this model in 2006. After experiencing problems with the epoxy, Dini changed the binder to the current magnesium-based one and patented the printer again in September 2008.
Technical description
The D-Shape 3-D printer sits in a 6 m by 6 m aluminum frame. The frame consists of a square base that moves upwards along four vertical beams during the printing process. Stepper motors on each beam control this movement, allowing precise positioning and holding at specific heights. A printer head, spanning the full 6-meter horizontal length of the base, contains 300 nozzles spaced 20 millimeters apart. An aluminum beam runs perpendicular to the printer head, connecting it to the base.
Process
Before printing, a 3-D model of the object to be printed must be created using CAD software, which allows a designer to create 3-D models on a computer. Once the model is finished, the CAD file is sent to the printer head. The printing process begins when a layer of sand, 5 to 10 mm thick, mixed with solid magnesium oxide (MgO), is evenly distributed by the printer head in the area enclosed by the frame. 3-D printing software slices the 3-D model into 2-D layers for printing. Then, starting with the bottom slice, the head moves across the base and deposits an inorganic binding liquid made up of a solution that includes magnesium chloride, at a resolution of . The binder and sand chemically react to form a sandstone material. It takes about 24 hours for the material to completely solidify. The material resembles, by composition, Sorel cement.
An electric piston moves the printer head perpendicular to the motion to fill gaps and ensure uniform binder application. D-Shape completes each layer with four forward and backward strokes. Stepper motors on the vertical beams adjust the base upwards after a layer is completed. The hollow framework above the printer head is refilled cyclically, distributing new sand into the frame to form the next layer. During the printing process, excess sand supports the solidifying material and can be reused for subsequent printings. The process continues uninterrupted until the desired structure is fully printed.
After the printer is done with this process, the final structure must be extruded from the sand. Workers use shovels to remove the excess sand and reveal the final product. The magnesium oxide in the sand chemically reacts with the binder, resulting in a mineral-like material with a microcrystalline structure. Compared to concrete, which has low resistance to tension and, as a result, needs iron reinforcement, D-Shape's structures have relatively high tension resistance and do not need iron reinforcement. The entire building process is reported to take a quarter of the time and a third to a half of the cost of building the same structure with traditional means using Portland cement, the material currently used in building construction.
Awards and achievements
NYC Waterfront Construction Competition
In the fall of 2012, D-Shape entered into the NYC Waterfront Construction Competition hosted by the New York City Economic Development Corporation (NYCEDC) in which competitors had to create a solution to help strengthen New York City's deteriorating piers and coastline structures. D-Shape's idea called, "Digital Concrete," was to take 3-D scans of each piece of pier or infrastructure and then print a support jacket for each specific piece. D-Shape won first place and received $50,000 for the idea, which is estimated to save New York City $2.9 billion.
Radiolaria
In 2009, D-Shape printed a large 3-D sculpture, Radiolaria. The sculpture was created by Italian architect Andrea Morgante and inspired by radiolarians, unicellular organisms with intricate mineral skeletons. The current version of the sculpture is only a 3 x 3 x 3 m scale model of the full-size Radiolaria that is planned to be put in a roundabout in Pontedera, Italy.
Recent developments
Currently, Jake Wake-Walker and Marc Webb are working on a documentary titled The Man Who Prints Houses, about Enrico Dini and his invention.
D-Shape is still in development. It has printed a trullo, but the printer is unable to print larger structures.
Lunar bases
Because of D-Shape's capabilities, the European Space Agency (ESA) has taken interest in using the printer to build Moon bases using lunar regolith. D-Shape has been successful in printing components for the lunar bases with a simulated regolith and has tested to see how the printer will work in the environment on the Moon.
References
External links
Discovery Channel Covering D-Shape https://www.youtube.com/watch?v=RYaRUVTwIVc
2006 introductions
Construction
Building technology | D-Shape | Engineering | 1,074 |
76,125,137 | https://en.wikipedia.org/wiki/2010%20California%20contrail%20incident | On the evening of Monday, November 8, 2010, an unusually conspicuous contrail appeared about 35 miles west of Los Angeles, California in the vicinity of Catalina Island. News footage of the event from a KCBS helicopter led to intense media coverage and speculation about a potential military missile launch, with many reporters and experts discussing the contrail and theorizing about its source.
Coverage continued for several days. The Pentagon released a statement on November 9 that it could not identify the source of the vapor trail, but both the North American Aerospace Defense Command (NORAD) and U.S. Northern Command stated it was not a foreign military launch. On November 9, the FAA also issued a statement that it had not approved any commercial space launches in the area for the prior day. On November 10, some 30 hours after the "mystery missile" first gained press attention, a Pentagon spokesman stated there was "no evidence to suggest" the plume was anything but an aircraft contrail. Some experts, however, held that the vapor trail could not be identified as an aircraft contrail with total certainty, and others stated it was a missile.
While some uncertainty over the vapor trail's origin persisted, the incident came to be seen as an example of news outlets being "captives of their sources" and irresponsibly pushing unverified theses; it was also interpreted as a lesson in the importance of exploring alternative hypotheses that fit available data. U.S. federal and military authorities were also criticized for giving a series of "inconclusive" answers about the event and allowing the issue, in the words of one commentator, to "fester for days" without a clear resolution.
Background
Contrails, short for "condensation trails," are linear cloud formations produced by aircraft exhaust or air pressure changes, usually at commercial cruising altitudes several miles above the ground. Contrails often only last for minutes, but can last for hours and expand to several miles across, coming to resemble naturally formed cirrus or altocumulus clouds. A contrail from an airplane flying towards an observer can create the illusion of a vertically moving object, as happened with a contrail off the coast of San Clemente, California on December 31, 2009, which some observers mistook for a missile launch. The San Clemente "New Year's Eve Contrail" was a horizontal trail at about 32,000 feet, or six miles, in altitude, that appeared to be oriented vertically due to the ground-level perspective from which it was observed and photographed. There are also historic examples of observers mistaking aircraft contrails for other phenomena, especially when contrails were still uncommon, including incidents in Galveston, Texas on October 27, 1951; in several areas of Iowa on April 15, 1950; and throughout the San Francisco Peninsula on January 11, 1950, when a B-50 Superfortress bomber flying at 35,000 feet caused many residents to call police stations to report a "burning plane," "meteors," and "flying saucers."
While a number of experts concluded the 2010 "mystery missile" was simply a common aircraft contrail, other experts held that while an airplane was the most likely source a missile launch could not be entirely ruled out based on existing evidence. San Nicolas Island, approximately 75 miles west of Los Angeles and the site of a U.S. military installation, has hosted a number of secret operations. The 2010 incident occurred not far from San Nicolas, leading some experts to speculate about a connection between the vapor trail and activity on the island. On Friday, November 5, 2010, several days prior to the contrail incident, Vandenberg Air Force Base launched a Delta II rocket carrying a Thales Alenia Space-Italia COSMO SkyMed satellite, but a sergeant from Vandenberg informed CBS News 8 that there had been no subsequent launches.
Incident and response
At around 5:00 p.m. Pacific Time on Monday, November 8, 2010, a helicopter from the KCBS news station recorded the vapor trail of what was described as a "missile" about 35 miles west of Los Angeles, California and somewhat north of Catalina Island. It was later characterized as a "large vertical column set against the bright orange sky at sunset" and was clearly visible from the Los Angeles area. Scott Diener, the news director at KCBS, stated that the experts interviewed by his station on "Tuesday night and Wednesday morning had leaned toward the missile theory" to explain the vapor plume. News anchors continued to cover the event and, by the end of Tuesday, it had attracted international press attention, being described as a "mystery missile" or "vapor trail reminiscent of a missile launch."
Several experts argued that the plume was simply a jet contrail, yet others disagreed, and U.S. government sources did not immediately reach a public conclusion about the vapor trail or its source. Pentagon spokesman Colonel David Lapan said that any missile test in the area was "implausible" due to the close proximity of the sighting to Los Angeles International Airport. Col. Lapan also stated that there were no known airspace closures or notifications to mariners at the time of the incident, as would be expected for a missile test. Robert Ellsworth, former U.S. Ambassador to NATO and former Deputy Secretary of Defense, stated to CBS News 8 that it did not appear to be a Tomahawk missile but, stressing it was only a theory, also remarked: "It could be a test firing of an intercontinental ballistic missile from an underwater submarine, to demonstrate mainly to Asia, that we can do that." Ellsworth further said of the vapor trail: "It's spectacular...It takes people's breath away," and described the projectile as "a big missile." Doug Richardson, editor of Jane's Missiles and Rockets, said, "It's a solid propellant missile, you can tell from the efflux [smoke] but they're not showing enough of the tape to show whether it's staging [jettisoning its sections]." Richardson theorized it might be a standard interceptor missile of the type used by U.S. Navy Aegis guided-missile cruisers.
The United States Northern Command and the North American Aerospace Defense Command released a statement in response to the sighting, saying, "At this time, we can confirm that there is no threat to our nation and from all indications this was not a launch by a foreign military." The Pentagon released a statement on November 9 declaring that it could not identify the source of the vapor trail. Col. Lapan stated that officials were "still trying to find out what the contrail off the coast of southern California was caused by," but that currently, "all indications are that it was not a DoD activity." The Pentagon determined that there was no "scheduled or inadvertent" missile launches off the coast of California on the night of November 8. Adm. Gary Roughead, Chief of Naval Operations, stated to The Washington Post that it "wasn't a Navy missile," yet declined to offer more detail.
The website ContrailScience.com produced a widely circulated report that explained how an airplane contrail moving directly toward a viewer has the appearance of rising vertically. The website referenced the December 31, 2009 San Clemente "New Year's Eve Contrail," a horizontal contrail which some observers thought was a vertical missile launch.
John E. Pike, the director of GlobalSecurity.org, stated that the flying object producing the vapor trail was not a missile because it was too slow, and described it as "obviously an airplane contrail." On Tuesday, Pike said that what the KCBS crew recorded was "clearly an airplane contrail. It's an optical illusion that looks like it's going up, whereas in reality it's going towards the camera. The tip of the contrail is moving far too slowly to be a rocket. When it's illuminated by the sunset, you can see hundreds of miles of it...all the way to the horizon." Light at the head of the contrail that was initially speculated to be an exhaust "flame" was later interpreted as sun reflecting from an aircraft exterior. As reported on November 10 by CNN, an unnamed official from the U.S. Northern Command stated the vapor trail may have been caused by a plane, and likened it to observers mistaking the New Year's Eve contrail for a missile. FAA spokesman Ian Gregor stated: "The FAA ran radar replays of a large area west of Los Angeles based on media reports of the possible missile launch at approximately 5 p.m. (PT) on Monday. The radar replays did not reveal any fast moving, unidentified targets in that area...The FAA did not receive reports...of unusual sightings from pilots who were flying in the area on Monday afternoon." Eventually, on November 10, about 30 hours after the contrail first gained press attention, a Pentagon spokesman stated "there is no evidence to suggest that this is anything else other than a condensation trail from an aircraft." U.S. Airways flight 808 from Honolulu, Hawaii—a Boeing 757-200—emerged as a candidate for the contrail's source. UPS flight 902—a McDonnell Douglas/Boeing MD-11—was also raised as a possibility.
Eventually, several news outlets came to report that the vapor trail was simply an airplane contrail or optical illusion, as various experts had argued. The public and news reaction to the event was characterized by CNN as the "mystery missile mania." Nonetheless, some experts continued to hold that the plume could not be conclusively identified. In a November 14, 2010 article, the New York Times quoted Theodore A. Postol, a former Pentagon science adviser and professor at the Massachusetts Institute of Technology, who stated that while he inclined to the jet explanation, he could not "rule out a missile launch." On November 16, Space.com published an article discussing an image taken by the NASA/NOAA GOES 11 satellite on November 8 that reportedly showed the "mystery" contrail visible as a "horizontal white streak." The NASA website itself also discussed the GOES 11 imagery in an article. Patrick Minnis, a contrail expert in the Science Directorate at the NASA Langley Research Center, said he first "assumed it was a missile" but after research concluded that an aircraft was the "most likely" source of the contrail.
Aftermath
In the November 14 New York Times article, several days after the event, the incident was cast as an example of how news outlets can be "captives of their sources" and irresponsibly push unproven theses; it was also interpreted as a lesson in the importance of exploring alternative hypotheses. Federal and military authorities in the United States also faced criticism in the aftermath of the event, particularly for the "inconclusive" nature of the answers provided to the public, and a perceived delay in resolving the issue.
Time magazine ranked the incident as number two on its "Top 10 Oddball News Stories" list for 2010, remarking how "within hours, the footage had been picked up by every major news network," but also observing that "as quickly as the mystery missile arrived on the scene...it was set aside." ABC News also mentioned the incident in a 2012 story about test missile launches from Fort Wingate and the White Sands Missile Range, recalling the 2010 "'mystery missile'...spotted off the coast of Southern California by a TV news helicopter." The Telegraph referred to the incident in a July 2017 article, stating how in "November 2010, The Pentagon was left baffled by what was reported to be a 'mystery missile launch' off the coast of California." Live Science also mentioned the event in a 2017 article.
Former Director of Israel Missile Defense, Uzi Rubin, cited the incident in a September 2014 briefing on Israeli Air Defense held in Washington, D.C. and broadcast by C-SPAN. Aviation Week Network reported on the briefing, describing how Rubin used "the November 2010 'mystery missile launch' seen from California" as an example of foreshortened perspective, to illustrate how observers can mistake the direction that a rocket is traveling.
See also
Contrail
Twilight phenomenon
Channel Islands (California)
Robert Ellsworth
Battle of Los Angeles
Notes
References
External links
Wall Street Journal video of the "Mystery Missile"
Photos sent to ABC News, taken by Richard Warren
Contrail Science page, with raw download of Warren’s ABC photos
Fox News coverage of the "mystery missile" November 10, 2010
"Missile Launched Off Calif. Coast" CBS video
"Mystery Missile, Not A Missile" CBS video
"Who fired the 'mystery missile?" Channel 4 video
"Mystery missile launch caught on camera off California coast" WMAR-2 ABC video
"Mystery Missile" News 5 Cleveland ABC video
Michio Kaku on ABC News: "Professor Explains Mystery Plume"
November 10, 2010 Wikinews article on the incident
2010 in aviation
2010 in California
Santa Catalina Island (California)
Atmospheric optical phenomena
November 2010 events in the United States
Aviation accidents and incidents in the United States in 2010
2010 in American politics
Los Angeles
Boeing 757
North American Aerospace Defense Command
US Airways Group
Channel Islands of California
Honolulu
Los Angeles International Airport | 2010 California contrail incident | Physics | 2,731 |
4,331,734 | https://en.wikipedia.org/wiki/Ozone%20monitor | An ozone monitor is electronic equipment that monitors for ozone concentrations in the air. The instrument may be used to monitor ozone values for industrial applications or to determine the amount of ambient ozone at ground level and determine whether these values violate National Ambient Air Quality Standards (NAAQS). Different types of ozone monitoring methods have been used throughout the decades, the two most notable and common methods being the Federal Reference Method and the Federal Equivalent Method.
Federal Reference Method
The Federal Reference Method (FRM) was the original method of measuring ozone concentration in the air, used throughout the United States in the 1970s and 1980s. It uses what is known as gas-phase ethylene chemiluminescence, or ET-CL. The ozone content is measured based on the reaction that occurs when the air around the monitor reacts with the ethylene reactant gas within the monitor. In 2015, the EPA added an additional format to the FRM using nitric oxide chemiluminescence, or NO-CL. It functions in a very similar manner to the ET-CL format except that it uses nitric oxide instead of ethylene gas. The FRM has, for the most part, been phased out because water vapor caused skewed results, and it has been replaced with the Federal Equivalent Method, which uses ultraviolet absorption. However, the FRM is still used occasionally, as the Federal Equivalent Method can be skewed by higher concentrations of other pollutants such as mercury, sulfur dioxide, carbon dioxide, and VOCs.
Federal Equivalent Method
The Federal Equivalent Method (FEM) relies on ultraviolet absorption; more precisely, the ozone molecule absorbs ultraviolet radiation. Most ozone monitors utilized in regulatory applications use ultraviolet absorption to accurately quantify ozone levels. An ozone monitor of this type operates by pulling an air sample from the atmosphere into the machine with an air pump. During one cycle, the ozone monitor takes one air sample through the air inlet and scrubs the ozone from the air; for the next cycle, an air sample bypasses the scrubber and the ozone value is calculated. The solenoid valve is electronically activated to shift the air flow either through the scrubber or to bypass it on a timed sequence. The difference between the two sampled values determines the actual ozone value at that time. The monitor may also have options to account for air pressure and air temperature to calculate the value of ozone.
Measurement
The concentration of ozone is determined using the Beer–Lambert law, which states that the absorption of light is proportional to the concentration of the absorbing species. For ozone, light at a wavelength of 254 nanometers, created by a mercury lamp, is shone through a specific length of tubing with reflective mirrors. A photodiode at the other end of the tube detects the changes in brightness of the light.
The onboard electronics process the values obtained and display the result on the screen, and can also output an electrical signal as a voltage or a 4–20 mA current that can be read by an electronic data logger. Other output options are an RS232 serial port, Ethernet, or internal data storage on flash memory.
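To make the two-cycle calculation concrete, here is a rough sketch in Python. The cross-section value is the commonly cited figure for ozone at 254 nm; the cell length, detector readings, and 4–20 mA scaling are illustrative assumptions, not specifications of any particular instrument:

```python
# Ozone concentration from a two-cycle UV-absorption measurement via the
# Beer-Lambert law, I = I0 * exp(-sigma * N * L).
import math

SIGMA_254NM = 1.147e-17  # ozone absorption cross-section at 254 nm, cm^2/molecule
R = 8.314                # gas constant, J/(mol K)
AVOGADRO = 6.022e23

def ozone_ppb(i_scrubbed, i_sample, path_cm, temp_k=298.0, pressure_pa=101325.0):
    """i_scrubbed: reading with ozone removed (the I0 cycle);
    i_sample: reading with ozone present; path_cm: absorption cell length."""
    n_ozone = math.log(i_scrubbed / i_sample) / (SIGMA_254NM * path_cm)  # molecules/cm^3
    n_air = pressure_pa / (R * temp_k) * AVOGADRO / 1e6                  # molecules/cm^3
    return 1e9 * n_ozone / n_air

def to_current_ma(ppb, full_scale_ppb=500.0):
    """Map a reading onto a 4-20 mA output loop, as described above."""
    return 4.0 + 16.0 * min(max(ppb / full_scale_ppb, 0.0), 1.0)

c = ozone_ppb(i_scrubbed=1.000, i_sample=0.998, path_cm=38.0)
print(f"{c:.1f} ppb -> {to_current_ma(c):.2f} mA")  # about 186.5 ppb -> 9.97 mA
```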
See also
Environmental science
References
Ozone
Measuring instruments | Ozone monitor | Chemistry,Technology,Engineering | 634 |
42,520,223 | https://en.wikipedia.org/wiki/Sednoid | A sednoid is a trans-Neptunian object with a large semi-major axis and a high perihelion, similar to the orbit of the dwarf planet Sedna. The consensus among astronomers is that there are only three objects that are known from this population: Sedna, 2012 VP113, and 541132 Leleākūhonua (2015 TG387). All three have perihelia greater than . These objects lie outside an apparently nearly empty gap in the Solar System and have no significant interaction with the planets. They are usually grouped with the detached objects. Some astronomers consider the sednoids to be Inner Oort Cloud (IOC) objects, though the inner Oort cloud, or Hills cloud, was originally predicted to lie beyond 2,000 AU, beyond the aphelia of the three known sednoids.
One attempt at a precise definition of sednoids is any body with a perihelion greater than 50 AU and a semi-major axis greater than 150 AU.
However, this definition also applies to several objects which have perihelia beyond 50 AU and semi-major axes over 700 AU. Despite this, these objects are thought not to belong to the sednoids, but rather to the same dynamical class as 474640 Alicanto and other similar detached objects.
With their high eccentricities (greater than 0.8), sednoids are distinguished from the high-perihelion objects with moderate eccentricities that are in a stable resonance with Neptune, such as 2004 XR190 ("Buffy").
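The working definition above reduces to a simple threshold test on orbital elements. The sketch below applies it to a few approximate published values, listed here only as illustrative test data:

```python
def is_sednoid(q_au, a_au, q_cut=50.0, a_cut=150.0):
    """Threshold test from the working definition above. Note the article's
    caveat: a few non-sednoid detached objects also pass this cut."""
    return q_au > q_cut and a_au > a_cut

# (perihelion q, semi-major axis a) in AU; approximate values for illustration
objects = {
    'Sedna': (76.2, 506.0),
    '2012 VP113': (80.4, 262.0),
    '541132 Leleakuhonua': (65.0, 1094.0),
    'Eris (for contrast)': (38.3, 67.9),
}
for name, (q, a) in objects.items():
    print(f'{name:22s} q={q:6.1f} AU  a={a:7.1f} AU  sednoid={is_sednoid(q, a)}')
```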
Unexplained orbits
The sednoids' orbits cannot be explained by perturbations from the giant planets, nor by interaction with the galactic tides. If they formed in their current locations, their orbits must originally have been circular; otherwise accretion (the coalescence of smaller bodies into larger ones) would not have been possible because the large relative velocities between planetesimals would have been too disruptive. Their present elliptical orbits can be explained by several hypotheses:
These objects could have had their orbits and perihelion distances "lifted" by the passage of a nearby star when the Sun was still embedded in its birth star cluster.
They could have been captured from around passing stars, most likely in the Sun's birth cluster.
Their orbits could have been disrupted by an as-yet-unknown planet-sized body beyond the Kuiper belt such as the hypothesized Planet Nine.
Their perihelion distances could have been "lifted" by a temporarily-present rogue planet in the early solar system.
Known members
The first three known sednoids, like all of the more extreme detached objects (objects with semi-major axes > 150 AU and perihelia > 30 AU; the orbit of Neptune), have a similar orientation (argument of perihelion) of ≈ 0°. This is not due to an observational bias and is unexpected, because interaction with the giant planets should have randomized their arguments of perihelion (ω), with precession periods between 40 Myr and 650 Myr and 1.5 Gyr for Sedna. This suggests that one or more undiscovered massive perturbers may exist in the outer Solar System. A super-Earth at 250 AU would cause these objects to librate around ω = 0° for billions of years. There are multiple possible configurations and a low-albedo super-Earth at that distance would have an apparent magnitude below the current all-sky-survey detection limits. This hypothetical super-Earth has been dubbed Planet Nine. Larger, more-distant perturbers would also be too faint to be detected.
27 known objects have a semi-major axis greater than 150 AU, a perihelion beyond Neptune, an argument of perihelion near 0°, and an observation arc of more than 1 year. Several further objects are near the perihelion limit of 50 AU, but are not considered sednoids.
On 1 October 2018, Leleākūhonua, then known as 2015 TG387, was announced with a perihelion of 65 AU and a semi-major axis of 1094 AU. With an aphelion over 2100 AU, this brings the object farther out than Sedna.
In late 2015, V774104 was announced at the Division for Planetary Science conference as a further candidate sednoid, but its observation arc was too short to know whether its perihelion was even outside Neptune's influence. The talk about V774104 was probably meant to refer to Leleākūhonua (2015 TG387), even though V774104 is the internal designation for a different, non-sednoid object.
Sednoids might constitute a proper dynamical class, but they may have a heterogeneous origin; the spectral slope of at least one of them is very different from that of Sedna.
Malena Rice and Gregory Laughlin applied a targeted shift-stacking search algorithm to analyze data from TESS sectors 18 and 19 looking for candidate outer Solar System objects. Their search recovered known objects like Sedna and produced 17 new outer Solar System body candidates located at geocentric distances in the range 80–200 AU, that need follow-up observations with ground-based telescope resources for confirmation. Early results from a survey with the William Herschel Telescope aimed at recovering these distant TNO candidates have failed to confirm two of them.
Theoretical population
Each of the proposed mechanisms for Sedna's extreme orbit would leave a distinct mark on the structure and dynamics of any wider population. If a trans-Neptunian planet were responsible, all such objects would share roughly the same perihelion (≈80 AU). If Sedna had been captured from another planetary system that rotated in the same direction as the Solar System, then all of its population would have orbits on relatively low inclinations and have semi-major axes ranging from 100 to 500 AU. If it rotated in the opposite direction, then two populations would form, one with low and one with high inclinations. The perturbations from passing stars would produce a wide variety of perihelia and inclinations, each dependent on the number and angle of such encounters.
Acquiring a larger sample of such objects would therefore help in determining which scenario is most likely. "I call Sedna a fossil record of the earliest Solar System", said Brown in 2006. "Eventually, when other fossil records are found, Sedna will help tell us how the Sun formed and the number of stars that were close to the Sun when it formed." A 2007–2008 survey by Brown, Rabinowitz and Schwamb attempted to locate another member of Sedna's hypothetical population. Although the survey was sensitive to movement out to 1,000 AU and discovered the likely dwarf planet Gonggong, it detected no new sednoids. Subsequent simulations incorporating the new data suggested about 40 Sedna-sized objects probably exist in this region, with the brightest being about Eris's magnitude (−1.0).
Following the discovery of Leleākūhonua, Sheppard et al. concluded that it implies a population of about 2 million Inner Oort Cloud objects larger than 40 km, with a total mass in the range of , about the mass of Pluto and several times the mass of the asteroid belt.
See also
Trans-Neptunian objects category
Extreme trans-Neptunian object
References
External links
New icy body hints at planet lurking beyond Pluto
Solar System | Sednoid | Physics,Astronomy | 1,541 |
12,257,271 | https://en.wikipedia.org/wiki/Elevator%3A2010 | Elevator:2010 was an inducement prize contest with the purpose of developing space elevator and space elevator-related technologies. Elevator:2010 organized annual competitions for climbers, ribbons and power-beaming systems, and was operated by a partnership between Spaceward Foundation and the NASA Centennial Challenges.
History
On March 23, 2005 NASA's Centennial Challenges program announced a partnership with the Spaceward Foundation regarding Elevator:2010, to raise the amounts of monetary prizes and to get more teams involved in the competitions. The partnership was not renewed after its initial 5-year term.
There were two (out of an intended seven) competitions of the NASA Centennial Challenges which fell under the Elevator:2010 banner: The Tether Challenge and the Beam Power Challenge. There were also the two original competitions.
Tether Challenge
This competition presented the challenge of constructing super-strong tethers, a crucial component of a space elevator. The 2005 contest was to award US$50,000 to the team which constructed the strongest tether, with contests in future years requiring that each winner outperform that of the previous year by 50%. No competing tether surpassed the commercial off-the-shelf baseline and the prize was increased to $200,000 in 2006.
Of the four teams competing, three were disqualified for not following length rules—one of these cases by a fraction of a millimeter. Ultimately, the 'House Tether' won against the remaining team. The 'House Tether' is composed of Zylon fiber and M77 adhesive. It was stronger than the machine used to test the tether itself: it began to fail at , forcing the test to be called off.
Beam Power Challenge
The Beam Power Challenge was a competition to build a wirelessly-powered ribbon-climbing robot. The contest involves having the robot raise a specified payload to a specific height within a limited period of time. The first competition in 2005 would have awarded , US$20,000, and US$10,000 to the three best-performing teams meeting the minimum benchmark of . However no team met the minimum standard in 2005.
In 2006 the prize for first place increased to $150,000 with the goal of climbing 50 meters in under 1 minute. It was held October 20–21, 2006 at the Las Cruces International Airport at the Wirefly X PRIZE Cup. 13 teams entered the competition. Only one team, University of Saskatchewan, was able to climb the tether in under 1 minute, reaching the top in .
The Challenge had $500,000 in prize money for the 2007 competition.
At the 2009 Challenge, on November 6, 2009, LaserMotive successfully used lasers to drive a device up a cable suspended from a helicopter. Energy is transmitted to the climber using a high-power infrared beam. LaserMotive's entry, which was the only climber to top the cable, reached an average speed of and earned a $900,000 prize. This marked both a performance record, and the first award of a cash prize at the Challenge.
LaserMotive won the Level 1 power beaming prize in 2009 with the achievement of climber speed over a sub-kilometer climb. The Level 2 power beaming prize, for a climb, remains available for future competitions.
Future competitions
After LaserMotive claimed the prize for the Level 1 power beaming prize in 2009, the Space Elevator games being conducted by Elevator:2010 planned to offer a prize purse for future competitions of , for both the Power Beaming (Climber) Competition and the Tether Strength Competition.
The Japan Space Elevator Association conducted climbing competitions in August 2013.
See also
KC Space Pirates
Launch loop
Lightcraft
Lunar space elevator
Non-rocket spacelaunch
Skyhook (structure)
Space elevator construction
Space elevator competitions
Space elevator economics
Space elevators in fiction
Space elevator safety
Space fountain
Space gun
Tether propulsion
References
External links
The Spaceward Foundation
Space elevator
Challenge awards
Awards established in 2005 | Elevator:2010 | Astronomy,Technology | 794 |
43,783,024 | https://en.wikipedia.org/wiki/Jay%20Frank%20Schamberg | Jay Frank Schamberg (November 6, 1870 – March 30, 1934) was a physician and prominent dermatologist/syphilogist in Philadelphia, PA during the first third of the twentieth century. He first became known as a strong advocate for smallpox vaccination prior to 1910. He had two diseases named for him, one of which continues to carry his eponym. During World War I, Schamberg's research laboratory successfully synthesized Salvarsan, the standard treatment for syphilis, which had previously only been available from Germany.
Early life
Schamberg was born to Gustave Schamberg and Emma Frank Schamberg in Philadelphia on 6 November 1870. His father was born in Herleshausen in what is now the Hesse State in Germany and emigrated to the United States around 1848. He was a meat broker. Jay had 3 brothers (Lewis, Morrie and Herbert) and two sisters (Zella and Eta). Schamberg attended Central High School in Philadelphia and received his medical degree from the University of Pennsylvania in 1892. In 1905 he married May Ida Bamberger. They had two children, Elizabeth, mother of Pulitzer Prize winning Journalist J. Anthony Lukas, and Ira Leo.
Professional work
In the first decade of the 20th Century two diseases carried Schamberg's name. Today only one does. In 1901, Schamberg described a new eruptive skin disorder, which was prevalent in the spring and fall. The condition recurred every year in and around Philadelphia, but its cause was unknown. Until 1910 the condition was known as Schamberg's Disease. In 1909, the disease struck the crew of a yacht in the Philadelphia harbor. Because of the prominence of the yacht's owner, Mr. P.A.B. Widener, Dr. Joseph Goldberger of the United States Public Health Service was sent to Philadelphia to work with Schamberg on ascertaining the cause of the eruption. Schamberg and Goldberger demonstrated that the eruption was caused by mites found in the straw of the crew members' bunks. Subsequently the eponymous designation of the disease was dropped and it was referred to as "straw mattress disease" or "grain itch". It was described by Schamberg as acaro-dermatitis urticarioides.
Also in 1901, Schamberg described a peculiar progressive pigmentary dermatosis caused by extravasation of blood from the capillaries in the skin. It is most common in the lower extremities, but its underlying etiology has not been firmly established. This condition continues to be called Schamberg's Disease.
Salvarsan and World War I
Widener's generosity allowed Schamberg to start the Research Institute of Cutaneous Medicine in 1912. At the time of the outbreak of World War I, the only effective treatments for syphilis were based on arsphenamine, which was called by its brand name, Salvarsan. The only source of the drug was the German company Hoechst AG, which held the patent. Early in the war the Research Institute developed a process for the manufacture of arsphenamine. By early 1916, the supply of arsphenamine from Germany was virtually exhausted. As early as March 1916, the Research Institute was supplying arsphenamine to all 48 states and to the military. In November 1917, the FTC abrogated Hoechst's patent and licensed Schamberg's laboratory as the sole manufacturer in the United States. Following the war, Hoechst accused Schamberg and his associates of illegal patent infringement. Herman Metz represented Hoechst in this matter and a settlement was eventually reached. In March and April 1922, the Senatorial Commission on Dye Stuffs heard testimony from Schamberg and Metz.
Smallpox vaccination
Schamberg was a life-long champion of smallpox vaccination. In 1910, he authored an article in Ladies Home Journal entitled "What Has Vaccination Really Done". This was a rebuttal to an article by John Pitcairn representing the views of the Anti-Vaccination League of America. Eighteen years later, in 1928, George Bernard Shaw, who was a well known anti-vaccinationist, responded to the article in Ladies' Home Journal by impugning Schamberg's credentials and his assertion that vaccination saves lives. The tone of the letter is captured in its last sentence: "In short, my dear Doctor, your 4000 cases only prove that clinical experience is not enough, and that as to the rest of you (sic) are 100 years out of date."
Schamberg held Professorships in dermatology at the University of Pennsylvania, Jefferson Medical, and Temple University. He was president of the American Dermatologic Association in 1920-22, chairman of the AMA's Section Dermatology and Syphilology in 1928-29 and an editor of the Archives of Dermatology and Syphilology from 1927 to 1934. He died of cardiovascular disease on March 30, 1934.
Selected publications
Vaccination and Its Relation to Animal Experimentation (1911)
Smallpox and Vaccination (1914)
References
External links
1870 births
1934 deaths
American dermatologists
Physicians from Philadelphia
University of Pennsylvania alumni
Vaccination advocates
Vivisection activists | Jay Frank Schamberg | Chemistry,Biology | 1,087 |
2,806,550 | https://en.wikipedia.org/wiki/SK%20Hynix | SK Hynix Inc. () is a South Korean supplier of dynamic random-access memory (DRAM) chips and flash memory chips. SK Hynix is one of the world's largest semiconductor vendors.
Founded as Hyundai Electronics in 1983, SK Hynix was integrated into the SK Group in 2012 following a series of mergers, acquisitions, and restructuring efforts. After being incorporated into the SK Group, SK Hynix became a major affiliate alongside SK Innovation and SK Telecom.
The company's major customers include Microsoft, Apple, Asus, Dell, MSI, HP Inc., and Hewlett Packard Enterprise (formerly Hewlett-Packard). Other products that use Hynix memory include DVD players, cellular phones, set-top boxes, personal digital assistants, networking equipment, and hard disk drives.
History
Beginning
Hyundai Electronics
Hyundai Electronics was founded in 1983 by Chung Ju-yung, the founder of Hyundai Group. In the early 1980s, Chung recognized the growing importance of electronics in the automobile industry, one of Hyundai's primary business areas. He saw the potential for Hyundai to expand beyond its core operations in automobiles, shipbuilding, and heavy industries and wanted to establish a presence in the promising electronics industry. The company's primary focus was on semiconductor production and industrial electronics.
Hyundai had to pay a very high entry price to set up an efficient production system and to stabilize the yield rate compared to its rival Samsung, which at least had prior experience in semiconductor manufacturing. Hyundai's decision to produce SRAMs was later proven to be a mistake, as the technological sophistication of SRAMs made it difficult for Hyundai to achieve a satisfactory yield rate. In 1985, Hyundai altered its strategy for DRAM manufacturing by subcontracting from foreign firms and importing their chip designs, as it had lost time developing its own chips. Hyundai's DRAM chip, produced by importing Vitelic Corporation's design and technology, again failed in mass production due to a low yield rate.
Hyundai's approach of manufacturing memory chips as a foundry for foreign firms under OEM agreements was successful. The OEM agreements with General Instruments and Texas Instruments were helpful to Hyundai, which was facing technological and financial difficulties. By 1992, Hyundai had become the world's ninth-largest DRAM manufacturer, and by 1995, it ranked among the world's top 20 semiconductor manufacturing companies. In 1996, Hyundai acquired Maxtor, a U.S.-based disk-drive manufacturer.
LG Semicon
GoldStar, which later became LG Electronics, entered the semiconductor business by acquiring a small company from Taihan Electric Wire in 1979. The company was subsequently renamed GoldStar Semiconductor. LG Semicon was established as Goldstar Electron in 1983 by merging the semiconductor operations of Goldstar Electronics and Goldstar Semiconductors. In 1990, Goldstar Electron commenced operations at Cheongju Plant I, followed by the completion of Cheongju Plant II in 1994. The company underwent a name change to LG Semicon in 1995. LG Semicon operated from three sites, including Seoul, Cheongju, and Gumi.
Merger
During the 1997 Asian financial crisis, the South Korean government initiated the restructuring of the nation's five major conglomerates, including their semiconductor businesses. Among five chaebols, Samsung, LG, and Hyundai were engaged in the semiconductor business. Samsung was exempt from the restructuring due to its competitive position in the global market. However, LG and Hyundai were pressured by the government to merge, as both companies faced significant losses during the semiconductor recession of early 1996. In 1998, Hyundai Electronics acquired LG Semicon for US$2.1 billion, positioning itself in direct competition with Micron Technology. Subsequently, LG Semicon was rebranded as Hyundai Semiconductor and later merged with Hyundai Electronics.
Hynix
Although the South Korean government aimed to merge the two companies to alleviate the supply glut in the global market, competition in the semiconductor industry had intensified. Hyundai faced near collapse during the chip industry's downturn in 2001, when global memory chip prices dropped by 80 percent, resulting in a 5 trillion won annual loss for the company. Creditor banks, many of them under government control at the time, intervened to provide assistance.
In 2001, Hyundai Electronics rebranded as Hynix Semiconductor, a portmanteau of "high" and "electronics". Alongside this change, Hynix began selling or spinning off business units to recover from a cash squeeze. Hynix separated several business units, including Hyundai Curitel, a mobile phone manufacturer; Hyundai SysComm, a CDMA mobile communication chip maker; Hyundai Autonet, a car navigation system producer; ImageQuest, a flat panel display company; and its TFT-LCD unit, among others. The divestiture was part of a bailout plan requested by the major creditor, Korea Development Bank, to provide fresh funds to the insolvent semiconductor maker.
In 2003, Hyundai Group affiliates, including Hyundai Merchant Marine, Hyundai Heavy Industries, Hyundai Elevator, and Chung Mong-hun, the chairman of Hyundai Asan, consented to forfeit their voting rights and sell their stakes in Hynix. Hynix was then formally spun-off from the Hyundai Group in August 2003.
SK Hynix
The Hynix creditors, including Korea Exchange Bank, Woori Bank, Shinhan Bank and Korea Finance Corporation, attempted to sell their stake in Hynix several times but failed. Korean companies such as Hyosung, Dongbu CNI, and former stakeholders, including Hyundai Heavy Industries and LG, were considered potential bidders but were either denied or withdrew from the bidding. In July 2011, SK Telecom, the nation's largest telecommunication company, and STX Group officially entered the bid. STX dropped its deal in September 2011, leaving SK Telecom as the sole bidder. In the end, SK acquired Hynix for US$3 billion in February 2012. As Hynix was incorporated into SK Group, its name was changed to SK Hynix.
In 2021, Hynix acquired Intel's NAND business for $9 billion, resulting in the establishment of Solidigm.
On September 26, 2024, SK Hynix said it had begun mass production of 12-layer high bandwidth memory (HBM) chips, the first in the world.
Corporate governance
As of December 2023
Operations
SK Hynix has production facilities in Icheon and Cheongju, South Korea, and in Wuxi, Chongqing and Dalian, China.
Products
Hynix produces a variety of semiconductor memories, including:
Computing memory
Consumer and network memory
Graphics memory
Mobile memory
NAND flash memory
CMOS image sensors
Solid-state drives (SSDs)
High Bandwidth Memory: SK Hynix supplies high-bandwidth memory (HBM) chips that are used in AI. The company also supplies the HBM3E, a fifth-generation HBM, to Nvidia.
Logo
See also
List of semiconductor fabrication plants
Semiconductor industry in South Korea
References
External links
Companies listed on the Korea Exchange
Companies in the KOSPI 200
Companies in the S&P Asia 50
Computer companies of South Korea
Computer hardware companies
Computer memory companies
Electronics companies established in 1983
Electronics companies of South Korea
Hyundai Group
Price fixing convictions
Semiconductor companies of South Korea
Hynix
South Korean brands
South Korean companies established in 1983
Technology companies of South Korea | SK Hynix | Technology | 1,510 |
1,854,369 | https://en.wikipedia.org/wiki/Hydraulic%20head | Hydraulic head or piezometric head is a specific measurement of liquid pressure above a vertical datum.
It is usually measured as a liquid surface elevation, expressed in units of length, at the entrance (or bottom) of a piezometer. In an aquifer, it can be calculated from the depth to water in a piezometric well (a specialized water well), and given information of the piezometer's elevation and screen depth. Hydraulic head can similarly be measured in a column of water using a standpipe piezometer by measuring the height of the water surface in the tube relative to a common datum. The hydraulic head can be used to determine a hydraulic gradient between two or more points.
Definition
In fluid dynamics, head is a concept that relates the energy in an incompressible fluid to the height of an equivalent static column of that fluid. From Bernoulli's principle, the total energy at a given point in a fluid is the kinetic energy associated with the speed of flow of the fluid, plus energy from static pressure in the fluid, plus energy from the height of the fluid relative to an arbitrary datum. Head is expressed in units of distance such as meters or feet. The force per unit volume on a fluid in a gravitational field is equal to ρg where ρ is the density of the fluid, and g is the gravitational acceleration. On Earth, additional height of fresh water adds a static pressure of about 9.8 kPa per meter (0.098 bar/m) or 0.433 psi per foot of water column height.
The static head of a pump is the maximum height (pressure) it can deliver. The capability of the pump at a certain RPM can be read from its Q-H curve (flow vs. height).
Head is useful in specifying centrifugal pumps because their pumping characteristics tend to be independent of the fluid's density.
There are generally four types of head:
Velocity head is due to the bulk motion (kinetic energy) of a fluid: h_v = v²/(2g), where v is the flow speed. Note that ρv²/2 is equal to the dynamic pressure for irrotational flow.
Elevation head is due to the fluid's weight, the gravitational force acting on a column of fluid. The elevation head is simply the elevation z of the fluid above an arbitrarily designated zero point: h_z = z
Pressure head is due to the static pressure, the internal molecular motion of a fluid that exerts a force on its container. It is equal to the pressure divided by ρg, the force per unit volume of the fluid in a gravitational field: ψ = p/(ρg)
Resistance head (or friction head or head loss) is due to the frictional forces acting against a fluid's motion by the container. For a continuous medium, this is described by Darcy's law, which relates the volume flow rate q to the gradient of the hydraulic head through the hydraulic conductivity K:
q = −K·(dh/dl)
while in a piped system head losses are described by the Hagen–Poiseuille equation and Bernoulli's equation.
Components
After free falling through a height h in a vacuum from an initial velocity of 0, a mass will have reached a speed
v = √(2gh)
where g is the acceleration due to gravity. Rearranged as a head:
h_v = v²/(2g)
The term v²/(2g) is called the velocity head, expressed as a length measurement. In a flowing fluid, it represents the energy of the fluid due to its bulk motion.
The total hydraulic head of a fluid is composed of pressure head and elevation head. The pressure head is the equivalent gauge pressure of a column of water at the base of the piezometer, and the elevation head is the relative potential energy in terms of an elevation. The head equation, a simplified form of the Bernoulli principle for incompressible fluids, can be expressed as:
h = ψ + z
where
h is the hydraulic head (length, in m or ft), also known as the piezometric head,
ψ is the pressure head, in terms of the elevation difference of the water column relative to the piezometer bottom (length, in m or ft), and
z is the elevation at the piezometer bottom (length, in m or ft)
In an example with a 400 m deep piezometer, with an elevation of 1000 m, and a depth to water of 100 m: z = 600 m, ψ = 300 m, and h = 900 m.
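A minimal Python sketch of this calculation (the function and variable names are illustrative only, not from any standard library):

```python
def hydraulic_head(surface_elev_m, piezometer_depth_m, depth_to_water_m):
    """Hydraulic head h, pressure head psi, and elevation head z (meters)."""
    z = surface_elev_m - piezometer_depth_m   # elevation head at the piezometer bottom
    h = surface_elev_m - depth_to_water_m     # hydraulic head = water surface elevation
    psi = h - z                               # pressure head, since h = psi + z
    return h, psi, z

# The worked example above: a 400 m deep piezometer at 1000 m elevation,
# with a depth to water of 100 m.
print(hydraulic_head(1000.0, 400.0, 100.0))  # (900.0, 300.0, 600.0)
```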
The pressure head can be expressed as:
ψ = P/γ = P/(ρg)
where
P is the gauge pressure (force per unit area, often Pa or psi),
γ is the unit weight of the liquid (force per unit volume, typically N·m−3 or lbf/ft3),
ρ is the density of the liquid (mass per unit volume, frequently kg·m−3), and
g is the gravitational acceleration (velocity change per unit time, often m·s−2)
Fresh water head
The pressure head is dependent on the density of water, which can vary depending on both the temperature and chemical composition (salinity, in particular). This means that the hydraulic head calculation is dependent on the density of the water within the piezometer. If one or more hydraulic head measurements are to be compared, they need to be standardized, usually to their fresh water head, which can be calculated as:
h_fw = ψ·(ρ/ρ_fw) + z
where
h_fw is the fresh water head (length, measured in m or ft), and
ρ_fw is the density of fresh water (mass per unit volume, typically in kg·m−3)
Hydraulic gradient
The hydraulic gradient is a vector gradient between two or more hydraulic head measurements over the length of the flow path. For groundwater, it is also called the Darcy slope, since it determines the quantity of a Darcy flux or discharge. It also has applications in open-channel flow, where it is also known as stream gradient and can be used to determine whether a reach is gaining or losing energy. A dimensionless hydraulic gradient can be calculated between two points with known head values as:
i = dh/dl
where
i is the hydraulic gradient (dimensionless),
dh is the difference between two hydraulic heads (length, usually in m or ft), and
dl is the flow path length between the two piezometers (length, usually in m or ft)
The hydraulic gradient can be expressed in vector notation, using the del operator. This requires a hydraulic head field, which can be practically obtained only from numerical models, such as MODFLOW for groundwater or standard step or HEC-RAS for open channels. In Cartesian coordinates, this can be expressed as:
∇h = (∂h/∂x, ∂h/∂y, ∂h/∂z)
This vector describes the direction of the groundwater flow, where negative values indicate flow along a dimension, and zero indicates 'no flow'. As with any other example in physics, energy must flow from high to low, which is why the flow is in the direction of the negative gradient. This vector can be used in conjunction with Darcy's law and a tensor of hydraulic conductivity to determine the flux of water in three dimensions.
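The gradient and the associated Darcy flux are simple to compute; a minimal sketch in one dimension, where the head values and the hydraulic conductivity are placeholder numbers, not from any real aquifer:

```python
def hydraulic_gradient(h1_m, h2_m, dl_m):
    """Dimensionless hydraulic gradient i = dh/dl between two piezometers."""
    return (h2_m - h1_m) / dl_m

def darcy_flux(K_m_per_s, h1_m, h2_m, dl_m):
    """One-dimensional Darcy flux q = -K * i; flow runs down the head gradient."""
    return -K_m_per_s * hydraulic_gradient(h1_m, h2_m, dl_m)

# Two wells 500 m apart with heads of 900 m and 898 m, and an assumed
# hydraulic conductivity K = 1e-4 m/s (placeholder value):
print(hydraulic_gradient(900.0, 898.0, 500.0))  # -0.004
print(darcy_flux(1e-4, 900.0, 898.0, 500.0))    # 4e-07 m/s, toward the lower head
```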
In groundwater
The distribution of hydraulic head through an aquifer determines where groundwater will flow. In a hydrostatic example (first figure), where the hydraulic head is constant, there is no flow. However, if there is a difference in hydraulic head from the top to bottom due to draining from the bottom (second figure), the water will flow downward, due to the difference in head, also called the hydraulic gradient.
Atmospheric pressure
Even though it is convention to use gauge pressure in the calculation of hydraulic head, it is more correct to use absolute pressure (gauge pressure + atmospheric pressure), since this is truly what drives groundwater flow. Often detailed observations of barometric pressure are not available at each well through time, so this is often disregarded (contributing to large errors at locations where hydraulic gradients are low or the angle between wells is acute.)
The effects of changes in atmospheric pressure upon water levels observed in wells have been known for many years. The effect is a direct one: an increase in atmospheric pressure is an increase in load on the water in the aquifer, which increases the depth to water (lowers the water level elevation). Pascal first qualitatively observed these effects in the 17th century, and they were more rigorously described by the soil physicist Edgar Buckingham (working for the United States Department of Agriculture (USDA)) using air flow models in 1907.
Head loss
In any real moving fluid, energy is dissipated due to friction; turbulence dissipates even more energy for high Reynolds number flows. This dissipation, called head loss, is divided into two main categories, "major losses" associated with energy loss per length of pipe, and "minor losses" associated with bends, fittings, valves, etc. The most common equation used to calculate major head losses is the Darcy–Weisbach equation. Older, more empirical approaches are the Hazen–Williams equation and the Prony equation.
For relatively short pipe systems with a relatively large number of bends and fittings, minor losses can easily exceed major losses. In design, minor losses are usually estimated from tables using coefficients, or by a simpler and less accurate reduction of minor losses to an equivalent length of pipe, a method often used for shortcut calculations of pressure drop in pneumatic conveying lines.
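As a sketch, the Darcy–Weisbach equation for major losses, h_f = f·(L/D)·v²/(2g), translates directly into code; the friction factor f is assumed to be known already (for example, read from a Moody chart), and the numbers are placeholders:

```python
def darcy_weisbach_head_loss(f, length_m, diameter_m, velocity_m_s, g=9.81):
    """Major head loss h_f = f * (L/D) * v^2 / (2g), in meters of fluid."""
    return f * (length_m / diameter_m) * velocity_m_s**2 / (2.0 * g)

# Example: 100 m of 0.1 m diameter pipe at 2 m/s with an assumed f = 0.02:
print(darcy_weisbach_head_loss(0.02, 100.0, 0.1, 2.0))  # ~4.08 m
```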
See also
Borda–Carnot equation
Dynamic pressure
Minor losses in pipe flow
Total dynamic head
Stage (hydrology)
Head (hydrology)
Hydraulic accumulator
Notes
References
Bear, J. 1972. Dynamics of Fluids in Porous Media, Dover. .
for other references which discuss hydraulic head in the context of hydrogeology, see that page's further reading section
Aquifers
Water
Hydrology
Fluid dynamics
Water wells
Pressure | Hydraulic head | Physics,Chemistry,Engineering,Environmental_science | 1,924 |
62,696,810 | https://en.wikipedia.org/wiki/Pleroma%20%28software%29 | Pleroma is a free and open-source microblogging social networking service. Unlike popular microblogging services such as Twitter or Weibo, Pleroma can be self-hosted and operated by anyone with a server and a web domain, a combination commonly referred to as an instance. Instance administrators can manage their own code of conduct, terms of service, and content moderation policies, allowing users to have more control over the content they view as well as their experience. It was named after the religious concept of pleroma, or the totality of divine powers.
The software also implements the ActivityPub protocol, which allows users to communicate and interact with content from other Pleroma instances or any server that is running software that supports ActivityPub (such as Mastodon, Misskey, Pixelfed, etc.), a decentralized network commonly referred to as the Fediverse.
As of July 2024, over 138,000 user accounts (1.3% of the total number of fediverse users) have been found on over a thousand Pleroma instances.
History
In 2016, the Pleroma project was created by a German developer under the pseudonym "lain". It was originally designed as an alternative user interface for GNU social with many similarities to Qvitter, a popular frontend at the time which resembled an early Twitter user interface. The frontend was written with the Vue.js JavaScript framework.
As development of the frontend continued, the developers came to see many disadvantages in GNU social's design of implementing features through plugins, as well as issues with its codebase and its use of PHP, which led to the development of a backend to replace GNU social. The first commit to the repository hosting the Pleroma backend was made on March 17, 2017.
Releases
On February 22, 2019, the first stable release of the Pleroma backend, 0.9.9, was released. The backend includes the Pleroma frontend as the main user interface, federation of user content using OStatus and ActivityPub and support for the GNU social and Mastodon client APIs. The backend was built using the Elixir programming language and the Phoenix web framework, and uses PostgreSQL for its database.
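Because the backend implements the Mastodon client API, generic Mastodon tooling can talk to a Pleroma server. A minimal Python sketch using the standard /api/v1/instance endpoint (the hostname is a placeholder, and the exact response fields can vary between versions):

```python
import json
import urllib.request

# "pleroma.example" is a placeholder hostname, not a real instance.
with urllib.request.urlopen("https://pleroma.example/api/v1/instance") as resp:
    info = json.load(resp)

# Pleroma reports a Mastodon-compatible version string, typically of the
# form "2.7.x (compatible; Pleroma ...)".
print(info.get("title"), info.get("version"))
```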
On June 28, 2019, Pleroma 1.0 was released. This release adds the ability to create polls, report content and schedule posts to be posted at a later date. A new website containing documentation for users and administrators was also launched.
On March 8, 2020, Pleroma 2.0 was released. This release drops support for the OStatus protocol due to a lack of usage and active maintenance, introduces a new user interface for administration and adds post reactions using Unicode emoji.
On August 28, 2020, Pleroma 2.1 was released. This release includes a federated instant messaging system based on ActivityPub, an alternative to the direct messages system used by other software such as Mastodon.
On October 29, 2023, Pleroma 2.6 was released. This release implements quoting posts as well as the ability to use custom emoji for post reactions.
Pleroma was originally developed with its frontend and backend releasing new versions in sync, but starting with Pleroma 2.6.1 the policy was discontinued.
On August 1, 2024, Pleroma 2.7 was released, adding support for uploading files via IPFS, bookmark categorization, improved theming and various quality-of-life improvements.
Forks
Akkoma
Akkoma is a fork of Pleroma that started development in 2022. The fork was made to support a faster pace of development, as well as to support more user customization.
Features
Pleroma has been described as being more lightweight than alternatives like Mastodon, due to being less intensive on system resources and requiring fewer software dependencies.
Pleroma's default post length limit is 5000 characters, and can be configured by instance administrators. It is capable of uploading and sharing multimedia, as well as polls. Posts are created by default using plaintext, but can also be translated from a variety of markup languages such as HTML, BBCode and Markdown.
While Pleroma comes with its own frontend by default, instance administrators can install additional user interfaces, such as a port of Mastodon's advanced mode (similar to TweetDeck) as well as an interface for the Gopher protocol.
Pleroma includes a system known as the Message Rewrite Facility (or MRF), which allows administrators of a Pleroma instance to modify the content that it sends and receives. By default, Pleroma provides a selection of policies, including a basic moderation policy that can create restrictions on federation with other instances. Custom MRF policies can be written using any language based on the BEAM virtual machine. This system has been used as a method to study how content moderation works in the Fediverse and the challenges that it faces, since the list of active policies is publicly shown by default through both the API and the frontend.
Adoption
The Debian community hosts their own microblogging service using Pleroma, as part of a project to establish a suite of social networking services for maintainers.
Pleroma has received funding through the NLNet Foundation to aid development.
See also
ActivityPub
Comparison of microblogging services
Comparison of software and protocols for distributed social networking
Fediverse
GNU social, a service that Pleroma's user interface previously supported
Mastodon
References
External links
Presentation of Pleroma at ElixirConf 2019
2016 software
Software that federates via ActivityPub
Free and open-source software
Free software websites
Microblogging software
Social media
Social networking services
Software using the GNU Affero General Public License
Web applications | Pleroma (software) | Technology | 1,210 |
1,118,198 | https://en.wikipedia.org/wiki/Garmin | Garmin Ltd. (shortened to Garmin, stylized as GARMIN, and formerly known as ProNav) is an American, Swiss-domiciled multinational technology company founded in 1989 by Gary Burrell and Min Kao in Lenexa, Kansas, United States, with operational headquarters in Olathe, Kansas. Since 2010, the company is legally incorporated in Schaffhausen, Switzerland.
The company specializes in GNSS technology for automotive, aviation, marine, outdoor, and sport activities. Due to their development in wearable technology, they have also been competing with activity tracker and smartwatch consumer developers such as Fitbit and Apple.
History
Founding and initial growth: 1989 to 1999
In 1983, Gary Burrell recruited Min H. Kao from the defense contractor Magnavox while working for the former King Radio. They founded Garmin in 1989 in Lenexa, Kansas, as "ProNav". ProNav's first product was a GPS unit for boaters called GPS 100. It debuted at the 1990 International Marine Technology Exposition, where it garnered 5,000 orders. A short time later, in 1991, the company opened a manufacturing facility in Taiwan.
The company was later renamed "Garmin", a portmanteau of its two founders, Gary Burrell and Min H. Kao. In 1991, the U.S. Army became their first customer.
In 1994, Garmin released GPS 155, the first IFR-certified aviation navigation system. By 1995, Garmin's sales had reached $102 million, and it had achieved a profit of $23 million. In 1996, the company headquarters moved to Olathe, Kansas. A year later, Garmin sold its one millionth unit.
In 1998, Garmin released the GNS 430 and StreetPilot. GNS 430 was an integrated avionics system that served as both GPS navigation receiver and communications transceiver. StreetPilot was Garmin’s first portable navigation system for cars.
By 1999, sales had reached $232.6 million with a profit of $64 million. Garmin reported a 2006 total revenue of $1.77 billion, up 73% from $1.03 billion in 2005.
GPS growth and additional markets: 2000 to 2018
On December 8, 2000, Garmin began public trading on NASDAQ with a stock price of $14 per share. Twenty-one years later, on December 7, 2021, the company transferred its listing to the New York Stock Exchange.
By 2000, Garmin had sold three million GNSS devices, and was producing 50 different models. Its products were sold in 100 countries and carried by 2,500 independent distributors. As of August 22, 2000, the company held 35 patents on GNSS technology. By the end of June 2000, the company employed 1,205 people: 541 in the United States, 635 in Taiwan, and 29 in the United Kingdom.
In 2003, Garmin announced its G1000 integrated cockpit system (though it was not available until 2004 when it received FAA certification). It was first adopted by aircraft makers including Cessna and Diamond Aircraft, and later would be installed as forward-fit and retrofit applications in regional airliners, business jets and turboprops, light airplanes, helicopters, and military and government aircraft.
That same year, Garmin launched Forerunner 201, a fitness smartwatch for runners that was the first wrist-based GPS trainer.
In 2005, Garmin launched nüvi, its first compact car navigator. In 2006, Garmin released its first GPS-enabled cycling computer, Edge. That same year, the company introduced a new corporate logo, and opened its first retail store, located on Michigan Avenue in Chicago, Illinois.
In 2007, the company introduced its first touchscreen marine chartplotters, the GPSMAP 5000 series for international boaters.
In 2011, Garmin released its first GPS watch for the sport of golfing: the Approach S1. A year later in 2012, the company released its fēnix adventure smartwatch, designed for outdoor sports and recreation.
2014 saw the release of Vivofit, Garmin’s first wearable fitness band with a replaceable battery with over one year of battery life. Vivofit tracks a wearer’s steps and learns an individual’s activity level in order to adjust daily goals. 2014 was also the year that Garmin acquired the New Zealand company Fusion Electronics Limited and its subsidiaries. After the acquisition, the company, which sold integrated marine audio products and accessories, became known as Garmin New Zealand Ltd.
In 2015, Garmin launched Panoptix, the first product to provide real-time live sonar for anglers.
A year later, in 2016, Garmin acquired DeLorme, which gave Garmin DeLorme’s inReach satellite communication technology with interactive SOS messaging. The inReach Satellite Communicator had been the first personal satellite communication device equipped for two-way text messaging using satellites. In 2017, Garmin released their first devices made with inReach: the inReach SE+ and Explorer+.
In 2017, Garmin released its first dive computer with surface GPS, the Descent Mk1. The Mk1 also provides an altimeter and HR monitor, and uses Garmin’s fenix 5X platform for everyday activity tracking.
Recent market expansion: 2018 to present
In 2018, Garmin improved its Panoptix technology by combining it with Livescope. The new Panoptix Livescope provided both scanning or imaging sonar as well as real-time, live sonar.
In April 2018, Garmin launched Connect IQ 3.0 along with new apps—MySwim Pro, Yelp, Trailforks and iHeartRadio. In May 2018, Garmin partnered with the University of Kansas Medical Center to tackle sleep apnea and atrial fibrillation.
In 2019, Garmin announced the release of new technologies in several fields. In its Automotive segment, there was an all-terrain, all-in-one GPS, the Garmin Overlander; for the Marine segment, a freshwater trolling motor, the Force; and under Garmin’s Aviation segment, an emergency autonomous landing system for aircraft, Garmin Autoland.
In 2020, Garmin Autoland won the Robert J. Collier Trophy for outstanding contributions to aviation and aerospace.
In 2022, Garmin released a new health monitoring device with its first smart blood pressure monitor, Index BPM. Index BPM is FDA-cleared, and can be used by up to 16 different people. The following year, Garmin introduced the FDA-cleared ECG app, allowing users to record heart rhythm and check for atrial fibrillation.
In 2023, Garmin announced a two-year study with the U.S. Space Force. Under the study, over 6000 Garmin Forerunner 55 and Instinct 2 Solar watches were given to members of Space Force (known as Guardians). The study aims to answer the question of whether or not regular active fitness testing can be replaced by fitness assessments made with data from the smartwatches. In addition to their health and wellness features, the watches were chosen because they have the ability to disable GPS functionality, should there be a need for higher military privacy and security. That same year, the company announced that Garmin fenix 7 watches would be used by crew members during the Polaris Dawn space mission to monitor health stats and vitals.
In 2024, the Independent Boat Builders, Inc. (IBBI) selected Garmin as its exclusive marine electronics and audio supplier. The selection starts in model year 2025 and runs through 2029.
Public offering
In December 2021, Garmin began trading on the New York Stock Exchange (NYSE) under the ticker symbol NYSE: GRMN. Previously, the company had traded on the NASDAQ exchange.
Acquisitions
In August 2003, Garmin completed acquisition of UPS Aviation Technologies, Inc. based in Salem, Oregon, a subsidiary of United Parcel Service, Inc., expanding its product line of panel-mounted GPS/NAV/COMM units and integrated cockpit systems for private and commercial aircraft. The acquired company changed its name to Garmin AT, Inc. and continued operations as a wholly owned subsidiary of Garmin International, Inc.
Garmin has acquired Dynastream Innovations, EME Tec Sat SAS (EME), and Digital Cyclone. Dynastream, in Cochrane, Alberta, produces personal monitoring technology (ANT+)—such as foot pods and heart rate monitors for sports and fitness products—and also ultra-low-power and low-cost wireless connectivity devices for a wide range of applications (ANT). EME Tec Sat SAS is the distributor of Garmin's consumer products in France; following the acquisition, EME changed its name to Garmin France SAS. Digital Cyclone Inc (DCI), located in Chanhassen, Minnesota, provides mobile weather solutions for consumers, pilots, and outdoor enthusiasts.
In 2007, Garmin bought Nautamatic Marine Systems, an Oregon-based company that makes autopilot systems for boats. In July 2011, Garmin finished its acquisition of the German satellite navigation company Navigon.
In 2015, Garmin acquired South Africa's iKubu Ltd. for its Backtracker on-bicycle low power radar system.
In 2016, Garmin acquired DeLorme, which gave Garmin DeLorme’s inReach satellite communication technology.
In 2017, Garmin acquired Navionics, a privately held manufacturer of nautical charts and mobile applications.
In 2019, Garmin acquired Tacx, a privately held Dutch company that designs and manufacturers indoor bike trainers, tools and accessories, as well as indoor training software and applications.
In 2020, Garmin acquired Firstbeat Analytics from Firstbeat Technologies. Firstbeat Analytics designs physiological-measurement algorithms used by health and wellness devices. Prior to the acquisition, Garmin and Firstbeat had a partnership to create dynamic training programs for athletes based on activity and fitness data captured throughout the day.
In 2021 Garmin acquired AeroData, a Scottsdale, Arizona based company that provides aircraft performance software for over 135 airlines worldwide. The company will continue to operate under the AeroData brand. Garmin also acquired Fltplan.com, a company that provides flight-planning, scheduling, and trip-support services; and, Geos Worldwide, an emergency monitoring and response service.
In 2021, Garmin acquired GEOS Worldwide, a provider of emergency monitoring and incident response services.
In 2022, Garmin acquired Vesper Marine, a privately-held provider of AIS, VH, and vessel monitoring solutions.
In 2023, Garmin announced a definitive agreement to acquire JL Audio.
Corporate governance
Burrell retired in 2002 as Garmin's chief executive officer and in 2004 retired as co-chairman of its board of directors. He remained chairman emeritus until his death in 2019. Kao became CEO in 2003, and chairman in 2004.
In 2005, Forbes estimated Kao's net worth at $1.5 billion. He has donated $17.5 million to the University of Tennessee. The same year Forbes estimated Burrell's net worth as $940 million. Cliff Pemble is the current CEO of Garmin.
July 2020 outage
On July 23, 2020, Garmin shut down its call centres, website and some online services, including Garmin Connect and flyGarmin, after a ransomware attack encrypted its internal network and some production systems. The company did not say it was a ransomware attack, but company employees writing on social media described it as such, with some speculation about a ransomware strain called WastedLocker later confirmed. Hackers reportedly demanded a $10 million ransom from Garmin. The company instituted a "multi-day maintenance window" to deal with the attack's impacts. Some Garmin online services began to function again on July 27, 2020, though delays in synchronising data with connected applications were expected; Strava anticipated a delay of "a week or longer". Experts speculated that Garmin had paid hackers a reported $10m ransom, or brokered some other kind of deal.
The outage meant Garmin could not receive calls or emails, or conduct online chats. Athlete users of Garmin wearables could not upload mileage, location, heart rate, and other data. Pilots were unable to download data for Garmin aircraft navigational systems, preventing flight scheduling. Garmin said there was "no indication" that personal information had been stolen.
Operations
In 2010, Garmin opened a facility in Cary, North Carolina as part of the Research Triangle Park. Garmin operates in several other countries besides the UK, USA, and Taiwan. It operates as Formar (Belgium), Garmin AMB (Canada), Belanor (Norway), Trepat (Spain), and Garmin-Cluj (Romania).
Products
Aviation
Garmin designs, manufactures, and markets a number of aircraft avionics products, systems, and services. These include: integrated flight decks, electronic flight displays and instrumentation, navigation and communication products, automatic flight control systems and safety-enhancing technologies, audio control systems, engine indication systems, traffic awareness and avoidance solutions, ads-b and transponders, weather information and avoidance solutions, and datalink and connectivity.
A few select products in this category include G1000, Autoland, Garmin PlaneSync, Runway Occupancy Awareness, and GTN Touchscreen avionics.
Fitness
Garmin produces a range of products and applications for use in health, wellness, and fitness activities, to include running and multisport watches, cycling products, smartwatch devices, scales and monitors, Garmin Connect and Garmin Connect Mobile, and the Connect IQ application development platform.
A few select products in this category include Forerunner, Edge, Index, Varia, Vivofit and Venu. Some of the features include optical heart rate sensors, Body Battery energy monitoring, contactless payment.
Outdoor recreation
Garmin offers a range of products designed for use in outdoor activities. These include adventure watches, dive computers, golf watches and rangefinders, outdoor handhelds and satellite communicators, consumer automotive GPS devices, and dog tracking and training devices.
A few select products in this category include fenix (a multisport adventure watch), Descent, Approach, inReach, the Garmin Drive series and Alpha. Notable outdoor segment feature integrations, applications and services include Solar charging technology, the Garmin Golf app and Garmin Response international emergency coordination center.
Marine
Garmin manufactures a number of recreational marine electronics with products that include chartplotters and multifunction displays, cartography, fishfinders, SONAR, Autopilot Systems, RADAR, VHF communication radios, and handheld wearable devices.
A few select products in this category include Livescope, Chartplotters, Force Trolling Motor, Navionics, and Garmin’s marine audio products Fusion and JL Audio.
The company's first product was the GPS 100, a panel-mounted GPS receiver aimed at the marine market, priced at $2,500. It made its debut at the 1990 International Marine Technology Exposition in Chicago.
Handheld GPS
Another early product, a handheld GPS receiver, was sold to US military personnel serving in Kuwait and Saudi Arabia during the 1991 Gulf War.
In the early 2000s Garmin launched a series of personal GNSS devices aimed at recreational runners called the Forerunner. The Garmin Foretrex is a similar wrist-worn GNSS device with two-dimensional GPS tracking and waypoint projection.
eTrex
The compact eTrex was introduced in 2000; several models with different features have been released since. The original eTrex, commonly nicknamed "eTrex Yellow", offered a lightweight (5.3 oz/150 g), waterproof, palm-sized 12-channel GPS receiver, along with a battery life of up to 22 hours on two AA-size batteries. It was replaced in 2007 by the eTrex H, which added a high-sensitivity receiver. Other eTrex models include the Summit, Venture, Legend, and Vista, each with various additional features such as WAAS, altimeter, digital compass, city database, and highway maps. Many of these models come in color and expandable-memory versions.
In May 2011 Garmin refreshed the eTrex product line with a new mechanical design and support for advances in cartography and hardware technology. With the release of the eTrex 10, eTrex 20, and eTrex 30, Garmin became the first company to manufacture and distribute a worldwide consumer navigation product supporting both the GPS and GLONASS satellite constellations. On May 13, 2015, Garmin released the eTrex 20x and 30x, which succeeded the eTrex 20 and 30. The main upgrades were a higher-resolution screen and 4 GB of storage, double that of the previous models.
On July 2, 2015, Garmin introduced its eTrex Touch line, releasing three models (25, 35 and 35t), all featuring a 2.6" touch screen. The 35t model designation is not used in Europe, but the European market 35 is essentially the 35t, and both the European 25 and 35 include Garmin TopoActive Europe maps and 8GB of internal storage.
The Geko series was a compact line of handheld GPS receivers aimed at the budget or lightweight hiking market.
In 2004, Garmin introduced its 60C line of handheld GPS mapping receivers, featuring increased sensitivity and storage capacity along with a battery life of up to 30 hours in battery-save mode. This was followed by the 60Cx and 60CSx with improved color map displays.
With the GTM-11, GTM 20 and GTM 25, a Garmin GPS device receives and uses traffic message channel (TMC) information. Also, some Garmin nüvi (1690, 1490T, 1450T, 1390T, 1390, 1350, 1260, 1250 and 265WT, 265T, 265W, 265, 255w and 255) comes with an integrated TMC receiver.
iQue PDA receivers
In 2003, Garmin launched the iQue line of integrated PDA–GPS receivers. On October 31, 2005, the iQue M4 became the first PDA that did not require a PC to preload the maps. The American version came with built-in maps of North America, while the UK version was supplied pre-loaded with maps of Western Europe.
Dog tracking and training
Garmin produces a line of dog trackers and trainers under the Astro and Alpha brands.
Fishfinders
Garmin also manufactures a line of sonar fishfinders, including some units that also have GPS capability, and some that use spread spectrum technology.
Laptop GPS and mobile apps
In April 2008, Garmin launched Garmin Mobile PC, a GPS navigation software program for laptop PCs and other computers, based on the Microsoft Windows operating system, now discontinued.
Garmin offers mobile apps for various purposes for Android, Windows Phone, and for iPhone.
Nüvifone
In early 2009, Garmin announced it would be manufacturing a location-specific cellular telephone in cooperation with Asus. Called the Garmin-Asus nüvifone G60, the United States release on AT&T was scheduled for October 4, 2009. Four other models in this line have since been released: two Windows Mobile-powered models for the European and Asian market, and two Android models, one for the Europe/Asia market and another for T-Mobile USA.
Personal trainers
The Garmin Edge and certain models of Garmin Forerunner are a suite of GPS-enabled devices for use while running or cycling.
Avionics
Garmin Aviation offers electronically integrated cockpits for aircraft: panel-mount displays, primary flight displays (PFD) and multi-function displays (MFD), transponders, radar, and other types of avionic systems. Garmin entered this market in 1991 with the GPS-100AVD panel-mounted receiver. Its first portable unit, the GPS-95, was introduced in 1993. In 1994, the GPS-155 panel-mounted unit was the first GPS receiver on the market to receive full FAA certification for instrument approaches. In 1998, Garmin introduced the GNS 430, its first product to integrate a GPS navigation receiver, a communications transceiver, and VOR, LOC and glideslope functions in one unit. More than 125,000 GNS navigators are now installed in aircraft. Garmin reached its one millionth delivery in November 2017.
The G1000 is an all-glass avionics suite for OEM aircraft, the similar G950 is used in experimental aircraft, and the G600 is a retrofit.
On October 30, 2019, Garmin announced that the Piper M600 and Cirrus Vision Jet would become the first general aviation aircraft certified with the company's emergency autoland system, which is capable of automatically landing the aircraft with the push of a button and will be a part of both aircraft's G3000 integrated avionics suite in 2020. Garmin calls the new technology "Autonomí".
Garmin plans to equip other platforms in 2020, like the TBM 940, and hopes to eventually expand its offer to the G1000 avionics suite. In June 2021, Garmin Autoland won the 2020 Collier Trophy.
Garmin-AT subsidiary
In 2003, Garmin acquired UPS Aviation Technologies, including that firm's II Morrow Apollo line of aircraft MFD/GPS/NAV/COMM units. II Morrow had been founded in Salem, Oregon, in 1982 as a manufacturer of LORAN C marine and general aviation products. Its 602 LORAN C aircraft navigator, introduced in 1982, permitted point-to-point navigation. Examples of its LORAN units are the Apollo II 616B aviation LORAN panel mount (1986), the II Morrow Apollo 604 LORAN Navigator (1987) and the Apollo 820 GPS Flybuddy (1991). In 1986, United Parcel Service (UPS) purchased the company to expand the use of electronic technology in the package delivery and tracking business.
II Morrow shifted its focus from the marine business to the development of package-process automation technology for UPS, such as vehicle management systems, automated high-speed package sorting systems, and delivery and tracking systems. In 1999, II Morrow was renamed UPS Aviation Technologies and refocused on modernizing UPS's fleet of Boeing 7xx-series heavy transport-category aircraft; it also re-entered the general aviation marketplace. It certified the first Gamma 3 WAAS GPS engine for FAA-certified precision GPS approaches. The new certified WAAS engine yielded vertical and horizontal accuracy of one meter RMS in guidance into airports without existing ILS approaches. This GPS technology met the FAA's TSO-C146a primary navigation standards for en route, terminal and approach phases of flight, with WAAS augmentation as the sole means of navigation. This primary GPS "sole source" navigation capability was integrated into the CNX-80, a WAAS GPS/COM/NAV integrated navigator that was the first product in the industry approved for primary GPS navigation. It also enabled LPV glideslope approaches without requiring ground-based navigation aids; LPV approaches provide the accuracy and safety of an ILS without the ground-based localizer and glideslope equipment. Later, the CNX-80 was renamed the GNS-480 under Garmin.
In 1999, Flight International magazine presented UPS Aviation Technologies with its Aerospace Industry Award for the development of ADS-B, a surveillance technology intended to reduce aviation delays while improving safety.
Automotive OEM (Original Equipment Manufacturer)
Garmin has relationships with several leading automobile manufacturers to provide a variety of hardware and software solutions for their vehicles. This includes BMW Group, Mercedes-Benz, Honda, Daimler, Ford, Chrysler, Toyota, PSA/Citroen, Geely, Honda Motorcycle, Kawasaki, BMW Motorrad, Aston Martin, and Yamaha Motor.
The product categories Garmin manufactures include domain controllers, infotainment units, map databases, and cameras.
Wristwear
Garmin produces activity trackers and sports watches aimed at activities such as running, watersports, golf, cycling and swimming, with sensors such as heart rate monitors and GPS. Some recent models add Bluetooth music playback and pulse oximetry.
The Vivofit and Vivosmart ranges are activity trackers. The Garmin Vivofit 3 measures the wearer's duration and quality of sleep, quantifies body movement, records heart rate, and counts steps and the number of stairs climbed. Garmin also produces the Vivosmart HR, which has a touch screen and includes heart rate monitoring, media player controls, smart notifications and phone-finder features.
The Forerunner series is aimed primarily at runners, but the watches are more broadly focused, especially at the higher end. The 735 XT has multi-sport tracking capabilities (automatically switching between sports, for example in a triathlon) and a variety of special profiles for jogging, swimming, cycling, skiing, paddle sports, various weight loss activities, and hiking. It comes with a built-in heart rate sensor and GPS.
The Fenix range, such as the Fenix 6 released in August 2019, is a more rugged, multisport range that also offers a solar charging model.
The Vivomove is a traditionally-styled watch with activity tracking capabilities. It has a built-in accelerometer (calculates distance during indoor workouts, without the need for a foot pod), step counter, auto goal (learns the wearer's activity level and assigns a daily step goal), move bar, and sleep-monitoring capabilities.
Other series include the Quatix aimed at water sports, the D2 aviator watches, the Approach golf watches.
In 2018, Garmin added support for maps, Bluetooth music playback, NFC contactless payment (using a digital wallet branded Garmin Pay), and pulse-oximetry for its wristwear.
Sport sponsorship
In 2007 Garmin began sponsorship of English Premier League football club Middlesbrough in a one-year deal that was carried into a second year for the 2008–09 season. In 2008 Garmin began sponsorship of a professional cycling team to promote its Edge line of bicycle computers. In 2015, the team became Cannondale–Garmin.
In 2014 Garmin paired up with Premier League side Southampton FC in a global partnership. Garmin's European head office is located in Southampton.
See also
Automotive navigation system
Comparison of commercial GPS software
Garmin–Sharp
Geotab
References
External links
1989 establishments in Kansas
American brands
Avionics companies
Companies based in Kansas
Companies based in the Kansas City metropolitan area
Companies listed on the New York Stock Exchange
Electronics companies established in 1989
Electronics companies of the United States
Fishing equipment manufacturers
Satellite navigation
Marine electronics
Navigation system companies
Watch brands
2000 initial public offerings
Companies based in the canton of Schaffhausen
Schaffhausen
Collier Trophy recipients | Garmin | Engineering | 5,570 |
4,600,562 | https://en.wikipedia.org/wiki/Surface%20finish | Surface finish, also known as surface texture or surface topography, is the nature of a surface as defined by the three characteristics of lay, surface roughness, and waviness. It comprises the small, local deviations of a surface from the perfectly flat ideal (a true plane).
Surface texture is one of the important factors that control friction and transfer layer formation during sliding. Considerable efforts have been made to study the influence of surface texture on friction and wear during sliding conditions. Surface textures can be isotropic or anisotropic. Sometimes, stick-slip friction phenomena can be observed during sliding, depending on surface texture.
Each manufacturing process (such as the many kinds of machining) produces a surface texture. The process is usually optimized to ensure that the resulting texture is usable. If necessary, an additional process will be added to modify the initial texture. The latter process may be grinding (abrasive cutting), polishing, lapping, abrasive blasting, honing, electrical discharge machining (EDM), milling, lithography, industrial etching/chemical milling, laser texturing, or other processes.
Lay
Lay is the direction of the predominant surface pattern, ordinarily determined by the production method used. The term is also used to denote the winding direction of fibers and strands of a rope.
Surface roughness
Surface roughness, commonly shortened to roughness, is a measure of the finely spaced surface irregularities. In engineering, this is what is usually meant by "surface finish." A lower number indicates finer irregularities, i.e., a smoother surface.
Waviness
Waviness is the measure of surface irregularities with a spacing greater than that of surface roughness. These irregularities usually occur due to warping, vibrations, or deflection during machining.
Measurement
Surface finish may be measured in two ways: contact and non-contact methods. Contact methods involve dragging a measurement stylus across the surface; these instruments are called profilometers. Non-contact methods include: interferometry, confocal microscopy, focus variation, structured light, electrical capacitance, electron microscopy, atomic force microscopy and photogrammetry.
Specification
In the United States, surface finish is usually specified using the ASME Y14.36M standard. The other common standard is International Organization for Standardization (ISO) 1302:2002, although the same has been withdrawn in favour of ISO 21920-1:2021.
Many factors contribute to the surface finish in manufacturing. In forming processes, such as molding or metal forming, surface finish of the die determines the surface finish of the workpiece. In machining, the interaction of the cutting edges and the microstructure of the material being cut both contribute to the final surface finish.
In general, the cost of manufacturing a surface increases as the surface finish improves. Any given manufacturing process is usually optimized enough to ensure that the resulting texture is usable for the part's intended application. If necessary, an additional process will be added to modify the initial texture. The expense of this additional process must be justified by adding value in some way—principally better function or longer lifespan. Parts that have sliding contact with others may work better or last longer if the roughness is lower. Aesthetic improvement may add value if it improves the saleability of the product.
A practical example is as follows. An aircraft maker contracts with a vendor to make parts. A certain grade of steel is specified for the part because it is strong enough and hard enough for the part's function. The steel is machinable although not free-machining. The vendor decides to mill the parts. The milling can achieve the specified roughness (for example, ≤ 3.2 μm) as long as the machinist uses premium-quality inserts in the end mill and replaces the inserts after every 20 parts (as opposed to cutting hundreds before changing the inserts). There is no need to add a second operation (such as grinding or polishing) after the milling as long as the milling is done well enough (correct inserts, frequent-enough insert changes, and clean coolant). The inserts and coolant cost money, but the costs that grinding or polishing would incur (more time and additional materials) would cost even more than that. Obviating the second operation results in a lower unit cost and thus a lower price. The competition between vendors elevates such details from minor to crucial importance. It was certainly possible to make the parts in a slightly less efficient way (two operations) for a slightly higher price; but only one vendor can get the contract, so the slight difference in efficiency is magnified by competition into the great difference between the prospering and shuttering of firms.
Just as different manufacturing processes produce parts at various tolerances, they are also capable of different roughnesses. Generally, these two characteristics are linked: manufacturing processes that are dimensionally precise create surfaces with low roughness. In other words, if a process can manufacture parts to a narrow dimensional tolerance, the parts will not be very rough.
Due to the abstractness of surface finish parameters, engineers usually use a comparison tool that carries samples of surface roughness created using different manufacturing methods.
See also
Gloss (optics)
References
Bibliography
Metalworking terminology
Tribology
| Surface finish | Chemistry,Materials_science,Engineering | 1,092 |
170,097 | https://en.wikipedia.org/wiki/Mean%20free%20path | In physics, mean free path is the average distance over which a moving particle (such as an atom, a molecule, or a photon) travels before substantially changing its direction or energy (or, in a specific context, other properties), typically as a result of one or more successive collisions with other particles.
Scattering theory
Imagine a beam of particles being shot through a target, and consider an infinitesimally thin slab of the target. The magnitude of the mean free path depends on the characteristics of the system. Assuming that all the target particles are at rest but only the beam particle is moving, this gives an expression for the mean free path:
ℓ = 1/(nσ)
where ℓ is the mean free path, n is the number of target particles per unit volume, and σ is the effective cross-sectional area for collision.
The area of the slab is L², and its volume is L²·dx. The typical number of stopping atoms in the slab is the concentration n times the volume, i.e., n·L²·dx. The probability that a beam particle will be stopped in that slab is the net area of the stopping atoms divided by the total area of the slab:
P(stopping within dx) = (σ·n·L²·dx)/L² = n·σ·dx
where σ is the area (or, more formally, the "scattering cross-section") of one atom.
The drop in beam intensity equals the incoming beam intensity multiplied by the probability of the particle being stopped within the slab:
dI = −I·n·σ·dx
This is an ordinary differential equation:
dI/dx = −I·n·σ ≡ −I/ℓ
whose solution is known as the Beer–Lambert law and has the form I = I0·e^(−x/ℓ), where x is the distance traveled by the beam through the target, and I0 is the beam intensity before it entered the target; ℓ is called the mean free path because it equals the mean distance traveled by a beam particle before being stopped. To see this, note that the probability that a particle is absorbed between x and x + dx is given by
dP(x) = (1/ℓ)·e^(−x/ℓ)·dx
Thus the expectation value (or average, or simply mean) of x is
⟨x⟩ = ∫0..∞ x·dP(x) = ∫0..∞ (x/ℓ)·e^(−x/ℓ)·dx = ℓ
The fraction of particles that are not stopped (attenuated) by the slab is called the transmission T = e^(−x/ℓ), where x is equal to the thickness of the slab.
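A small Monte Carlo sketch (with arbitrary parameter values) confirms these statements: free paths drawn from the exponential distribution have mean ℓ, and the fraction surviving a slab of thickness x matches e^(−x/ℓ):

```python
import math
import random

ell = 2.0        # mean free path (arbitrary units)
thickness = 3.0  # slab thickness
n = 100_000

# Sample exponential free paths by inverting the CDF of p(x) = exp(-x/ell)/ell.
paths = [-ell * math.log(1.0 - random.random()) for _ in range(n)]

print(sum(paths) / n)                              # ~2.0, the mean free path
print(sum(1 for x in paths if x > thickness) / n)  # ~0.223, simulated transmission
print(math.exp(-thickness / ell))                  # 0.2231..., Beer-Lambert prediction
```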
Kinetic theory of gases
In the kinetic theory of gases, the mean free path of a particle, such as a molecule, is the average distance the particle travels between collisions with other moving particles. The derivation above assumed the target particles to be at rest; therefore, in reality, the formula holds for a beam particle with a high speed relative to the velocities of an ensemble of identical particles with random locations. In that case, the motions of target particles are comparatively negligible, hence the relative velocity v_rel ≈ v.
If, on the other hand, the beam particle is part of an established equilibrium with identical particles, then the square of the relative velocity is:
⟨v_rel²⟩ = ⟨(v1 − v2)²⟩ = ⟨v1²⟩ + ⟨v2²⟩ − 2⟨v1·v2⟩
In equilibrium, v1 and v2 are random and uncorrelated, therefore ⟨v1·v2⟩ = 0, and the relative speed is
v_rel = √⟨v_rel²⟩ = √2·v
This means that the number of collisions is √2 times the number with stationary targets. Therefore, the following relationship applies:
ℓ = 1/(√2·n·σ)
and using n = p/(kT) (ideal gas law) and σ = πd² (effective cross-sectional area for spherical particles with diameter d), it may be shown that the mean free path is
ℓ = kT/(√2·πd²·p)
where k is the Boltzmann constant, p is the pressure of the gas and T is the absolute temperature.
In practice, the diameter of gas molecules is not well defined. In fact, the kinetic diameter of a molecule is defined in terms of the mean free path. Typically, gas molecules do not behave like hard spheres, but rather attract each other at larger distances and repel each other at shorter distances, as can be described with a Lennard-Jones potential. One way to deal with such "soft" molecules is to use the Lennard-Jones σ parameter as the diameter.
Another way is to assume a hard-sphere gas that has the same viscosity as the actual gas being considered. This leads to a mean free path
ℓ = (μ/ρ)·√(πm/(2kT)) = (μ/p)·√(πkT/(2m))
where m is the molecular mass, ρ = pm/(kT) is the density of ideal gas, and μ is the dynamic viscosity. This expression can be put into the following convenient form
ℓ = (μ/p)·√(π·R_specific·T/2)
with R_specific = k/m being the specific gas constant, equal to 287 J/(kg·K) for air.
The following table lists some typical values for air at different pressures at room temperature. Note that different definitions of the molecular diameter, as well as different assumptions about the value of atmospheric pressure (100 vs 101.3 kPa) and room temperature (293.17 K vs 296.15 K or even 300 K) can lead to slightly different values of the mean free path.
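A sketch of the hard-sphere estimate for air near room conditions; the effective molecular diameter of 3.7×10⁻¹⁰ m is one commonly quoted value, and, as noted above, other choices shift the result somewhat:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T_kelvin, p_pascal, d_meters):
    """Hard-sphere mean free path: l = k*T / (sqrt(2) * pi * d^2 * p)."""
    return k * T_kelvin / (math.sqrt(2.0) * math.pi * d_meters**2 * p_pascal)

# Air at about 293 K and 101.325 kPa, with an assumed diameter of 3.7e-10 m:
print(mean_free_path(293.0, 101325.0, 3.7e-10))  # ~6.6e-8 m (tens of nanometers)
```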
In other fields
Radiography
In gamma-ray radiography the mean free path of a pencil beam of mono-energetic photons is the average distance a photon travels between collisions with atoms of the target material. It depends on the material and the energy of the photons:
ℓ = 1/μ = 1/(ρ·(μ/ρ))
where μ is the linear attenuation coefficient, μ/ρ is the mass attenuation coefficient and ρ is the density of the material. The mass attenuation coefficient can be looked up or calculated for any material and energy combination using the National Institute of Standards and Technology (NIST) databases.
In X-ray radiography the calculation of the mean free path is more complicated, because photons are not mono-energetic, but have some distribution of energies called a spectrum. As photons move through the target material, they are attenuated with probabilities depending on their energy; as a result, their distribution changes in a process called spectrum hardening. Because of spectrum hardening, the mean free path of the X-ray spectrum changes with distance.
Sometimes one measures the thickness of a material in the number of mean free paths. Material with a thickness of one mean free path will attenuate the beam to 37% (1/e) of its incident photons. This concept is closely related to the half-value layer (HVL): a material with a thickness of one HVL will attenuate 50% of photons. A standard X-ray image is a transmission image; an image whose pixel values are the negative logarithm of its intensities is sometimes called a number of mean free paths image.
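Both conversions follow directly from the exponential attenuation law; a sketch with a placeholder mean free path:

```python
import math

def hvl_from_mean_free_path(mfp):
    """Half-value layer: the thickness that attenuates 50% of photons."""
    return mfp * math.log(2.0)   # from 0.5 = exp(-x / mfp)

def number_of_mean_free_paths(I, I0):
    """Pixel value of a 'number of mean free paths' image: -ln(I/I0)."""
    return -math.log(I / I0)

mfp = 0.03  # mean free path in meters (placeholder for some material/energy)
print(hvl_from_mean_free_path(mfp))          # ~0.0208 m
print(number_of_mean_free_paths(0.37, 1.0))  # ~1.0, i.e. one mean free path thick
```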
Electronics
In macroscopic charge transport, the mean free path of a charge carrier in a metal is proportional to the electrical mobility μ, a value directly related to the electrical conductivity, that is:
μ = qτ/m*
and the mean free path is ℓ = v_F·τ, where q is the charge, τ is the mean free time, m* is the effective mass, and v_F is the Fermi velocity of the charge carrier. The Fermi velocity can easily be derived from the Fermi energy via the non-relativistic kinetic energy equation. In thin films, however, the film thickness can be smaller than the predicted mean free path, making surface scattering much more noticeable, effectively increasing the resistivity.
Electron mobility through a medium with dimensions smaller than the mean free path of electrons occurs through ballistic conduction or ballistic transport. In such scenarios electrons alter their motion only in collisions with conductor walls.
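An order-of-magnitude sketch for a metal, using illustrative textbook-style numbers for copper (a Fermi energy of about 7 eV and a mean free time of about 2.5×10⁻¹⁴ s; both values are assumptions for the example, not measurements):

```python
import math

m_e = 9.1093837e-31   # electron rest mass, kg
eV = 1.602176634e-19  # joules per electron-volt

def fermi_velocity(E_F_eV, m_eff=m_e):
    """Non-relativistic Fermi velocity from E_F = (1/2) * m * v_F^2."""
    return math.sqrt(2.0 * E_F_eV * eV / m_eff)

def electron_mean_free_path(E_F_eV, tau_s, m_eff=m_e):
    """Mean free path l = v_F * tau."""
    return fermi_velocity(E_F_eV, m_eff) * tau_s

print(fermi_velocity(7.0))                    # ~1.6e6 m/s
print(electron_mean_free_path(7.0, 2.5e-14))  # ~4e-8 m, i.e. tens of nanometers
```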
Optics
If one takes a suspension of non-light-absorbing particles of diameter d with a volume fraction Φ, the mean free path of the photons is:
ℓ = 2d/(3Φ·Q_s)
where Q_s is the scattering efficiency factor. Q_s can be evaluated numerically for spherical particles using Mie theory.
Acoustics
In an otherwise empty cavity, the mean free path of a single particle bouncing off the walls is:
ℓ = F·V/S
where V is the volume of the cavity, S is the total inside surface area of the cavity, and F is a constant related to the shape of the cavity. For most simple cavity shapes, F is approximately 4.
This relation is used in the derivation of the Sabine equation in acoustics, using a geometrical approximation of sound propagation.
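A minimal sketch for a rectangular room, taking F = 4:

```python
def cavity_mean_free_path(volume_m3, surface_m2, F=4.0):
    """Mean free path between wall reflections: F * V / S."""
    return F * volume_m3 / surface_m2

# A 5 m x 4 m x 3 m room:
V = 5.0 * 4.0 * 3.0                 # 60 m^3
S = 2.0 * (5*4 + 5*3 + 4*3)         # 94 m^2
print(cavity_mean_free_path(V, S))  # ~2.55 m between reflections
```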
Nuclear and particle physics
In particle physics the concept of the mean free path is not commonly used, being replaced by the similar concept of attenuation length. In particular, for high-energy photons, which mostly interact by electron–positron pair production, the radiation length is used much like the mean free path in radiography.
Independent-particle models in nuclear physics require the undisturbed orbiting of nucleons within the nucleus before they interact with other nucleons.
See also
Scattering theory
Ballistic conduction
Vacuum
Knudsen number
Optics
References
External links
Mean free path calculator
Gas Dynamics Toolbox: Calculate mean free path for mixtures of gases using VHS model
Statistical mechanics
Scattering, absorption and radiative transfer (optics) | Mean free path | Physics,Chemistry | 1,673 |
36,284,117 | https://en.wikipedia.org/wiki/Alexander%20Arhangelskii | Alexander Vladimirovich Arhangelskii (, Aleksandr Vladimirovich Arkhangelsky, born 13 March 1938 in Moscow) is a Russian mathematician. His research, comprising over 200 published papers, covers various subfields of general topology. He has done particularly important work in metrizability theory and generalized metric spaces, cardinal functions, topological function spaces and other topological groups, and special classes of topological maps. After a long and distinguished career at Moscow State University, he moved to the United States in the 1990s. In 1993 he joined the faculty of Ohio University, from which he retired in 2011.
Biography
Arhangelskii was the son of Vladimir Alexandrovich Arhangelskii and Maria Pavlova Radimova, who divorced by the time he was four years old. He was raised in Moscow by his father. He was also close to his uncle, childless aircraft designer Alexander Arkhangelsky. In 1954, Arhangelskii entered Moscow State University, where he became a student of Pavel Alexandrov. At the end of his first year, Arhangelskii told Alexandrov that he wanted to specialize in topology.
In 1959, in the thesis he wrote for his specialist degree, he introduced the concept of a network of a topological space. Now considered a fundamental topological notion, a network is a collection of subsets that is similar to a basis, without the requirement that the sets be open. Also in 1959 he married Olga Constantinovna.
He received his Candidate of Sciences degree (equivalent to a Ph.D.) in 1962 from the Steklov Institute of Mathematics, supervised by Alexandrov. He was granted the Doctor of Sciences degree in 1966.
It was in 1969 that Arhangelskii published what is considered his most significant mathematical result. Solving a problem posed in 1923 by Alexandrov and Urysohn, he proved that a first-countable, compact Hausdorff space must have a cardinality no greater than the continuum. In fact, his theorem is much more general, giving an upper bound on the cardinality of any Hausdorff space in terms of two cardinal functions. Specifically, he showed that for any Hausdorff space X,
|X| ≤ 2^(χ(X)·L(X))
where χ(X) is the character, and L(X) is the Lindelöf number. Chris Good referred to Arhangelskii's theorem as an "impressive result", and "a model for many other results in the field." Richard Hodel has called it "perhaps the most exciting and dramatic of the difficult inequalities", a "beautiful inequality", and "the most important inequality in cardinal invariants."
In 1970 Arhangelskii became a full professor, still at Moscow State University. He spent 1972–75 on leave in Pakistan, teaching at the University of Islamabad under a UNESCO program.
Arhangelskii took advantage of the few available opportunities to travel to mathematical conferences outside of the Soviet Union. He was at a conference in Prague when the 1991 Soviet coup d'état attempt took place. Returning under very uncertain conditions, he began to seek academic opportunities in the United States. In 1993 he accepted a professorship at Ohio University, where he received the Distinguished Professor Award in 2003.
Arhangelskii was one of the founders of the journal Topology and its Applications, and volume 153 issue 13, July 2006, was a special issue, with most of the papers based on talks given at a special conference held at Brooklyn College 30 June–3 July 2003 in honor of his 65th birthday.
Selected publications
Books
Papers
References
External links
Personal profile at Ohio University
1938 births
Living people
Moscow State University alumni
Academic staff of Moscow State University
Ohio University faculty
Mathematicians from Moscow
Topologists
Russian expatriates in the United States
Soviet mathematicians | Alexander Arhangelskii | Mathematics | 771 |
435,315 | https://en.wikipedia.org/wiki/Khinchin%27s%20constant | In number theory, Khinchin's constant is a mathematical constant related to the simple continued fraction expansions of many real numbers. In particular Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x, the coefficients ai of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x. It is known as Khinchin's constant and denoted by K0.
That is, for a real number x with continued fraction expansion
$x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}} = [a_0; a_1, a_2, a_3, \ldots],$
it is almost always true that
$\lim_{n \to \infty} \left( a_1 a_2 \cdots a_n \right)^{1/n} = K_0.$
The decimal value of Khinchin's constant is given by
$K_0 = 2.6854520010\ldots$
Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Numbers whose continued fraction expansions apparently do have this property (based on empirical data) include:
π
Roots of equations with a degree > 2, e.g. cubic roots and quartic roots
Natural logarithms, e.g. ln(2) and ln(3)
The Euler–Mascheroni constant γ
Apéry's constant ζ(3)
The Feigenbaum constants δ and α
Khinchin's constant
However, not a single real number x has been verified to have this property.
Among the numbers x whose continued fraction expansions are known not to have this property are:
Rational numbers
Roots of quadratic equations, e.g. the square roots of integers and the golden ratio (however, the geometric mean of all coefficients for square roots of nonsquare integers from 2 to 24 is about 2.708, suggesting that quadratic roots collectively may give the Khinchin constant as a geometric mean);
The base of the natural logarithm e.
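These empirical observations are easy to reproduce numerically. The following is a minimal sketch, assuming the mpmath library is available; the working precision and number of coefficients are heuristic choices, not values from this article:

```python
# Estimate the geometric mean of the first n continued-fraction
# coefficients of x, extracting coefficients with the Gauss map.
from mpmath import mp, mpf, floor, log, exp, pi, e

mp.dps = 1500  # working precision in decimal digits (heuristic)

def cf_geometric_mean(x, n=400):
    x = mpf(x)
    x = x - floor(x)        # discard a0; it plays no role in the theorem
    log_sum = mpf(0)
    for _ in range(n):
        x = 1 / x
        a = floor(x)        # next coefficient a_k
        log_sum += log(a)
        x -= a
    return exp(log_sum / n)

print(cf_geometric_mean(pi))  # drifts (slowly) toward K0 = 2.6854...
print(cf_geometric_mean(e))   # does not: e's coefficients 1,2,1,1,4,1,1,6,...
                              # push the mean steadily upward
```

The first estimate approaches Khinchin's constant, while the mean for e grows without bound, consistent with the two lists above.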
Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature.
Series expressions
Khinchin's constant can be given by the following infinite product:
$K_0 = \prod_{r=1}^{\infty} \left( 1 + \frac{1}{r(r+2)} \right)^{\log_2 r}.$
This implies:
Khinchin's constant may also be expressed as a rational zeta series in the form
or, by peeling off terms in the series,
where N is an integer, held fixed, and ζ(s, n) is the complex Hurwitz zeta function. Both series are strongly convergent, as ζ(n) − 1 approaches zero quickly for large n. An expansion may also be given in terms of the dilogarithm:
Integrals
There exist a number of integrals related to Khinchin's constant:
Sketch of proof
The proof presented here was arranged by Czesław Ryll-Nardzewski and is much simpler than Khinchin's original proof, which did not use ergodic theory.
Since the first coefficient a0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval, i.e., those in $I = (0,1) \setminus \mathbb{Q}$. These numbers are in bijection with infinite continued fractions of the form [0; a1, a2, ...], which we simply write [a1, a2, ...], where a1, a2, ... are positive integers. Define a transformation T:I → I by
$T([a_1, a_2, \ldots]) = [a_2, a_3, \ldots],$ that is, $T(x) = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor.$
The transformation T is called the Gauss–Kuzmin–Wirsing operator. For every Borel subset E of I, we also define the Gauss–Kuzmin measure of E:
$\mu(E) = \frac{1}{\ln 2} \int_E \frac{dx}{1+x}.$
Then μ is a probability measure on the σ-algebra of Borel subsets of I. The measure μ is equivalent to the Lebesgue measure on I, but it has the additional property that the transformation T preserves the measure μ. Moreover, it can be proved that T is an ergodic transformation of the measurable space I endowed with the probability measure μ (this is the hard part of the proof). The ergodic theorem then says that for any μ-integrable function f on I, the average value of $f(T^k x)$ is the same for almost all $x$:
$\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) = \int_I f \, d\mu.$
Applying this to the function defined by f([a1, a2, ...]) = ln(a1), we obtain that
$\frac{1}{n} \sum_{k=1}^{n} \ln a_k \;\to\; \int_I \ln a_1 \, d\mu$
for almost all [a1, a2, ...] in I as n → ∞.
Taking the exponential on both sides, we obtain to the left the geometric mean of the first n coefficients of the continued fraction, and to the right Khinchin's constant.
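The measure-preservation step above can be sanity-checked numerically: the preimage of an interval (a, b) under the map T is the disjoint union of the intervals (1/(k + b), 1/(k + a)) for k ≥ 1, so the μ-mass of that union should reproduce μ((a, b)). A minimal sketch (the test interval and truncation depth are arbitrary choices):

```python
from math import log2

def mu(a, b):
    """Gauss-Kuzmin measure of the interval (a, b) inside (0, 1)."""
    return log2((1 + b) / (1 + a))

a, b = 0.2, 0.7
# T^{-1}((a, b)) = union over k >= 1 of (1/(k+b), 1/(k+a))
preimage_mass = sum(mu(1 / (k + b), 1 / (k + a)) for k in range(1, 100_001))
print(mu(a, b), preimage_mass)  # agree to ~5 decimals (truncation error)
```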
Generalizations
The Khinchin constant can be viewed as the first in a series of the Hölder means of the terms of continued fractions. Given an arbitrary series {an}, the Hölder mean of order p of the series is given by
$K_p = \lim_{n \to \infty} \left( \frac{1}{n} \sum_{k=1}^{n} a_k^p \right)^{1/p}.$
When the {an} are the terms of a continued fraction expansion, the constants are given by
$K_p = \left( \sum_{k=1}^{\infty} -k^p \log_2\left( 1 - \frac{1}{(k+1)^2} \right) \right)^{1/p}.$
This is obtained by taking the p-th mean in conjunction with the Gauss–Kuzmin distribution. This is finite when $p < 1$.
The arithmetic average diverges, $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} a_k = +\infty$, and so the coefficients grow arbitrarily large: $\limsup_{n \to \infty} a_n = +\infty$.
The value for K0 is obtained in the limit of p → 0.
The harmonic mean (p = −1) is
$K_{-1} = 1.74540566\ldots$
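A minimal numerical sketch of the series above (the truncation depth is an arbitrary choice; for p = −1 the summand decays like 1/k³, so the truncated tail is negligible):

```python
# Evaluate K_p = (sum_k k^p * P(k))^(1/p), where P(k) = -log2(1 - 1/(k+1)^2)
# is the Gauss-Kuzmin probability of the coefficient k. Valid for p < 1, p != 0.
from math import log2

def holder_mean(p, terms=200_000):
    s = sum(k**p * -log2(1 - 1 / (k + 1) ** 2) for k in range(1, terms + 1))
    return s ** (1 / p)

print(holder_mean(-1))  # harmonic mean, ~1.7454 as quoted above
```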
Open problems
Many well-known numbers, such as π, the Euler–Mascheroni constant γ, and Khinchin's constant itself, are thought, based on numerical evidence, to be among the numbers for which the limit converges to Khinchin's constant. However, none of these limits have been rigorously established. In fact, it has not been proven for any real number that was not specifically constructed for that exact purpose.
The algebraic properties of Khinchin's constant itself, e.g. whether it is a rational, algebraic irrational, or transcendental number, are also not known.
See also
Lochs' theorem
Lévy's constant
Somos' constant
List of mathematical constants
References
External links
110,000 digits of Khinchin's constant
10,000 digits of Khinchin's constant
Continued fractions
Mathematical constants
Infinite products | Khinchin's constant | Mathematics | 1,219 |
9,411,318 | https://en.wikipedia.org/wiki/British%20Salt | British Salt Limited is a United Kingdom-based chemical company that produces pure white salt. The company is owned by Tata Chemicals Europe after a buyout from private equity company LDC in April 2010. It is based in Middlewich, Cheshire, employs 125 people, and produces approximately of pure white salt every year.
LDC bought British Salt from its previous owners, US Salt Holdings LLC, in 2007, investing £35m in the company; a management team took a minority stake. US Salt had bought British Salt from its previous owners, Staveley Industries plc, in 2000 for £80m. In 2005, British Salt acquired New Cheshire Salt Works Limited, known as NCSW Limited. This acquisition was referred to the Competition Commission, which approved the purchase. Since the purchase, the NCSW site in Wincham has been closed and the site sold to Chantry Developments.
The salt is extracted from strata that lie approximately below ground. Bore holes are drilled into the strata and water is forced down to dissolve the salt. The resulting brine solution is pumped along of pipes back to the surface and directly into the Middlewich factory for purification and water evaporation to produce the pure salt. It is estimated that there are salt reserves sufficient for 200 years.
Applications
The main uses for the salt products include:
Water softeners
Chemical industry
Food processing
Animal feeds
Textiles and tanning
During the severe weather experienced in the UK in February 2009, British Salt also started to supply low-grade salt for de-icing of roads, after local authorities announced they were running very low on salt used for gritting due to the unexpected weather.
See also
History of salt in Middlewich
Winter storms of 2009–2010
References
External links
Official Website
Chemical companies of the United Kingdom
Companies based in Cheshire
Middlewich
Snow removal
Salt production
Tata Chemicals | British Salt | Chemistry | 363 |
3,066,436 | https://en.wikipedia.org/wiki/Howdah | A howdah, or houdah (), derived from the Arabic (), which means "bed carried by a camel", also known as hathi howdah (, ), is a carriage which is positioned on the back of an elephant, or occasionally some other animal such as a camel, used most often in the past to carry wealthy people during progresses or processions, hunting or in warfare. It was also a symbol of wealth for the owner and as a result might be elaborately decorated, even with expensive gemstones.
Notable howdahs are the Golden Howdah, on display at the Napier Museum at Thiruvananthapuram, which was used by the Maharaja of Travancore and that is used traditionally during the Elephant Procession of the famous Mysore Dasara. The Mehrangarh Fort Museum in Jodhpur, Rajasthan, has a gallery of royal howdahs.
Today, howdahs are used mainly for tourist or commercial purposes in South East Asia and are the subject of controversy as animal rights groups and organizations, such as Millennium Elephant Foundation, openly criticize their use, citing evidence that howdahs can cause permanent damage to an elephant's spine, lungs, and other organs and can significantly shorten the animal's life.
History
A passage from the Roman historian Curtius describes the lifestyles of ancient Indian kings during the "Second urbanisation" (c. 600 – c. 200 BCE), who rode on chariots mounted on elephants, or in howdahs, when going on distant expeditions.
Howdah Gallery, Mehrangarh Fort Museum
The Mehrangarh Fort Museum, Jodhpur, has a gallery dedicated to an array of Hathi Howdah, used by the Maharaja of Mewar, mostly for ceremonial occasions.
Howdah for armies
References in literature
The American author Herman Melville, in Chapter 42 ("The Whiteness of the Whale") of Moby Dick (1851), writes "To the native Indian of Peru, the continual sight of the snow-howdahed Andes conveys naught of dread, except, perhaps, in the mere fancy of the eternal frosted desolateness reigning at such vast altitudes, and the natural conceit of what a fearfulness it would be to lose oneself in such inhuman solitudes." It also appears in Chapter 11 of Jules Verne's classic adventure novel Around the World in Eighty Days (1873), in which we are told "The Parsee, who was an accomplished elephant driver, covered his back with a sort of saddle-cloth, and attached to each of his flanks some curiously uncomfortable howdahs." It is mentioned in the first chapter of Ben-Hur: "Exactly at noon the dromedary, of its own will, stopped, and uttered the cry or moan, peculiarly piteous, by which its kind always protest against an overload, and sometimes crave attention and rest. The master thereupon bestirred himself, waking, as it were, from sleep. He threw the curtains of the houdah up, looked at the sun, surveyed the country on every side long and carefully, as if to identify an appointed place."
Tolkien wrote in The Lord of the Rings of the Mûmakil (Elephants) of Harad with howdahs on their backs.
Elephant and castle symbol
A derived symbol used in Europe is the "elephant and castle": an elephant carrying a castle on its back, being used especially to symbolize strength. The symbol was used in Europe in classical antiquity and more recently has been used in England since the 13th century, and in Denmark since at least the 17th century.
In antiquity, the Romans made use of war elephants, and turreted elephants feature on the coinage of Juba II of Numidia, in the 1st century BC. Elephants were used in the Roman campaigns against the Celtiberians in Hispania, against the Gauls, and against the Britons, the ancient historian Polyaenus writing, "Caesar had one large elephant, which was equipped with armor and carried archers and slingers in its tower. When this unknown creature entered the river, the Britons and their horses fled and the Roman army crossed over." However, he may have confused this incident with the use of a similar war elephant in Claudius' final conquest of Britain.
Alternatively, modern uses may derive from later contacts with howdahs. Fanciful images of war elephants with elaborate castles on their back date to 12th century Spain, as at right.
Notably, 13th century English use may come from the elephant given by Louis IX of France to Henry III of England, for his menagerie in the Tower of London in 1254, this being the first elephant in England since Claudius.
Today the symbol is best known in the United Kingdom from the Elephant and Castle intersection in south London. This derives its name from a pub established by 1765, in a building previously known as The White Horse and used by a smith or farrier. It has been claimed that the premises had been associated with the Cutlers' Company; however, the company has advised a contributor that it has never had an association with the area. Meanwhile, the Cutlers' use of the symbol, owing to the presence of ivory in sword and cutlery handles, is just one of diverse worldwide uses of the term over a long period. These include the titles of several other public houses in London. Stephen Humphrey, a historian of the Elephant and Castle, addresses the various origin theories and demonstrates that the naming of the pub that subsequently gave its name to the area was random.
The elephant and castle symbol has been used since the 13th century in the coat of arms of the city of Coventry, and forms the heraldic crest of the Corbet family, feudal barons of Caus, of Caus Castle in Shropshire, powerful marcher lords. It was used in the 17th century by the Royal African Company, which led to its use on the guinea gold coin.
The symbol of an elephant and castle has also been used in the Order of the Elephant, the highest order in Denmark, since 1693.
Camel howdah
In Persia, a camel howdah used to be a common means of transport.
Turkmens traditionally used the kejebe (کجوه) on camels, mainly for carrying women over long distances or at weddings; today it is rented only for weddings.
See also
Litter
Mahout, the driver of an elephant
Howdah pistols, large handguns used to defend howdahs from predators
Persian war elephants
References
External links
Animal-powered vehicles
Animal equipment
Elephants in India
Camels
Culture of India
Hindi words and phrases
Livestock
de:Sänfte#Spezielle Sänften | Howdah | Biology | 1,357 |
3,219,844 | https://en.wikipedia.org/wiki/Azure%20Dragon | The Azure Dragon (Chinese: Qīnglóng), also known as Qinglong in Chinese, is one of the Dragon Gods who represent the mount or chthonic forces of the Five Regions' Highest Deities (Wǔfāng Shàngdì). It is also one of the Four Symbols of the Chinese constellations, which are the astral representations of the Wufang Shangdi. The Azure Dragon represents the east and the spring season. It is also sometimes referred to as the Blue-green Dragon, Green Dragon, or the Blue Dragon (Cānglóng).
The Dragon is frequently referred to in the media, feng shui, other cultures, and in various venues as the Green Dragon and the Avalon Dragon. His cardinal direction's epithet is "Bluegreen Dragon of the East" (Dōngfāng Qīnglóng or Dōngfāng Cānglóng).
This dragon is also known as Seiryū in Japanese, Cheongryong in Korean and Thanh Long in Vietnamese.
Seven Mansions of the Azure Dragon
As with the other three Symbols, there are seven astrological "Mansions" (positions of the Moon) within the Azure Dragon. The names and determinative stars are:
Cultural depictions
In the , the White Tiger's star is reincarnated as fictionalized General Luo Cheng, who serves Li Shimin. The Azure Dragon's Star is reincarnated as General Shan Xiongxin, who serves Wang Shichong. The two generals are sworn brothers of Qin Shubao, Cheng Zhijie and Yuchi Gong. After death, their souls are said to possess heroes of the Tang dynasty and Goguryeo, such as Xue Rengui and Yeon Gaesomun.
The Azure Dragon appears as a door god at Taoist temples. He was represented on the tomb of Wang Hui (stone coffin, east side) at Xikang in Lushan. A rubbing of this was collected by David Crockett Graham and is in the Field Museum of Natural History. The dragon featured on the Chinese national flag in 1862–1912, and on the Twelve Symbols national emblem from 1913 to 1928.
Influence
Japan
In Japan, the Azure Dragon is one of the four guardian spirits of cities and is believed to protect the city of Kyoto on the east. The west is protected by the White Tiger, the north is protected by the Black Tortoise, the south is protected by the Vermilion Bird, and the center is protected by the Yellow Dragon. In Kyoto, there are temples dedicated to each of these guardian spirits. The Azure Dragon is represented in the Kiyomizu Temple in eastern Kyoto. Before the entrance of the temple there is a statue of the dragon, which is said to drink from the waterfall within the temple complex at nighttime. Therefore, each year a ceremony is held to worship the dragon of the east. In 1983, the Kitora Tomb was found in the village of Asuka. All four guardians were painted on the walls (in the corresponding directions) and a system of the constellations was painted on the ceiling. This is one of the few ancient records of the four guardians.
Korea
In Korea, the murals of the Goguryeo tombs found at Uhyon-ni in South Pyongan province features the Azure Dragon and the other mythological creatures of the four symbols.
Gallery
See also
Chinese dragon
References
External links
Chinese constellations
Chinese dragons
Chinese gods
Dragon deities
Four Symbols
Onmyōdō deities | Azure Dragon | Astronomy | 700 |
3,592,883 | https://en.wikipedia.org/wiki/Siege%20of%20Fort%20Pitt | The siege of Fort Pitt took place during June and July 1763 in what is now the city of Pittsburgh, Pennsylvania, United States. The siege was a part of Pontiac's War, an effort by Native Americans to remove the Anglo-Americans from the Ohio Country and Allegheny Plateau after they refused to honor their promises and treaties to leave voluntarily after the defeat of the French. The Native American efforts of diplomacy, and by siege, to remove the Anglo-Americans from Fort Pitt ultimately failed.
This event is known for a possible attempt at biological warfare, in which William Trent and Simeon Ecuyer, a Swiss mercenary in British service, may have given items from a smallpox infirmary as gifts to Native American emissaries with the hope of spreading the deadly disease to nearby tribes. The effectiveness is unknown: the method used is inefficient compared to respiratory transmission, and such attempts to spread the disease are difficult to differentiate from epidemics arising from previous contacts with colonists.
Background
Fort Pitt was built in 1758 during the French and Indian War, on the site of what was previously Fort Duquesne in what is now the city of Pittsburgh, Pennsylvania, United States. The French abandoned and destroyed Fort Duquesne in November 1758 with the approach of General John Forbes's expedition. The Forbes expedition was successful in part because of the Treaty of Easton, in which area American Indians agreed to end their alliance with the French. American Indians—primarily the Six Nations, Delawares and Shawnees—made this agreement with the understanding that the British would leave the area after their war with the French. Instead of leaving the territory west of the Appalachian Mountains as they had agreed, the Anglo-Americans remained on Native lands and reinforced their forts while settlers continued to push westward, despite the Royal Proclamation of 1763 placing a limit upon the westward expansion of the American colonies. The hostilities between the French and British declined significantly after 1760, followed by a final cessation of hostilities and the formal surrender of the French at the Treaty of Paris in February 1763.
The attacks led by Pontiac against the British in early May 1763, near Fort Detroit, mark what is generally considered to be the beginning of Pontiac's War. The siege of Fort Pitt and numerous other British forts during the spring and summer of 1763 were part of an effort by American Indians to reclaim their territory by driving the British out of the Ohio Country and back across the Appalachian Mountains. While many of the forts and outposts in the region were destroyed, the Indian effort to remove the British from Fort Pitt ultimately failed.
Diplomacy and siege
By May 27, the uprising reached the tribes near Fort Pitt, and there were many signs of impending hostilities. The captain of the Fort Pitt militia learned that the Delaware tribe just north of the fort had abandoned their dwellings and cornfields overnight. The Mingo had also abandoned their villages further up the river. The proprietor of the Pennsylvania provincial store reported that numerous Delaware warriors had arrived "in fear and haste" to exchange their skins for gunpowder and lead. The western Delaware warrior leaders Wolf and Keekyuscung had fewer than 100 warriors, so did not immediately attack the well-fortified Fort Pitt. Instead, on May 29, they attacked the supporting farms, plantations and villages in the vicinity of the fort. Panicked settlers crowded into the already overcrowded fort. Captain Simeon Ecuyer, a 22-year veteran Swiss mercenary in British service, tried to ready his fort after this news of expanding hostilities, putting his 230 men, half regulars and half quickly organized militia, on alert. The fort's exceptional structural defenses, made of stone with bastions covering all angles of attack, were supported by 16 cannons which he had permanently loaded. Ecuyer demolished the nearby village houses and structures to deny cover for attackers. He had trenches dug outside the fort, and set out beaver traps. Smallpox had been discovered within the fort, prompting Ecuyer to build a makeshift hospital in which to quarantine those infected.
On June 16, four Shawnee visited Fort Pitt and warned Alexander McKee and Captain Simeon Ecuyer that several Indian nations had accepted Pontiac's war belt and bloody hatchet and were going on the offensive against the British, but that the Delaware were still divided, with the older Delaware chiefs advising against war. The following day, however, the Shawnee returned and reported a more threatening situation, saying that all the nations "had taken up the hatchet" against the British, and were going to attack Fort Pitt. Even the local Shawnee themselves "were afraid to refuse" to join the uprising, a subtle hint that the occupants of Fort Pitt should leave. Ecuyer dismissed the warnings and ignored the requests to leave. On June 22, Fort Pitt was attacked on three sides by Shawnee, western Delaware, Mingo and Seneca, which prompted return fire from Ecuyer's artillery. This initial attack on the fort was repelled. Since the Indians were unfamiliar with siege warfare, they opted to try diplomacy yet again. On June 24, Turtleheart spoke with McKee and Trent outside the fort, informing them that all of the other forts had fallen, and that Fort Pitt "is the only one you have left in our country." He warned McKee that "six different nations of Indians" were ready to attack if the garrison at the fort did not retreat immediately. They thanked Turtleheart and assured him that Fort Pitt could withstand "all nations of Indians", and they presented the Indian dignitaries with two small blankets and a handkerchief from the smallpox hospital. For the next several days it remained relatively quiet, although reports were coming in about fort after fort falling before large bands of attacking warriors.
On July 3, four Ottawa newcomers requested a parley and tried to trick the occupants of Fort Pitt into surrender, but the ruse failed. This was followed by several weeks of relative quiet, through July 18, when a large group of warriors arrived, likely from the Fort Ligonier area. McKee was informed by the Shawnee that the Indians were still hopeful of an amicable outcome, similar to agreements just made at Detroit. On July 26, a large conference headed by Ecuyer was convened with several leaders of the Ohioan tribes outside the walls of Fort Pitt. The Indian delegation, Shingas, Wingenum and Grey Eyes among them, came to the fort under a flag of truce to parley, and again requested that the British leave this place. They explained that by taking the Indians' country the British had caused this war, and Tessecumme of the Delaware noted that the British were the cause of the trouble since they had broken their promises and treaties. They had come onto Indian land and built forts, despite being asked not to, so now the tribes in the area had amassed to take back their lands. He informed Ecuyer that there was still a short time remaining to leave peacefully. The Delaware and Shawnee chiefs made sure Captain Ecuyer at Fort Pitt understood the cause of the conflict. Turtleheart told him, "You marched your armies into our country, and built forts here, though we told you, again and again, that we wished you to move, this land is ours, and not yours." The Delaware also let it be known, "that all the country was theirs; that they had been cheated out of it, and that they would carry on the war till they burnt Philadelphia". The British refused to leave, claiming that this was their home now. They bluffed that they could hold out for three years, and bragged that several large armies were coming to their aid. This "very much enraged" the Indian delegation; Trent wrote that "White Eyes and Wingenum seemed to be very much irritated and would not shake hands with our people at parting." On July 28, the siege began in earnest and continued for several days. Seven of the fort garrison were wounded, at least one mortally; Ecuyer was wounded in the leg by an arrow.
For Commander-in-Chief, North America Jeffery Amherst, who before the war had dismissed the possibility that the Indians would offer any effective resistance to British rule, the military situation over the summer had become increasingly grim. The frustration was so great that he wrote to Colonel Henry Bouquet and instructed him not to take any Indian prisoners. He proposed that they should be intentionally exposed to smallpox, hunted down with dogs, and "Every other method that can serve to Extirpate this Execrable Race." Amherst had directed Bouquet to take his troops to relieve Fort Pitt, a march that would take several weeks. At Fort Pitt, the siege did not let up until August 1, 1763, when most of the Indians broke off their attack in order to intercept the body of almost 500 British troops marching to the fort under Colonel Bouquet. On August 5, these two forces met at Edge Hill in the Battle of Bushy Run. Bouquet survived the attack and the Indians were unable to prevent his command from relieving Fort Pitt on August 10.
Aftermath
More than 500 British troops and perhaps a couple thousand settlers had died in the Ohio Valley, and of more than a dozen British forts, only Detroit, Niagara and Pitt remained standing at the height of this uprising. On October 7, 1763, the Crown issued Royal Proclamation of 1763, which forbade all settlement west of the Appalachian Mountains—a proclamation ignored by British settlers, and unenforced by the British military. Fort Pitt would remain in British hands, and would become a central hub for migrant settlers as they pushed west in ever larger numbers over the next decade.
Biological warfare
Handoff of infirmary items
Sometime in the spring of 1763, a smallpox epidemic broke out near Fort Pitt and subsequently spread there. A smallpox hospital was then also established there to treat sick troops. There had also been an earlier epidemic among Ohio tribes in the early 1750s, as smallpox outbreaks occurred every dozen or so years. According to John McCullough, who was held captive, some of the Mahoning village warriors who raided a Juniata settlement caught smallpox there, and it subsequently killed several of them.
In 1924 the Mississippi Valley Historical Review published a journal written by William Trent, a fur trader and merchant commissioned as a captain at Fort Pitt. For June 24, 1763, Trent wrote about a meeting with two Delaware Indians at the fort. "Out of our regard to them we gave them two Blankets and an Handkerchief out of the Small Pox Hospital. I hope it will have the desired effect." (It was commonly believed in past centuries that smallpox could be readily spread at a distance through infected clothing or bedding. However, in the 1960s A. R. Rao’s detailed research, during the last years that smallpox was sufficiently prevalent for its mode of transmission to be studied, found no evidence for this mode of transmission. He concluded that it was a breath-borne disease, transmitted by "inhalation".)
The two blankets and the handkerchief from the infirmary were seemingly wrapped in a piece of linen. The blankets and handkerchief were unwashed and dirty. In 1955 a record of Trent's trading firm was found. It had an invoice for the handkerchief, two blankets and the linen to be given to the Natives, and the expense was signed by Ecuyer. Ecuyer was relatively inexperienced, having only been a captain since April the year before and having taken over the command of the fort the same November. Trent was likely the main orchestrator of the idea, considering he had more experience with the disease and had even helped set up the smallpox hospital. Half-Native Alexander McKee also played a part in relaying messages, but he possibly did not know about the items. This plan was carried out independently of General Amherst and Colonel Bouquet.
The meeting happened on June 24. The night before "Two Delawares called for Mr. McKee and told him they wanted to speak to him in the Morning." The conference took place just outside of Fort Pitt. The participants were Ecuyer, McKee, Turtle's Heart, and another Delaware, "Mamaltee a Chief." The two Delaware men tried to coax the people holed up in the fort to leave, an option that Ecuyer promptly rejected, stating that reinforcements were coming to Fort Pitt and that the stronghold could easily hold out. After conferring with their chiefs, the two "returned and said they would hold fast of the Chain of friendship", but their assurances did not seem genuine. The messengers had asked for presents such as food and alcohol, "to carry us Home." Requesting gifts was common, but Ecuyer in this case seemed especially generous. Turtle's Heart and his companion received food in "large quantities", some "600 Rations." Included among this was the linen bundle containing the handkerchief and two blankets.
A month later, on July 22, Trent met with the same delegates again; they seemingly had not contracted smallpox: "Gray Eyes, Wingenum, Turtle's Heart and Mamaultee, came over the River told us their Chiefs were in Council, that they waited for Custaluga who they expected that Day."
Gershom Hicks, who was fluent in the Delaware language and also knew some Shawnee, testified that starting from spring 1763 up to April 1764 around a hundred Natives from different tribes such as Lenni Lenape (Delaware) and Shawnee died in the smallpox epidemic, making it a relatively minor smallpox outbreak. After visiting Pittsburgh a few years later, David McClure would write in his journal published in 1899, "I was informed at Pittsburgh, that when the Delawares, Shawanese & others, laid siege suddenly and most traitorously to Fort Pitt, in 1764, in a time of peace, the people within, found means of conveying the small pox to them, which was far more destructive than the guns from the walls, or all the artillery of Colonel Boquet's army, which obliged them to abandon the enterprise."
Amherst letters
A month later, in July, Colonel Bouquet discussed Pontiac's War in detail with General Amherst via letters, and in postscripts of three letters, in a more freeform style, Amherst also briefly broached the subject of using smallpox as a weapon. Bouquet brought up blankets as a means without going into specifics, and Amherst supported the idea "to Extirpate this Execrable Race".
Bouquet himself probably never had the opportunity to "Send the Small Pox." He was very concerned about smallpox, having never had it himself. When Bouquet wrote to Ecuyer, he did not mention the disease. He died only two years later, in 1765, of yellow fever.
Later assessments
This event is usually described as an early attempt at biological warfare. However, the plan's effectiveness is generally questioned.
Early research
The account of the British infecting Natives with smallpox during Pontiac's War of 1763 originated with the nineteenth-century historian Francis Parkman. His account has been relied on by later writers. He described Amherst's reply to Bouquet as a "detestable suggestion" and concluded "There is no direct evidence that Bouquet carried into effect the shameful plan of infecting the Indians though, a few months after, the small-pox was known to have made havoc among the tribes of the Ohio." Parkman had the impression that Amherst had planned the gifting, although Amherst approached the matter only a month later. Following Parkman was Howard Peckham, who was more interested in the overall war and paid only cursory attention to the incident, briefly describing Ecuyer handing over the handkerchief and blankets from the smallpox hospital. He quoted a testimony of a smallpox outbreak and stated that it certainly affected the Natives' ability to wage war. Bernhard Knollenberg was more critical and pointed out that both Parkman and Peckham had not noticed that the smallpox epidemic among the tribes had been reported to have begun in the spring of 1763, quite some time before the meeting. Knollenberg even doubted the authenticity of the documents at first, before he was contacted via letter by historian Donald H. Kent, who had found a record of Trent's sundries list signed by Ecuyer.
Later researchers
Francis Jennings, a historian who extensively studied Parkman's writings, had a more damning view. He indicated that the fighting strength of the Natives was greatly compromised by the plan. Microbiologist Mark Wheelis says the act of biological aggression at Fort Pitt is indisputable, but that at the time such rare attempts to transmit infection seldom worked and were probably made redundant by natural routes of transmission. The practice was restrained by lack of knowledge. Elizabeth A. Fenn writes that "the actual effectiveness of an attempt to spread smallpox remains impossible to ascertain: the possibility always exists that infection occurred by some natural route." Philip Ranlet sees the fact that the same delegates were met again a month later as a clear sign that the blankets had no effect, noting that nearly all of the Natives met were recorded as living for decades afterwards. He also questions why, if there had been any success, Trent did not gloat about it in his journal. David Dixon holds it likely that the transmission happened via some other route, possibly the event described by John McCullough. Barbara Mann holds that the distribution worked, arguing that Gershom Hicks's testimony of the epidemic starting by spring is explainable by Hicks lacking a calendar. Mann also estimates that papers related to the incident have been destroyed.
Researchers James W. Martin, George W. Christopher and Edward M. Eitzen, writing in a publication for the US Army Medical Department Center & School, Borden Institute, found that "In retrospect, it is difficult to evaluate the tactical success of Captain Ecuyer's biological attack because smallpox may have been transmitted after other contacts with colonists, as had previously happened in New England and the South. Although scabs from smallpox patients are thought to be of low infectivity as a result of binding of the virus in fibrin matrix, transmission by fomites has been considered inefficient compared with respiratory droplet transmission." In an article published in the journal Clinical Microbiology and Infection researchers Vincent Barras and Gilbert Greub conclude that "in the light of contemporary knowledge, it remains doubtful whether his hopes were fulfilled, given the fact that the transmission of smallpox through this kind of vector is much less efficient than respiratory transmission, and that Native Americans had been in contact with smallpox >200 years before Ecuyer's trickery, notably during Pizarro's conquest of South America in the 16th century. As a whole, the analysis of the various 'pre-microbiological' attempts at BW illustrate the difficulty of differentiating attempted biological attack from naturally occurring epidemics."
Citations
References
External links
NativeWeb documents on: Amherst-Bouquet - Fort Pitt - Fenn on smallpox in the Americas
Bouquet Papers
Global Biosecurity: Threats and Responses; Katona, Peter; Routledge
Ecuyer, Simeon: Fort Pitt and letters from the frontier (1892): Entry June 2, 1763 - Entry of June 24, 1763
"Colonial Germ Warfare", article from Colonial Williamsburg Journal
John McCullough Narrative
History of that part of the Susquehanna and Juniata valleys, embraced in the counties of Mifflin, Juniata, Perry, Union and Snyder, in the commonwealth of Pennsylvania; Everts, Peck & Richards; 1886
Pioneers of Second Fork James P. Burke
Proceedings of Sir William Johnson with the Indians at Fort Stanwix to settle a Boundary Line. 1768
Fort Pitt
Biological warfare
Fort Pitt (Pennsylvania)
Fort Pitt 1763
Fort Pitt
Smallpox
Fort Pitt
1763 in North America
British war crimes
Native American genocide | Siege of Fort Pitt | Biology | 4,012 |
24,093,107 | https://en.wikipedia.org/wiki/Double-star%20snark | In the mathematical field of graph theory, the double-star snark is a snark with 30 vertices and 45 edges.
In 1975, Rufus Isaacs introduced two infinite families of snarks—the flower snark and the BDS snark, a family that includes the two Blanuša snarks, the Descartes snark and the Szekeres snark (BDS stands for Blanuša Descartes Szekeres). Isaacs also discovered one 30-vertex snark that does not belong to the BDS family and that is not a flower snark — the double-star snark.
As a snark, the double-star graph is a connected, bridgeless cubic graph with chromatic index equal to 4. The double-star snark is non-planar and non-hamiltonian but is hypohamiltonian. It has book thickness 3 and queue number 2.
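These properties are machine-checkable. Below is a hedged sketch using the networkx library; since this article does not give the double-star snark's edge list, the Petersen graph (the smallest snark, bundled with networkx) stands in, and the same checks would apply to the 30-vertex, 45-edge list:

```python
import networkx as nx

G = nx.petersen_graph()                   # stand-in snark; swap in the
                                          # double-star snark's 45 edges
assert nx.is_connected(G)
assert not nx.has_bridges(G)              # bridgeless
assert all(d == 3 for _, d in G.degree)   # cubic

def three_edge_colorable(G):
    """Backtracking search for a proper 3-edge-coloring."""
    edges = [frozenset(e) for e in G.edges]

    def clashes(e, c, color):
        # some edge adjacent to e (sharing a vertex) already has color c
        return any(color.get(frozenset(f)) == c for v in e for f in G.edges(v))

    def backtrack(i, color):
        if i == len(edges):
            return True
        for c in range(3):
            if not clashes(edges[i], c, color):
                color[edges[i]] = c
                if backtrack(i + 1, color):
                    return True
                del color[edges[i]]
        return False

    return backtrack(0, {})

# A snark has chromatic index 4: by Vizing's theorem a cubic graph needs
# 3 or 4 edge colors, and no proper 3-edge-coloring exists here.
assert not three_edge_colorable(G)
print("all snark properties verified")
```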
Gallery
References
Individual graphs
Regular graphs | Double-star snark | Mathematics | 208 |
29,952,043 | https://en.wikipedia.org/wiki/Aeruginospora%20furfuracea | Aeruginospora furfuracea is a species of fungus in the family Hygrophoraceae. The species, described by Egon Horak in 1973, is found in New Zealand. It is currently placed in the genus Aeruginospora, but may actually belong in Camarophyllopsis.
References
External links
Tricholomataceae
Fungi of New Zealand
Fungi described in 1973
Taxa named by Egon Horak
Fungus species | Aeruginospora furfuracea | Biology | 93 |
25,500,428 | https://en.wikipedia.org/wiki/Information%20technology%20in%20Morocco | The information technology sector in Morocco has been witnessing significant expansion. Morocco is the first country in North Africa to install a 3G network. The number of Internet subscribers in the country jumped 73% in 2006 over the previous year. Further, a new offshore site at Casablanca, with state-of-the-art technologies and other incentives, has grabbed the attention of many global multinationals. Setting up offshore service centers in the nation has become tempting. Such is the rate of growth that offshoring and IT activities are estimated to contribute $500 million to the country's GDP and employ 30,000 people by 2015. The communications sector already accounts for half of all foreign direct investments Morocco received over the past five years.
IT sector
The IT sector generated a turnover of Dh7bn ($910m) in 2007, which represented an 11% increase compared to 2006. The number of Moroccan internet subscribers in 2007 amounted to 526,080, representing an increase of 31.6% compared to the previous year and a 100% increase compared to 2005. The national penetration rate for internet subscription remains low, even though it increased from 0.38% in 2004 to 1.72% in 2007. Yet over 90% of subscribers have a broadband ADSL connection, which is one of the highest ratios in the world. While the telecoms sector remains the big earner, with Dh33bn ($4.3bn), the IT and offshore industries should generate Dh21bn ($2.7bn) each by 2012. In addition, the number of employees should increase from 40,000 to 125,000. The government hopes that adding more local content to the internet will increase usage. There have also been efforts to add more computers to schools and universities. E-commerce is likely to take off in the next few years, especially as the use of credit cards is gaining more ground in Morocco. Although computer and internet use have made a great leap forward in the past five years, the IT market still finds itself in its infancy and offers great potential for further development.
Telecommunications
The mobile telephony segment comprises three operators: Maroc Telecom, the former state-owned company, with a market share of 58.2 per cent in 2008; Méditel (31.5 per cent); and, since April 2007, Wana (10.4 per cent). Maroc Telecom is expected to lose 12 per cent of the mobile market share. The mobile telephony market is growing rapidly; the number of subscribers reached 22.3 million in September 2008. In 2008, over 64 per cent of the population had more than one mobile phone in their households, compared to 48 per cent in 2005. The introduction of customer loyalty plans, the downward trend in prices and the enhancement of service offerings over the last two years have further boosted mobile telephony. As of June 2008, the mobile penetration rate had risen to 69.4 per cent, as against 57.8 per cent in June 2007.
In the fixed-line telephony segment, 3G telecommunications licences have been granted to two operators, Méditel and Wana. The data on fixed-line telephony published by the national telecommunications regulator ANRT in March 2008 indicate a fixed-line penetration rate of about 13.3 per cent, with the number of subscribers rising from 1.62 million in March 2007 to over 2.71 million in March 2008.
Offshoring
In 2009, Morocco entered the Gartner top 30 list for the first time, thanks to its population's language skills and cultural compatibility, especially with regard to the French-speaking markets in Europe. Morocco opened its doors to offshoring in July 2006, as one component of the development initiative Plan Emergence, and has attracted roughly half of the French-speaking call centres that have gone offshore so far, and a number of the Spanish ones. In 2007 the country had about 200 call centres, including 30 of significant size, employing a total of over 18,000 people.
Government policy
The future of the Moroccan IT sector was laid out in M@roc 2006–12, an initiative by the Moroccan government. The plan aims to increase the combined value of the telecoms and IT sector from Dh24bn ($3.1bn) in 2004 to Dh60bn ($7.8bn) in 2012.
In 2009 Morocco announced it will create a commission for the protection of personal data. The move comes as the country is seeking to adapt the laws pertaining to the treatment of personal data, to those of the EU. This is expected to strengthen its offshoring and outsourcing market. In recent years the government also launched several science parks, and areas dedicated to BPO and IT.
References
Morocco
Science and technology in Morocco | Information technology in Morocco | Technology | 974 |
23,984,989 | https://en.wikipedia.org/wiki/Energy%20Manufacturing%20Co.%20Inc | Energy Manufacturing Co., Inc. is an American manufacturing company based in Monticello, Iowa. Established in 1944, the company produces a variety of hydraulic cylinders, hydraulic pumps, valves, and power systems.
History
In the early 1940s B.J. Pasker ran a blacksmith shop in New Vienna, Iowa. In this shop his son, Jerry, produced farm wagons made from discarded automobile spindles and rims. During this period, Pasker also developed a hydraulically powered front loader which mounted to farm tractors. In 1944 Jerry Pasker outgrew the blacksmith shop and sought a larger facility for his operations.
Energy in Monticello Iowa
Jerry Pasker moved to South Cedar Street in Monticello, Iowa and was introduced to Harold Sovereign, who sold John Deere tractors and equipment. Pasker and Sovereign formed a partnership known as Industrious Farmer Equipment Company, and moved the business to the vacant second floor of Sovereign's dealership on South Cedar Street. In 1946 the business again outgrew its facility. To accommodate the expansion, Pasker purchased the property of an auto dealership on Main Street. During that time the company manufactured hydraulic components, wagon hoists, truck hoists, valves and hydraulic cylinders. In 1948 the company's name was changed to Energy Farm Equipment Company. In 1962 the business incorporated to become Energy Manufacturing Company, Inc., by which it is still known.
Jerry Pasker was killed in a 22 July 1964 airplane crash in Winnipeg, Manitoba, Canada; the company presidency then passed to LaVon Pasker.
Energy Manufacturing after Jerry Pasker
In 1976 Energy completed construction of a new plant in the Monticello Industrial Park. In 1985 Energy Manufacturing Company was sold to CGF Industries of Topeka, Kansas. CGF also purchased an Omaha, Nebraska, company called "Williams Machine and Tool". In 1997 Energy was purchased by Lincolnshire Partners, and in 1999 Energy was purchased by Textron, Inc. Textron ran the company for five years until Energy was acquired by an investment group. On November 15, 2005, Energy added office space for administrative and manufacturing support to the facility.
Energy 2005 - 2013
Energy designs and manufactures custom welded hydraulic cylinders. It also designs and manufactures hydraulic valves, pumps, powerpacks and power systems. Energy's cylinders are used in construction, road machinery, forestry, man lift and hoist, industrial baler, waste compacting, and agricultural industries. Energy manufactures a wide variety of hydraulic cylinders: welded, tie-rod, ram-type, rephasing, telescopic, and position-sensing. Energy has designed and manufactured hydraulic cylinders with bores from less than one inch (2.5 cm) up to 11 inches (28 cm). Cylinders have been manufactured with strokes up to 15 feet (4.5 m). Energy has designed cylinders with working pressures as high as 10,000 psig (690 bar).
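For scale, combining the largest quoted bore with the highest quoted pressure (illustrative arithmetic only; the two figures need not describe a single product) gives the force such a cylinder could exert:

```python
import math

bore_in = 11.0           # largest quoted bore, in inches
pressure_psi = 10_000.0  # highest quoted working pressure
area_sq_in = math.pi * (bore_in / 2) ** 2
force_lbf = pressure_psi * area_sq_in        # F = P * A
print(f"{force_lbf:,.0f} lbf (~{force_lbf * 4.448222 / 1e6:.2f} MN)")
# ~950,000 lbf, roughly 4.2 MN of extension force
```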
Energy Manufacturing sold
On May 30, 2013, Ligon Industries LLC acquired Energy Manufacturing Co. Inc. Ligon Industries, LLC was founded in 1999, and is located in Birmingham, Alabama. In addition to Energy Manufacturing, Ligon holds 13 other manufacturing companies, seven of which are in the fluid power industry. Ligon is the largest independent manufacturer of hydraulic cylinders in North America.
See also
Hydraulic cylinder
Telescopic cylinder
Tie Rod Cylinder
References
External links
Energy Manufacturing
Williams Machine and Tool
Energy Manufacturing on NFPA list
Hydraulics Pneumatics Article about Energy
Ligon Industries
Fluid dynamics
Hydraulics
Pumps | Energy Manufacturing Co. Inc | Physics,Chemistry,Engineering | 712 |
11,459,978 | https://en.wikipedia.org/wiki/Alternaria%20radicina | Alternaria radicina is a fungal plant pathogen infecting carrots.
References
radicina
Fungal plant pathogens and diseases
Carrot diseases
Fungi described in 1922
Fungus species | Alternaria radicina | Biology | 37 |
37,910,757 | https://en.wikipedia.org/wiki/Joint%20CMU-Pitt%20Ph.D.%20Program%20in%20Computational%20Biology | The Joint CMU-Pitt Ph.D Program in Computational Biology (CPCB) is an interdisciplinary graduate training program in computational biology. It is a joint program between Carnegie Mellon University and the University of Pittsburgh in Pittsburgh, Pennsylvania.
The Department of Computational Biology (DCB) at the University of Pittsburgh and the Computational Biology Department at Carnegie Mellon University together serve as the administrative homes of the CPCB. Dr. Ivet Bahar, the John K. Vries Chair of the Department of Computational Biology at Pitt, and Dr. Robert F. Murphy, Director of the Computational Biology Department at Carnegie Mellon, are the founding directors of the CPCB.
In 2009, the CPCB was selected as one of ten programs nationwide to receive an NIH T32 Training Grant as part of the NIBIB-HHMI Interfaces Program (award T32-EB009403).
References
External links
Joint CMU-Pitt Ph.D. Program in Computational Biology
Computational biology | Joint CMU-Pitt Ph.D. Program in Computational Biology | Biology | 197 |
4,724,047 | https://en.wikipedia.org/wiki/DOD-STD-2167A | DOD-STD-2167A (Department of Defense Standard 2167A), titled "Defense Systems Software Development", was a United States defense standard, published on February 29, 1988, which updated the less well known DOD-STD-2167 published 4 June 1985. This document established "uniform requirements for the software development that are applicable throughout the system life cycle." This revision was written to allow the contractor more flexibility and was a significant reorganization and reduction of the previous revision; e.g., where the previous revision prescribed pages of design and coding standards, this revision only gave one page of general requirements for the contractor's coding standards; while DOD-STD-2167 listed 11 quality factors to be addressed for each software component in the SRS, DOD-STD-2167A only tasked the contractor to address relevant quality factors in the SRS. Like DOD-STD-2167, it was designed to be used with DOD-STD-2168, "Defense System Software Quality Program".
On December 5, 1994 it was superseded by MIL-STD-498, which merged DOD-STD-2167A, DOD-STD-7935A, and DOD-STD-2168 into a single document, and addressed some vendor criticisms.
Criticism
One criticism of the standard was that it was biased toward the Waterfall Model. Although the document states "the contractor is responsible for selecting software development methods (for example, rapid prototyping)", it also required "formal reviews and audits" that seemed to lock the vendor into designing and documenting the system before any implementation began.
Another criticism was the focus on design documents, to the exclusion of Computer-Aided Software Engineering (CASE) tools being used in the industry. Vendors would often use the CASE tools to design the software, then write several standards-required documents to describe the CASE-formatted data. This created problems matching design documents to the actual product.
Predecessors
DOD-STD-2167 and DOD-STD-2168 (often mistakenly referred to as "MIL-STD-2167" and "MIL-STD-2168" respectively) are the official specification numbers for superseded U.S. DoD military standards describing documents and procedures required for developing military computer systems. Specifically:
DOD-STD-2167 described the necessary project documentation to be delivered when developing a "Mission-Critical" computer software system.
DOD-STD-2168 was the DoD's software quality assurance standard, titled "Defense System Software Quality Program".
Successors
One result of these criticisms was to begin designing a successor standard, which became MIL-STD-498. Another result was a preference for formal industry-designed standards (such as IEEE 12207) and informal "best practice" specifications, rather than trying to determine the best processes and making them formal requirements on suppliers.
MIL-STD-2167A with MIL-STD-498 eventually became the basis for DO-178 in the early 1980s, with DO-178 receiving subsequent revisions. MIL-STD-2167 and MIL-STD-498 together define standard software development life cycle processes that are expected to be implemented and followed, as well as prescriptively defining standard document format and content. In contrast, the less prescriptive DO-178B/C defines objectives that should be accomplished as acceptable means of demonstrating airworthiness, permitting relative flexibility in the life cycles and processes employed to accomplish those objectives.
References
External links
The DOD-STD-2167 standard
The DOD-STD-2167A standard
MIL-HDBK-287 A Tailoring Guide for DOD-STD-2167A
Military perspective on replacing DOD-STD-2167A with MIL-STD-498
Military statement together with DOD-STD-2167A with FAM-DRE-231
United States Department of Defense standards
1988 documents
Software development | DOD-STD-2167A | Technology,Engineering | 813 |
41,029,065 | https://en.wikipedia.org/wiki/White%20bream%20virus | White bream virus is a species of virus. It is the sole species in the subgenus Blicbavirus, which is in the genus Bafinivirus. It was first isolated from white bream (Blicca bjoerkna) in Germany. It is a bacilliform (rod-shaped) positive-sense single-stranded RNA virus.
References
Nidovirales
Zoonoses | White bream virus | Biology | 87 |
62,293,592 | https://en.wikipedia.org/wiki/Roger%20Scantlebury | Roger Anthony Scantlebury (born August 1936) is a British computer scientist and Internet pioneer who worked at the National Physical Laboratory (NPL) and later at Logica.
Scantlebury led the pioneering work to implement packet switching and associated communication protocols at the NPL in the late 1960s. He proposed the use of the technology in the ARPANET, the forerunner of the Internet, at the inaugural Symposium on Operating Systems Principles in 1967. During the 1970s, he was a major figure in the International Network Working Group through which he was an early contributor to concepts used in the Transmission Control Program which became part of the Internet protocol suite.
Early life
Roger Scantlebury was born in Ealing in 1936.
Career
National Physical Laboratory
Scantlebury worked at the National Physical Laboratory in south-west London, in collaboration with the National Research Development Corporation (NRDC). His early work was on the Automatic Computing Engine and English Electric DEUCE computers.
Following this he was tasked by Derek Barber to lead the implementation of Donald Davies' pioneering packet switching concepts for data communication. Scantlebury and Keith Bartlett were the first to describe the term protocol in a modern data-communications context in an April 1967 memorandum entitled A Protocol for Use in the NPL Data Communications Network. In October 1967, he attended the Symposium on Operating Systems Principles in the United States, where he gave an exposition of packet-switching, developed at NPL (and referenced the work of Paul Baran). Also attending the conference was Larry Roberts, from the ARPA; this was the first time that Larry Roberts had heard of packet switching. Scantlebury persuaded Roberts and other American engineers to incorporate the concept into the design for the ARPANET.
Subsequently he led the development of the NPL Data Communications Network, publishing several research papers pioneering the development of packet-switched computer networks. Elements of the network became operational in early 1969, the first implementation of packet switching, and the NPL network was the first to use high-speed links. He was seconded to the Post Office Telecommunications in 1969, participating in a data communications study and supervising four data communications-related research contracts. This research team developed the alternating bit protocol (ABP).
Along with Davies and Barber, he was a major figure in the International Network Working Group (INWG) from 1972, initially chaired by Vint Cerf. He attended the INWG meeting in New York in June 1973 that shaped the early direction of international network protocols, and was acknowledged by Bob Kahn and Vint Cerf in their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. He co-authored the standard agreed by INWG in 1975, Proposal for an international end to end protocol.
Scantlebury later reported directly to Davies at the NPL. As head of the data networks group within the Computer Science Division, he was responsible for the UK technical contribution to the European Informatics Network, a datagram network linking CERN, the French research centre INRIA and the UK’s National Physical Laboratory.
Later career
Scantlebury joined Logica in 1977 in their Communications Division, where he worked on the CCITT (ITU-T) X.25 protocol and on the formation of Euronet, a pan-European virtual circuit network using X.25. He moved to the Finance Division in 1981.
In the 2000s, he worked for Mercator Software, Integra SP and as a consultant. Subsequently, he worked for Kofax (now Tungsten Automation) and retired in 2020.
Personal life
Scantlebury married Christine Appleby in 1958 in Middlesex; they had two sons in 1961 and 1966, and a daughter in 1963. He lives in Esher.
He was influential in persuading NPL to sponsor a gallery about "Technology of the Internet" at The National Museum of Computing, which opened in 2009.
Publications
Wilkinson, P.T.; Scantlebury, R.A. (1968). The control functions in a local data network. IFIP Congress (2) 1968: 734-738.
Scantlebury, R. A.; Wilkinson, P.T.; Bartlett, K.A. (1968). The design of a message switching centre for a digital communication network. IFIP Congress (2) 1968: 723-727.
Scantlebury, R. A. (1969). A model for the local area of a data communication network objectives and hardware organization. Symposium on Problems in the Optimization of Data Communications Systems 1969: 183-204
Bartlett, Keith A.; Scantlebury, Roger A.; Wilkinson, Peter T. (1969). A note on reliable full-duplex transmission over half-duplex links. Commun. ACM 12(5): 260-261.
See also
History of the Internet
Internet in the United Kingdom § History
List of Internet pioneers
Protocol Wars
References
Further reading
External links
Internet Dreamers BBC interview with Vint Cerf, Bob Taylor, Larry Roberts and Roger Scantlebury, 2000
NPL, Packet Switching and the Internet Comments by David Rayner, Derek Barber, Roger Scantlebury, and Peter Wilkinson at the Symposium of the Institution of Analysts & Programmers, 2001
The Internet - Where it came from & where it is going, IET/BCS evening talk at the University of Cambridge, 2007
Celebrating 40 years of the net BBC News article quoting Roger Scantlebury, 2009
'Packet switching' system's first computer network BBC News interview with Roger Scantlebury, 2010
Alan Turing and the Ace computer, BBC News series on British computer pioneers, 2010
The Story of Packet Switching, Interview with Roger Scantlebury, Peter Wilkinson, Keith Bartlett, and Brian Aldous, 2011
Protocol Wars, Interview with Roger Scantlebury for the Computer History Museum, 2011
Internet pioneers airbrushed from history, Letter to the Guardian, 2013
The birth of the Internet in the UK, Google video featuring Vint Cerf, Roger Scantlebury, Peter Kirstein, Peter Wilkinson, 2013
The Joy of Data BBC Four program featuring an interview with Roger Scantlebury, 2016
How we nearly invented the internet in the UK Letter to the New Scientist, 2020
Fifty Years of the Internet Technology Event featuring Roger Scantlebury at The National Museum of Computing, 2020
1936 births
Living people
British computer scientists
History of computing in the United Kingdom
Internet pioneers
Packets (information technology)
People from Brentford
People from Esher
Scientists of the National Physical Laboratory (United Kingdom) | Roger Scantlebury | Technology | 1,317 |
20,403,801 | https://en.wikipedia.org/wiki/Delta%20set | In mathematics, a Δ-set, often called a Δ-complex or a semi-simplicial set, is a combinatorial object that is useful in the construction and triangulation of topological spaces, and also in the computation of related algebraic invariants of such spaces. A Δ-set is somewhat more general than a simplicial complex, yet not quite as sophisticated as a simplicial set. Simplicial sets have additional structure, so that every simplicial set is also a semi-simplicial set. As an example, suppose we want to triangulate the 1-dimensional circle $S^1$. To do so with a simplicial complex, we need at least three vertices, and edges connecting them. But Δ-sets allow for a simpler triangulation: thinking of $S^1$ as the interval [0,1] with the two endpoints identified, we can define a triangulation with a single vertex 0, and a single edge looping between 0 and 0.
Definition and related data
Formally, a Δ-set is a sequence of sets $\{S_n\}_{n=0}^{\infty}$ together with maps

$d_i^n \colon S_n \to S_{n-1}$

for each $n \geq 1$ and $0 \leq i \leq n$, that satisfy

$d_i^n \circ d_j^{n+1} = d_{j-1}^n \circ d_i^{n+1}$

whenever $i < j$. Often, the superscript of $d_i^n$ is omitted for brevity.
This definition generalizes the notion of a simplicial complex, where the $S_n$ are the sets of n-simplices, and the $d_i$ are the associated face maps, each sending a simplex in $S_n$ to its $i$-th face in $S_{n-1}$. The composition rule ensures that the $(n-1)$-dimensional faces of a simplex in $S_n$ agree on their shared $(n-2)$-dimensional faces, i.e. that the simplices are well-formed. A Δ-set is not as general as a simplicial set, since it lacks "degeneracies".
Given Δ-sets S and T, a map of Δ-sets is a collection of set-maps

$f_n \colon S_n \to T_n$

such that

$f_{n-1} \circ d_i = d_i \circ f_n$

whenever both sides of the equation are defined.
With this notion, we can define the category of Δ-sets, whose objects are Δ-sets and whose morphisms are maps of Δ-sets.
Each Δ-set has a corresponding geometric realization $|S|$, associating a geometrically defined space (a standard n-simplex) with each abstract simplex in the Δ-set, and then "gluing" the spaces together using inclusion relations between the spaces to define an equivalence relation:

$|S| = \left( \coprod_{n=0}^{\infty} S_n \times \Delta^n \right) \Big/ \sim$

where we declare $\sim$ as

$(\sigma, \iota_i(x)) \sim (d_i \sigma, x) \quad \text{for } \sigma \in S_{n+1},\; x \in \Delta^n.$

Here, $\Delta^n$ denotes a standard n-simplex as a space, and

$\iota_i \colon \Delta^n \to \Delta^{n+1}$

is the inclusion of the i-th face. The geometric realization is a topological space with the quotient topology.
The geometric realization of a Δ-set S has a natural filtration

$|S|_0 \subseteq |S|_1 \subseteq \cdots \subseteq |S|,$

where

$|S|_k = \left( \coprod_{n=0}^{k} S_n \times \Delta^n \right) \Big/ \sim$

is a "restricted" geometric realization.
Related functors
The geometric realization of a Δ-set described above defines a covariant functor from the category of Δ-sets to the category of topological spaces. Geometric realization takes a Δ-set to a topological space, and carries maps of Δ-sets to induced continuous maps between geometric realizations.
If S is a Δ-set, there is an associated free abelian chain complex, denoted $(\mathbb{Z}S, \partial)$, whose n-th group is the free abelian group

$(\mathbb{Z}S)_n = \mathbb{Z}\langle S_n \rangle$

generated by the set $S_n$, and whose n-th differential is defined by

$\partial_n = d_0 - d_1 + d_2 - \cdots + (-1)^n d_n.$
This defines a covariant functor from the category of Δ-sets to the category of chain complexes of abelian groups. A Δ-set is carried to the chain complex just described, and a map of Δ-sets is carried to a map of chain complexes, which is defined by extending the map of Δ-sets in the standard way using the universal property of free abelian groups.
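That $\partial$ is in fact a differential, i.e. that $\partial_{n-1} \circ \partial_n = 0$, follows from the face-map identity and is worth recording, since the chain-complex claim above depends on it:

$\partial_{n-1} \circ \partial_n = \sum_{i < j} (-1)^{i+j}\, d_i d_j + \sum_{i \geq j} (-1)^{i+j}\, d_i d_j = -\sum_{i \leq k} (-1)^{i+k}\, d_k d_i + \sum_{i \geq j} (-1)^{i+j}\, d_i d_j = 0,$

where the first sum was rewritten using $d_i d_j = d_{j-1} d_i$ (for $i < j$) and the substitution $k = j - 1$; the two remaining sums then cancel term by term.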
Given any topological space X, one can construct a Δ-set $S(X)$ as follows. A singular n-simplex in X is a continuous map

$\sigma \colon \Delta^n \to X.$

Define

$S_n(X)$

to be the collection of all singular n-simplices in X, and define

$d_i \colon S_n(X) \to S_{n-1}(X)$

by

$d_i(\sigma) = \sigma \circ \iota_i,$

where again $\iota_i$ is the $i$-th face map. One can check that this is in fact a Δ-set. This defines a covariant functor from the category of topological spaces to the category of Δ-sets. A topological space is carried to the Δ-set just described, and a continuous map of spaces is carried to a map of Δ-sets, which is given by composing it with the singular n-simplices.
Examples
This example illustrates the constructions described above. We can create a Δ-set S whose geometric realization is the unit circle $S^1$, and use it to compute the homology of this space. Thinking of $S^1$ as an interval with the endpoints identified, define

$S_0 = \{v\}, \qquad S_1 = \{e\},$

with $S_n = \varnothing$ for all $n \geq 2$. The only possible maps are

$d_0, d_1 \colon S_1 \to S_0.$
It is simple to check that this is a Δ-set, and that $|S| \cong S^1$. Now, the associated chain complex is

$0 \longrightarrow \mathbb{Z} \overset{\partial_1}{\longrightarrow} \mathbb{Z} \longrightarrow 0,$

where

$\partial_1(e) = d_0(e) - d_1(e) = v - v = 0.$

In fact, $\partial_n = 0$ for all n. The homology of this chain complex is also simple to compute:

$H_0(\mathbb{Z}S) = \mathbb{Z}, \qquad H_1(\mathbb{Z}S) = \mathbb{Z}.$

All other homology groups are clearly trivial.
The following example is from section 2.1 of Hatcher's Algebraic Topology. Consider the Δ-set structure given to the torus $T^2$ in the figure there, which has one vertex $v$, three edges $a$, $b$, $c$, and two 2-simplices $U$ and $L$.
The boundary map $\partial_1$ is 0 because there is only one vertex, so $H_0(T^2) = \mathbb{Z}$. Let $\{a, b, c\}$ be a basis for $(\mathbb{Z}S)_1$. Then $\partial_2(U) = a + b - c = \partial_2(L)$, so $\operatorname{im} \partial_2 = \langle a + b - c \rangle$, and hence

$H_1(T^2) = \ker \partial_1 / \operatorname{im} \partial_2 \cong \mathbb{Z}^3 / \mathbb{Z} \cong \mathbb{Z}^2.$

Since there are no 3-simplices, $H_2(T^2) = \ker \partial_2$. We have that $\partial_2(pU + qL) = (p+q)(a + b - c)$, which is 0 if and only if $p = -q$. Hence $\ker \partial_2$ is infinite cyclic, generated by $U - L$.

So $H_2(T^2) = \mathbb{Z}$. Clearly $H_n(T^2) = 0$ for $n \geq 3$.

Thus,

$H_n(T^2) = \begin{cases} \mathbb{Z}^2 & n = 1 \\ \mathbb{Z} & n = 0, 2 \\ 0 & \text{otherwise.} \end{cases}$
It is worth highlighting that the minimum number of simplices needed to endow with the structure of a simplicial complex is 7 vertices, 21 edges, and 14 2-simplices, for a total of 42 simplices. This would make the above calculations, which only used 6 simplices, much harder for someone to do by hand.
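Under the chain-complex functor described above, rational homology reduces to linear algebra on the boundary matrices. The Python sketch below is illustrative only (the helper betti_numbers and the matrix conventions are this example's own choices); it hard-codes the boundary maps of the circle and torus Δ-sets from this section and reads off Betti numbers with numpy. Integer torsion, absent in these two examples, would require Smith normal form instead.

import numpy as np

def betti_numbers(boundaries, n_cells):
    """Rational Betti numbers of a finitely generated chain complex.

    boundaries[k] is the matrix of the differential C_k -> C_{k-1}
    (an n_cells[k-1] x n_cells[k] array; boundaries[0] maps C_0 to 0).
    """
    bettis = []
    for k in range(len(n_cells)):
        rank_k = int(np.linalg.matrix_rank(boundaries[k])) if boundaries[k].size else 0
        if k + 1 < len(boundaries) and boundaries[k + 1].size:
            rank_next = int(np.linalg.matrix_rank(boundaries[k + 1]))
        else:
            rank_next = 0
        # b_k = dim ker(d_k) - rank(d_{k+1}) = n_cells[k] - rank(d_k) - rank(d_{k+1})
        bettis.append(n_cells[k] - rank_k - rank_next)
    return bettis

# Circle: one vertex v, one edge e with d_0(e) = d_1(e) = v, so del_1(e) = v - v = 0.
circle_boundaries = [np.zeros((0, 1)), np.zeros((1, 1))]
# Torus: one vertex, edges a, b, c, triangles U, L with del_2(U) = del_2(L) = a + b - c.
torus_boundaries = [np.zeros((0, 1)), np.zeros((1, 3)),
                    np.array([[1, 1], [1, 1], [-1, -1]])]

print(betti_numbers(circle_boundaries, [1, 1]))     # [1, 1]
print(betti_numbers(torus_boundaries, [1, 3, 2]))   # [1, 2, 1]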
This is a non-example. Consider a line segment. This is a 1-dimensional Δ-set and a 1-dimensional simplicial set. However, if we view the line segment as a 2-dimensional simplicial set, in which the 2-simplex is viewed as degenerate, then the line segment is not a Δ-set, as we do not allow for such degeneracies.
Abstract nonsense
We now inspect the relation between Δ-sets and simplicial sets. Consider the simplex category $\Delta$, whose objects are the finite totally ordered sets $[n] = \{0, 1, \ldots, n\}$ and whose morphisms are monotone maps. A simplicial set is defined to be a presheaf on $\Delta$, i.e. a (contravariant) functor $X \colon \Delta^{\mathrm{op}} \to \mathbf{Set}$. On the other hand, consider the subcategory $\hat{\Delta}$ of $\Delta$ whose morphisms are only the strict monotone maps. Note that the morphisms in $\hat{\Delta}$ are precisely the injections in $\Delta$, and one can prove that these are generated by the monotone maps of the form $\delta^i \colon [n] \to [n+1]$ which "skip" the element $i$. From this we see that a presheaf $S$ on $\hat{\Delta}$ is determined by a sequence of sets $S_n$ (where we denote $S([n])$ by $S_n$ for simplicity) together with maps $d_i \colon S_{n+1} \to S_n$ for $0 \leq i \leq n+1$ (where we denote $S(\delta^i)$ by $d_i$ for simplicity as well). In fact, after checking that $\delta^j \circ \delta^i = \delta^i \circ \delta^{j-1}$ in $\hat{\Delta}$ whenever $i < j$, one concludes that

$d_i \circ d_j = d_{j-1} \circ d_i$

whenever $i < j$. Thus, a presheaf on $\hat{\Delta}$ determines the data of a Δ-set and, conversely, all Δ-sets arise in this way. Moreover, Δ-maps between Δ-sets correspond to natural transformations when we view $S$ and $T$ as (contravariant) functors. In this sense, Δ-sets are presheaves on $\hat{\Delta}$ while simplicial sets are presheaves on $\Delta$.
From this perspective, it is now easy to see that every simplicial set is a Δ-set. Indeed, notice there is an inclusion $\iota \colon \hat{\Delta} \hookrightarrow \Delta$, so that every simplicial set $X \colon \Delta^{\mathrm{op}} \to \mathbf{Set}$ naturally gives rise to a Δ-set, namely the composite $X \circ \iota^{\mathrm{op}} \colon \hat{\Delta}^{\mathrm{op}} \to \mathbf{Set}$.
Pros and cons
One advantage of using Δ-sets in this way is that the resulting chain complex is generally much simpler than the singular chain complex. For reasonably simple spaces, all of the chain groups $(\mathbb{Z}S)_n$ will be finitely generated, whereas the singular chain groups are, in general, not even countably generated.
One drawback of this method is that one must prove that the geometric realization of the Δ-set is actually homeomorphic to the topological space in question. This can become a computational challenge as the Δ-set increases in complexity.
See also
Simplicial complexes
Simplicial sets
Singular homology
References
Topology
Algebraic topology
Simplicial sets | Delta set | Physics,Mathematics | 1,708 |
17,764,646 | https://en.wikipedia.org/wiki/Momentum%20%28finance%29 | In finance, momentum is the empirically observed tendency for rising asset prices or securities return to rise further, and falling prices to keep falling. For instance, it was shown that stocks with strong past performance continue to outperform stocks with poor past performance in the next period with an average excess return of about 1% per month. Momentum signals (e.g., 52-week high) have been used by financial analysts in their buy and sell recommendations.
The existence of momentum is a market anomaly, which finance theory struggles to explain. The difficulty is that an increase in asset prices, in and of itself, should not warrant further increase. Such increase, according to the efficient-market hypothesis, is warranted only by changes in demand and supply or new information (cf. fundamental analysis). Students of financial economics have largely attributed the appearance of momentum to cognitive biases, which belong in the realm of behavioral economics. The explanation is that investors are irrational, in that they underreact to new information by failing to incorporate news in their transaction prices. However, much as in the case of price bubbles, other research has argued that momentum can be observed even with perfectly rational traders.
See also
Factor investing
Carhart four-factor model
Momentum investing
Technical analysis
References
Financial markets
Behavioral finance
Technical analysis | Momentum (finance) | Biology | 261 |
35,869,003 | https://en.wikipedia.org/wiki/Human%20mouth | In human anatomy, the mouth is the first portion of the alimentary canal that receives food and produces saliva. The oral mucosa is the mucous membrane epithelium lining the inside of the mouth.
In addition to its primary role as the beginning of the digestive system, the mouth also plays a significant role in communication. While primary aspects of the voice are produced in the throat, the tongue, lips, and jaw are also needed to produce the range of sounds included in speech.
The mouth consists of two regions, the vestibule and the oral cavity proper. The mouth, normally moist, is lined with a mucous membrane, and contains the teeth. The lips mark the transition from mucous membrane to skin, which covers most of the body.
Structure
Oral cavity
The mouth consists of two regions: the vestibule and the oral cavity proper. The vestibule is the area between the teeth, lips and cheeks. The oral cavity is bounded at the sides and in front by the alveolar process (containing the teeth) and at the back by the isthmus of the fauces. Its roof is formed by the hard palate. The floor is formed by the mylohyoid muscles and is occupied mainly by the anterior two-thirds of the tongue. A mucous membrane – the oral mucosa – lines the sides and under surface of the tongue to the gums, and lines the inner aspect of the jaw (mandible). It receives secretions from the submandibular and sublingual salivary glands. The posterior border of the oral cavity (i.e., the junction between the oral cavity and the oropharynx) includes the junction of the hard palate and the soft palate superiorly, the circumvallate papillae of the tongue inferiorly, and the retromolar trigone.
Lips
The lips come together to close the opening of the mouth, forming a line between the upper and lower lip. In facial expression, this mouth line is iconically shaped like an up-open parabola in a smile, and like a down-open parabola in a frown. A down-turned mouth means a mouth line forming a down-turned parabola, and when permanent can be normal. Also, a down-turned mouth can be part of the presentation of Prader–Willi syndrome.
Nerve supply
The teeth and the periodontium (the tissues that support the teeth) are innervated by the maxillary and mandibular nerves – divisions of the trigeminal nerve. Maxillary (upper) teeth and their associated periodontal ligament are innervated by the superior alveolar nerves, branches of the maxillary division, termed the posterior superior alveolar nerve, anterior superior alveolar nerve, and the variably present middle superior alveolar nerve. These nerves form the superior dental plexus above the maxillary teeth. The mandibular (lower) teeth and their associated periodontal ligament are innervated by the inferior alveolar nerve, a branch of the mandibular division. This nerve runs inside the mandible, within the inferior alveolar canal below the mandibular teeth, giving off branches to all the lower teeth (inferior dental plexus). The oral mucosa of the gingiva (gums) on the facial (labial) aspect of the maxillary incisors, canines and premolar teeth is innervated by the superior labial branches of the infraorbital nerve. The posterior superior alveolar nerve supplies the gingiva on the facial aspect of the maxillary molar teeth. The gingiva on the palatal aspect of the maxillary teeth is innervated by the greater palatine nerve apart from in the incisor region, where it is the nasopalatine nerve (long sphenopalatine nerve). The gingiva of the lingual aspect of the mandibular teeth is innervated by the sublingual nerve, a branch of the lingual nerve. The gingiva on the facial aspect of the mandibular incisors and canines is innervated by the mental nerve, the continuation of the inferior alveolar nerve emerging from the mental foramen. The gingiva of the buccal (cheek) aspect of the mandibular molar teeth is innervated by the buccal nerve (long buccal nerve).
Development
The philtrum is the vertical depression formed between the philtral ridges between the upper lip and the nasal septum, formed where the nasomedial and maxillary processes meet during embryo development. When these processes fail to fuse fully, a cleft lip, cleft palate, or both can result.
The nasolabial folds are the deep creases of tissue that extend from the nose to the sides of the mouth. One of the first signs of age on the human face is the increase in prominence of the nasolabial folds.
Function
The mouth plays an important role in eating, drinking, and speaking. Mouth breathing refers to the act of breathing through the mouth (as a temporary backup system) if there is an obstruction to breathing through the nose, which is the designated breathing organ for the human body.
Infants are born with a sucking reflex, by which they instinctively know to suck for nourishment using their lips and jaw.
The mouth also helps in chewing and biting food.
For some disabled people, especially many disabled artists, who through illness, accident or congenital disability have lost dexterity, their mouths take the place of their hands, when typing, texting, writing, making drawings, paintings and other works of art by maneuvering brushes and other tools, in addition to the basic oral functions. Mouth painters hold the brush in their mouth or between their teeth and maneuver it with their tongue and cheek muscles, but mouth painting can be strenuous for neck and jaw muscles since the head has to perform the same back and forth movement as a hand does when painting.
A male mouth can hold, on average, , while a female mouth holds .
See also
Head and neck anatomy
Index of oral health and dental articles
List of basic dentistry topics
Mouth breathing
Further reading
References
External links
Facial features
Human anatomy
Human head and neck
Speech organs
Human mouth anatomy
digestive system | Human mouth | Biology | 1,303 |
13,412,752 | https://en.wikipedia.org/wiki/Rider-Ericsson%20Engine%20Company | The US Rider-Ericsson Engine Company was the successor of the DeLamater Iron Works and the Rider Engine Company, having bought from both companies their extensive plants and entire stocks of engines and patterns, covering all styles of Rider and Ericsson hot air pumping engines brought out by both of the old companies since 1844, excepting the original Ericsson engine, the patterns of which were burned in the DeLameter fire of 1888.
Engines
The company specialized in hot air pumping engines. A hot air engine is an external combustion engine. All hot air engines consist of a hot side and a cold side. Mechanical energy is derived from a hot air engine as air is repeatedly heated and cooled, expanding and contracting, and imparting pressure upon a reciprocating piston.
Early hot air engines
In his patent of 1759, Henry Wood was the first to document powering an engine by the changing volume of air as it changes temperature. George Cayley was the first to build a working model, in 1807. The Reverend Robert Stirling is generally credited with the "invention" of the hot air engine in 1816 for his development of a "regenerator", which conserves heat energy as the air moves between the hot and cold sides of the engine. Technically, not all hot air engines utilize regenerators, but the terms hot air engine and Stirling engine are sometimes used interchangeably.
Rider's engine
The Rider style engine is an "alpha" engine which uses two separate cylinders. As air in the hot side cylinder heats, it expands, driving the piston upward. The crankshaft now moves the cold side piston upward, drawing the hot air over to the cold side. The air cools, contracts, and pulls the hot side piston downward. The cold side piston then pushes the cool air over to the hot side, and the cycle repeats.
Ericsson's engine
The Ericsson style engine is a "beta" engine, which contains both the power piston and displacer within one cylinder. The cylinder has a hot end, within the firebox, and a cold end, surrounded by a water jacket. As the air is heated within the cylinder, the air expands, driving the piston upward. The displacer next moves downward, pushing the air from the hot side into the cool side of the cylinder. The air then contracts, pulling the piston downward. The displacer then moves the air from the cool side to the hot side, the cycle begins again.
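Both layouts are closed-cycle hot air engines, so their ideal behaviour can be approximated by the textbook Stirling cycle: isothermal expansion at the firebox temperature $T_h$ and isothermal compression at the water-jacket temperature $T_c$, joined by constant-volume displacement strokes. As a simplified illustration only (ideal gas and perfect regeneration, assumptions that real Rider and Ericsson machines only approximated), the net work per cycle and the limiting efficiency are

$W = nR\,(T_h - T_c)\,\ln\frac{V_{\max}}{V_{\min}}, \qquad \eta = 1 - \frac{T_c}{T_h}.$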
Stationary engines | Rider-Ericsson Engine Company | Technology | 504 |
54,000 | https://en.wikipedia.org/wiki/Biophysics | Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
History
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
Focus as a subfield
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry
Bioinformatics – sequence alignment, structural alignment, protein structure prediction
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems, such as decohered isomers yielding time-dependent base substitutions. These studies suggest applications in quantum computing.
Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
See also
Biophysical Society
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Biophysical chemistry
European Biophysical Societies' Association
Mathematical and theoretical biology
Medical biophysics
Membrane biophysics
Molecular biophysics
Neurophysics
Physiomics
Virophysics
Single-particle trajectory
References
Sources
External links
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Applied and interdisciplinary physics | Biophysics | Physics,Biology | 1,568 |
61,294,445 | https://en.wikipedia.org/wiki/Dimethylolpropionic%20acid | Dimethylolpropionic acid (DMPA) is a chemical compound that has the full IUPAC name of 2,2-bis(hydroxymethyl)propionic acid and is an organic compound with one carboxyl and two hydroxy groups. It has the CAS Registry Number of 4767-03-7.
Properties
DMPA is an odorless, free-flowing, white crystalline solid and is essentially non-toxic. DMPA has two different functional groups, hydroxyl and carboxylic acid, so the molecule can be used for a wide variety of syntheses. In addition to reacting with other chemicals, DMPA can also react with itself to produce esters via esterification, as one example.
Uses
One key use of DMPA is in the field of coatings and adhesives. It is used as a modifier in the production of anionic polyurethane dispersions. Solvent-soluble binders/resins for coatings can be converted into aqueous binders with the use of this material. In this case it is reacted with a suitable diisocyanate, such as isophorone diisocyanate or TMXDI, usually along with other polyols, to make a prepolymer.
There is also the possibility of using 2,2-bis(hydroxymethyl)propionic acid for the synthesis of dendrimeric molecules, also known as hyperbranched molecules. When each hydroxyl group is reacted with 2,2-bis(hydroxymethyl)propionic acid, the number of hydroxyl groups present in the molecule doubles. Repeating this reaction step produces one more shell each time, and thus the molecule grows. If at the end the hydroxyl groups are reacted with a bifunctional component, dendrimeric UV binders can be produced, for example. Dendrimeric molecules have low solution viscosities and improved properties.
It has a wide variety of other uses, including the production of hyperbranched polyesters, waterborne polyesters, water-based alkyd resins, and aqueous epoxy resins. It has even found use in polyethylene terephthalate fiber production. Another use is in the medical field for drug-release purposes. In the business world it has been cited as an outstanding growth opportunity.
See also
Prepolymer
Waterborne resins
References
External links
NIST
PubChem
Fabrichem website
Technical Data Sheet
Beta hydroxy acids
Coatings | Dimethylolpropionic acid | Chemistry | 524 |
595,183 | https://en.wikipedia.org/wiki/Calcium%20sulfate | Calcium sulfate (or calcium sulphate) is the inorganic compound with the formula CaSO4 and related hydrates. In the form of γ-anhydrite (the anhydrous form), it is used as a desiccant. One particular hydrate is better known as plaster of Paris, and another occurs naturally as the mineral gypsum. It has many uses in industry. All forms are white solids that are poorly soluble in water. Calcium sulfate causes permanent hardness in water.
Hydration states and crystallographic structures
The compound exists in three levels of hydration corresponding to different crystallographic structures and to minerals:
CaSO4 (anhydrite): anhydrous state. The structure is related to that of zirconium orthosilicate (zircon): Ca2+ is 8-coordinate, SO42− is tetrahedral, O is 3-coordinate.
CaSO4 · 2 H2O (gypsum and selenite (mineral)): dihydrate.
CaSO4 · ½ H2O (bassanite): hemihydrate, also known as plaster of Paris. Specific hemihydrates are sometimes distinguished: α-hemihydrate and β-hemihydrate.
Uses
The main use of calcium sulfate is to produce plaster of Paris and stucco. These applications exploit the fact that calcium sulfate which has been powdered and calcined forms a moldable paste upon hydration and hardens as crystalline calcium sulfate dihydrate. It is also convenient that calcium sulfate is poorly soluble in water and does not readily dissolve in contact with water after its solidification.
Hydration and dehydration reactions
With judicious heating, gypsum converts to the partially dehydrated mineral called bassanite or plaster of Paris. This material has the formula CaSO4·(nH2O), where 0.5 ≤ n ≤ 0.8. Temperatures between are required to drive off the water within its structure. The details of the temperature and time depend on ambient humidity. Temperatures as high as are used in industrial calcination, but at these temperatures γ-anhydrite begins to form. The heat energy delivered to the gypsum at this time (the heat of hydration) tends to go into driving off water (as water vapor) rather than increasing the temperature of the mineral, which rises slowly until the water is gone, then increases more rapidly. The equation for the partial dehydration is:
CaSO4 · 2 H2O → CaSO4 · ½ H2O + 1½ H2O↑
The endothermic property of this reaction is relevant to the performance of drywall, conferring fire resistance to residential and other structures. In a fire, the structure behind a sheet of drywall will remain relatively cool as water is lost from the gypsum, thus preventing (or substantially retarding) damage to the framing (through combustion of wood members or loss of strength of steel at high temperatures) and consequent structural collapse. But at higher temperatures, calcium sulfate will release oxygen and act as an oxidizing agent. This property is used in aluminothermy. In contrast to most minerals, which when rehydrated simply form liquid or semi-liquid pastes, or remain powdery, calcined gypsum has an unusual property: when mixed with water at normal (ambient) temperatures, it quickly reverts chemically to the preferred dihydrate form, while physically "setting" to form a rigid and relatively strong gypsum crystal lattice:
CaSO4 · ½ H2O + 1½ H2O → CaSO4 · 2 H2O
This reaction is exothermic and is responsible for the ease with which gypsum can be cast into various shapes including sheets (for drywall), sticks (for blackboard chalk), and molds (to immobilize broken bones, or for metal casting). Mixed with polymers, it has been used as a bone repair cement. Small amounts of calcined gypsum are added to earth to create strong structures directly from cast earth, an alternative to adobe (which loses its strength when wet). The conditions of dehydration can be changed to adjust the porosity of the hemihydrate, resulting in the so-called α- and β-hemihydrates (which are more or less chemically identical).
On heating to , the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05) is produced. γ-Anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C, the completely anhydrous form called β-anhydrite or "natural" anhydrite is formed. Natural anhydrite does not react with water, even over geological timescales, unless very finely ground.
The variable composition of the hemihydrate and γ-anhydrite, and their easy inter-conversion, is due to their nearly identical crystal structures containing "channels" that can accommodate variable amounts of water, or other small molecules such as methanol.
Food industry
The calcium sulfate hydrates are used as a coagulant in products such as tofu.
For the FDA, it is permitted in cheese and related cheese products; cereal flours; bakery products; frozen desserts; artificial sweeteners for jelly & preserves; condiment vegetables; and condiment tomatoes and some candies.
It is known in the E number series as E516, and the UN's FAO knows it as a firming agent, a flour treatment agent, a sequestrant, and a leavening agent.
Dentistry
Calcium sulfate has a long history of use in dentistry. It has been used in bone regeneration as a graft material and graft binder (or extender) and as a barrier in guided bone tissue regeneration. It is a biocompatible material and is completely resorbed following implantation. It does not evoke a significant host response and creates a calcium-rich milieu in the area of implantation.
Desiccant
When sold at the anhydrous state as a desiccant with a color-indicating agent under the name Drierite, it appears blue (anhydrous) or pink (hydrated) due to impregnation with cobalt(II) chloride, which functions as a moisture indicator.
Sulfuric acid production
Up to the 1970s, commercial quantities of sulfuric acid were produced from anhydrous calcium sulfate. Upon being mixed with shale or marl, and roasted at 1400°C, the sulfate liberates sulfur dioxide gas, a precursor to sulfuric acid. The reaction also produces calcium silicate, used in cement clinker production.
Some component reactions pertaining to calcium sulfate in this process:
CaSO4 + 2 C → CaS + 2 CO2
CaS + 3 CaSO4 → 4 CaO + 4 SO2
Production and occurrence
The main sources of calcium sulfate are naturally occurring gypsum and anhydrite, which occur at many locations worldwide as evaporites. These may be extracted by open-cast quarrying or by deep mining. World production of natural gypsum is around 127 million tonnes per annum.
In addition to natural sources, calcium sulfate is produced as a by-product in a number of processes:
In flue-gas desulfurization, exhaust gases from fossil-fuel power stations and other processes (e.g. cement manufacture) are scrubbed to reduce their sulfur dioxide content, by injecting finely ground limestone:
CaCO3 + SO2 + 2 H2O + ½ O2 → CaSO4 · 2 H2O + CO2
Related sulfur-trapping methods use lime, and some produce an impure calcium sulfite, which oxidizes on storage to calcium sulfate.
In the production of phosphoric acid from phosphate rock, calcium phosphate is treated with sulfuric acid and calcium sulfate precipitates. The product, called phosphogypsum, is often contaminated with impurities, making its use uneconomic.
In the production of hydrogen fluoride, calcium fluoride is treated with sulfuric acid, precipitating calcium sulfate.
In the refining of zinc, solutions of zinc sulfate are treated with hydrated lime to co-precipitate heavy metals such as barium.
Calcium sulfate can also be recovered and re-used from scrap drywall at construction sites.
These precipitation processes tend to concentrate radioactive elements in the calcium sulfate product. This issue is particularly acute with the phosphate by-product, since phosphate ores naturally contain uranium and its decay products such as radium-226, lead-210 and polonium-210. Extraction of uranium from phosphate ores can be economical on its own, depending on prices on the uranium market, or the separation of uranium can be mandated by environmental legislation, with its sale used to recover part of the cost of the process.
Calcium sulfate is also a common component of fouling deposits in industrial heat exchangers, because its solubility decreases with increasing temperature (see the specific section on the retrograde solubility).
Solubility
The solubility of calcium sulfate decreases as temperature increases. This behaviour ("retrograde solubility") is uncommon: dissolution of most of the salts is endothermic and their solubility increases with temperature. The retrograde solubility of calcium sulfate is also responsible for its precipitation in the hottest zone of heating systems and for its contribution to the formation of scale in boilers along with the precipitation of calcium carbonate whose solubility also decreases when CO2 degasses from hot water or can escape out of the system.
See also
Calcium sulfate (data page)
Alabaster
Anhydrite
Bathybius haeckelii
Chalk (calcium carbonate)
Gypsum
Gypsum plaster
Phosphogypsum
Selenite (mineral)
Flue-gas desulfurization
References
External links
International Chemical Safety Card 1215
NIOSH Pocket Guide to Chemical Hazards
Calcium compounds
Sulfates
Desiccants
Food additives
Pyrotechnic colorants
E-number additives | Calcium sulfate | Physics,Chemistry | 2,049 |
77,488,101 | https://en.wikipedia.org/wiki/Kakhovka%20Irrigation%20System | The Kakhovka Irrigation System (; ) is an irrigation system in southern Ukraine. With a total irrigation area of , it is the largest irrigation system in the entire country.
History
In 1951, construction began for the Kakhovka Hydroelectric Power Plant, which created the Kakhovka Reservoir and provided a water source for local irrigation. By 1967, construction for an irrigation system began, and different sections began operation throughout the 1970s.
Characteristics
The irrigation system begins at the Kakhovka Reservoir, from which water flows south before diverging into different areas. The entire system includes many interconnected canals, such as the Kakhovka Canal, and it provides water for crops across much of Kherson Oblast.
Because of the vast size of the irrigation system, there are 16 pumping stations throughout the canals. This includes a main pumping station, which is sized by .
References
Canals in Ukraine
Irrigation projects
Irrigation canals
Transport in Kherson Oblast
Buildings and structures in Kherson Oblast | Kakhovka Irrigation System | Engineering | 195 |
62,240,093 | https://en.wikipedia.org/wiki/%28Pentamethylcyclopentadienyl%29titanium%20trichloride | (Pentamethylcyclopentadienyl)titanium trichloride is an organotitanium compound with the formula Cp*TiCl3 (Cp* = C5(CH3)5). It is an orange solid. The compound adopts a piano stool geometry. An early synthesis involve the combination of lithium pentamethylcyclopentadienide and titanium tetrachloride.
The compound is an intermediate in the synthesis of decamethyltitanocene dichloride. In the presence of organoaluminium compounds and other additives, it catalyzes the polymerization of alkenes.
See also
(Cyclopentadienyl)titanium trichloride
References
Chloro complexes
Titanium compounds
Half sandwich compounds | (Pentamethylcyclopentadienyl)titanium trichloride | Chemistry | 159 |
18,297,485 | https://en.wikipedia.org/wiki/Uniform%20consensus | In computer science, Uniform consensus is a distributed computing problem that is a similar to the consensus problem with one more condition which is no two processes (whether faulty or not) decide differently.
More specifically one should consider this problem:
Each process has an input and should decide on an output (one-shot problem)
Uniform Agreement: every two decisions are the same
Validity: every decision is an input of one of the processes
Termination: eventually all correct processes decide
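In the standard synchronous crash-stop model with at most f failures, uniform consensus is solvable by the classic flooding approach: run f + 1 rounds, so that at least one round is crash-free, then decide on a deterministic function of all values seen. The Python simulation below is a toy sketch of that idea (the process identifiers, crash schedule, and min-based decision rule are illustrative choices, not part of the problem statement):

import random

def flooding_uniform_consensus(inputs, crash_round, f, seed=0):
    """Toy synchronous simulation of flooding consensus with crash-stop faults.

    inputs:      dict pid -> proposed value
    crash_round: dict pid -> round in which that process crashes
    f:           maximum number of crashes; the protocol runs f + 1 rounds
    """
    random.seed(seed)
    known = {pid: {v} for pid, v in inputs.items()}   # values each process has seen
    alive = set(inputs)
    for rnd in range(1, f + 2):                       # rounds 1 .. f + 1
        crashing = {p for p in alive if crash_round.get(p) == rnd}
        messages = []
        for p in alive:
            receivers = alive - {p}
            if p in crashing:                         # a crash mid-broadcast may
                k = random.randint(0, len(receivers))  # reach only some receivers
                receivers = set(random.sample(sorted(receivers), k))
            messages.append((set(known[p]), receivers))
        alive -= crashing
        for values, receivers in messages:
            for q in receivers & alive:
                known[q] |= values
    # After f + 1 rounds there was a crash-free round, so all survivors share
    # the same set of values; deciding min(known) gives uniform agreement,
    # and validity holds because only proposed values ever circulate.
    return {p: min(known[p]) for p in alive}

print(flooding_uniform_consensus({1: 5, 2: 3, 3: 7}, crash_round={2: 1}, f=1))
# All surviving processes decide the same value (3 or 5, depending on
# whether process 2 managed to deliver its value before crashing).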
References
Distributed computing problems | Uniform consensus | Mathematics,Technology | 96 |
969,684 | https://en.wikipedia.org/wiki/Herd | A herd is a social group of certain animals of the same species, either wild or domestic. The form of collective animal behavior associated with this is called herding. These animals are known as gregarious animals.
The term herd is generally applied to mammals, and most particularly to the grazing ungulates that classically display this behaviour. Different terms are used for similar groupings in other species; in the case of birds, for example, the word is flocking, but flock may also be used for mammals, particularly sheep or goats. Large groups of carnivores are usually called packs, and in nature a herd is classically subject to predation from pack hunters.
Special collective nouns may be used for particular taxa (for example a flock of geese, if not in flight, is sometimes called a gaggle) but for theoretical discussions of behavioural ecology, the generic term herd can be used for all such kinds of assemblage.
The word herd, as a noun, can also refer to one who controls, possesses and has care for such groups of animals when they are domesticated. Examples of herds in this sense include shepherds (who tend to sheep), goatherds (who tend to goats), and cowherds (who tend to cattle).
The structure and size of herds
When an association of animals (or, by extension, people) is described as a herd, the implication is that the group tends to act together (for example, all moving in the same direction at a given time), but that this does not occur as a result of planning or coordination. Rather, each individual is choosing behaviour in correspondence with most other members, possibly through imitation or possibly because all are responding to the same external circumstances. A herd can be contrasted with a coordinated group where individuals have distinct roles. Many human groupings, such as army detachments or sports teams, show such coordination and differentiation of roles, but so do some animal groupings such as those of eusocial insects, which are coordinated through pheromones and other forms of animal communication.
A herd is, by definition, relatively unstructured. However, there may be two or a few animals which tend to be imitated by the bulk of the herd more than others. An animal in this role is called a "control animal", since its behaviour will predict that of the herd as a whole. It cannot be assumed, however, that the control animal is deliberately taking a leadership role; control animals are not necessarily socially dominant in conflict situations, though they often are. Group size is an important characteristic of the social environment of gregarious species.
Costs and benefits of animals in groups
The reason why animals form herds can not always be stated easily, since the underlying mechanisms are diverse and complex. Understanding the social behaviour of animals and the formation of groups has been a fundamental goal in the field of sociobiology and behavioural ecology. Theoretical framework is focused on the costs and benefits associated with living in groups in terms of the fitness of each individual compared to living solitarily. Living in groups evolved independently multiple times in various taxa and can only occur if its benefits outweigh the costs within an evolutionary timescale. Thus, animals form groups whenever this increases their fitness compared to living in solitary.
The following includes an outline about some of the major effects determining the trade-offs for living in groups.
Dilution effect
Perhaps the most studied effect of herds is the so-called dilution effect. The key argument is that the risk of being preyed upon for any particular individual is smaller within a larger group, strictly because a predator has to decide which individual to attack. Although the dilution effect is influenced by so-called selfish herding, it is primarily a direct effect of group size instead of the position within a herd. Greater group sizes result in higher visibility and detection rates for predators, but this relation is not directly proportional and saturates at some point, while the risk of being attacked for any given individual is inversely proportional to group size. Thus, the net effect for an individual in a group concerning its predation risk is beneficial.
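This trade-off can be made quantitative with a toy model (the saturating form of the detection function below is an illustrative assumption, not an empirical law). If a predator detects a group of size $N$ with probability $D(N)$ and then attacks a single member chosen at random, the per-capita risk is

$r(N) = \frac{D(N)}{N}; \qquad \text{taking, for example, } D(N) = \frac{N}{N + k} \text{ with } k > 0 \text{ gives } r(N) = \frac{1}{N + k},$

which decreases strictly in $N$: the saturating detection probability cannot keep pace with the $1/N$ dilution.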
Whenever groups, such as shoals of fish, synchronize their movements, it becomes harder for predators to focus on particular individuals. However, animals that are weak and slower or on the periphery are preferred by predators, so that certain positions within the group are better than others (see selfish herd theory). For fit animals, being in a group with such vulnerable individuals may thus decrease the chance of being preyed upon even further.
Collective vigilance
The effect of collective vigilance in social groups has been widely studied within the framework of optimal foraging theory and animal decision making. While animals under the risk of predation are feeding or resting, they have to stay vigilant and watch for predators. It could be shown in many studies (especially for birds) that with increase in group size individual animals are less attentive, while the overall vigilance suffers little (many eyes effect). This means food intake and other activities related to fitness are optimized in terms of time allocation when animals stay in groups.
However, some details of this concept remain unclear. Being the first to detect a predator and react accordingly can be advantageous, implying individuals may not be able to rely fully on the group. Moreover, competition for food can lead to the misuse of warning calls, as has been observed in great tits: if food is scarce or monopolized by dominant birds, other birds (mainly subordinates) use antipredatory warning calls to induce an interruption of feeding and gain access to resources.
Another study concerning a flock of geese suggested that the benefits of lower vigilance concerned only those in central positions, due to the fact that the possibly more vulnerable individuals in the flock's periphery have a greater need to stay attentive. This implies that the decrease in overall vigilance arises simply because the geese on the edge of the flock comprise a smaller group when groups get large.
A special case of collective vigilance in groups is that of sentinels. Individuals take turn in keeping guard, while all others participate in other activities. Thus, the strength of social bonds and trust within these groups have to be much higher than in the former cases.
Foraging
Hunting together enables group-living predators, such as wolves and wild dogs, to catch large prey, which they are unable to achieve when hunting alone. Working together significantly improves foraging efficiency, meaning the net energy gain of each individual is increased when animals are feeding collectively. As an example, a group of Spinner dolphins is able to corral fish into a smaller volume, which makes catching them easier, as there is less opportunity for the fish to escape. Furthermore, large groups are able to monopolize resources and defend them against solitary animals or smaller groups of the same or different species. It has been shown that larger groups of lions tend to be more successful in protecting prey from hyenas than smaller ones. Being able to communicate the location and type of food to other group members may increase the chance for each individual to find profitable food sources, a mechanism which is known to be used by both bees (via a Waggle dance) and several species of birds (using specific vocalisations to indicate food).
In terms of Optimal foraging theory, animals always try to maximize their net energy gain when feeding, because this is positively correlated to their fitness. If their energy requirement is fixed and additional energy is not increasing fitness, they will use as little time for foraging as possible (time minimizers). If on the other hand time allocated to foraging is fixed, an animal's gain in fitness is related to the quantity and quality of resources it feeds on (Energy maximizers).
Since foraging may be energetically costly (searching, hunting, handling, etc.) and may induce risk of predation, animals in groups may have an advantage, since their combined effort in locating and handling food will reduce time needed to forage sufficiently. Thus, animals in groups may have shorter searching and handling times as well as an increased chance of finding (or monopolizing) highly profitable food, which makes foraging in groups beneficial for time minimizers and energy maximizers alike.
The obvious disadvantage of foraging in groups is (scramble or direct) competition with other group members. In general, it is clear that the amount of resources available for each individual decreases with group size. If the resource availability is critical, competition within the group may get so intense, that animals no longer experience benefits from living in groups. However, only the relative importance of within- and between-group competition determines the optimal group size and ultimately the decision of each individual whether or not to stay in the group.
Diseases and parasites
Since animals in groups stay near each other and interact frequently, infectious diseases and parasites spread much easier between them compared to solitary animals. Studies have shown a positive correlation between herd size and intensity of infections, but the extent to which this sometimes drastic reduction in fitness governs group size and structure is still unclear. However, some animals have found countermeasures such as propolis in beehives or grooming in social animals.
Energetic advantages
Staying together in groups often brings energetic advantages. Birds flying together in a flock use aerodynamic effects to reduce energetic costs, e.g. by positioning themselves in a V-shaped formation. A similar effect can be observed when fish swim together in fixed formations.
Another benefit of group living occurs when climate is harsh and cold: By staying close together animals experience better thermoregulation, because their overall surface to volume ratio is reduced. Consequently, maintaining adequate body temperatures becomes less energetically costly.
Antipredatory behaviour
The collective force of a group mobbing predators can reduce the risk of predation significantly. Flocks of ravens are able to actively defend themselves against eagles, and baboons collectively mob lions, which is impossible for individuals alone. This behaviour may be based on reciprocal altruism, meaning animals are more likely to help each other if their conspecifics did so earlier.
Mating
Animals living in groups are more likely to find mates than those living in solitary and are also able to compare potential partners in order to optimize genetic quality for their offspring.
Domestic herds
Domestic animal herds are assembled by humans for practicality in raising them and controlling them. Their behaviour may be quite different from that of wild herds of the same or related species, since both their composition (in terms of the distribution of age and sex within the herd) and their history (in terms of when and how the individuals joined the herd) are likely to be very different.
Human parallels
The term herd is also applied metaphorically to human beings in social psychology, with the concept of herd behaviour. However both the term and concepts that underlie its use are controversial.
The term has acquired a semi-technical usage in behavioral finance to describe the largest group of market investors or market speculators who tend to "move with the market", or "follow the general market trend". This is at least a plausible example of genuine herding, though according to some researchers it results from rational decisions through processes such as information cascade and rational expectations. Other researchers, however, ascribe it to non-rational process such as mimicry, fear and greed contagion. "Contrarians" or contrarian investors are those who deliberately choose to invest or speculate counter to the "herd".
See also
Literature
Krause, J., & Ruxton, G. D. (2002). Living in groups. Oxford: Oxford University Press.
References
Ethology
Group processes
Herding | Herd | Biology | 2,364 |
16,781,589 | https://en.wikipedia.org/wiki/HD%2072659%20b | HD 72659 b is a superjovian exoplanet massing at least 3.3 MJ orbiting at 4.77 AU from the star, taking 3630 days to complete one orbit. The orbital distance range from 3.49 AU to 6.05 AU with orbital eccentricity of 0.269. In 2022, the inclination and true mass of HD 72659 b were measured via astrometry.
See also
HD 73256
References
External links
Exoplanets discovered in 2002
Giant planets
Hydra (constellation)
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 72659 b | Astronomy | 126 |
30,762,173 | https://en.wikipedia.org/wiki/Brass%20mill | A brass mill is a mill which processes brass. Brass mills are common in England; many date from long before the Industrial Revolution.
Examples of brass mills include
Brassmill (Ross on Wye)
Saltford Brass Mill
See also
Calamine brass
Latten
William Champion
Further reading
Metallurgical facilities | Brass mill | Chemistry,Materials_science | 61 |
14,427,846 | https://en.wikipedia.org/wiki/Hypocretin%20%28orexin%29%20receptor%202 | Orexin receptor type 2 (Ox2R or OX2), also known as hypocretin receptor type 2 (HcrtR2), is a protein that in humans is encoded by the HCRTR2 gene. It should not be confused for the protein CD200R1 which shares the alias OX2R but is a distinct, unrelated gene located on the human chromosome 3.
Structure
The structure of the receptor has been solved to 2.5 Å resolution as a fusion protein bound to suvorexant using lipid-mediated crystallization.
Function
OX2 is a G-protein coupled receptor expressed exclusively in the brain. It has 64% identity with OX1. OX2 binds both orexin A and orexin B neuropeptides. OX2 is involved in the central feedback mechanism that regulates feeding behaviour. Mice with enhanced OX2 signaling are resistant to high-fat diet-induced obesity.
This receptor is activated by hypocretin, a wake-promoting hypothalamic neuropeptide that acts as a critical regulator of sleep in animals such as zebrafish and mammals. The gene carries mutations in Astyanax mexicanus that reduce the sleep needs of the cavefish.
Ligands
Agonists
Danavorexton (TAK-925) – selective OX2 receptor agonist
Firazorexton – selective OX2 receptor agonist
Orexins – dual OX1 and OX2 receptor agonists
Orexin-A – approximately equipotent at the OX1 and OX2 receptors
Orexin-B – approximately 5- to 10-fold selectivity for the OX2 receptor over the OX1 receptor
Oveporexton
SB-668875 – selective OX2 receptor agonist
Suntinorexton – selective OX2 receptor agonist
TAK-861 – selective OX2 receptor agonist
Antagonists
Almorexant - Dual OX1 and OX2 antagonist
Daridorexant (nemorexant) - Dual OX1 and OX2 antagonist
EMPA - Selective OX2 antagonist
Filorexant - Dual OX1 and OX2 antagonist
JNJ-10397049 (600x selective for OX2 over OX1)
Lemborexant - Dual OX1 and OX2 antagonist
MK-1064 - Selective OX2 antagonist
MK-8133 - Selective OX2 antagonist
SB-649,868 - Dual OX1 and OX2 antagonist
Seltorexant - Selective OX2 antagonist
Suvorexant - Dual OX1 and OX2 antagonist
TCS-OX2-29 - Selective OX2 antagonist
(3,4-dimethoxyphenoxy)alkylamino acetamides
Compound 1m - Selective OX2 antagonist
See also
Orexin receptor
References
Further reading
G protein-coupled receptors | Hypocretin (orexin) receptor 2 | Chemistry | 572 |
34,815,353 | https://en.wikipedia.org/wiki/Raymond%20Horton-Smith%20Prize | The Raymond Horton-Smith Prize is a prize awarded by the School of Clinical Medicine, University of Cambridge, for the best thesis presented for the MD degree during the academical year. Known as the prize for the best MD of the year, it is meant to be awarded annually, but from time to time it has gone unawarded for several years.
The prize has often been considered highly prestigious, since it encourages the Doctor of Medicine (MD) graduates of the world-renowned university to compete to write the best thesis.
Founder
Richard Horton Horton-Smith, MA, KC (4 December 1831 – 2 November 1919) was a barrister and a Masonic Lodge officer. Before becoming a student and later a Fellow at St John's College, Cambridge, he also attended University College School and University College in London. He studied classics and law, becoming a Classical Lecturer at King's College London. At Lincoln's Inn, London, he was called to the Bar in 1859, becoming Queen's Counsel (QC) in 1877, Bencher in 1881, Trustee in 1884, Governor of Tancred's Charities in 1889, and Treasurer in 1903. He was a member and officer of many Masonic Lodges (the Scientific Lodge, Cambridge, being his first, in 1856), becoming Life Governor of the Royal Masonic Benevolent Institution and founding a lodge of his own in 1893 (the Chancery Bar Lodge). He attained his highest rank, Past Grand Registrar of England, in 1898.
He was the author of many books and articles (with John Peter De Gex he wrote the book Arrangements between Debtors and Creditors under the Bankruptcy Act, 1861), and was also Honorary Counsel to the Royal Philharmonic Society, a Director of the Royal Academy of Music, and vice-president of the Bar Musical Society.
He had three sons and two daughters. His third son, Raymond John Horton-Smith (16 March 1873 – 8 October 1899), who studied medicine at several universities including St John's College, Cambridge, gaining his MB BCh, MA, MRCS and LRCP and achieving brilliant results (Wainwright Prizeman at the University of London), died of tuberculosis at Davos, Switzerland, aged 26. Some months later, in 1900, Richard Horton-Smith founded the Raymond Horton-Smith Prize in his honour, communicating to the Council of the Senate of the University of Cambridge his offer of a fund of 500 pounds for the proposed prize, which was approved on 16 March 1900. Money for the Raymond Horton-Smith Fund was later also given by his son Sir Percival Horton-Smith Hartley and by his granddaughter Mrs. A. G. Wornum.
Eligibility and criteria
Candidates for the degree of Doctor of Medicine (MD) present their MD thesis or a dissertation for the academical year at the University of Cambridge. A committee judges the best thesis or dissertation among the candidates, consulting an independent referee if necessary; the referee may be paid a fee approved by the Cambridge University Council.
Award value
The value of the prize is the net annual income of the Raymond Horton-Smith Fund, after deducting any referee's fee and the price of a book chosen in consultation with the prize-winner, approved by the Vice-Chancellor, and stamped with the arms of the university and with the Horton-Smith armorial bearings.
List of recipients
The main source is a column in the British Medical Journal titled "Universities and Colleges". For concision, the references include only the PMC ID of the pages on which the column appears. Owing to the limited information available online, the list is incomplete.
References
Awards and prizes of the University of Cambridge
Awards established in 1901
Medicine awards | Raymond Horton-Smith Prize | Technology | 743 |
1,396,257 | https://en.wikipedia.org/wiki/Proteorhodopsin | Proteorhodopsin (also known as pRhodopsin) is a family of transmembrane proteins that use retinal as a chromophore for light-mediated functionality, in this case a proton pump. pRhodopsin is found in marine planktonic bacteria, archaea and eukaryotes (protists), but was first discovered in bacteria.
Its name is derived from proteobacteria (now called Pseudomonadota), which were named after the Ancient Greek sea god Proteus, an early "Old Man of the Sea" mentioned by Homer; rhodon, for "rose", due to its pinkish color; and opsis, for "sight". Some members of the family, homologous rhodopsin-like pigments such as bacteriorhodopsin (of which there are more than 800 types), have sensory functions like the opsins that are integral to visual phototransduction. Many of these sensory functions are unknown – for example, the function of neuropsin in the human retina. Members are known to have different absorption spectra, including green and blue visible light.
History
Proteorhodopsin (PR or pRhodopsin) was first discovered in 2000 within a bacterial artificial chromosome from previously uncultivated marine Gammaproteobacteria, still only referred to by their ribotype metagenomic data, SAR86. More species of Gammaproteobacteria, both Gram-positive and Gram-negative, were found to express the protein.
Distribution
Samples of proteorhodopsin-expressing bacteria have been obtained from the eastern Pacific Ocean, the central North Pacific Ocean, and the Southern Ocean off Antarctica. Subsequently, genes of proteorhodopsin variants have been identified in samples from the Mediterranean and Red Seas, the Sargasso Sea, the Sea of Japan, and the North Sea.
Proteorhodopsin variants are not spread randomly but disperse along depth gradients based on the maximal absorption-tuning of the particular holoprotein sequence; this is mainly due to the electromagnetic absorption of water, which creates wavelength gradients with depth. Oxyrrhis marina is a dinoflagellate protist with a green-absorbing proteorhodopsin (a result of the L109 group) that exists mostly in shallow tide pools and shores, where green light is still available. Karlodinium micrum, another dinoflagellate, expresses a blue-tuned proteorhodopsin (E109), which may be related to its deep-water vertical migrations. O. marina was originally believed to be a heterotroph; however, its proteorhodopsin may well play a functionally significant role, as it was the most abundantly expressed nuclear gene and, furthermore, is dispersed unevenly in the organism, suggesting some organelle membrane function. Previously, the only known eukaryotic solar-energy-transducing proteins were Photosystem I and Photosystem II. It has been hypothesized that lateral gene transfer is the method by which proteorhodopsin has made its way into numerous phyla. Bacteria, archaea and eukarya all colonize the photic zone, where light is available; proteorhodopsin has been able to disseminate through this zone, but not to other portions of the water column.
Taxonomy
Proteorhodopsin belongs to a family of similar retinylidene proteins, and is most similar to its archaeal homologues halorhodopsin and bacteriorhodopsin. Sensory rhodopsin was discovered by Franz Christian Boll in 1876. Bacteriorhodopsin was discovered in 1971, named in 1973, and is currently known to exist only in archaea, not bacteria. Halorhodopsin was first discovered and named in 1977. Bacteriorhodopsin and halorhodopsin both occur only in archaea, whereas proteorhodopsin spans bacteria, archaea, and eukaryotes. Like them, proteorhodopsin has seven transmembrane α-helices, with retinal covalently linked via a Schiff base to a lysine residue in the seventh helix (helix G). Bacteriorhodopsin, like proteorhodopsin, is a light-driven proton pump. Sensory rhodopsin is a G protein-coupled receptor involved in sight.
Active site
In comparison with its better-known archaeal homolog bacteriorhodopsin, most of the active-site residues of known importance to the bacteriorhodopsin mechanism are conserved in proteorhodopsin. Sequence similarity is not strongly conserved, however, with either halorhodopsin or bacteriorhodopsin. Homologues of the active-site residues Arg82, Asp85 (the primary proton acceptor), Asp212 and Lys216 (the retinal Schiff base binding site) in bacteriorhodopsin are conserved as Arg94, Asp97, Asp227 and Lys231 in proteorhodopsin. However, in proteorhodopsin there are no carboxylic acid residues directly homologous to Glu194 or Glu204 of bacteriorhodopsin (or Glu108 and Glu204, depending on the bacteriorhodopsin variant), which are thought to be involved in the proton-release pathway at the extracellular surface. Asp97 and Arg94 may nonetheless replace this functionality without the close residue proximity found in bacteriorhodopsin. Researchers in the department of chemistry at Syracuse University showed decisively that Asp97 cannot be the proton-release group, as release occurred under conditions in which the aspartic acid group remained protonated.
Ligand
The rhodopsin holoprotein family shares the ligand retinal, one of the many forms of vitamin A. Retinal is a conjugated polyunsaturated chromophore (a polyene), obtained from the diet or via the carotene pathway (β-carotene 15,15'-monooxygenase).
Function
Proteorhodopsin functions throughout the Earth's oceans as a light-driven H+ pump, by a mechanism similar to that of bacteriorhodopsin. As in bacteriorhodopsin, the retinal chromophore of proteorhodopsin is covalently bound to the apoprotein via a protonated Schiff base at Lys231. The configuration of the retinal chromophore in unphotolyzed proteorhodopsin is predominantly all-trans, and it isomerizes to 13-cis upon illumination. Several models of the complete proteorhodopsin photocycle have been proposed, based on FTIR and UV–visible spectroscopy; they resemble established photocycle models for bacteriorhodopsin. Complete proteorhodopsin-based photosystems have been discovered and expressed in E. coli, giving the cells an additional light-mediated energy gradient for ATP generation without any external need for retinal or its precursors; along with the PR gene, five other genes code for the photopigment biosynthetic pathway.
Genetic engineering
If the gene for proteorhodopsin is inserted into E. coli and retinal is given to these modified bacteria, they will incorporate the pigment into their cell membrane and will pump H+ in the presence of light. Clearly transformed colonies appear deep purple, owing to the pigment's light absorption. Proton gradients can be used to power other membrane protein structures or to acidify a vesicle-type organelle. It was further demonstrated that the proton gradient generated by proteorhodopsin could be used to generate ATP.
See also
Microbial rhodopsin
Bacteriorhodopsin
Opsin
Archaerhodopsin
Gallery
References
Bacterial proteins
Integral membrane proteins
Photosynthesis | Proteorhodopsin | Chemistry,Biology | 1,721 |
37,620,426 | https://en.wikipedia.org/wiki/Index%20of%20women%20scientists%20articles |
A
A. Catrina Bryce
A. Elizabeth Adams
Abby Howe Turner
Abella
Ada Lovelace
Ada Yonath
Adele Goldberg (computer scientist)
Adrienne Mayor
Aglaonike
Agnes Arber
Agnes Fay Morgan Research Award
Agnes Mary Clerke
Agnes Pockels
Agnes Sime Baxter
Agnodice
Aisling Judge
Alejandra Bravo
Alenush Terian
Alessandra Giliani
Alexia Massalin
Alice Ball
Alice Cunningham Fletcher
Alice Eastwood
Alice L. Kibbe
Alice Leigh-Smith
Alice Middleton Boring
Alice Miller (psychologist)
Alice Pegler
Alice Stewart
Alice Y. Ting
Alicia Boole Stott
Allene Jeanes
Allison Randal
Almira Hart Lincoln Phelps
Amalie Dietrich
Amanda Chessell
Ana Aslan
Anat Cohen-Dayag
Andrea Bertozzi
Andrea Brand
Annette Dolphin
Angela Clayton
Angela Merkel
Angela Orebaugh
Angioletta Coradini
Anita Borg
Anita Goel
Anita Harding
Anita K. Jones
Anita Roberts
Anja Cetti Andersen
Ann Bishop (biologist)
Ann Haven Morgan
Ann Kiessling
Ann Nelson
Anna Atkins
Anna Botsford Comstock
Anna J. Harrison
Anna Karlin
Anna Mani
Anna Maria Hussey
Anna Morandi Manzolini
Anna Nagurney
Anna Stecksén
Anna Sundström
Anna Winlock
Lady Anne Brewis
Anne Condon
Anne Elizabeth Ball
Anne H. Ehrlich
Anne McLaren
Anne Rudloe - PhD Marine Biology
Anne Simon
Anne Stine Ingstad
Anne Thynne
Anne Warner (scientist)
Annette Salmeen
Annie Antón
Annie Curtis
Annie Dale Biddle Andrews
Annie Easley
Annie Francé-Harrar
Annie Jump Cannon
Annie Lorrain Smith
Annie Meinertzhagen
Annie Scott Dill Maunder
Anousheh Ansari
Antje Boetius
Antonia Maury
Arete of Cyrene
Ariel Hollinshead
Arfa Karim
Artemisia II of Caria
Ashawna Hailey
Asima Chatterjee
Association for Women Geoscientists
Association for Women in Mathematics
Astrid Cleve
Audrey Stuckes
Audrey Tang
Ayanna Howard
B
Barbara A. Schaal
Barbara J. Meyer
Barbara Liskov
Barbara McClintock
Barbara Simons
Beatrice Helen Worsley
Beatrice Mabel Cave-Browne-Cave
Beatrice Mintz
Beatrice Tinsley
Beatrix Potter
Bernadine Healy
Berta Lutz
Bertha Swirles
Beryl May Dent
Beth Levine (physician)
Beth Shapiro
Beth Willman
Betsy Ancker-Johnson
Betty Holberton
Beyond Bias and Barriers
Bibha Chowdhuri
Birutė Galdikas
Brigitte Askonas
Bruria Kaufman
C
Caitlín R. Kiernan
Calrice di Durisio
Camilla Wedgwood
Cara Santa Maria
Carla J. Shatz
Carol A. Barnes
Carol Karp
Carol W. Greider
Carole Goble
Carole Jordan
Carole Meredith
Caroline Herschel
Carolyn Cohen
Carolyn Lawrence-Dill
Carolyn Porco
Carolyn R. Bertozzi
Carolyn S. Gordon
Carolyn Talcott
Carolyne M. Van Vliet
Carrie Derick
Caryn Navy
Catharine Parr Traill
Catherine Bréchignac
Catherine Coleman
Catherine G. Wolf
Catherine Hickson
Cecilia Krieger
Cecilia Payne-Gaposchkin
Cecilia R. Aragon
Celia Grillo Borromeo
Charlotte Auerbach
Charlotte Barnum
Charlotte Froese Fischer
Charlotte Moore Sitterly
Charlotte Scott
Chen Hang
Chien-Shiung Wu
Chrisanthi Avgerou
Christiane Desroches Noblecourt
Christiane Nüsslein-Volhard
Christina Miller
Christina Roccati
Christine Buisman
Christine Hamill
Christine Marie Berkhout
Claire F. Gmachl
Claire Fagin
Claire M. Fraser
Claire Voisin
Clara H. Hasse
Clara Immerwahr
Clara Southmayd Ludlow
Claribel Kendall
Claudia Alexander
Cleopatra the Alchemist
Clémence Royer
Colette Rolland
Constance Calenda
Corinna E. Lathan
Cornelia Clapp
Cynthia Bathurst
Cynthia Breazeal
Cynthia Dwork
Cynthia E. Rosenzweig
Cynthia Kenyon
Cécile DeWitt-Morette
D
Dana Angluin
Dana Randall
Dana Ron
Dana Ulery
Danese Cooper
Daniela Kühn
Daniela L. Rus
Danielle Bunten Berry
Daphne Jackson
Daphne Koller
Daphne Osborne
Darshan Ranganathan
Dawn Prince-Hughes
Deborah Charlesworth
Deborah Estrin
Dian Fossey
Diana Baumrind
Diane Greene
Diane Griffin
Diane Pozefsky
Diane Souvaine
Dina Lévi-Strauss
Dominiqua M. Griffin
Donna Auguste
Donna Baird
Doris Mable Cochran
Dorit Aharonov
Dorotea Bucca
Dorothea Bennett
Dorothea Erxleben
Dorothea Jameson
Dorothea Klumpke
Dorothy E. Denning
Dorothy Garrod
Dorothy Hansine Andersen
Dorothy Hill
Dorothy Hodgkin
Dorothy Lewis Bernstein
Dorothy M. Needham
Dorothy Maud Wrinch
E
Edith Bülbring
Edith Marion Patch
Edith Pretty
Edna Grossman
Elaine Fuchs
Elaine Weyuker
Elda Emma Anderson
Eleanor Anne Ormerod
Eleanor Glanville
Eleanor Maguire
Eli Fischer-Jørgensen
Elisabeth Altmann-Gottheiner
Elisabeth Hevelius
Élisabeth Lutz
Elizabeth Ann Nalley
Elizabeth Blackburn
Elizabeth Blackwell
Elizabeth Brown (astronomer)
Elizabeth Cabot Agassiz
Elizabeth Carne
Elizabeth Coleman White
Elizabeth Fulhame
Elizabeth Gertrude Britton
Elizabeth J. Feinler
Elizabeth Lee Hazen
Elizabeth Loftus
Elizabeth Nabel
Elizabeth Rather
Elizaveta Karamihailova
Elizaveta Litvinova
Ellen Gleditsch
Ellen Hayes
Ellen Spertus
Ellen Swallow Richards
Ellen Vitetta
Ellinor Catherine Cunningham van Someren
Elsa Beata Bunge
Elsie M. Burrows
Elsie MacGill
Elsie Maud Wakefield
Elsie Widdowson
Emer Jones
Émilie du Châtelet
Emilie Snethlage
Emma P. Carr
Emmy Noether
Enid Mumford
Erika Pannwitz
Esther Lederberg
Esther M. Conwell
Esther Orozco
Ethel Browne Harvey
Ethel Sargant
Ethel Shakespear
Etheldred Benett
Etta Zuber Falconer
Eugenia Del Pino
Eugenie Clark
Eva Bayer-Fluckiger
Eva Ekeblad
Eva Nogales
Evelyn Berezin
Evelyn Boyd Granville
Evelyn Fox Keller
Evelyn Hu
Éva Tardos
Evi Nemeth
F
F. Gwendolen Rees
FASEB Excellence in Science Award
Fan Chung
Faustina Pignatelli
Flora Wambaugh Patterson
Florence Bascom
Florence Nightingale
Florence R. Sabin
Florence Wells Slater
Florence Wambugu
Florentina Mosora
Floy Agnes Lee
Fotini Markopoulou-Kalamara
Frances A. Rosamond
Frances Ashcroft
Frances Cave-Browne-Cave
Frances E. Allen
Frances Hardcastle
Frances Hugle
Frances Kirwan
Frances Meehan Latterell
Frances Theodora Parsons
Frances Yao
Francine Berman
Françoise Barré-Sinoussi
Frederica Darema
G
Gabriele Rabel
Gail Williams
Garvan–Olin Medal
Geertruida de Haas-Lorentz
George and Elizabeth Peckham
Gerta Keller
Gertrud Theiler
Gertrude B. Elion
Gertrude Bell
Gertrude Blanch
Gertrude Caton–Thompson
Gertrude Mary Cox
Gertrude Neumark
Gertrude Scharff Goldhaber
Gertrude Simmons Burlingham
Gerty Cori
Gillian Bates
Gina G. Turrigiano
Gisela Richter
Gisèle Lamoureux
Gitte Moos Knudsen
Giuseppa Barbapiccola
Gladys Amelia Anslow
Gladys Kalema-Zikusoka
Glenda Schroeder
Grace Chisholm Young
Grace Evelyn Pickford
Grace Frankland
Grace Hopper
Greta Stevenson
Grete Hermann
H
Halszka Osmólska
Hannah Monyer
Harriet Boyd-Hawes
Harriet Brooks
Harriet Mann Miller
Harriet Margaret Louisa Bolus
Hava Siegelmann
Hazel Alden Reason
Hazel Bishop
Heather Couper
Heather Reid
Hedwig Kohn
Hedy Lamarr
Heidi Jo Newberg
Helen Blair Bartlett
Helen Dean King
Helen Flanders Dunbar
Helen G. Grundman
Helen Gwynne-Vaughan
Helen M. Berman
Helen Megaw
Helen Murray Free
Helen Porter
Helen Quinn
Helen Ranney
Helen Sharman
Helen T. Edwards
Henrietta Swan Leavitt
Henriette Avram
Herrad of Landsberg
Herta Freitag
Hertha Marks Ayrton
Hertha Sponer
Hertha Wambacher
Hilda Geiringer
Hilda Phoebe Hudson
Hilde Mangold
Hildegard of Bingen
Hu Hesheng
Huguette Delavault
Hypatia
Hélène Langevin-Joliot
I
Ida Henrietta Hyde
Ida Noddack
Ileana Streinu
Inge Lehmann
Ingrid Daubechies
Iota Sigma Pi
Irene Crespin
Irene Fischer
Irene Manton
Irene Uchida
Irina Beletskaya
Iris M. Ovshinsky
Irma Wyman
Irmgard Flügge-Lotz
Irène Joliot-Curie
Isabel Bassett Wasson
Isabel Briggs Myers
Isabella Bird
Isobel Bennett
J
Jacqueline Felice de Almania
Jacquetta Hawkes
Jaime Levy
Jaime Teevan
Jan Anderson (scientist)
Janaki Ammal
Jane Brotherton Walker
Jane Colden
Jane Ellen Harrison
Jane Goodall
Jane Hillston
Jane Lubchenco
Jane Marcet
Jane S. Richardson
Jane Stafford
Janet Darbyshire
Janet G. Travell
Janet Kear
Janet L. Kolodner
Janet Rowley
Janet Thornton
Janet Vaughan
Janet Watson
Janice E. Clements
Jean Bartik
Jean Beggs
Jean Jenkins (ethnomusicologist)
Jean E. Sammet
Jeanne Dumée
Jeanne Ferrante
Jeanne Villepreux-Power
Jeanette Scissum
Jeannette Wing
Jeehiun Lee
Jemma Geoghegan
Jennifer Tour Chayes
Jennifer Doudna
Jenny Preece
Jessica Meir
Jill Farrant
Jill Stein
Jill Tarter
Jing Li (chemist)
Joan A. Steitz
Joan Beauchamp Procter
Joan Birman
Joan Dingley
Joan Hinton
Joan Roughgarden
Joan Slonczewski
Joanna S. Fowler
Joanne Simpson
Jocelyn Bell Burnell
Johanna Mestorf
Johanna Moore
Josephine Kablick
Joy Adamson
Joyce Currie Little
Joyce Jacobson Kaufman
Joyce K. Reynolds
Joyce Lambert
Jude Milhon
Judith Donath
Judith Estrin
Judith Goslin Hall
Judith Q. Longyear
Judith Resnik
Judy A. Holdener
Julia Anna Gardner
Julia Serano
Juliet Wege
June Almeida
K
Kaisa Sere
Kalpana Chawla
Kamal Ranadive
Kamala Sohonie
Karen Kavanagh
Karen Spärck Jones
Karen Vousden
Karen Wetterhahn
Karin Erdmann
Kate Craig-Wood
Kate Hutton
Kateryna Lohvynivna Yushchenko
Katharine Burr Blodgett
Katharine Fowler-Billings
Katharine Way
Katherine Esau
Katherine Freese
Katherine Johnson
Katherine St. John
Kathleen Antonelli
Kathleen Booth
Kathleen C. Taylor
Kathleen Haddon
Kathleen Kenyon
Kathleen Lonsdale
Kathleen Maisey Curtis
Kathleen Taylor (biologist)
Kathrin Bringmann
Kathryn Moler
Kathryn Uhrich
Katsuko Saruhashi
Kay Redfield Jamison
Kiki Sanford
Kirstine Meyer
Klara Dan von Neumann
Klara Kedem
Krystal Tsosie
Krystyna Kuperberg
Käte Fenchel
Kristine Katherine
L
L'Oréal-UNESCO Awards for Women in Science
L'association femmes et mathématiques
Laura Bassi
Leah Jamieson
Lene Hau
Lenore Blum
Leona Woods
Lera Boroditsky
Leslie Barnett
Lila Kari
Lilian Gibbs
Lillian Dyck
Lily Young
Linda Avey
Linda B. Buck
Linda Keen
Lisa Hensley (microbiologist)
Lisa Kaltenegger
Lisa M. Diamond
Lisa Randall
Lisa Rossbacher
Lise Meitner
List of prizes, medals, and awards for women in science
Liuba Shrira
Lois Haibt
Lori McCreary
Louise Dolan
Louise Hammarström
Louise Hay (mathematician)
Louise Johnson
Louise Reiss
Louise du Pierry
Lucia Galeazzi Galvani
Lucile Quarry Mann
Lucy Everest Boole
Lucy Jones
Lucy Weston Pickett
Lucy Wilson
Luisa Ottolini
Luise Meyer-Schützmeister
Lydia Kavraki
Lydia Maria Adams DeWitt
Lydia Rabinowitsch-Kempner
Lynn Conway
Lynn J. Rothschild
Lynn Margulis
Lynne Jolitz
Lanying Lin
M
M. Christine Zink
Mae Jemison
Maja Mataric
Manuela M. Veloso
Marcia McNutt
Marcia Neugebauer
Margaret Brimble
Margaret Bryan (philosopher)
Margaret Burbidge
Margaret Cavendish, Duchess of Newcastle-upon-Tyne
Margaret Clement
Margaret Eliza Maltby
Margaret Elizabeth Barr-Bigelow
Margaret Floy Washburn
Margaret Fountaine
Margaret H. Wright
Margaret Hamilton (scientist)
Margaret Kennard
Margaret Lindsay Huggins
Margaret Mead
Margaret Morse Nice
Margaret Oakley Dayhoff
Margaret Ogola
Margaret Stanley (virologist)
Margaret Thatcher
Margrete Heiberg Bose
Marguerite Perey
Maria Ardinghelli
Maria Christina Bruhn
Maria Chudnovsky
Maria Cunitz
Maria Dalle Donne
Maria Fadiman
Maria Fitzgerald
Maria Gaetana Agnesi
Maria Goeppert-Mayer
Maria Klawe
Maria Klenova
Maria Margarethe Kirch
Maria Medina Coeli
Maria Mitchell
Maria Petraccini
Maria Reiche
Maria Sibylla Merian
Maria Wilman
Maria Zemankova
Maria Zuber
Marian Farquharson
Marian Koshland
Marian Stamp Dawkins
Mariann Bienz
Marianna Csörnyei
Marie-Andrée Bertrand
Marie-Anne Pierrette Paulze
Marie-Jeanne de Lalande
Marie Beatrice Schol-Schwarz
Marie Crous
Marie Curie
Marie Le Masson Le Golft
Marie Stopes
Marie Tharp
Marietta Blau
Marilyn Farquhar
Marilyn Tremaine
Marissa Mayer
Marjolein Kriek
Marjorie Courtenay-Latimer
Marjorie Lee Browne
Marjorie Sweeting
Marjory Stephenson
Marta Kwiatkowska
Martha Burton Woodhead Williamson
Martha Chase
Martha P. Haynes
Marthe Vogt
Mary-Claire King
Mary Adela Blagg
Mary Agnes Chase
Mary Allen Wilkes
Mary Anning
Mary Ball
Mary Buckland
Mary Cartwright
Mary Celine Fasenmyer
Mary Engle Pennington
Mary Everest Boole
Mary F. Lyon
Mary Gibson Henry
Mary Higby Schweitzer
Mary J. Rathbun
Mary Jane Irwin
Mary K. Gaillard
Mary Katharine Brandegee
Mary Kenneth Keller
Mary L. Boas
Mary L. Cleave
Mary L. Good
Mary Leakey
Mary Lee Woods
Mary Lua Adelia Davis Treat
Mary Murtfeldt
Mary P. Dolciani
Mary Parke
Mary Peters Fieser
Mary Shaw (computer scientist)
Mary Somerville
Mary Stuart MacDougall
Mary Tindale
Mary Vaux Walcott
Mary W. Gray
Mary Ward (scientist)
Mary Watson Whitney
Mary Whiton Calkins
Mary the Jewess
María Elena Galiano
María de los Ángeles Alvariño González
Mathilde Krim
Maud Cunnington
Maud Menten
Maureen C. Stone
Maxine D. Brown
Maxine Singer
Maya Paczuski
Mayana Zatz
Melba Phillips
Mercuriade
Meredith L. Patterson
Merieme Chadid
Merit-Ptah
Merle Greene Robertson
Michelle Antoine
Mildred Allen (physicist)
Mildred Cohn
Mildred Dresselhaus
Mileva Marić
Mina Bissell
Ming C. Lin
Miriam Rothschild
Misha Mahowald
Molly Holzschlag
Monica Anderson
Monica S. Lam
Monique Adolphe
Morag Crichton Timbury
Muriel Wheldale Onslow
Myra Wilson
Myriam Sarachik
Myrtle Bachelder
Mária Telkes
N
Nalini Nadkarni
Nan Laird
Nancy Adams (botanist)
Nancy Andrews (biologist)
Nancy Davis Griffeth
Nancy Hafkin
Nancy Hopkins (scientist)
Nancy Leveson
Nancy Lynch
Nancy Wexler
Naomi Ginsberg
Naomi Oreskes
Nettie Stevens
Nichole Pinkard
Nicola Pellow
Nicole-Reine Lepaute
Nicole King
Nicole Marthe Le Douarin
Nina Andreyeva
Nina Bari
Nina Byers
Ninni Kronberg
Noemie Benczer Koller
Noreen Murray
O
Olga Aleksandrovna Ladyzhenskaya
Olga Holtz
Olga Lepeshinskaya (biologist)
Olivia Lum
Orna Berry
Ottoline Leyser
P
Pamela C. Rasmussen
Pamela J. Bjorkman
Pamela L. Gay
Pascale Cossart
Patricia Adair Gowaty
Patricia Baird
Patricia Bath
Patricia H. Clarke
Patricia Vickers-Rich
Patsy O'Connell Sherman
Pattie Maes
Paulien Hogeweg
Pauline Morrow Austin
Pauline Newman
Pearl Kendrick
Persis Drell
Petronella Johanna de Timmerman
Philippa Fawcett
Philippa Marrack
Phyllis Clinch
Phyllis Fox
Phyllis Starkey
Ping Fu
Pippa Greenwood
Polly Matzinger
Praskovya Uvarova
R
Rachel Carson
Rachel Fuller Brown
Rachel Mamlok-Naaman
Radia Perlman
Rama Bansil
Rebecca Grinter
Rebecca J. Nelson
Reihaneh Safavi-Naini
Renata Kallosh
Renate Chasman
Renate Loll
Renu C. Laskar
Renée Miller
Rima Rozen
Rita Levi-Montalcini
Rita P. Wright
Rosa Beddington
Rosa Smith Eigenmann
Rosalind Franklin
Rosalind Picard
Rosalind Pitt-Rivers
Rosaly Lopes
Rosalyn Sussman Yalow
Rosemary A. Bailey
Roxana Moslehi
Ruby Hirose
Runhild Gammelsæter
Ruth Aaronson Bari
Ruth Arnon
Ruth Benedict
Ruth F. Allen
Ruth Hubbard
Ruth Lawrence
Ruth Patrick
Ruth R. Benerito
Ruth Turner
Ruzena Bajcsy
Rózsa Péter
S
Saba Valadkhan
Sabina Jeschke
Sabina Spielrein
Sallie W. Chisholm
Sally Floyd
Sally Ride
Sally Shlaer
Salome Gluecksohn-Waelsch
Sameera Moussa
Sandra Faber
Sandra Steingraber
Sandy Carter
Sara Billey
Sara Hestrin-Lerner
Sara Plummer Lemmon
Sara Shettleworth
Sarah Allen (software developer)
Sarah Flannery
Sarah Frances Whiting
Sarit Kraus
Seema Bhatnagar
Shafi Goldwasser
Sharon Glotzer
Sheena Josselyn
Sheeri Cabral
Sheila Greibach
Sheina Marshall
Sheri McCoy
Sherry Gong
Shirley Ann Jackson
Shirley M. Tilghman
Shirley Sherwood
Shoshana Kamin
Silvia Arber
Snježana Kordić
Sophia Brahe
Sophia Drossopoulou
Sophia Jex-Blake
Sophie Bryant
Sophie Germain
Sophie Wilson
Stefanie Dimmeler
Stella Atkins
Stella Cunliffe
Stella Ross-Craig
Stephanie Kwolek
Stephanie Schwabe
Stormy Peters
Sue Black (computer scientist)
Sue Hartley
Sue Hendrickson
Sue Whitesides
Sulamith Goldhaber
Susan Blackmore
Susan Dumais
Susan Gerhart
Susan Greenfield, Baroness Greenfield
Susan Hockfield
Susan Hough
Susan Howson
Susan Jane Cunningham
Susan Kieffer
Susan L. Graham
Susan Landau
Susan Owicki
Susan R. Wessler
Susan Solomon
Susana López Charreton
Susanne Albers
Suzanne Cory
Sylvia Earle
Sylvia Fedoruk
T
Tamara Mkheidze
Tandy Warnow
Tanya Atwater
Tara Keck
Tapputi
Tasneem Zehra Husain
Tatyana Afanasyeva
Tatyana Pavlovna Ehrenfest
Telle Whitney
Temple Grandin
Teresa Maryańska
Terri Attwood
Theano (philosopher)
Thelma Estrin
Theodora Lisle Prankerd
Tomoko Ohta
Toniann Pitassi
Tracy Caldwell Dyson
The Trimates
Trotula
Tu Youyou
U
Ursula Cowgill
Ursula Franklin
Ursula Martin
Uta Frith
V
Val Beral
Valerie Thomas
Vasanti N. Bhat-Nayak
Vera Kublanovskaya
Vera Popova
Vera Rubin
Vera Scarth-Johnson
Vera Yurasova
Vi Hart
Virginia Apgar
Vyda Ragulskienė
W
Wanda Orlikowski
Wanda Zabłocka
Wang Xiaoyun
Webe Kadima
Weizmann Women & Science Award
Wendy Foden
Wendy Hall
Wilhelmina Feemster Jashemski
Williamina Fleming
Wilma Olson
Winifred Asprey
X
Xie Xide
Y
Yu-Chie Chen
Yvette Cauchois
Yvonne Barr
Yvonne Choquet-Bruhat
Z
Zeng Fanyi
Zoia Ceaușescu
Zora Neale Hurston
Trans man scientists who were scientists before transitioning
Ben Barres (Barbara Barres)
Transmasculine non-binary scientists who were scientists before transitioning
A. W. Peet (Amanda Wensley Peet)
Transfeminine non-binary scientists
JJ Eldridge
Audrey Tang
See also
List of female Fellows of the Royal Society
List of female mathematicians
List of female scientists before the 21st century
List of women geologists
Women in chemistry
Women in computing
Women in geology
Women in science
Women in STEM fields
References
Herzenberg, Caroline L. 1986. Women Scientists from Antiquity to the Present: An Index. Locust Hill Press.
Howard S. The Hidden Giants, ch. 2, (Lulu.com; 2006) (accessed 22 August 2007)
Howes, Ruth H. and Caroline L. Herzenberg. 1999. Their Day in the Sun: Women of the Manhattan Project. Temple University Press.
Ogilvie, M. B. 1986. Women in Science. The MIT Press.
Contributions of 20th Century Women to Physics website at UCLA
Walsh JJ. 'Medieval Women Physicians' in Old Time Makers of Medicine: The Story of the Students and Teachers of the Sciences Related to Medicine During the Middle Ages, ch. 8, (Fordham University Press; 1911) (accessed 22 August 2007)
Women scientists | Index of women scientists articles | Technology | 4,196 |
40,736,436 | https://en.wikipedia.org/wiki/Pleiocarpine | Pleiocarpine is an anticholinergic alkaloid.
External links
Tryptamine alkaloids
Indolizidines
Heterocyclic compounds with 6 rings | Pleiocarpine | Chemistry | 39 |
24,738,960 | https://en.wikipedia.org/wiki/Bioinformatics%20%28journal%29 | Bioinformatics is a biweekly peer-reviewed open-access scientific journal covering research and software in bioinformatics and computational biology. It is the official journal of the International Society for Computational Biology (ISCB), together with PLOS Computational Biology.
The journal was established as Computer Applications in the Biosciences (CABIOS) in 1985. The founding editor-in-chief was Robert J. Beynon. In 1998, the journal obtained its current name and established an online version. It is published by Oxford University Press and, as of 2014, the editors-in-chief are Alfonso Valencia and Janet Kelso. Previous editors include Chris Sander, Gary Stormo, Christos Ouzounis, Martin Bishop, and Alex Bateman. In 2014, these five editors were appointed the first Honorary Editors of Bioinformatics. According to the Journal Citation Reports, the journal has a 2019 impact factor of 5.610.
From 1998 to 2004, Bioinformatics was the official journal of the ISCB. In 2004, as many ISCB members had institutional subscriptions to Bioinformatics, ISCB decided not to renew its contract with the journal. PLOS Computational Biology became the official ISCB journal. In January 2009 Bioinformatics again became an official journal of the ISCB, alongside PLOS Computational Biology.
The proceedings of the Intelligent Systems for Molecular Biology conference and the European Conference on Computational Biology have been published in special issues of Bioinformatics since 2001 and 2002, respectively.
Following budget problems, Greek universities dropped their subscriptions to Bioinformatics in 2013.
References
External links
Bioinformatics and computational biology journals
Academic journals established in 1998
Biweekly journals
Oxford University Press academic journals
English-language journals | Bioinformatics (journal) | Biology | 362 |
29,773,157 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20%28Ukraine%29 | The Ministry of Energy is responsible for energy in Ukraine.
The government ministry was originally formed in the 1970s as the Ministry of Energy and Electrification.
Functions
state governing of the Fuel-Energy Complex
ensuring the realization of the state policies in the Fuel-Energy Complex
ensuring energy security of the State
participation in the formation, regulation, and improvement of the fuel-energy resource market
developing proposals to improve economic incentives in stimulation of the Fuel-Energy Complex development
Vectors of specialization
Power generation
Nuclear power
Oil and Gas industry
Coal mining
Fuel energy complex associations
Power generation
National Nuclear Power-generating Company Energoatom
Khmelnytskyi Nuclear Power Plant
Rivne Nuclear Power Plant
South Ukraine Nuclear Power Plant
Zaporizhzhia Nuclear Power Plant
Donuzlav WES (Wind Power Plant)
other supporting companies
Sevastopol Institute of Nuclear Power and Industry
State Research Company "Tsyrkoniy"
Chornobyl Center on issues of Nuclear Security, Radioactive Waste and Radioecology
Industrial Reserve-Investment Fund in Development of Energy
Ukrenerhokomplekt
Ukrainian Nuclear Association
Ukrinterenergo
State Enterprise National Power Company Ukrenerho
Derzhenerhonahlyad (State Energy Supervision)
Derzhinspektsia (State Inspection)
Centrenergo
Ukrhydroenergo (100%)
Dniester Hydro-accumulating Power Station (87.4%)
others
Oil/gas and oil refinery industries
National Joint-Stock Company Naftogaz Ukrainy
Subsidiary Company Ukrgasproduction
Open Joint-Stock Company Ukrnafta (50% + 1)
Subsidiary Joint-Stock Company Chornomornaftogaz
Overseas branches
other enterprises
Small share participants
Donbasenergo (25.0%)
DTEK Dniproenergo (25.0%)
Former members
State Special Enterprise Chernobyl Nuclear Power Plant was created on July 11, 2001 on the basis of Energoatom's former company of the same name. The enterprise was essentially recommissioned under a special jurisdiction for the further decommissioning of its nuclear power station. On July 15, 2005 the enterprise was transferred from the jurisdiction of the Ministry of Fuel and Energy to the Ministry of Emergencies.
National Joint-Stock Company Energy Company of Ukraine
History
Previous names:
1982–1997 Ministry of Energy and Electrification
1997–1999 Ministry of Energy
1999–2010 Ministry of Fuel and Energy
2010–present Ministry of Energy and Coal Mining
The ministry also absorbed a separate Ministry of Coal Mining which existed since 1954 until 1999 and was revived in 2005-2010.
The ministry was, as it turned out, temporarily merged with the Ministry of Ecology and Natural Resources by the Honcharuk Government on 29 August 2019, but the succeeding Shmyhal Government re-created the Ministry of Ecology and Natural Resources on 27 May 2020.
List of ministers
Energy and electrification
Energy
Fuel and energy
Coal mining
The Ministry of Coal Mining of Ukraine had existed since at least 1954.
Energy and coal mining
Energy and environmental protection
Energy
See also
Nuclear power in Ukraine
Ministry of Emergencies (Ukraine)
Ministry of Industrial Policy (Ukraine)
DTEK
List of power stations in Ukraine
References
External links
Fuel and Energy
Energy in Ukraine
Ukraine
Mining ministries | Ministry of Energy (Ukraine) | Engineering | 654 |
23,465,899 | https://en.wikipedia.org/wiki/Rock%20fever | Rock fever and island fever are colloquial terms for a form of mental distress said to mainly afflict mainlanders who move to isolated islands, especially any of the Hawaiian islands or Guam. It is not a medical term or classification and has not been the focus of any serious research. It has been described as "an ailment" of feeling "stifled by [the island's] size and isolation", making its sufferers "anxious, irritated, desperate, and claustrophobic." It is often ascribed to homesickness. Rock fever has also been described as a feeling of isolation that could arise in any isolated place and afflict anyone, including native inhabitants.
See also
Winter-over syndrome
References
Emotions
Living arrangements
Mental disorders
Health in Guam
Health in Hawaii
Human migration
Islands | Rock fever | Biology | 161 |
40,341,247 | https://en.wikipedia.org/wiki/Tridemorph | Tridemorph is a fungicide used to control Erysiphe graminis. It was developed in the 1960s by BASF, which markets it under the trade name Calixin. The World Health Organization has categorized it as a Class II "moderately hazardous" pesticide because it is believed to be harmful if swallowed and can cause irritation to the skin and eyes.
One theory for the cause of the Hollinwell incident is that it might have been caused by inhalation of tridemorph.
References
External links
Fungicides
Morpholines | Tridemorph | Chemistry,Biology | 112 |
29,337,501 | https://en.wikipedia.org/wiki/Perfect%20matrix | In mathematics, a perfect matrix is an m-by-n binary matrix that has no k-by-k submatrix K satisfying all of the following conditions:
k > 3
the row and column sums of K are each equal to b, where b ≥ 2
there exists no row of the (m − k)-by-k submatrix formed by the rows not included in K with a row sum greater than b.
The following is an example of a K submatrix where k = 5 and b = 2:
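$K = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 \end{pmatrix}$

Every row sum and every column sum of this K equals b = 2 (any 5-by-5 binary matrix with that property serves equally well as an illustration; the third condition additionally constrains the rows outside K). The definition can be tested mechanically; the following Python sketch is a brute-force check (illustrative only; the search is exponential and practical only for small matrices):

```python
import numpy as np
from itertools import combinations

def is_perfect(M: np.ndarray) -> bool:
    """Brute-force test of the definition on a binary m-by-n matrix M.

    M is perfect if it contains no k-by-k submatrix K (k > 3) whose row
    and column sums all equal a common b >= 2, such that every row not
    in K sums to at most b over K's columns.
    """
    m, n = M.shape
    for k in range(4, min(m, n) + 1):                       # condition: k > 3
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                K = M[np.ix_(rows, cols)]
                b = int(K[0].sum())
                if b < 2:                                   # condition: b >= 2
                    continue
                if (K.sum(axis=1) != b).any() or (K.sum(axis=0) != b).any():
                    continue                                # all sums must equal b
                excluded = (r for r in range(m) if r not in rows)
                if all(M[r, list(cols)].sum() <= b for r in excluded):
                    return False                            # violating K found
    return True
```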
References
Matrices | Perfect matrix | Mathematics | 114 |
2,269,568 | https://en.wikipedia.org/wiki/Rainband | A rainband is a cloud and precipitation structure associated with an area of rainfall which is significantly elongated. Rainbands in tropical cyclones can be either stratiform or convective and are curved in shape. They consist of showers and thunderstorms, and along with the eyewall and the eye, they make up a tropical cyclone. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity.
Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes. Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet. Bands of thunderstorms can form with sea breeze and land breeze boundaries, if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself. Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain or snow. Behind extratropical cyclones, rainbands can form downwind of relative warm bodies of water such as the Great Lakes. If the atmosphere is cold enough, these rainbands can yield heavy snow.
Extratropical cyclones
Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion, and tend to be wide and stratiform in nature. In an atmosphere with rich low level moisture and vertical wind shear, narrow, convective rainbands known as squall lines form generally in the cyclone's warm sector, ahead of strong cold fronts associated with extratropical cyclones. Wider rain bands can occur behind cold fronts, which tend to have more stratiform, and less convective, precipitation. Within the cold sector north to northwest of a cyclone center, in colder cyclones, small scale, or mesoscale, bands of heavy snow can occur within a cyclone's comma head precipitation pattern with a width of to . These bands in the comma head are associated with areas of frontogenesis, or zones of strengthening temperature contrast. Southwest of extratropical cyclones, curved flow bringing cold air across the relatively warm Great Lakes can lead to narrow lake-effect snow bands which bring significant localized snowfall.
Narrow cold-frontal rainband
A narrow cold-frontal rainband (NCFR) is a characteristic of particularly sharp cold frontal boundaries. These can usually be seen very easily on satellite photos. NCFRs are typically accompanied by strong gusty winds and brief but intense rainfall. Convection may or may not occur depending on the stability of the air mass being lifted by the front. Such fronts usually are also marked by a sharp wind shift and temperature drop.
Tropical cyclones
Rainbands exist in the periphery of tropical cyclones, which point towards the cyclone's center of low pressure. Rainbands within tropical cyclones require ample moisture and a low level pool of cooler air. Bands located to from a cyclone's center migrate outward. They are capable of producing heavy rains and squalls of wind, as well as tornadoes, particularly in the storm's right-front quadrant.
Some rainbands move closer to the center, forming a secondary, or outer, eyewall within intense hurricanes. Spiral rainbands are such a basic structure to a tropical cyclone that in most tropical cyclone basins, use of the satellite-based Dvorak technique is the primary method used to determine a tropical cyclone's maximum sustained winds. Within this method, the extent of spiral banding and difference in temperature between the eye and eyewall is used to assign a maximum sustained wind and a central pressure. Central pressure values for their centers of low pressure derived from this technique are approximate.
Different programs have been studying these rainbands, including the Hurricane Rainband and Intensity Change Experiment.
Forced by geography
Convective rainbands can form parallel to terrain on its windward side, due to lee waves triggered by hills just upstream of the cloud's formation. Their spacing is normally to apart. When bands of precipitation near frontal zones approach steep topography, a low-level barrier jet stream forms parallel to and just prior to the mountain ridge, which slows down the frontal rainband just prior to the mountain barrier. If enough moisture is present, sea breeze and land breeze fronts can form convective rainbands. Sea breeze front thunderstorm lines can become strong enough to mask the location of an approaching cold front by evening. The edge of ocean currents can lead to the development of thunderstorm bands due to heat differential at this interface. Downwind of islands, bands of showers and thunderstorms can develop due to low level wind convergence downwind of the island edges. Offshore California, this has been noted in the wake of cold fronts.
References
External links
Precipitation
Extratropical cyclones
Storm
Weather hazards
Tropical cyclone meteorology
Mesoscale meteorology | Rainband | Physics | 995 |
53,322,500 | https://en.wikipedia.org/wiki/Kappa%20Crateris | Kappa Crateris (κ Crateris) is the Bayer designation for a star in the southern constellation of Crater. It has an apparent visual magnitude of 5.94, which, according to the Bortle scale, can be seen with the naked eye under dark suburban skies. The distance to this star, as determined from an annual parallax shift of 14.27 mas, is around 229 light years.
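The quoted distance is the standard inversion of the parallax; with the value above,

$d = \frac{1000}{p\,[\mathrm{mas}]}\ \mathrm{pc} = \frac{1000}{14.27}\ \mathrm{pc} \approx 70.1\ \mathrm{pc} \approx 229\ \text{light years},$

using 1 pc ≈ 3.26 light years.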
This is an evolved F-type giant star with a stellar classification of F5/6 III, where the F5/6 indicates the spectrum lies intermediate between types F5 and F6. It is an estimated 1.74 billion years old and is spinning with a projected rotational velocity of 39 km/s. Kappa Crateris has 1.74 times the mass of the Sun, and radiates 17 times the solar luminosity from its outer atmosphere at an effective temperature of 6,545 K.
Kappa Crateris has a visual companion: a magnitude 13.0 star located at an angular separation of 24.6 arc seconds along a position angle of 343°, as of 2000.
References
F-type giants
Crater (constellation)
Crateris, Kappa
Durchmusterung objects
Crateris, 16
099564
055874
4416 | Kappa Crateris | Astronomy | 257 |
24,991,420 | https://en.wikipedia.org/wiki/Corrective%20maintenance | Corrective maintenance is a maintenance task performed to identify, isolate, and rectify a fault so that the failed equipment, machine, or system can be restored to an operational condition within the tolerances or limits established for in-service operations.
Definition
A French official standard defines "corrective maintenance" as maintenance which is carried out after failure detection and is aimed at restoring an asset to a condition in which it can perform its intended function (NF EN 13306 X 60-319 standard, June 2010).
Corrective maintenance can be subdivided into "immediate corrective maintenance" (in which work starts immediately after a failure) and "deferred corrective maintenance" (in which work is delayed in conformance to a given set of maintenance rules).
Sometimes, particularly in French-speaking countries, a distinction is made between curative maintenance and regular corrective maintenance. While the former is a larger scale procedure to permanently solve the problem (e.g. by replacing the defective mechanism), the latter only fixes the acute issue (e.g. only repairing or replacing an individual component) and might be a more temporary solution.
Standards
The technical standards concerning corrective maintenance are set out in IEC 60050, chapter 191, "Dependability and quality of service".
NF EN 13306 X 60-319 is a subset of IEC 60050-191.
Choice
The decision to adopt corrective maintenance as a maintenance strategy depends on several factors, such as the cost of downtime, the reliability characteristics of the assets, and their redundancy.
Methods
The steps of corrective maintenance are, following failure: diagnosis, elimination of the part causing the failure, ordering of a replacement, replacement of the part, testing of function, and finally continuation of use.
The basic form of corrective maintenance is a step-by-step procedure triggered by the object's failure. Modern technologies, such as Industry 4.0 features, reduce the inherent drawbacks of corrective maintenance, e.g. by providing device history, fault patterns, repair advice or spare-part availability.
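As a toy illustration of this step-by-step procedure, the following Python sketch (all names are illustrative, not taken from any standard) encodes the sequence and the immediate/deferred distinction described above:

```python
from enum import Enum, auto

class Step(Enum):
    DIAGNOSIS = auto()            # identify and isolate the fault
    REMOVE_FAILED_PART = auto()   # eliminate the part causing the failure
    ORDER_REPLACEMENT = auto()
    REPLACE_PART = auto()
    FUNCTION_TEST = auto()
    RETURN_TO_SERVICE = auto()

def corrective_maintenance(failure_detected: bool, deferred: bool = False) -> list:
    """Return the ordered steps triggered by a detected failure.

    deferred=True models deferred corrective maintenance, where work is
    queued according to the site's maintenance rules rather than
    started immediately after failure detection.
    """
    if not failure_detected:
        return []  # corrective maintenance is triggered only by failure
    if deferred:
        print("Work order queued per maintenance rules.")
    return list(Step)  # Enum preserves the definition order of the steps
```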
See also
Preventive maintenance
Predictive maintenance
Bibliography
L. C. Morrow: Maintenance Engineering Handbook, McGraw-Hill, New York, 1952
S. Nakajima: Introduction to TPM, Productivity Press, Cambridge, Massachusetts, 1988
Peter Willmott: Total Productive Maintenance: The Western Way, Butterworth-Heinemann, Oxford, first published 1994
References
Further reading
9 Types of Maintenance: How to choose the right maintenance strategy, Erik Hupje, Road to Reliability™ (2020)
Maintenance | Corrective maintenance | Engineering | 527 |
34,651,693 | https://en.wikipedia.org/wiki/Environmental%20manager | Environmental managers are involved in processes that seek to control some environmental entities in line with a plan or idea. Whether such control is possible, however, is contested. Examples of environmental managers range from corporate agents (corporate environmental managers) via managers of a nature reserve to environmental and resource planning agents, but, analytically seen, also include indigenous environmental managers, farmers and environmental activists. Many accounts hold out hope that environmental managers will implement grand plans or political programmes. At the heart of the notion of environmental managers is, thus, a pragmatic and rational actor who optimises environments in pursuit of some aim. Critical academics point out that the very idea that such managers exist, and are imagined as capable of managing, may well be flawed.
Corporate environmental managers
Steve Fineman studied UK managers and their "'green' selves and roles" in the last decade, suggesting that while environmental problems may be recognised by them, production is seen as legitimising pollution. Optimistic accounts see managers as stewards of environmental ethics. Literature differentiates different styles by managers to engage with the environment.
State environmental managers
State institutions can manage directly environments through their staff. And state institutions can use civil agents on their behalf. Examples for the latter are farmers who are to implement environmental regulation, citizens subject to e.g. recycling legislation or independent auditors who use laws as standards. Military agents can also act as environmental managers insofar as their action constitutes planned intervention in some environment (e.g. the burning of a forest, the destruction of streets or managing an open landscape for military training), trying to achieve military aims.
Scientists as environmental managers
A variety of scientists are involved directly in environmental management. Cases of ecologists acting as managers of ecosystems are known.
Study of environmental managers
The very notion that humans may be able to manage environments is criticised for being top-down, anthropocentric and short-sighted.
See also
Environmental activist
Chief sustainability officer
Rational planning model
References
External links
How do you manage? Unravelling the situated practice of environmental management
The Environmental Manager Symposium
Environmental sociology
Environmental policy
Sustainability and environmental management | Environmental manager | Environmental_science | 426 |
1,411,678 | https://en.wikipedia.org/wiki/Gibbons%20Creek%20Reservoir | Gibbons Creek Reservoir (sometimes referred to as Gibbons Creek Lake) is a power plant cooling reservoir on Gibbons Creek in the Navasota River basin, 20 miles (32 km) east of College Station, Texas, United States. The dam and lake are managed by Texas Municipal Power Agency (TMPA), which uses the reservoir as a cooling pond for a coal-fired power plant generating electricity for the cities of Bryan, Denton, Garland, and Greenville (all of whom have municipality-owned electric companies).
The reservoir was officially impounded in 1981.
Gibbons Creek Reservoir is a popular recreational destination due to its location near the Bryan–College Station metropolitan area. The nearby power plant was mothballed indefinitely by TMPA in January 2019 due to the high cost of coal-powered electricity when compared to cheaper natural gas. TMPA had been trying to sell the plant since 2016.
Fish populations
Gibbons Creek Reservoir has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Gibbons Creek Reservoir include largemouth bass, bluegill, catfish, Tilapia, white crappie, and black crappie. The water has standing timber and aquatic vegetation but generally is rather turbid. Its shoreline is covered with native grasses mixed with oak, elm, and other East Texas hardwoods.
Recreational uses
Boating and fishing are very popular. The steam power plant on the southwest shore of the lake constantly pumps in warm water that keeps this lake a viable fishing spot year-round, even when other lakes in the area become too cold in the winter months.
Recreational fishing and other activities on this lake are regulated by the Texas Parks and Wildlife Department.
References
External links
Gibbons Creek Reservoir
Texas Municipal Power Agency
Gibbons Creek Reservoir - Texas Parks & Wildlife
Protected areas of Grimes County, Texas
Reservoirs in Texas
Bodies of water of Grimes County, Texas
Cooling ponds | Gibbons Creek Reservoir | Chemistry,Environmental_science | 378 |
29,545,728 | https://en.wikipedia.org/wiki/Biotin%20attachment%20domain | The biotin/lipoyl attachment domain has a conserved lysine residue that binds biotin or lipoic acid. Biotin plays a catalytic role in some carboxyl transfer reactions and is covalently attached, via an amide bond, to a lysine residue in enzymes requiring this coenzyme. Lipoamide acyltransferases have an essential cofactor, lipoic acid, which is covalently bound via an amide linkage to a lysine group. The lipoic acid cofactor is found in a variety of proteins.
Human proteins containing this domain
ACACA; ACACB; DBT; DLAT; DLST; DLSTP; MCCC1; PC;
PCCA; PDHX;
References
Protein domains | Biotin attachment domain | Biology | 161 |
2,872,234 | https://en.wikipedia.org/wiki/Bird%27s-eye%20view | A bird's-eye view is an elevated view of an object or location from a very steep viewing angle, creating a perspective as if the observer were a bird in flight looking downward. Bird's-eye views can be an aerial photograph, but also a drawing, and are often used in the making of blueprints, floor plans and maps.
Before crewed flight was common, the term "bird's eye" was used to distinguish views drawn from direct observation at high vantage locations (e.g. a mountain or tower), from those constructed from an imagined bird's perspectives. Bird's eye views as a genre have existed since classical times. They were significantly popular in the mid-to-late 19th century in the United States and Europe as photographic prints.
Terminology
The terms aerial view and aerial viewpoint are also sometimes used synonymous with bird's-eye view. The term aerial view can refer to any view from a great height, even at a wide angle, as for example when looking sideways from an airplane window or from a mountain top. Overhead view is fairly synonymous with bird's-eye view but tends to imply a vantage point of a lesser height than the latter term. For example, in computer and video games, an "overhead view" of a character or situation often places the vantage point only a few feet (a meter or two) above human height. See top–down perspective.
Recent technological and networking developments have made satellite images more accessible. Microsoft Bing Maps offers direct overhead satellite photos of the entire planet but also offers a feature named Bird's eye view in some locations. The Bird's Eye photos are angled at 40 degrees rather than being straight down. Satellite imaging programs and photos have been described as offering a viewer the opportunity to "fly over" and observe the world from this specific angle.
In filmmaking and video production, a bird's-eye shot refers to a shot looking directly down on the subject. The perspective is very foreshortened, making the subject appear short and squat. This shot can be used to give an overall establishing shot of a scene, or to emphasise the smallness or insignificance of the subjects. It is shot by lifting the camera up by hands or by hanging it off something strong enough to support it. When a scene needs a large area shot, it is a crane shot.
Bird's-eye views are common in the broadcasting of sports events, especially in the 21st century, with the increased usage of the Skycam and other devices like it, such as the CableCam and Spidercam.
Gallery
Bird's-flight view
A distinction is sometimes drawn between a bird's-eye view and a bird's-flight view, or "view-plan in isometrical projection". Whereas a bird's-eye view shows a scene from a single viewpoint (real or imagined) in true perspective, including, for example, the foreshortening of more distant features, a bird's-flight view combines a vertical plan of ground-level features with perspective views of buildings and other standing features, all presented at roughly the same scale. The landscape appears "as it would unfold itself to any one passing over it, as in a balloon, at a height sufficient to abolish sharpness of perspective, and yet low enough to allow of distinct view of the scene beneath". The technique was popular among local surveyors and cartographers of the sixteenth and early seventeenth centuries.
See also
Aerial landscape art
Aerial perspective (disambiguation)
Aerial photography
Archimedean point
Camera angle
Cinematic techniques
Filmmaking
Google Earth
Pictorial map
Pictometry
Plans (drawings)
Top-down perspective
Video production
Worm's-eye view
References
Technical drawing
Cartography
Methods of representation
Metaphors referring to birds | Bird's-eye view | Engineering | 774 |
66,516,442 | https://en.wikipedia.org/wiki/Nyctacovirus | Nyctacovirus is a subgenus of viruses in the genus Alphacoronavirus.
Species
The subgenus consists of the following two species:
Nyctalus velutinus alphacoronavirus SC-2013
Pipistrellus kuhlii coronavirus 3398
References
Virus subgenera
Alphacoronaviruses | Nyctacovirus | Biology | 64 |
392,828 | https://en.wikipedia.org/wiki/Abundance%20of%20the%20chemical%20elements | The abundance of the chemical elements is a measure of the occurrences of the chemical elements relative to all other elements in a given environment. Abundance is measured in one of three ways: by mass fraction (in commercial contexts often called weight fraction), by mole fraction (fraction of atoms by numerical count, or sometimes fraction of molecules in gases), or by volume fraction. Volume fraction is a common abundance measure in mixed gases such as planetary atmospheres, and is similar in value to molecular mole fraction for gas mixtures at relatively low densities and pressures, and ideal gas mixtures. Most abundance values in this article are given as mass fractions.
The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced during Big Bang nucleosynthesis. Remaining elements, making up only about 2% of the universe, were largely produced by supernova nucleosynthesis. Elements with even atomic numbers are generally more common than their neighbors in the periodic table, due to their favorable energetics of formation, described by the Oddo–Harkins rule.
The abundance of elements in the Sun and outer planets is similar to that in the universe. Due to solar heating, the elements of Earth and the inner rocky planets of the Solar System have undergone an additional depletion of volatile hydrogen, helium, neon, nitrogen, and carbon (which volatilizes as methane). The crust, mantle, and core of the Earth show evidence of chemical segregation plus some sequestration by density. Lighter silicates of aluminium are found in the crust, with more magnesium silicate in the mantle, while metallic iron and nickel compose the core. The abundance of elements in specialized environments, such as atmospheres, oceans, or the human body, are primarily a product of chemical interactions with the medium in which they reside.
Abundance values
Abundance of each element is expressed as a relative number. Astronomy uses a logarithmic abundance scale for the abundance of element X relative to hydrogen, defined by
A(X) = 12 + log10(N_X / N_H)
for number densities N_X and N_H; on this scale A(H) = 12. Another scale is mass fraction or, equivalently, percent by mass.
For example, the abundance of oxygen in pure water can be measured in two ways: the mass fraction is about 89%, because that is the fraction of water's mass which is oxygen. However, the mole fraction is about 33% because only 1 atom of 3 in water, H2O, is oxygen. As another example, looking at the mass fraction abundance of hydrogen and helium in both the universe as a whole and in the atmospheres of gas-giant planets such as Jupiter, it is 74% for hydrogen and 23–25% for helium; while the (atomic) mole fraction for hydrogen is 92%, and for helium is 8%, in these environments. Changing the given environment to Jupiter's outer atmosphere, where hydrogen is diatomic while helium is not, changes the molecular mole fraction (fraction of total gas molecules), as well as the fraction of atmosphere by volume, of hydrogen to about 86%, and of helium to 13%. Below Jupiter's outer atmosphere, volume fractions are significantly different from mole fractions due to high temperatures (ionization and disproportionation) and high density, where the ideal gas law is inapplicable.
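The arithmetic behind these conversions is simple to check directly. Below is a minimal Python sketch (atomic masses are standard values; the scenario mirrors the water example above) that converts a molecular formula into mass and mole fractions and evaluates the astronomical abundance scale:
```python
from math import log10

# Atomic masses in unified atomic mass units (standard values).
m_H, m_O = 1.008, 15.999

# Pure water, H2O: 2 hydrogen atoms and 1 oxygen atom per molecule.
mole_frac_O = 1 / 3                      # "about 33%"
total_mass = 2 * m_H + m_O
mass_frac_O = m_O / total_mass           # "about 89%"

# Astronomical abundance scale: A(X) = 12 + log10(N_X / N_H).
A_O = 12 + log10(mole_frac_O / (2 / 3))  # oxygen relative to hydrogen

print(f"O mass fraction: {mass_frac_O:.3f}")   # 0.888
print(f"O mole fraction: {mole_frac_O:.3f}")   # 0.333
print(f"A(O) in pure water: {A_O:.2f}")        # 11.70
```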
Universe
The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced during Big Bang nucleosynthesis. Remaining elements, making up only about 2% of the universe, were largely produced by supernovae and certain red giant stars. Lithium, beryllium, and boron, despite their low atomic number, are rare because, although they are produced by nuclear fusion, they are destroyed by other reactions in the stars. Their natural occurrence is the result of cosmic ray spallation of carbon, nitrogen and oxygen in a type of nuclear fission reaction. The elements from carbon to iron are relatively more abundant in the universe because of the ease of making them in supernova nucleosynthesis. Elements of higher atomic numbers than iron (element 26) become progressively rarer in the universe, because they increasingly absorb stellar energy in their production. Also, elements with even atomic numbers are generally more common than their neighbors in the periodic table, due to favorable energetics of formation (see Oddo–Harkins rule); among the lightest nuclides, from helium through sulfur, the most abundant isotopes have equal numbers of protons and neutrons.
Hydrogen is the most abundant element in the Universe; helium is second. All others are orders of magnitude less common. After this, the rank of abundance does not continue to correspond to the atomic number. Oxygen has abundance rank 3, but atomic number 8.
There are 80 known stable elements, and the lightest 16 comprise 99.9% of the ordinary matter of the universe. These same 16 elements, hydrogen through sulfur, fall on the initial linear portion of the table of nuclides (also called the Segrè plot), a plot of the proton versus neutron numbers of all matter both ordinary and exotic, containing hundreds of stable isotopes and thousands more that are unstable. The Segrè plot is initially linear because (aside from hydrogen) the vast majority of ordinary matter (99.4% in the Solar System) contains an equal number of protons and neutrons (Z=N).
The abundance of the lightest elements is well predicted by the standard cosmological model, since they were mostly produced shortly (i.e., within a few hundred seconds) after the Big Bang, in a process known as Big Bang nucleosynthesis. Heavier elements were mostly produced much later, in stellar nucleosynthesis.
Hydrogen and helium are estimated to make up roughly 74% and 24% of all baryonic matter in the universe respectively. Despite comprising only a very small fraction of the universe, the remaining "heavy elements" can greatly influence astronomical phenomena. Only about 2% (by mass) of the Milky Way galaxy's disk is composed of heavy elements.
These other elements are generated by stellar processes. In astronomy, a "metal" is any element other than hydrogen or helium. This distinction is significant because hydrogen and helium are the only elements that were produced in significant quantities in the Big Bang. Thus, the metallicity of a galaxy or other object is an indication of stellar activity after the Big Bang.
In general, elements up to iron are made by large stars in the process of becoming supernovae, or by smaller stars in the process of dying. Iron-56 is particularly common, since it is the most stable nuclide (in that it has the highest nuclear binding energy per nucleon) and can easily be "built up" from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with increasing atomic number.
The table shows the ten most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass.
Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. Since physical laws and processes are apparently uniform throughout the universe, however, it is expected that these galaxies will likewise have evolved similar abundances of elements.
As shown in the periodic table, the abundance of elements is in keeping with their origin. Very abundant hydrogen and helium are products of the Big Bang. The next three elements in the periodic table (lithium, beryllium, and boron) are rare, despite their low atomic number. They had little time to form in the Big Bang. They are produced in small quantities by nuclear fusion in dying stars or by breakup of heavier elements in interstellar dust, caused by cosmic ray spallation. In supernova stars, they are produced by nuclear fusion, but then destroyed by other reactions.
Heavier elements, beginning with carbon, have been produced in dying or supernova stars by buildup from alpha particles (helium nuclei), contributing to an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). The effect of odd-numbered chemical elements generally being more rare in the universe was empirically noticed in 1914, and is known as the Oddo–Harkins rule.
The following graph (log scale) shows abundance of elements in the Solar System.
Relation to nuclear binding energy
Loose correlations have been observed between estimated elemental abundances in the universe and the nuclear binding energy curve (also called the binding energy per nucleon). Roughly speaking, the relative stability of various atomic nuclides in withstanding the extremely energetic conditions of Big Bang nucleosynthesis (BBN) has exerted a strong influence on the relative abundance of elements formed in the Big Bang, and during the development of the universe thereafter.
See the article about nucleosynthesis for an explanation of how certain nuclear fusion processes in stars (such as carbon burning, etc.) create the elements heavier than hydrogen and helium.
A further observed peculiarity is the jagged alternation between relative abundance and scarcity of adjacent atomic numbers in the estimated abundances of the chemical elements in which the relative abundance of even atomic numbers is roughly 2 orders of magnitude greater than the relative abundance of odd atomic numbers (Oddo–Harkins rule). A similar alternation between even and odd atomic numbers can be observed in the nuclear binding energy curve in the neighborhood of carbon and oxygen, but here the loose correlation between relative abundance and binding energy ends. The binding energy for beryllium (an even atomic number), for example, is less than the binding energy for boron (an odd atomic number), as illustrated in the nuclear binding energy curve. Additionally, the alternation in the nuclear binding energy between even and odd atomic numbers resolves above oxygen as the graph increases steadily up to its peak at iron. The semi-empirical mass formula (SEMF), also called Weizsäcker's formula or the Bethe-Weizsäcker mass formula, gives a theoretical explanation of the overall shape of the curve of nuclear binding energy.
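To make the preceding discussion concrete, here is a minimal Python sketch of the semi-empirical mass formula; the coefficients are one commonly quoted textbook fit (an assumption here; published fits differ slightly):
```python
# Semi-empirical mass formula (Bethe-Weizsäcker): approximate binding
# energy in MeV for a nucleus with Z protons and A nucleons.
# The coefficients below are one commonly quoted textbook fit.
def binding_energy(Z: int, A: int) -> float:
    a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    # Pairing term: binding is enhanced for even-even nuclei and
    # reduced for odd-odd nuclei.
    if Z % 2 == 0 and N % 2 == 0:
        delta = a_P / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -a_P / A**0.5
    else:
        delta = 0.0
    return (a_V * A                           # volume term
            - a_S * A**(2 / 3)                # surface term
            - a_C * Z * (Z - 1) / A**(1 / 3)  # Coulomb repulsion
            - a_A * (A - 2 * Z)**2 / A        # symmetry term
            + delta)                          # pairing term

# Binding energy per nucleon peaks near iron; the fit is poor for
# very light nuclei such as helium-4.
for Z, A in [(2, 4), (8, 16), (26, 56), (92, 238)]:
    print(f"Z={Z:2d} A={A:3d}: {binding_energy(Z, A) / A:5.2f} MeV/nucleon")
```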
Sun
Modern astronomy relies on understanding the abundance of elements in the Sun as part of cosmological models. Abundance values are difficult to obtain: even photospheric or observational abundances depend upon models of solar atmospherics and radiation coupling. These astronomical abundance values are reported as logarithms of the ratio with hydrogen. Hydrogen is set to an abundance of 12 on this scale.
The Sun's photosphere consists mostly of hydrogen and helium; the helium abundance varies between about 10.3 and 10.5 depending on the phase of the solar cycle; carbon is 8.47, neon is 8.29, oxygen is 7.69 and iron is estimated at 7.62.
Earth
The Earth formed from the same cloud of matter that formed the Sun, but the planets acquired different compositions during the formation and evolution of the Solar System. In turn, the history of Earth led to parts of the planet having differing concentrations of the elements.
The mass of the Earth is approximately 5.97×10^24 kg. By mass, it is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%); with the remaining 1.2% consisting of trace amounts of other elements.
The bulk composition of the Earth by elemental mass is roughly similar to the gross composition of the solar system, with the major differences being that Earth is missing a great deal of the volatile elements hydrogen, helium, neon, and nitrogen, as well as carbon which has been lost as volatile hydrocarbons.
The remaining elemental composition is roughly typical of the "rocky" inner planets, which formed "inside" the "frost line" close to the Sun, where the young Sun's heat and stellar wind drove off volatile compounds into space.
The Earth retains oxygen as the second-largest component of its mass (and largest atomic fraction), mainly due to oxygen's high reactivity; this caused it to bond into silicate minerals which have a high melting point and low vapor pressure.
Crust
The mass-abundance of the nine most abundant elements in the Earth's crust is roughly: oxygen 46%, silicon 28%, aluminium 8.3%, iron 5.6%, calcium 4.2%, sodium 2.5%, magnesium 2.4%, potassium 2.0%, and titanium 0.61%. Other elements occur at less than 0.15%. For a full list, see abundance of elements in Earth's crust.
The graph at right illustrates the relative atomic-abundance of the chemical elements in Earth's upper continental crust—the part that is relatively accessible for measurements and estimation.
Many of the elements shown in the graph are classified into (partially overlapping) categories:
rock-forming elements (major elements in green field, and minor elements in light green field);
rare earth elements (lanthanides (La–Lu), Sc, and Y; labeled in blue);
major industrial metals (global production >~3×10^7 kg/year; labeled in red);
precious metals (labeled in purple);
the nine rarest "metals" – the six platinum group elements plus Au, Re, and Te (a metalloid) – in the yellow field. These are rare in the crust because they are soluble in iron and thus concentrated in Earth's core. Tellurium is the single most depleted element in the silicate Earth relative to cosmic abundance, because in addition to being concentrated as dense chalcogenides in the core, it was severely depleted by preaccretional sorting in the nebula as volatile hydrogen telluride.
There are two breaks where the unstable elements technetium (atomic number 43) and promethium (number 61) would be. These elements are surrounded by stable elements, yet their most stable isotopes have relatively short half lives (~4 million years and ~18 years respectively). These are thus extremely rare, since any primordial amounts of these elements have long since decayed. These two elements are now only produced naturally through spontaneous fission of very heavy radioactive elements (such as uranium, thorium, or the trace amounts of plutonium that exist in uranium ores), or by the interaction of certain other elements with cosmic rays. Both technetium and promethium have been identified spectroscopically in the atmospheres of stars, where they are produced by ongoing nucleosynthetic processes.
There are also breaks in the abundance graph where the six noble gases would be, since they are not chemically bound in the Earth's crust, and so their crustal abundance is not well-defined.
The eight naturally occurring very rare, highly radioactive elements (polonium, astatine, francium, radium, actinium, protactinium, neptunium, and plutonium) are not included, since any of these elements that were present at the formation of the Earth have decayed eons ago, and their quantity today is negligible and is only produced from radioactive decay of uranium and thorium.
Oxygen and silicon are the most common elements in the crust. On Earth and rocky planets in general, silicon and oxygen are far more common than their cosmic abundance. The reason is that they combine with each other to form silicate minerals. Other cosmically common elements such as hydrogen, carbon and nitrogen form volatile compounds such as ammonia and methane that easily boil away into space from the heat of planetary formation and/or the Sun's light.
Rare-earth elements
"Rare" earth elements is a historical misnomer. The persistence of the term reflects unfamiliarity rather than true rarity. The more abundant rare earth elements are similarly concentrated in the crust compared to commonplace industrial metals such as chromium, nickel, copper, zinc, molybdenum, tin, tungsten, or lead. The two least abundant stable rare earth elements (thulium and lutetium) are nearly 200 times more common than gold. However, in contrast to the ordinary base and precious metals, rare earth elements have very little tendency to become concentrated in exploitable ore deposits. Consequently, most of the world's supply of rare earth elements comes from only a handful of sources. Furthermore, the rare earth metals are all quite chemically similar to each other, and they are thus quite difficult to separate into quantities of the pure elements.
Differences in abundances of individual rare earth elements in the upper continental crust of the Earth represent the superposition of two effects, one nuclear and one geochemical. First, the rare earth elements with even atomic numbers (58Ce, 60Nd, ...) have greater cosmic and terrestrial abundances than the adjacent rare earth elements with odd atomic numbers (57La, 59Pr, ...). Second, the lighter rare earth elements are more incompatible (because they have larger ionic radii) and therefore more strongly concentrated in the continental crust than the heavier rare earth elements. In most rare earth ore deposits, the first four rare earth elements – lanthanum, cerium, praseodymium, and neodymium – constitute 80% to 99% of the total amount of rare earth metal that can be found in the ore.
Mantle
The mass-abundance of the seven most abundant elements in the Earth's mantle is approximately: oxygen 44.3%, magnesium 22.3%, silicon 21.3%, iron 6.32%, calcium 2.48%, aluminium 2.29%, nickel 0.19%.
Core
Due to mass segregation, the core of the Earth is believed to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements.
Ocean
The most abundant elements in the ocean by proportion of mass in percent are oxygen (85.84%), hydrogen (10.82%), chlorine (1.94%), sodium (1.08%), magnesium (0.13%), sulfur (0.09%), calcium (0.04%), potassium (0.04%), bromine (0.007%), carbon (0.003%), and boron (0.0004%).
Atmosphere
The order of elements by volume fraction (which is approximately molecular mole fraction) in the atmosphere is nitrogen (78.1%), oxygen (20.9%), argon (0.96%), followed by (in uncertain order) carbon and hydrogen because water vapor and carbon dioxide, which represent most of these two elements in the air, are variable components. Sulfur, phosphorus, and all other elements are present in significantly lower proportions.
According to the abundance curve graph, argon, a significant if not major component of the atmosphere, does not appear in the crust at all. This is because the atmosphere has a far smaller mass than the crust, so argon remaining in the crust contributes little to mass fraction there, while at the same time buildup of argon in the atmosphere has become large enough to be significant.
Urban soils
For a complete list of the abundance of elements in urban soils, see Abundances of the elements (data page)#Urban soils.
Human body
By mass, human cells consist of 65–90% water (H2O), and a significant portion of the remainder is composed of carbon-containing organic molecules. Oxygen therefore contributes a majority of a human body's mass, followed by carbon. Almost 99% of the mass of the human body is made up of six elements: hydrogen (H), carbon (C), nitrogen (N), oxygen (O), calcium (Ca), and phosphorus (P). The next 0.75% is made up of the next five elements: potassium (K), sulfur (S), chlorine (Cl), sodium (Na), and magnesium (Mg). Only 17 elements are known for certain to be necessary to human life, with one additional element (fluorine) thought to be helpful for tooth enamel strength. A few more trace elements may play some role in the health of mammals. Boron and silicon are notably necessary for plants but have uncertain roles in animals. The elements aluminium and silicon, although very common in the earth's crust, are conspicuously rare in the human body.
Below is a periodic table highlighting nutritional elements.
See also
Natural abundance – isotopic abundance
List of data references for chemical elements
References
Footnotes
Notes
Notations
External links
List of elements in order of abundance in the Earth's crust (only correct for the twenty most common elements)
Cosmic abundance of the elements and nucleosynthesis
WebElements.com Lists of elemental abundances for the Universe, Sun, meteorites, Earth, ocean, streamwater, etc.
Astrochemistry
Astrophysics
Geochemistry
Geophysics
Properties of chemical elements | Abundance of the chemical elements | Physics,Chemistry,Astronomy | 4,393 |
76,673,018 | https://en.wikipedia.org/wiki/IRAS%2014348-1447 | IRAS 14348-1447 known as PGC 52270, are a pair of spiral galaxies located 1 billion light-years away in the constellation of Libra. The galaxy IRAS 14348-1447NE, is in the early process of merging with IRAS 14348-1447SW, causing gravity to pull stars from both galaxies and forming tidal tails. As the interaction takes place, molecular gas is swirled about and creating emission that is responsible for the galaxies' ultraluminous appearance.
IRAS 14348-1447 is classified as a Seyfert 1 galaxy and has an active galactic nucleus, indicating that activity in its supermassive black hole has awakened, possibly turning it into a quasar.
References
Luminous infrared galaxies
Interacting galaxies
Libra (constellation)
52270
IRAS catalogue objects
J14373831-1500239 | IRAS 14348-1447 | Astronomy | 178 |
27,994,073 | https://en.wikipedia.org/wiki/Plique-%C3%A0-jour | Plique-à-jour (French for "letting in daylight") is a vitreous enamelling technique where the enamel is applied in cells, similar to cloisonné, but with no backing in the final product, so light can shine through the transparent or translucent enamel. It is in effect a miniature version of stained-glass and is considered very challenging technically: high time consumption (up to 4 months per item), with a high failure rate. The technique is similar to that of cloisonné, but using a temporary backing that after firing is dissolved by acid or rubbed away. A different technique relies solely on surface tension, for smaller areas. In Japan the technique is known as shotai-jippo (shotai shippo), and is found from the 19th century on.
History
The technique was developed in the Byzantine Empire in the 6th century AD. Some examples of Byzantine plique-à-jour survive in Georgian icons. The technique of plique-à-jour was adopted by Kievan Rus' (a strong trading partner of Constantinople) along with other enamel techniques. Despite its complexity, plique-à-jour tableware (especially "kovsh" bowls) was used by its aristocracy. Russian masters significantly developed the plique-à-jour technique: in addition to cells cut in precious metal, they worked with cells made of silver wire. The plique-à-jour technique of Kievan Rus' was lost after the devastating Mongol invasion of the 13th century. Some surviving examples are exhibited in the Historical Museum in Moscow.
Western Europe adopted the plique-à-jour technique (cells cut in metal) of Byzantium. The term smalta clara ("clear enamel"), probably meaning plique-à-jour appears in 1295 in the inventory of Pope Boniface VIII and the French term itself appears in inventories from the 14th century onwards. Benvenuto Cellini (1500–1571) gives a full description of the process in his Treatises of Benvenuto Cellini on Gold-smithing and Sculpture of 1568. Pre-19th century pieces are extremely rare because of their "extreme fragility ... which increases greatly with their size", and the difficulty of the technique. Survivals "are almost exclusively small ornamental pieces". The outstanding early examples that survive are "the decorative insets in the early fifteenth-century Mérode Cup (Burgundian cup) at the Victoria and Albert Museum in London, a Swiss early sixteenth-century plique-à-jour enamel plaque representing the family of the Virgin Mary in the Metropolitan Museum of Art in New York, and the eight pinnacle points over the front of the eleventh-century Saint Stephen's Crown in Hungary". The technique was lost in both Western and Eastern Europe.
The technique was revived in the late 19th century movement of revivalist jewellery, and became especially popular in Russia and Scandinavia. Works by Pavel Ovchinikov, Ivan Khlebnikov, and some masters working for Faberge are real masterpieces of plique-à-jour. Russian masters predominately worked with tableware. Norwegian jewellers included David Andersen and J. Tostrup in Oslo, and Martin Hummer in Bergen. Art Nouveau artists such as René Lalique, Lucien Gaillard and other French and German artists predominantly used plique-à-jour in small jewellery, though the Victoria & Albert Museum has a tray of 1901 by Eugène Feuillâtre (1870–1916).
Currently plique-à-jour is not often used, because it is challenging technically and mainly because of breaks in transferring skills from one generation of jewellers to the next. However, some luxury houses do produce limited numbers of products in the plique-à-jour technique, for example Tiffany in jewellery, and Bulushoff in jewellery and tableware. Works in the shotai shippo technique are also known from China and Iran.
Techniques
There are four basic ways of creating plique-à-jour:
1. Filigree plique-à-jour ("Russian plique-à-jour"): This is a building up process whereby a planned design is interpreted using gold or silver wires which are worked over a metal form (e.g. a bowl). Wires are twisted or engraved, i.e. have additional micro patterns. The wires are soldered together. Enamels are ground and applied to each "cell" created by the metal wirework. The piece is fired in a kiln. This process of placing and firing the enamels is repeated until all cells are completely filled. Usually it takes up to 15–20 repeats.
2. Pierced plique-à-jour ("Western plique-à-jour"): A sheet of gold or silver is pierced and sawed, cutting out a desired design. This leaves empty spaces or "cells" to fill with enamel powders (ground glass).
3. Shotai shippo ("Japanese plique-à-jour"): A layer of flux (clear enamel) is fired over a copper form. Wires are fired onto the flux (similar to cloisonné) and the resulting areas are enameled in the colors of choice. When all the enameling is finished, the copper base is etched away leaving a translucent shell of plique-à-jour.
4. Cloisonné on mica: Cells in precious metal are covered with fixed mica, which is removed by abrasives after enameling.
Process for cloisonné plique-à-jour on mica
Sample process
Notes
References
Campbell, Marian. An Introduction to Medieval Enamels, 1983, HMSO for V&A Museum,
Ostoia, Vera K., "A Late Mediaeval Plique-à-Jour Enamel", The Metropolitan Museum of Art Bulletin, New Series, Vol. 4, No. 3 (Nov. 1945), pp. 78–80, JSTOR
External links
Artistic techniques
Vitreous enamel | Plique-à-jour | Chemistry | 1,246 |
43,064,767 | https://en.wikipedia.org/wiki/C/2013%20V5%20%28Oukaimeden%29 | C/2013 V5 (Oukaimeden) is a retrograde Oort cloud comet discovered on 12 November 2013 by Oukaimeden Observatory at an apparent magnitude of 19.4 using a reflecting telescope.
From 5 May 2014 until 18 July 2014 it had an elongation less than 30 degrees from the Sun. By late August 2014 it had brightened to apparent magnitude 8, making it a target for small telescopes and high-end binoculars for experienced observers. It crossed the celestial equator on 30 August 2014, becoming a southern hemisphere object. On 16 September 2014 the comet made its closest approach to Earth. The comet peaked around magnitude 6.2 in mid-September 2014 but only had an elongation of about 35 degrees from the Sun. On 20 September 2014 the comet was visible in STEREO HI-1B. The comet came to perihelion (closest approach to the Sun) on 28 September 2014.
C/2013 V5 is dynamically new. It came from the Oort cloud with a loosely bound chaotic orbit that was easily perturbed by galactic tides and passing stars. Before entering the planetary region (epoch 1950), C/2013 V5 had an orbital period of several million years. After leaving the planetary region (epoch 2050), it will have an orbital period of about 6000 years.
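For a sense of scale, Kepler's third law (P^2 = a^3 with P in years and a in astronomical units, for a body orbiting the Sun) converts the quoted ~6000-year post-encounter period into a semi-major axis; a quick sketch:
```python
# Kepler's third law for heliocentric orbits: P[yr]**2 = a[AU]**3.
period_yr = 6000.0            # post-2050-epoch period quoted above
a_au = period_yr ** (2 / 3)   # semi-major axis, roughly 330 AU
print(f"a = {a_au:.0f} AU")
```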
Infrared spectroscopy of the comet revealed that most of its volatile ices, with the exception of ammonia, are depleted. Spectroscopy also revealed that the relative abundances of ethane and methanol increased at the start of September 2014, suggesting that the ices that comprise the comet are heterogeneous.
Notes
References
External links
Activity level and perihelion distance (Bortle survival limit Jakub Cerny 11 August 2014)
C/2013 V5 Oukaimeden wide-field (Damian Peach 31 August 2014)
C/2013 V5 map for Aug 21 – Sept 5
Did we catch disintegration of comet Oukaimeden? (FRAM)
(C/2013 V5 magnitude)
Non-periodic comets
20131112
20140916
Comets in 2013
Comets in 2014
Oort cloud | C/2013 V5 (Oukaimeden) | Astronomy | 437 |
2,980,850 | https://en.wikipedia.org/wiki/ITU-R%20468%20noise%20weighting | ITU-R 468 (originally defined in CCIR recommendation 468-4, therefore formerly also known as CCIR weighting; sometimes referred to as CCIR-1k) is a standard relating to noise measurement, widely used when measuring noise in audio systems. The standard, now referred to as ITU-R BS.468-4, defines a weighting filter curve, together with a quasi-peak rectifier having special characteristics as defined by specified tone-burst tests. It is currently maintained by the International Telecommunication Union who took it over from the CCIR.
It is used especially in the UK, Europe, and former countries of the British Empire such as Australia and South Africa. It is less well known in the USA where A-weighting has always been used.
M-weighting is a closely related filter, an offset version of the same curve, without the quasi-peak detector.
Explanation
The A-weighting curve was based on the 40 phon equal-loudness contour derived initially by Fletcher and Munson (1933). Originally incorporated into an ANSI standard for sound level meters, A-weighting was intended for measurement of the audibility of sounds by themselves. It was never specifically intended for the measurement of the more random (near-white or pink) noise in electronic equipment, though it has been used for this purpose by most microphone manufacturers since the 1970s. The human ear responds quite differently to clicks and bursts of random noise, and it is this difference that gave rise to the CCIR-468 weighting curve (now supported as an ITU standard), which together with quasi-peak measurement (rather than the rms measurement used with A-weighting) became widely used by broadcasters throughout Britain, Europe, and former British Commonwealth countries, where engineers were heavily influenced by BBC test methods. Telephone companies worldwide have also used methods similar to ITU-R 468 weighting with quasi-peak measurement to describe objectionable interference induced in one telephone circuit by switching transients in another.
History
Original research
Developments in the 1960s, in particular the spread of FM broadcasting and the development of the compact audio cassette with Dolby-B Noise Reduction, alerted engineers to the need for a weighting curve that gave subjectively meaningful results on the typical random noise that limited the performance of broadcast circuits, equipment and radio circuits. A-weighting was not giving consistent results, especially on FM radio transmissions and Compact Cassette recording where preemphasis of high frequencies was resulting in increased noise readings that did not correlate with subjective effect. Early efforts to produce a better weighting curve led to a DIN standard that was adopted for European Hi-Fi equipment measurement for a while.
Experiments in the BBC led to BBC Research Department Report EL-17, The Assessment of Noise in Audio Frequency Circuits, in which experiments on numerous test subjects were reported, using a variety of noises ranging from clicks to tone-bursts to pink noise. Subjects were asked to compare these with a 1 kHz tone, and final scores were then compared with measured noise levels using various combinations of weighting filter and quasi-peak detector then in existence (such as those defined in a now discontinued German DIN standard). This led to the CCIR-468 standard which defined a new weighting curve and quasi-peak rectifier.
The origin of the current ITU-R 468 weighting curve can be traced to 1956. The 1968 BBC EL-17 report discusses several weighting curves, including one identified as D.P.B. which was chosen as superior to the alternatives: A.S.A, C.C.I.F and O.I.R.T. The report's graph of the DPB curve is identical to that of the ITU-R 468 curve, except that the latter extends to slightly lower and higher frequencies. The BBC report states that this curve was given in a "contribution by the D.B.P. (The Telephone Administration of the Federal German Republic) in the Red Book Vol. 1 1957 covering the first plenary assembly of the CCITT (Geneva 1956)". D.B.P. is Deutsche Bundespost, the German post office which provides telephone service in Germany as the GPO does in the UK. The BBC report states "this characteristic is based on subjective tests described by Belger." and cites a 1953 paper by E. Belger.
Dolby Laboratories took up the new CCIR-468 weighting for use in measuring noise on their noise reduction systems, both in cinema (Dolby A) and on cassette decks (Dolby B), where other methods of measurement were failing to show up the advantage of such noise reduction. Some Hi-Fi column writers took up 468 weighting enthusiastically, observing that it reflected the roughly 10 dB improvement in noise observed subjectively on cassette recordings when using Dolby B while other methods could indicate an actual worsening in some circumstances, because they did not sufficiently attenuate noise above 10 kHz.
Standards
CCIR Recommendation 468-1 was published soon after this report, and appears to have been based on the BBC work. Later versions up to CCIR 468-4 differed only in minor changes to permitted tolerances. This standard was then incorporated into many other national and international standards (IEC, BSI, JIS, ITU) and adopted widely as the standard method for measuring noise, in broadcasting, professional audio, and 'Hi-Fi' specifications throughout the 1970s. When the CCIR ceased to exist, the standard was officially taken over by the ITU-R (International Telecommunication Union). Current work on this standard occurs primarily in the maintenance of IEC 60268, the international standard for sound systems.
The CCIR curve differs greatly from A-weighting in the 5 to 8 kHz region where it peaks to +12.2 dB at 6.3 kHz, the region in which we appear to be extremely sensitive to noise. While it has been said (incorrectly) that the difference is due to a requirement for assessing noise intrusiveness in the presence of programme material, rather than just loudness, the BBC report makes clear the fact that this was not the basis of the experiments. The real reason for the difference probably relates to the way in which our ears analyse sounds in terms of spectral content along the cochlea. This behaves like a set of closely spaced filters with a roughly constant Q factor, that is, bandwidths proportional to their centre frequencies. High frequency hair cells would therefore be sensitive to a greater proportion of the total energy in noise than low frequency hair cells. Though hair-cell responses are not exactly constant Q, and matters are further complicated by the way in which the brain integrates adjacent hair-cell outputs, the resultant effect appears roughly as a tilt centred on 1 kHz imposed on the A-weighting.
Dependent on spectral content, 468-weighted measurements of noise are generally about 11 dB higher than A-weighted, and this is probably a factor in the recent trend away from 468-weighting in equipment specifications as cassette tape use declines.
It is important to realise that the 468 specification covers both weighted and 'unweighted' (using a 22 Hz to 22 kHz 18 dB/octave bandpass filter) measurement and that both use a very special quasi-peak rectifier with carefully devised dynamics (A-weighting uses RMS detection for no particular reason). Rather than having a simple 'integration time' this detector requires implementation with two cascaded 'peak followers' each with different attack time-constants carefully chosen to control the response to both single and repeating tone-bursts of various durations. This ensures that measurements on impulsive noise take proper account of our reduced hearing sensitivity to short bursts. This quasi-peak measurement is also called psophometric weighting.
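The standard specifies the detector dynamics indirectly, through the tone-burst tests summarised below, rather than by publishing time constants. The cascaded-follower structure itself can be sketched as follows (the time constants here are illustrative placeholders, not values from ITU-R 468):
```python
import numpy as np

def quasi_peak(x, fs, attack1=0.005, attack2=0.030, decay=0.5):
    """Two cascaded peak followers with different attack time constants,
    the structure (not the calibrated dynamics) behind ITU-R 468
    quasi-peak detection. Time constants in seconds are placeholders."""
    def follower(sig, t_attack, t_decay):
        a_up = 1.0 - np.exp(-1.0 / (t_attack * fs))  # fast-rise coefficient
        a_dn = 1.0 - np.exp(-1.0 / (t_decay * fs))   # slow-fall coefficient
        y, out = 0.0, np.empty_like(sig)
        for i, s in enumerate(sig):
            y += (a_up if s > y else a_dn) * (s - y)
            out[i] = y
        return out
    rectified = np.abs(np.asarray(x, dtype=float))   # full-wave rectification
    return follower(follower(rectified, attack1, decay), attack2, decay)
```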
This was once more important because outside broadcasts were carried over 'music circuits' that used telephone lines, with clicks from Strowger and other electromechanical telephone exchanges. It now finds fresh relevance in the measurement of noise on computer 'Audio Cards' which commonly suffer clicks as drives start and stop.
Present usage of 468-weighting
468-weighting is also used in weighted distortion measurement at 1 kHz. Weighting the distortion residue after removal of the fundamental emphasises high-order harmonics, but only up to 10 kHz or so where the ear's response falls off. This results in a single measurement (sometimes called distortion residue measurement) which has been claimed to correspond well with subjective effect even for power amplifiers where crossover distortion is known to be far more audible than normal THD (total harmonic distortion) measurements would suggest.
468-weighting is still demanded by the BBC and many other broadcasters, with increasing awareness of its existence and the fact that it is more valid on random noise where pure tones do not exist.
Often both A-weighted and 468-weighted figures are quoted for noise, especially in microphone specifications.
While not intended for this application, the 468 curve has also been used (offset to place the 0 dB point at 2 kHz rather than 1 kHz) as "M weighting" in standards such as ISO 21727 intended to gauge loudness or annoyance of cinema soundtracks. This application of the weighting curve does not include the quasi-peak detector specified in the ITU standard.
Summary of specification
Note: this is not the full definitive standard.
Weighting curve specification (weighted measurement)
The weighting curve is specified by both a circuit diagram of a weighting network and a table of amplitude responses.
Above is the ITU-R 468 Weighting Filter Circuit Diagram. The source and sink impedances are both 600 ohms (resistive), as shown in the diagram. The values are taken directly from the ITU-R 468 specification. Note that since this circuit is purely passive, it cannot create the additional 12.2 dB of gain required at the curve's peak; any results must be corrected by a factor of 8.1333, or +18.2 dB.
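The quoted correction factor and its decibel equivalent are related by the usual ratio-to-dB conversion; a one-line check:
```python
from math import log10
# Ratio-to-decibel conversion for the passive network's correction factor.
print(round(20 * log10(8.1333), 1))  # -> 18.2
```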
Table of amplitude responses:
The values of the amplitude response table slightly differ from those resulting from the circuit diagram, e.g. because of the finite resolution of the numerical values. In the standard it is said that the 33.06 nF capacitor may be adjusted or an active filter may be used.
Modeling the circuit above, with some calculus, gives the following formula for the amplitude response in dB at any frequency f (in hertz):
R_468(f) = 18.2 + 20 log10( 1.246332637532143×10^-4 · f / sqrt( h1(f)^2 + h2(f)^2 ) )
where
h1(f) = −4.737338981378384×10^-24 f^6 + 2.043828333606125×10^-15 f^4 − 1.363894795463638×10^-7 f^2 + 1
h2(f) = 1.306612257412824×10^-19 f^5 − 2.118150887518656×10^-11 f^3 + 5.559488023498642×10^-4 f
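Transcribed directly into Python, the formula can be used to check values against the amplitude response table; a minimal sketch:
```python
import math

def itu_r_468_weighting_db(f: float) -> float:
    """ITU-R 468 weighting curve gain in dB at frequency f (Hz),
    using the polynomial model of the passive network given above."""
    h1 = (-4.737338981378384e-24 * f**6
          + 2.043828333606125e-15 * f**4
          - 1.363894795463638e-7 * f**2 + 1.0)
    h2 = (1.306612257412824e-19 * f**5
          - 2.118150887518656e-11 * f**3
          + 5.559488023498642e-4 * f)
    r = 1.246332637532143e-4 * f / math.hypot(h1, h2)
    return 18.2 + 20 * math.log10(r)

# Spot checks: ~0 dB at 1 kHz and ~+12.2 dB at the 6.3 kHz peak.
for f in (100, 1000, 6300, 10000):
    print(f, round(itu_r_468_weighting_db(f), 2), "dB")
```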
Tone-burst response requirements
5 kHz single bursts:
Repetitive tone-burst response
5 ms, 5 kHz bursts at repetition rate:
Unweighted measurement
Uses 22 Hz HPF and 22 kHz LPF, 18 dB/octave or greater.
(Tables to be added)
See also
Noise weighting
Audio system measurements
References
Further reading
Audio Engineer's Reference Book, 2nd Ed 1999, edited Michael Talbot Smith, Focal Press
An Introduction to the Psychology of Hearing 5th ed, Brian C. J. Moore, Elsevier Press
External links
AES pro audio reference definition of ITU-R 468-weighting
Weighting Filter Set Circuit diagrams
Audio engineering
Noise
Sound
468 noise weighting | ITU-R 468 noise weighting | Engineering | 2,270 |
18,652,109 | https://en.wikipedia.org/wiki/Yeoh%20hyperelastic%20model | The Yeoh hyperelastic material model is a phenomenological model for the deformation of nearly incompressible, nonlinear elastic materials such as rubber. The model is based on Ronald Rivlin's observation that the elastic properties of rubber may be described using a strain energy density function which is a power series in the strain invariants of the Cauchy-Green deformation tensors. The Yeoh model for incompressible rubber is a function only of . For compressible rubbers, a dependence on is added on. Since a polynomial form of the strain energy density function is used but all the three invariants of the left Cauchy-Green deformation tensor are not, the Yeoh model is also called the reduced polynomial model.
Yeoh model for incompressible rubbers
Strain energy density function
The original model proposed by Yeoh had a cubic form with only I_1 dependence and is applicable to purely incompressible materials. The strain energy density for this model is written as
W = C_1 (I_1 − 3) + C_2 (I_1 − 3)^2 + C_3 (I_1 − 3)^3
where C_1, C_2, C_3 are material constants. The quantity 2C_1 can be interpreted as the initial shear modulus.
Today a slightly more generalized version of the Yeoh model is used. This model includes n terms and is written as
W = Σ_{i=1}^{n} C_i (I_1 − 3)^i
When n = 1 the Yeoh model reduces to the neo-Hookean model for incompressible materials.
For consistency with linear elasticity the Yeoh model has to satisfy the condition
2 (∂W/∂I_1)|_{I_1 = 3} = μ
where μ is the shear modulus of the material.
Now, at I_1 = 3,
∂W/∂I_1 = C_1
Therefore, the consistency condition for the Yeoh model is
2 C_1 = μ
Stress-deformation relations
The Cauchy stress for the incompressible Yeoh model is given by
σ = −p I + 2 (∂W/∂I_1) B
where p is an undetermined pressure (a Lagrange multiplier enforcing the incompressibility constraint), I is the identity tensor, and B is the left Cauchy-Green deformation tensor.
Uniaxial extension
For uniaxial extension in the n_1-direction, the principal stretches are λ_1 = λ, λ_2 = λ_3. From incompressibility λ_1 λ_2 λ_3 = 1, hence λ_2^2 = λ_3^2 = 1/λ.
Therefore,
I_1 = λ_1^2 + λ_2^2 + λ_3^2 = λ^2 + 2/λ
The left Cauchy-Green deformation tensor can then be expressed as
B = λ^2 (n_1 ⊗ n_1) + (1/λ) (n_2 ⊗ n_2 + n_3 ⊗ n_3)
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
σ_11 = −p + 2 λ^2 ∂W/∂I_1 ;  σ_22 = σ_33 = −p + (2/λ) ∂W/∂I_1
Since σ_22 = σ_33 = 0, we have p = (2/λ) ∂W/∂I_1. Therefore,
σ_11 = 2 (λ^2 − 1/λ) ∂W/∂I_1
The engineering strain is λ − 1. The engineering stress is
T_11 = σ_11/λ = 2 (λ − 1/λ^2) ∂W/∂I_1
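The uniaxial result is straightforward to evaluate numerically. Below is a minimal Python sketch; the material constants are illustrative values, not fitted to any real rubber:
```python
def yeoh_uniaxial_stress(stretch, c1, c2, c3):
    """Engineering (nominal) stress for uniaxial extension of an
    incompressible Yeoh material: T = 2*(lam - 1/lam**2)*dW/dI1,
    with dW/dI1 = c1 + 2*c2*(I1 - 3) + 3*c3*(I1 - 3)**2."""
    lam = stretch
    i1 = lam**2 + 2.0 / lam                       # first invariant, uniaxial
    dW_dI1 = c1 + 2 * c2 * (i1 - 3) + 3 * c3 * (i1 - 3)**2
    return 2.0 * (lam - 1.0 / lam**2) * dW_dI1

# Illustrative constants (MPa); 2*c1 equals the initial shear modulus.
c1, c2, c3 = 0.5, -0.05, 0.01
for lam in (1.0, 1.5, 2.0, 3.0):
    print(lam, round(yeoh_uniaxial_stress(lam, c1, c2, c3), 4), "MPa")
```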
Equibiaxial extension
For equibiaxial extension in the n_1 and n_2 directions, the principal stretches are λ_1 = λ_2 = λ. From incompressibility λ_1 λ_2 λ_3 = 1, hence λ_3 = 1/λ^2.
Therefore,
I_1 = 2 λ^2 + 1/λ^4
The left Cauchy-Green deformation tensor can then be expressed as
B = λ^2 (n_1 ⊗ n_1 + n_2 ⊗ n_2) + (1/λ^4) (n_3 ⊗ n_3)
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
σ_11 = σ_22 = −p + 2 λ^2 ∂W/∂I_1 ;  σ_33 = −p + (2/λ^4) ∂W/∂I_1
Since σ_33 = 0, we have p = (2/λ^4) ∂W/∂I_1. Therefore,
σ_11 = σ_22 = 2 (λ^2 − 1/λ^4) ∂W/∂I_1
The engineering strain is λ − 1. The engineering stress is
T_11 = σ_11/λ = 2 (λ − 1/λ^5) ∂W/∂I_1
Planar extension
Planar extension tests are carried out on thin specimens which are constrained from deforming in one direction. For planar extension in the n_1 direction with the n_3 direction constrained, the principal stretches are λ_1 = λ, λ_3 = 1. From incompressibility λ_1 λ_2 λ_3 = 1, hence λ_2 = 1/λ.
Therefore,
I_1 = λ^2 + 1/λ^2 + 1
The left Cauchy-Green deformation tensor can then be expressed as
B = λ^2 (n_1 ⊗ n_1) + (1/λ^2) (n_2 ⊗ n_2) + n_3 ⊗ n_3
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
σ_11 = −p + 2 λ^2 ∂W/∂I_1 ;  σ_22 = −p + (2/λ^2) ∂W/∂I_1 ;  σ_33 = −p + 2 ∂W/∂I_1
Since σ_22 = 0, we have p = (2/λ^2) ∂W/∂I_1. Therefore,
σ_11 = 2 (λ^2 − 1/λ^2) ∂W/∂I_1
The engineering strain is λ − 1. The engineering stress is
T_11 = σ_11/λ = 2 (λ − 1/λ^3) ∂W/∂I_1
Yeoh model for compressible rubbers
A version of the Yeoh model that includes a dependence on the volume ratio J = det(F) is used for compressible rubbers. The strain energy density function for this model is written as
W = Σ_{i=1}^{n} C_{i0} (Ī_1 − 3)^i + Σ_{k=1}^{n} C_{k1} (J − 1)^{2k}
where Ī_1 = J^{−2/3} I_1 is the distortional part of the first invariant, and C_{i0} and C_{k1} are material constants. The quantity C_{10} is interpreted as half the initial shear modulus, while C_{11} is interpreted as half the initial bulk modulus.
When n = 1 the compressible Yeoh model reduces to the neo-Hookean model for incompressible materials.
History
The model is named after Oon Hock Yeoh. Yeoh completed his doctoral studies under Graham Lake at the University of London. Yeoh held research positions at Freudenberg-NOK, MRPRA (England), Rubber Research Institute of Malaysia (Malaysia), University of Akron, GenCorp Research, and Lord Corporation. Yeoh won the 2004 Melvin Mooney Distinguished Technology Award from the ACS Rubber Division.
References
See also
Hyperelastic material
Strain energy density function
Mooney-Rivlin solid
Finite strain theory
Stress measures
Elasticity (physics)
Rubber properties
Solid mechanics
Continuum mechanics | Yeoh hyperelastic model | Physics,Materials_science | 791 |
18,945,682 | https://en.wikipedia.org/wiki/Dexter%20Kozen | Dexter Campbell Kozen (born December 20, 1951) is an American theoretical computer scientist. He is Professor Emeritus and Joseph Newton Pew, Jr. Professor in Engineering at Cornell University.
Career
Kozen received his BA in mathematics from Dartmouth College in 1974 and his PhD in computer science in 1977 from Cornell University, where he was advised by Juris Hartmanis on the thesis, Complexity of Finitely Presented Algebras.
He is known for his work at the intersection of logic and complexity. He is one of the fathers of dynamic logic and developed the version of the modal μ-calculus most used today. His work on Kleene algebra with tests was recognized with an Alonzo Church Award in 2022. Moreover, he has written several textbooks on the theory of computation, automata theory, dynamic logic, and algorithms.
Kozen was a guitarist, singer, and songwriter in the band "Harmful if Swallowed". He also holds the position of faculty advisor for Cornell's rugby football club.
Awards and honors
John G. Kemeny Prize in Computing, Dartmouth College (1974)
Outstanding Innovation Award, IBM Corporation (1974)
Fellow, John Simon Guggenheim Foundation (1991)
Stephen and Margery Russell Distinguished Teaching Award, College of Arts and Sciences, Cornell (2001)
ACM Fellow, for contributions to theoretical computer science (2003)
Fellow, AAAS (2008)
2001 LICS Test-of-Time Award for the paper "A completeness theorem for Kleene algebras and the algebra of regular events" (2011)
Faculty of the Year, ACSU (Association of Computer Science Undergraduates at Cornell) (2013)
Radboud Excellence professorship at the Radboud University Nijmegen (2014)
Fellow, EATCS (2015)
EATCS Distinguished Achievements Award (2016)
McDowell Award, for groundbreaking contributions to topics ranging from computational complexity to the analysis of algebraic computations to logics of programs and verification (2016)
Weiss Presidential Fellow (2018)
POPL Distinguished Paper Award for the paper "Guarded Kleene algebra with tests: verification of uninterpreted programs in nearly linear time" (2020)
Alonzo Church Award, for his fundamental work on developing the theory and applications of Kleene Algebra with Tests, an equational system for reasoning about iterative programs, published in the paper "Kleene algebra with tests" (2022)
OOPSLA Distinguished Paper Award for the paper "Formal abstractions for packet scheduling" (2023)
References
External links
Dexter Kozen's homepage
2003 fellows of the Association for Computing Machinery
Fellows of the American Association for the Advancement of Science
Living people
American theoretical computer scientists
Cornell University faculty
Cornell University alumni
Dartmouth College alumni
American computer scientists
Computer scientists
Theoretical computer scientists
1951 births
Academic staff of Radboud University Nijmegen | Dexter Kozen | Technology | 564 |
37,124,406 | https://en.wikipedia.org/wiki/Alicaforsen | Alicaforsen (trade name Camligo) is an antisense oligonucleotide therapeutic that targets the messenger RNA for the production of human ICAM-1 receptor and is being developed for the treatment of acute disease flares in moderate to severe Inflammatory Bowel Disease (IBD).
Alicaforsen inhibits production of ICAM-1, an important adhesion molecule involved in leukocyte migration and trafficking to the site of inflammation. To date, alicaforsen has been granted orphan drug designation and is prescribed as an unlicensed medicine, in accordance with international regulation, for the treatment of pouchitis and left-sided ulcerative colitis. Given positive results from an open-label trial and one case series in patients with chronic refractory pouchitis, the US FDA has agreed to a rolling submission for a license application for the treatment of pouchitis.
It was discovered by Ionis Pharmaceuticals (formerly Isis Pharmaceuticals) and in 2017 Atlantic Healthcare plc took over the development for chronic antibiotic refractory pouchitis in an enema formulation.
Pharmacology
ICAM-1 promotes the extravasation and activation of leukocytes (white blood cells), which is part of the inflammation process. Alicaforsen inhibits the activity of ICAM-1 protein by degrading mRNA coding for it via an RNase-H based mechanism.
It appears to have better efficacy as a topical medication than via systemic administration, a pattern typical of antisense drugs.
Clinical trials
In a Phase III randomised clinical trial with 299 patients with steroid dependent Crohn's disease, clinical response was correlated with drug exposure, with significant efficacy versus placebo being observed in the subgroup with greatest area under the curve, hence pharmacodynamic modelling suggests that alicaforsen (ISIS 2302) may be an effective therapy at adequate dose levels.
In another placebo-controlled study with 331 subjects with active Crohn's Disease, alicaforsen failed to demonstrate efficacy in any of its primary outcome measures. However there was a suggestion of a therapeutic response in a sub-population with elevated serum CRP (C-reactive Protein) levels >2mg/dl. The differential response highlights the confounding symptoms of a large subset of subjects whose needs are apparently not being met by the current clinical trial design. There is a need for more specific biomarkers that clearly can identify disease severity and subtypes of Crohn's disease and can be used to monitor disease response objectively.
Chemistry
Alicaforsen is a 20 unit phosphorothioate modified antisense oligonucleotide.
History
Alicaforsen was discovered and initially developed by Isis Pharmaceuticals, which changed its name to Ionis Pharmaceuticals in 2015.
Isis partnered on development of alicaforsen with Boehringer Ingelheim starting in 1995; that deal ended in 1999, after each of IV and subcutaneously delivered alicaforsen failed in phase III trials for Crohn's disease and development of those formulations in that indication was terminated; development for rheumatoid arthritis was terminated the same year and development in kidney transplant apparently ceased as well at that time.
The company reformulated alicaforsen as an enema and three small trials were published between 2004 and 2006, an open label trial in chronic pouchitis and two randomized trials in ulcerative colitis (UC); in the UC trials the drug missed its primary endpoint of improvements at 6 weeks, but showed a better effect in the longer term (between 18 and 30 weeks).
A post hoc meta-analysis of individual data of 200 patients from four phase 2 studies evaluating nightly alicaforsen 240 mg enema for six weeks showed that alicaforsen is effective in patients with active, distal, moderate to severe UC. The efficacy of alicaforsen was durable in these sub-groups, with significantly improved duration of clinical response with no maintenance therapy, suggesting a disease-modifying effect. This analysis suggests that alicaforsen enema could offer an effective, potentially durable response in moderate/severe distal active UC.
Alicaforsen was licensed to Atlantic Healthcare in 2007.
The use of the enema formulation of alicaforsen to treat pouchitis was granted orphan drug status in the US in 2008 and received the same in Europe in 2009. The enema formulation of alicaforsen for pouchitis also received FDA Fast Track designation. A subsequent multicentre Phase 3 clinical trial in 138 subjects with active, chronic, antibiotic-refractory primary idiopathic pouchitis showed a clinically relevant 34% remission in stool frequency, an 8% delta versus placebo. However, the co-primary endpoints (an adaptation of the Mayo Score assessing improvement in endoscopic remission and bowel frequency) were not met, perhaps due to the high discontinuation rate of 35%, which compromised the sample size available for statistical analysis. Disallowing background maintenance therapies contributed to this high dropout rate in this challenging, heterogeneous patient group. Further, the appropriateness of endoscopy as a primary end point is questionable.
Atlantic Healthcare has supplied over 350 courses of alicaforsen enema treatment on a named patient / compassionate use programme. Clinical outcomes published in case series have confirmed durable disease remission in patients with Ulcerative Colitis with no treatment related SAEs.
UC case series publications confirming prolonged duration of action:
Digestive Diseases Journal (Nov 2017): 10/12 patients with left-sided UC/proctitis responded to treatment, with a median durable response of 18 weeks
Gastrodagarna Congress, Sweden (May 2016): all 7 distal UC patients that completed treatment responded, and 57% remained in remission for 5–20 months
Irish Society of Gastroenterology (Nov 2014): remission achieved in 57% of patients with UC, with a durable response
No drug-related SAEs have been reported in any usage of alicaforsen enema
References
Further reading
Drugs acting on the gastrointestinal system and metabolism
Antisense RNA
Therapeutic gene modulation | Alicaforsen | Biology | 1,270 |
37,096,441 | https://en.wikipedia.org/wiki/Permit-to-work | Permit-to-work (PTW) refers to a management system procedure used to ensure that work is done safely and efficiently. It is used in hazardous industries, such as process and nuclear plants, usually in connection with maintenance work. It involves procedured request, review, authorization, documenting and, most importantly, de-conflicting of tasks to be carried out by front line workers. It ensures affected personnel are aware of the nature of the work and the hazards associated with it, all safety precautions have been put in place before starting the task, and the work has been completed correctly.
Implementation
Instructions or procedures are often adequate for most work activities, but some require extra care. A permit-to-work system is a formal system stating exactly what work is to be done, where, and when.
Permit-to-work is an essential part of control of work (CoW), a structured communication mechanism to reliably communicate information about hazards, control measures, and so on. During critical maintenance activities, good communication between management, supervisors, operators, and maintenance staff and contractors is essential.
Permit-to-work is also a core element of integrated safe system of work (ISSOW) systems, that along with risk assessment and isolation planning, enable as low as reasonably practicable (ALARP) reduction of unsafe activities in non-trivial work environments. Permit-to-work adherence is essential in process safety management.
Examples of high-risk jobs where a written permit-to-work procedure may need to be used include hot work (such as welding), confined space entry, cutting into pipes carrying hazardous substances (breaking containment), diving in the vicinity of intake openings, and work that requires electrical or mechanical isolation.
A permit-to-work is not a replacement for robust risk assessment, but can help provide context for the risk of the work to be done. Studies by the U.K. Health and Safety Executive have shown that the most significant cause of maintenance-related accidents in the U.K. chemical industry was a failure to implement effective permit-to-work systems. Common failures in control of work systems are a failure to follow the permit-to-work or isolation management procedures, risk assessments that are not suitable and sufficient to identify the risks, and/or the control measures and a combination of the two.
PTW is a means of coordinating different work activities to avoid conflicts. Its implementation usually involves the use of incompatible operations matrices to manage simultaneous operations (SIMOPS), thus preventing conflicting short-term activities of different workgroups that may present hazardous interference. For example, PTW can preclude one workgroup welding or grinding in the vicinity of another venting explosive or flammable gases.
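A permit system can encode such a matrix as a simple lookup. A minimal sketch, with invented activity names and incompatibility pairs (not drawn from any standard):
```python
# Hypothetical SIMOPS (simultaneous operations) conflict check: an
# incompatible-operations matrix stored as a set of unordered pairs.
INCOMPATIBLE = {
    frozenset({"hot_work", "gas_venting"}),
    frozenset({"hot_work", "breaking_containment"}),
    frozenset({"confined_space_entry", "gas_venting"}),
}

def conflicts(requested: str, active_permits: list[str]) -> list[str]:
    """Return the active permitted activities that clash with a new
    permit request; an empty list means the permit may be issued."""
    return [a for a in active_permits
            if frozenset({requested, a}) in INCOMPATIBLE]

# A permit for welding is refused while venting is underway nearby.
print(conflicts("hot_work", ["gas_venting", "scaffolding"]))  # ['gas_venting']
```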
A responsible person should assess the work and check safety at each stage. The people doing the job sign the permit to show that they understand the risks and precautions necessary. Ideally one person should be delegated with the responsibility of PTW authorization at any one time, and all workers at the facility should be fully aware of who that person is and when the responsibility is transferred.
A permit to work form typically contains these items:
The work to be done, the equipment to be used and the personnel involved.
Precautions to be taken when performing the task.
Other workgroups to be informed of work being performed in their area.
Authorisation for work to commence.
Duration that the permit is valid.
Method to extend the permit for an additional period.
Witness mechanism that all work has been complete and the worksite restored to a clean, safe condition.
Actions to be taken in an emergency.
Once a PTW has been issued to a workgroup, a lock-out tag-out system is used to restrict equipment state changes such as valve operations until the work specified in the permit is complete. Since the permit-to-work is the primary de-confliction tool, all non-routine work activities in high-risk environments should have a PTW.
Historically, permit-to-work has been paper-based. Electronic permit-to-work (ePTW) systems have been developed since the early 1980s as an alternative to paper permit-to-work methods.
Historical examples of manual permit to work failures
USS Guitarro, a submarine of the United States Navy, sank alongside when two independent work groups repeatedly flooded ballast tanks in an attempt to achieve conflicting objectives of zero trim and two degree bow-up trim; a result of failing to have a single person aware of and authorising all simultaneous activities by a permit to work system.
HMS Artemis, a submarine of the Royal Navy, sank alongside when activities of ballast management and watertight integrity were uncontrolled and without oversight.
Occidental Petroleum's Piper Alpha platform was destroyed on 6 July 1988 by explosion and fire, after a shift reinstated a system left partially disassembled by the previous shift. 167 men died in this incident due to a failure to properly communicate permit state at shift handover.
Examples of legislative and industry association guidelines
Australia: Commonwealth Law - Offshore Petroleum Safety Case.
United Kingdom: Health and Safety Executive - Permit to Work Systems.
United States: Occupational Safety and Health Administration - Process Safety Management.
European Industrial Gases Association: Work Permit Systems, Doc. 40/02/E.
References
Occupational safety and health
Petroleum production
Process safety | Permit-to-work | Chemistry,Engineering | 1,076 |
5,219,699 | https://en.wikipedia.org/wiki/Human%20Genome%20Project | The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It started in 1990 and was completed in 2003. It was the world's largest collaborative biological project. Planning for the project began in 1984 by the US government, and it officially launched in 1990. It was declared complete on 14 April 2003, and included about 92% of the genome. Level "complete genome" was achieved in May 2021, with only 0.3% of the bases covered by potential issues. The final gapless assembly was finished in January 2022.
Funding came from the US government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, working in the International Human Genome Sequencing Consortium (IHGSC).
The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome, of which there are more than three billion. The genome of any given individual is unique; mapping the human genome involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of the 23 human chromosome pairs (22 pairs of autosomes and a pair of sex chromosomes, known as allosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans.
History
The Human Genome Project was a publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 13 years. It was made practicable by the DNA sequencing methods published in 1977 by Frederick Sanger and, independently, by Allan Maxam and Walter Gilbert.
In May 1985, Robert Sinsheimer organized a workshop at the University of California, Santa Cruz, to discuss the feasibility of building a systematic reference genome using gene sequencing technologies. Gilbert wrote the first plan for what he called The Human Genome Institute on the plane ride home from the workshop. In March 1986, the Santa Fe Workshop was organized by Charles DeLisi and David Smith of the Department of Energy's Office of Health and Environmental Research (OHER). At the same time Renato Dulbecco, President of the Salk Institute for Biological Studies, first proposed the concept of whole genome sequencing in an essay in Science. The published work, titled "A Turning Point in Cancer Research: Sequencing the Human Genome", was shortened from the original proposal of using the sequence to understand the genetic basis of breast cancer. James Watson, one of the discoverers of the double helix shape of DNA in the 1950s, followed two months later with a workshop held at the Cold Spring Harbor Laboratory. Thus the idea for obtaining a reference sequence had three independent origins: Sinsheimer, Dulbecco and DeLisi. Ultimately it was the actions by DeLisi that launched the project.
The fact that the Santa Fe Workshop was motivated and supported by a federal agency opened a path, albeit a difficult and tortuous one, for converting the idea into public policy in the United States. In a memo to the Assistant Secretary for Energy Research Alvin Trivelpiece, then-Director of the OHER Charles DeLisi outlined a broad plan for the project. This started a long and complex chain of events that led to the approved reprogramming of funds that enabled the OHER to launch the project in 1986, and to recommend the first line item for the HGP, which was in President Reagan's 1988 budget submission, and ultimately approved by Congress. Of particular importance in congressional approval was the advocacy of New Mexico Senator Pete Domenici, whom DeLisi had befriended. Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.
Trivelpiece sought and obtained the approval of DeLisi's proposal from Deputy Secretary William Flynn Martin. In the spring of 1986, Trivelpiece briefed Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project, with the approval of Secretary John S. Herrington. This reprogramming was followed by a line-item budget of $13 million in the Reagan administration's 1987 budget submission to Congress, which subsequently passed both Houses. The project was planned to be completed within 15 years.
In 1990 the two major funding agencies, DOE and the National Institutes of Health, developed a memorandum of understanding to coordinate plans and set the clock for the initiation of the Project to 1990. At that time, David J. Galas was Director of the renamed "Office of Biological and Environmental Research" in the US Department of Energy's Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and Francis Collins succeeded Watson, assuming the role of overall Project Head as Director of the NIH National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade after that.
The $3 billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China, and other countries. The project ended up costing less than expected, at about $2.7 billion (equivalent to about $5 billion in 2021).
Two technologies enabled the project: gene mapping and DNA sequencing. The gene-mapping technique of restriction fragment length polymorphism (RFLP) arose from the search for the location of the breast cancer gene by Mark Skolnick of the University of Utah, which began in 1974. In seeking a linkage marker for the gene, Skolnick, in collaboration with David Botstein, Ray White and Ron Davis, conceived of a way to construct a genetic linkage map of the human genome. This enabled scientists to launch the larger human genome effort.
Because of widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as parallel advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by US President Bill Clinton and British Prime Minister Tony Blair on 26 June 2000). This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then-graduate student Jim Kent and his advisor David Haussler. Ongoing sequencing led to the announcement of the essentially complete genome on 14 April 2003, two years earlier than planned. In May 2006, another milestone was passed on the way to completion of the project when the sequence of the very last chromosome was published in Nature.
According to the NIH, a wide range of institutions, companies, and laboratories participated in the Human Genome Project.
State of completion
Notably the project was not able to sequence all of the DNA found in human cells; rather, the aim was to sequence only euchromatic regions of the nuclear genome, which make up 92.1% of the human genome. The remaining 7.9% exists in scattered heterochromatic regions such as those found in centromeres and telomeres. These regions by their nature are generally more difficult to sequence and so were not included as part of the project's original plans.
The Human Genome Project (HGP) was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published; the final sequence mapping of the human genome followed on 14 April 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on 27 May 2004, indicated that over 92% of sampling exceeded 99.99% accuracy, within the intended goal.
In March 2009, the Genome Reference Consortium (GRC) released a more accurate version of the human genome, but that still left more than 300 gaps, while 160 such gaps remained in 2015.
Though in May 2020 the GRC reported 79 "unresolved" gaps, accounting for as much as 5% of the human genome, months later, the application of new long-range sequencing techniques and a hydatidiform mole-derived cell line in which both copies of each chromosome are identical led to the first telomere-to-telomere, truly complete sequence of a human chromosome, the X chromosome. Similarly, an end-to-end complete sequence of human autosomal chromosome 8 followed several months later.
In 2021, it was reported that the Telomere-to-Telomere (T2T) consortium had filled in all of the gaps except five in repetitive regions of ribosomal DNA. Months later, those gaps had also been closed. The full sequence did not contain the Y chromosome (which triggers male development in the embryo), as it was absent from the cell line that served as the source for the DNA analysis. About 0.3% of the full sequence proved difficult to check for quality, and thus might have contained errors, which were being targeted for confirmation. In April 2022, the complete non-Y chromosome sequence was formally published, providing a view of much of the 8% of the genome left out by the HGP. In December 2022, a preprint article claimed that the sequencing of the remaining missing regions of the Y chromosome had been completed, thus finishing the sequencing of all 24 distinct human chromosomes. In August 2023 this preprint was finally published.
Applications and proposed benefits
The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can help researchers understand disease and support applications including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medications and more accurate prediction of their effects; advances in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry and bioprocessing; risk assessment; and bioarcheology, anthropology and the study of evolution.
The sequence of the DNA is stored in databases available to anyone on the Internet. The US National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz, and Ensembl present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's Law, a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate. This means that the speeds at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the Human Genome Project.
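As a rough illustration of the growth rate being described, assuming the common "doubling every two years" formulation of Moore's law (the figures are illustrative, not sequencing statistics):

```python
# Compound doubling every two years, the usual statement of Moore's law.
def growth_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

for years in (2, 10, 20):
    print(f"after {years:>2} years: {growth_factor(years):,.0f}x")
# after  2 years: 2x; after 10 years: 32x; after 20 years: 1,024x
```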
Techniques and analysis
The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on the inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce 2 or more gene products from the same locus.
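A toy sketch of exon skipping, one common form of alternative splicing, showing how even a few exons yield multiple transcripts; the exon labels are made up, and real splicing also involves alternative splice sites, retained introns and other mechanisms:

```python
from itertools import combinations

def exon_skipping_variants(exons):
    """All transcripts that keep exon order and retain the first and last
    exon, while any subset of internal exons may be skipped (a simplification)."""
    first, *internal, last = exons
    variants = []
    for r in range(len(internal) + 1):
        for kept in combinations(internal, r):
            variants.append([first, *kept, last])
    return variants

for v in exon_skipping_variants(["E1", "E2", "E3", "E4"]):
    print("-".join(v))
# 2 internal exons -> 2**2 = 4 possible transcripts; the count grows quickly
```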
Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of 2019 there is still only one "reference genome".
Findings
Key findings of the draft (2001) and complete (2004) genome sequences include:
There are approximately 22,300 protein-coding genes in human beings, the same range as in other mammals.
The human genome has significantly more segmental duplications (nearly identical, repeated sections of DNA) than had been previously suspected.
At the time when the draft sequence was published, fewer than 7% of protein families appeared to be vertebrate specific.
Accomplishments
The human genome has approximately 3.1 billion base pairs. The Human Genome Project was started in 1990 with the goal of sequencing and identifying all base pairs in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a megaproject.
The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were then ligated into a type of vector known as a "bacterial artificial chromosome", or BAC, derived from bacterial chromosomes that have been genetically engineered. The vectors containing the fragments can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The assembled 150,000-base-pair pieces were then stitched together to recreate the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are mapped to chromosomes before being selected for sequencing.
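A toy sketch of the shotgun step for a single clone, using a naive greedy overlap merge; real assemblers are far more sophisticated, and repetitive sequence can defeat this simple approach, which is one reason the hierarchical strategy mapped large clones before sequencing them:

```python
import random

def shotgun_reads(clone, read_len=10, n_reads=40):
    """Sample random overlapping reads from one BAC-sized insert."""
    starts = [random.randrange(len(clone) - read_len + 1) for _ in range(n_reads)]
    return [clone[s:s + read_len] for s in starts]

def overlap(a, b):
    """Longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    """Merge the best-overlapping pair of contigs until none overlap."""
    contigs = list(set(reads))           # duplicate reads collapse to one copy
    while len(contigs) > 1:
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(contigs)
                      for j, b in enumerate(contigs) if i != j)
        if k == 0:
            break                        # a coverage gap leaves separate contigs
        contigs[i] += contigs[j][k:]     # extend contig i with the tail of j
        contigs.pop(j)
    return contigs

random.seed(0)
toy_clone = "ATGGCGTACGTTAGCCGGATCCTAGGCATCGATTACCA"
print(greedy_assemble(shotgun_reads(toy_clone)))
# With enough coverage this usually recovers the clone; repeats or gaps
# can leave several contigs, mirroring the real difficulty of assembly.
```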
Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Wellcome Sanger Institute (then called The Sanger Centre) based at the Wellcome Genome Campus, Washington University in St. Louis, and Baylor College of Medicine.
The UN Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.
Public versus private approaches
In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300 million Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. While the Celera project focused its efforts on production sequencing and assembly of the human genome, the public HGP also funded mapping and sequencing of the worm, fly, and yeast genomes, databases, development of new technologies, bioinformatics and ethics programs, as well as polishing and assessment of the genome assembly. Both the Celera and public approaches spent roughly $250 million on the production sequencing effort. For sequence assembly, Celera made use of the publicly available maps in GenBank; although Celera was capable of generating such maps itself, their free availability was "beneficial" to the privately funded project.
Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing, which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes.
Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On 7 July 2000, the UCSC Genome Bioinformatics Group released the first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.
In March 2000 President Clinton, along with Prime Minister Tony Blair in a dual statement, urged that all researchers who wished to research the sequence should have "unencumbered access" to the genome sequence. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.
Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, bringing coverage to approximately 92% of the sequence.
Genome donors
In the International Human Genome Sequencing Consortium (IHGSC) public-sector HGP, researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones taken from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York, (code name RP11; the "RP" refers to Roswell Park Comprehensive Cancer Center).
HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each) – each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, because of quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.
Although the main sequencing phase of the HGP has been completed, studies of DNA variation continued in the International HapMap Project, whose goal was to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or "haps"). The DNA samples for the HapMap came from a total of 270 individuals; Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.
In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.
Developments
With the sequence in hand the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.
It is anticipated that detailed knowledge of the human genome will offer new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases, and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.
There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the internet, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its functions, its evolutionary relationships to other human genes (or to genes in mice, yeast, or fruit flies), possible detrimental mutations, interactions with other genes, the body tissues in which the gene is activated, and the diseases associated with it. Further, a deeper understanding of disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that might not have been possible otherwise.
Analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and their closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.
The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly cultivated bread wheat, great insight has been gained into the ways that domestication has impacted the evolution of the plant. It is being investigated which loci are most susceptible to manipulation, and how this plays out in evolutionary terms. Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future, which could yield healthier and disease-resistant wheat crops, among other things.
Ethical, legal, and social issues
At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised in regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns of most individuals was the fear that both employers and health insurance companies would refuse to hire individuals or refuse to provide insurance to people because of a health concern indicated by someone's genes. In 1996, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.
Along with identifying all of the approximately 20,000–25,000 genes in the human genome (estimated at between 80,000 and 140,000 at the start of the project), the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project. For that, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project. This budget started at approximately $1.57 million in the year 1990, but increased to approximately $18 million in the year 2014.
While the project may offer significant benefits to medicine and scientific research, some authors have emphasized the need to address the potential social consequences of mapping the human genome. Historian of science Hans-Jörg Rheinberger wrote that "the prospect of 'molecularizing' diseases and their possible cure will have a profound impact on what patients expect from medical help, and on a new generation of doctors' perception of illness."
In July 2024 an investigation by Undark Magazine and co-published with STAT News revealed for the first time several ethical lapses by the scientists spearheading the Human Genome Project. Chief among these was the use of roughly 75 percent of a single donor's DNA in the construction of the reference genome, despite informed consent forms, provided to each of the 20 anonymous donors participating, that indicated no more than 10 percent of any one donor's DNA would be used. About 10 percent of the reference genome belonged to one of the project's lead scientists, Pieter De Jong.
See also
References
Further reading
External links
National Human Genome Research Institute (NHGRI). NHGRI led the National Institutes of Health's contribution to the International Human Genome Project. This project, which had as its primary goal the sequencing of the three billion base pairs that make up the human genome, was successfully completed in April 2003.
Human Genome News. Published from 1989 to 2002 by the US Department of Energy, this newsletter was a major communications method for coordination of the Human Genome Project. Complete online archives are available.
The HGP information pages: the Department of Energy's portal to the international Human Genome Project, Microbial Genome Program, and Genomics:GTL systems biology for energy and environment
yourgenome.org: The Sanger Institute public information pages has general and detailed primers on DNA, genes, and genomes, the Human Genome Project and science spotlights.
Ensembl project, an automated annotation system and browser for the human genome
UCSC genome browser. This site contains the reference sequence and working draft assemblies for a large collection of genomes. It also provides a portal to the ENCODE project.
Nature magazine's human genome gateway, including the HGP's paper on the draft genome sequence
Wellcome Trust Human Genome website A free resource allowing you to explore the human genome, your health and your future.
Learning about the Human Genome. Part 1: Challenge to Science Educators. ERIC Digest.
Learning about the Human Genome. Part 2: Resources for Science Educators. ERIC Digest.
Patenting Life by Merrill Goozner
Prepared Statement of Craig Venter of Celera. Venter discusses Celera's progress in deciphering the human genome sequence and its relationship to healthcare and to the federally funded Human Genome Project.
Cracking the Code of Life Companion website to 2-hour NOVA program documenting the race to decode the genome, including the entire program hosted in 16 parts in either QuickTime or RealPlayer format.
Bioethics Research Library Numerous original documents at Georgetown University.
Works by or about David J. Galas at the Internet Archive
Project Gutenberg hosts e-texts for Human Genome Project, titled Human Genome Project, Chromosome Number # (# denotes 01–22, X and Y). This information is the raw sequence, released in November 2002; access to entry pages with download links is available through Human Genome Project, Chromosome Number 01 for Chromosome 1 sequentially to Human Genome Project, Y Chromosome for the Y Chromosome. Note that this sequence might not be considered definitive because of ongoing revisions and refinements. In addition to the chromosome files, there is a supplementary information file dated March 2004 which contains additional sequence information.
Biotechnology
Life sciences industry
Wellcome Trust
Projects established in 1990
1990 in biotechnology
1990 in biology
1990 in science
1990 establishments in the United States
2003 in biotechnology
James Watson
Bioinformatics | Human Genome Project | Engineering,Biology | 5,919 |
14,410,911 | https://en.wikipedia.org/wiki/Sodium%20maleonitriledithiolate | Sodium maleonitriledithiolate is the chemical compound described by the formula . The name refers to the cis compound, structurally related to maleonitrile (). Maleonitriledithiolate is often abbreviated mnt. It is a "dithiolene", i.e. a chelating alkene-1,2-dithiolate. It is a prototypical non-innocent ligand in coordination chemistry. Several complexes are known, such as .
The salt is synthesized by treating carbon disulfide with sodium cyanide to give the cyanodithioformate salt, which eliminates elemental sulfur in aqueous solution:
CS2 + NaCN → NaS2CCN
2 NaS2CCN → Na2[S2C2(CN)2] + 2 S
The compound was first described in 1958.
References
Thiolates
Alkene derivatives
Sodium compounds
Nitriles
Substances discovered in the 1950s | Sodium maleonitriledithiolate | Chemistry | 164 |
33,974,097 | https://en.wikipedia.org/wiki/Water%20storage | Water storage is a broad term referring to storage of both potable water for consumption, and non potable water for use in agriculture. In both developing countries and some developed countries found in tropical climates, there is a need to store potable drinking water during the dry season. In agriculture water storage, water is stored for later use in natural water sources, such as groundwater aquifers, soil water, natural wetlands, and small artificial ponds, tanks and reservoirs behind major dams. Storing water invites a host of potential issues regardless of that water's intended purpose, including contamination through organic and inorganic means.
Types
Groundwater
Groundwater is located beneath the ground surface in soil pore spaces and in the fractures of rock formations. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. There are two broad types of aquifers: An unconfined aquifer is where the surface is not restricted by impervious rocks, so the water table is at atmospheric pressure. In a confined aquifer, the upper surface of water is overlain by a layer of impervious rock, so the groundwater is stored under pressure.
Aquifers receive water in two ways: from precipitation that flows through the unsaturated zone of the soil profile, and from lakes and rivers. When a water table reaches capacity, that is, when the soil is completely saturated, the water table meets the ground surface and water discharges in the form of springs or seeps.
It is also possible to artificially recharge aquifers (using wells), for example through the use of Aquifer storage and recovery (ASR).
Soil moisture
Sub-surface water is stored in two zones: the saturated zone, or aquifer, and the pore space of unsaturated soil immediately below the ground surface. Soil moisture is the water held between soil particles in the root zone (rhizosphere) of plants, generally in the top 200 cm of soil. Water storage in the soil profile is extremely important for agriculture, especially in locations that rely on rainfall for cultivating plants. For example, in Africa rain-fed agriculture accounts for 95% of farmed land.
Wetlands
Wetlands span the surface/sub-surface interface, storing water at various times as groundwater, soil moisture and surface water. They are vital ecosystems that support wildlife and perform valuable ecosystem services, such as flood protection and water cleansing. They also provide livelihoods for millions of people who live within and around them. For example, the Inner Niger River Delta in the Western Sahel zone supports more than a million people who make their living as fishermen, cattle breeders or farmers, using the annual rise and fall of the river waters and its floodplains.
Wetlands act essentially as sponges that capture and slowly release large amounts of rain, snowmelt, groundwater and floodwater. Trees and other wetland vegetation slow the speed of flood water and distribute it more evenly across the wetland. The combination of increased water storage and slowed flows lowers flood heights and reduces erosion.
Ponds and tanks
Detention basins and water tanks can be defined as community-built and household water stores, filled by rainwater, groundwater infiltration or surface runoff. They are usually open, and therefore exposed to high levels of evaporation. They can be a great help to farmers in helping them overcome dry spells. However, they can promote vector-borne diseases such as malaria or schistosomiasis.
Detention basins are designed for temporary capture of flood waters and do not allow for permanent pooling of water and therefore do not make viable or reliable sources of water storage. Retention basins are similar to detention basins for flood control management, but are built for permanent pooling to control sediment and pollutants in the flood water.
Dams and reservoirs
In the past, large dams have often been the focus of water storage efforts. Many large dams and their reservoirs have brought significant social and economic benefits. For example, Egypt's Aswan High Dam, built in the 1960s, has protected the nation from drought and floods and supplies water used to irrigate some 15 million hectares. However, dams can also have great negative impacts. Because sediment is trapped by the Aswan High Dam, the Nile no longer delivers nutrients in large quantities to the floodplain. This has reduced soil fertility and increased the need for fertilizer. Water stored in dams and reservoirs can be treated for drinking water, but in the US, inadequate cost recovery and high water prices have in the past kept water supply dams from reaching their intended levels of operation. Due to the increased surface area of water that dams create, huge amounts of water are lost to evaporation, much more than would have been lost from the river that flowed in its place.
Planting basins
Rainfed agriculture constitutes 80% of global agriculture. Many of the 852 million poor people in the world live in parts of Asia and Africa that depend on rainfall to cultivate food crops. As the global population swells, more food will be needed, but climate variability is likely to make farming more difficult. A range of water stores could help farmers overcome dry spells that would otherwise cause their crops to fail. Field studies have shown the effectiveness of small-scale water storage. For example, small planting basins used to 'harvest' water in Zimbabwe have been shown to boost maize yields, whether rainfall is abundant or scarce. In Niger, they have led to three- to fourfold increases in millet yields.
Contamination
As of 2010, it was reported that nearly half of the global population depends on in-home water storage due to a lack of adequate water supply networks. Many of the in-home solutions have been improvised from available materials. It has been suggested that the lack of proper tools and equipment for construction leads to systems more likely to contain breaches, making them more susceptible to contamination from the environment and users.
Common factors
Roofing Materials: In certain parts of the world, uncoated lead flashing is used as a roofing material. In a study conducted in Australia in 2005–2006, researchers found that rainwater stored on-site was more acidic and contained elevated levels of heavy metals.
Hand-washing: When water is stored in tanks for consumption hand-washing can become a factor if the tank lacks a proper faucet system, or if there is a lack of education on the risks posed by using hands for water consumption. It was found in a 2009 study that water tanks in Tanzania contained 140-180% more fecal indicator bacteria than the water they were supplied with.
Fertilizer Runoff
Common risks
Fluorosis
Arsenic Poisoning
Bacteria and other organic contaminants
Decontamination
In the event that a water tank or tanker is contaminated, the following steps should be taken to reclaim the tank or tanker, if it is structurally intact. Additionally, it is recommended that tanks in continuous use are cleaned every five years, and for seasonal use, annually.
Clean: Drain the tank(er) of any remaining fluid, making sure to capture any hazardous fluid to be properly disposed of. Then scrub the inside of the tank with a detergent, and hot water mixture.
Disinfect: Fill the tank a quarter full with clean water. Sprinkle 80 grams of granular high-strength calcium hypochlorite (HSCH) into the tank for every 1000 litres of total tank capacity (a worked dosing example follows these steps). Fill the tank completely with clean water, close the lid and leave to stand for 24 hours.
Flush: After the chlorine solution has sat in the tank for 24 hours, flush out/empty the storage tank. Do not drain the tank into a septic system or adjacent surface water body. Continue flushing until the waste water is clear and no chlorine odor is detected.
Test: Once the storage tank has been thoroughly flushed, test for free chlorine residual to ensure it is non-detectable. Once a non-detectable chlorine residual has been obtained, collect operational & maintenance (O&M) total coliform bacteria water samples.
If the test results are negative for bacteria, the drinking water is considered safe to use and drink.
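A minimal sketch of the dosing arithmetic from the disinfection step above, assuming only the stated figure of 80 grams per 1000 litres:

```python
def hsch_dose_grams(tank_capacity_litres, dose_g_per_1000L=80):
    """Granular high-strength calcium hypochlorite for the disinfection step."""
    return tank_capacity_litres * dose_g_per_1000L / 1000

for capacity in (250, 1000, 5000):
    print(f"{capacity:>5} L tank -> {hsch_dose_grams(capacity):.0f} g HSCH")
# 250 L -> 20 g, 1000 L -> 80 g, 5000 L -> 400 g
```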
See also
Blue Nile
Blue roof
Drum (container)
Pumped-storage hydroelectricity
Volta River
Water resources
References
External links
International Water Management Institute (IWMI) : Home Page
Reservoirs
Lakes
Water supply infrastructure | Water storage | Environmental_science | 1,734 |
57,302,917 | https://en.wikipedia.org/wiki/Aluminium%20dihydrogenphosphate | Aluminium dihydrogenphosphate describes inorganic compounds with the formula Al(H2PO4)3.xH2O where x = 0 or 3. They are white solids. Upon heating these materials convert sequentially to a family of related polyphosphate salts including aluminium triphosphate (AlH2P3O10.2H2O), aluminium hexametaphosphate (Al2P6O18), and aluminium tetrametaphosphate (Al4(P4O12)3). Some of these materials are used for fireproofing and as ingredients in specialized glasses.
According to analysis by X-ray crystallography, the structure consists of a coordination polymer featuring octahedral Al3+ centers bridged by tetrahedral dihydrogen phosphate ligands. The dihydrogen phosphate ligands are bound to Al3+ as monodentate ligands.
References
Phosphates
Aluminium compounds | Aluminium dihydrogenphosphate | Chemistry | 198 |
13,415,486 | https://en.wikipedia.org/wiki/Bolaamphiphile | In chemistry, bolaamphiphiles (also known as bolaform surfactants, bolaphiles, or alpha-omega-type surfactants) are amphiphilic molecules that have hydrophilic groups at both ends of a sufficiently long hydrophobic hydrocarbon chain. Compared to single-headed amphiphiles, the introduction of a second head-group generally induces a higher solubility in water, an increase in the critical micelle concentration (CMC), and a decrease in aggregation number. The aggregate morphologies of bolaamphiphiles include spheres, cylinders, disks, and vesicles. Bolaamphiphiles are also known to form helical structures that can form monolayer microtubular self-assemblies.
References
Fuhrhop, J.-H.; Wang, T. "Bolaamphiphile". Chem. Rev. (2004), 104(6), 2901–2937.
Chen, Yuxia; Liu, Yan; Guo, Rong. "Aggregation behavior of an amino acid-derived bolaamphiphile and a conventional surfactant mixed system". Journal of Colloid and Interface Science (2009), 336(2), 766–772.
Yin, Shouchun; Wang, Chao; Song, Bo; Chen, Senlin; Wang, Zhiqiang. "Self-Organization of a Polymerizable Bolaamphiphile Bearing a Diacetylene Group and L-Aspartic Acid Group". Langmuir (2009), 25(16), 8968–8973.
Wang, H.; Li, M.; Xu, Z.; Qiao, W.; Li, Z. "Interfacial tension of unsymmetrical bolaamphiphile surfactant in surfactant/alkali/crude oil systems". Energy Sources, Part A: Recovery, Utilization, and Environmental Effects (2008), 30(16), 1442–1450.
Chen, Senlin; Song, Bo; Wang, Zhiqiang; Zhang, Xi. "Self-Organization of Bolaamphiphile Bearing Biphenyl Mesogen and Aspartic-Acid Headgroups". Journal of Physical Chemistry C (2008), 112(9), 3308–3313.
Qiu, Feng; Tang, Chengkang; Chen, Yongzhu. "Amyloid-like aggregation of designer bolaamphiphilic peptides: Effect of hydrophobic section and hydrophilic heads". Journal of Peptide Science (2017). doi:10.1002/psc.3062
Organic chemistry
Physical chemistry
Surfactants | Bolaamphiphile | Physics,Chemistry | 626 |
54,801,052 | https://en.wikipedia.org/wiki/Google%27s%20Ideological%20Echo%20Chamber | "Google's Ideological Echo Chamber", commonly referred to as the Google memo, is an internal memo, dated July 2017, by US-based Google engineer James Damore () about Google's culture and diversity policies. The memo and Google's subsequent firing of Damore in August 2017 became a subject of interest for the media. Damore's arguments received both praise and criticism from media outlets, scientists, academics and others.
The company fired Damore for violation of the company's code of conduct. Damore filed a complaint with the National Labor Relations Board, but later withdrew this complaint. A lawyer with the NLRB wrote that his firing did not violate Federal employment laws, as most employees in the United States can be fired at the employer's discretion. After withdrawing this complaint, Damore filed a class action lawsuit, retaining the services of attorney Harmeet Dhillon, alleging that Google was discriminating against conservatives, Whites, Asians, and men. Damore withdrew his claims in the lawsuit to pursue arbitration against Google.
Course of events
James Damore wrote the memo after a Google diversity program he attended solicited feedback. The memo was written on a flight to China. Calling the culture at Google an "ideological echo chamber", the memo states that, whereas discrimination exists, it is extreme to ascribe all disparities to oppression, and it is authoritarian to try to correct disparities through reverse discrimination. Instead, the memo argues that male to female disparities can be partly explained by biological differences. Alluding to the work of Simon Baron-Cohen, Damore said that those differences include women generally having a stronger interest in people rather than things, and tending to be more social, artistic, and prone to neuroticism (a higher-order personality trait). Damore's memorandum also suggests ways to adapt the tech workplace to those differences to increase women's representation and comfort, without resorting to discrimination.
The memo is dated July 2017 and was originally shared on an internal mailing list. It was later updated with a preface affirming the author's opposition to workplace sexism and stereotyping. On August 5, a version of the memo (omitting sources and graphs) was published by Gizmodo. The memo's publication resulted in controversy across social media, and in public criticism of the memo and its author from some Google employees. According to Wired, Google's internal forums showed some support for Damore, who said he received private thanks from employees who were afraid to come forward.
Damore was fired remotely by Google on August 7, 2017. The same day, prior to being fired, Damore filed a complaint with the National Labor Relations Board. The complaint is marked as "8(a)(1) Coercive Statements (Threats, Promises of Benefits, etc.)". A subsequent statement from Google asserted that its executives were unaware of the complaint when they fired Damore; it is illegal to fire an employee in retaliation for an NLRB complaint. Following his firing, Damore announced he would pursue legal action against Google.
Google's VP of Diversity, Danielle Brown, responded to the memo on August 8: "Part of building an open, inclusive environment means fostering a culture in which those with alternative views, including different political views, feel safe sharing their opinions. But that discourse needs to work alongside the principles of equal employment found in our Code of Conduct, policies, and anti-discrimination laws". Google's CEO Sundar Pichai wrote a note to Google employees, supporting Brown's formal response, and adding that much of the document was fair to debate. His explanation read "to suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK ... At the same time, there are co-workers who are questioning whether they can safely express their views in the workplace (especially those with a minority viewpoint). They too feel under threat, and that is also not OK." Anonymously-placed physical ads criticizing Pichai and Google for the firing were put up shortly after. Damore characterized the response by Google executives as having "shamed" him for his views. CNN described the fallout as "perhaps the biggest setback to what has been a foundational premise for [Google] employees: the freedom to speak up about anything and everything".
Damore gave interviews to Bloomberg Technology and to the YouTube channels of Canadian professor Jordan Peterson and podcaster Stefan Molyneux. Damore stated that he wanted his first interviews to be with media who were not hostile. He wrote an op-ed in The Wall Street Journal, detailing the history of the memo and Google's reaction, followed by interviews with Reason, Reddit's "IAmA" section, CNN, CNBC, Business Insider, Joe Rogan, Dave Rubin, Milo Yiannopoulos, and Ben Shapiro.
In response to the memo, Google's CEO planned an internal "town hall" meeting, fielding questions from employees on inclusivity. The meeting was cancelled a short time before it was due to start, over safety concerns as "our Dory questions appeared externally this afternoon, and on some websites, Googlers are now being named personally". Outlets found to be posting these names, with pictures, included 4chan, Breitbart News, and Milo Yiannopoulos' blog. Danielle Brown, Google's VP for diversity, was harassed online, and temporarily disabled her Twitter account.
Damore withdrew his complaint with the National Labor Relations Board before the board released any official findings. However, shortly before the withdrawal, an internal NLRB memo found that his firing was legal. The memo, which was not released publicly until February 2018, said that, whereas the law shielded him from being fired solely for criticizing Google, it did not protect discriminatory statements, that his memo's "statements regarding biological differences between the sexes were so harmful, discriminatory, and disruptive as to be unprotected", and that these "discriminatory statements", not his criticisms of Google, were the reason for his firing.
After withdrawing his complaint with the National Labor Relations Board, Damore and another ex-Google employee instead shifted focus to a class action lawsuit accusing Google of various forms of discrimination against conservatives, white people, and men. In October 2018, Damore and the other former Google employee dismissed their claims in the lawsuit, in order to pursue private arbitration against Google. Another engineer, Tim Chevalier, later filed a lawsuit against Google claiming that he was terminated in part for criticizing Damore's memo on Google's internal message boards.
Reactions
On the science
Responses from scientists who study gender and psychology reflected the controversial nature of the science Damore cited.
Some commentators in the academic community said Damore had understood the science correctly, such as Debra W. Soh, a columnist and psychologist; Lee Jussim, a professor of social psychology at Rutgers University; and Geoffrey Miller, an evolutionary psychology professor at University of New Mexico.
Others said that he had got the science wrong and relied on data that was suspect, outdated, irrelevant, or otherwise flawed; these included Gina Rippon, chair of cognitive brain imaging at Aston University; evolutionary biologist Suzanne Sadedin; and Rosalind Barnett, a psychologist at Brandeis University.
David P. Schmitt, former professor of psychology at Bradley University, said that while some sex differences are "small to moderate" in size and not relevant to occupational performance at Google, "culturally universal sex differences in personal values and certain cognitive abilities are a bit larger in size, and sex differences in occupational interests are quite large. It seems likely these culturally universal and biologically-linked sex differences play some role in the gendered hiring patterns of Google employees."
British journalist Angela Saini said that Damore failed to understand the research he cited, while American journalist John Horgan criticized the track record of evolutionary psychology and behavioral genetics. Columnist for The Guardian Owen Jones said that the memo was "guff dressed up with pseudo-scientific jargon" and cited a former Google employee saying that it failed to show the desired qualities of an engineer. Feminist journalist Louise Perry in her book The Case Against the Sexual Revolution comments on the affair saying that she is sympathetic to Damore and that the science he quotes is perfectly sound.
Alice H. Eagly, professor of psychology at Northwestern University, wrote "As a social scientist who's been conducting psychological research about sex and gender for almost 50 years, I agree that biological differences between the sexes likely are part of the reason we see fewer women than men in the ranks of Silicon Valley's tech workers. But the road between biology and employment is long and bumpy, and any causal connection does not rule out the relevance of nonbiological causes."
Impact on Google
Prior to his interview with Damore, Steve Kovach interviewed a female Google employee for Business Insider who said she objected to the memo, saying it lumped all women together, and that it came across as a personal attack. Business Insider also reported that several women were preparing to leave Google by interviewing for other jobs. Within Google, the memo sparked discussions among staff, some of whom believe they were disciplined or fired for their comments supporting diversity or for criticizing Damore's beliefs.
Concerns about sexism
In addition to Sheryl Sandberg, who linked to scientific counterarguments, a number of other women in technology condemned the memorandum, including Megan Smith, a former Google vice president. Susan Wojcicki, CEO of YouTube, wrote an editorial in which she described feeling devastated about the potential effect of the memo on young women. Laurie Leshin, president of the Worcester Polytechnic Institute, said that she was heartened by the backlash against the memo, which gave her hope that things were changing. Kara Swisher of Recode criticized the memo as sexist; Cynthia B. Lee, a computer science lecturer at Stanford University stated that there is ample evidence for bias in tech and that correcting this was more important than whether biological differences might account for a proportion of the numerical imbalances in Google and in technology.
Cathy Young in USA Today said that while the memo had legitimate points, it mischaracterized some sex differences as being universal, while Google's reaction to the memo was harmful since it fed into arguments that men are oppressed in modern workplaces. Libertarian author Megan McArdle, writing for Bloomberg View, said that Damore's claims about differing levels of interest between the sexes reflected her own experiences.
Christina Cauterucci of Slate drew parallels between arguments from Damore's memo and those of men's rights activists.
UC Law legal scholar Joan C. Williams expressed concerns about the prescriptive language used by some diversity training programs and recommended that diversity initiatives be phrased in problem-solving terms.
Employment law and free speech concerns
Yuki Noguchi, a reporter for NPR (National Public Radio), said that Damore's firing has raised questions regarding the limits of free speech in the workplace. First Amendment free speech protections usually do not extend into the workplace, as the First Amendment restricts government action but not the actions of private employers, and employers have a duty to protect their employees against a hostile work environment.
Several employment law experts interviewed by CNBC said that while Damore could challenge his firing in court, his potential case would be weak and Google would arguably have several defensible reasons for firing him; had Google not made a substantive response to his memo, that could have been cited as evidence of a "hostile work environment" in lawsuits against Google. Additionally, they argued that the memo could indicate that Damore would be unable to fairly assess or supervise the work of female colleagues.
Cultural commentary
Google's reaction to the memo and its firing of Damore were criticized by several cultural commentators, including Margaret Wente of The Globe and Mail, Erick Erickson, a conservative writer for RedState, David Brooks of The New York Times, Clive Crook of Bloomberg View, and moral philosopher Peter Singer, writing in New York Daily News.
Others objected to the intensity of the broader response to the memo in the media and across the internet, such as CNN's Kirsten Powers, Conor Friedersdorf of The Atlantic, and Jesse Singal, writing in The Boston Globe.
See also
Biological determinism
Cancel culture
Criticism of Google
Gender disparity in computing
Resistance to diversity efforts in organizations
Neuroscience of sex differences
Sex differences in psychology
Sexism in the technology industry
Women in computing
Women in STEM fields
References
Further reading
External links
The memo as PDF also hosted here
Fired for Truth - James Damore's official website
Google Video on Unconscious Bias - Making the Unconscious Conscious by Life at Google (YouTube, 4 minutes)
2017 controversies in the United States
2017 documents
Ideological Echo Chamber
Diversity in computing
Memoranda
Sexism in the United States
Women in computing
Works about Google
Computing-related controversies
Freedom of speech in the United States
| Google's Ideological Echo Chamber | Technology | 2,655 |
69,426,485 | https://en.wikipedia.org/wiki/Gliese%20367 | Gliese 367 (GJ 367, formally named Añañuca) is a red dwarf star in the constellation of Vela. It is suspected to be a variable star with an amplitude of 0.012 magnitudes and a period of 5.16 years. A stellar multiplicity survey in 2015 failed to detect any stellar companions to Gliese 367. It hosts three known exoplanets: Gliese 367 b, c, and d.
Gliese 367's age is unclear. Modelling using stellar isochrones gives a young age of less than 60 million years old, but its orbit around the Milky Way is highly eccentric, unusual for a young star. It may have been forced into such an orbit via a gravitational encounter. Spectroscopic evidence presented in a 2023 study supports an old age for Gliese 367.
Nomenclature
The designation Gliese 367 comes from the Gliese Catalogue of Nearby Stars. This was the 367th star listed in the first edition of the catalogue.
In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Chile, were announced in June 2023. Gliese 367 is named Añañuca and its innermost planet is named Tahay, after names for the endemic Chilean wildflowers Phycella cyrtanthoides and Calydorea xiphioides.
Planetary system
The star Gliese 367 was observed by TESS in February-March 2019, leading to its designation as an object of interest, and by January 2021 additional radial velocity data suggested the existence of a short-period planet, albeit with low certainty. The planet's existence was confirmed by both ground-based and satellite-based transit photometry data by December 2021.
Gliese 367 b takes just 7.7 hours to orbit its star, one of the shortest orbital periods of any known planet. Due to its close orbit, the exoplanet is bombarded with over 500 times the radiation that Earth receives from the Sun, and it is most likely tidally locked. Dayside temperatures are extreme; because of them, any atmosphere, along with any signs of life, would have boiled away. The core of GJ 367 b is likely composed of iron and nickel, similar to Mercury's core, and is extremely dense, making up most of the planet's mass.
Gliese 367 b is the smallest known exoplanet within 10 parsecs of the Solar System, and the second-least massive after Proxima Centauri d.
A direct imaging study in 2022 failed to find any additional planets or stellar companions around Gliese 367, ruling out companions at distances greater than 5 AU above a mass limit that depends on the assumed age of the system (5 billion versus 50 million years). The discovery of two additional super-Earth-mass planets, with periods of 11.5 and 34 days, was published in 2023.
See also
List of nearest exoplanets
References
M-type main-sequence stars
Planetary systems with three confirmed planets
Planetary transit variables
Vela (constellation)
047780
J09442986-4546351
0367
CD-45 5378
0731
Añañuca | Gliese 367 | Astronomy | 714 |
170,567 | https://en.wikipedia.org/wiki/Toxicity | Toxicity is the degree to which a chemical substance or a particular mixture of substances can damage an organism. Toxicity can refer to the effect on a whole organism, such as an animal, bacterium, or plant, as well as the effect on a substructure of the organism, such as a cell (cytotoxicity) or an organ such as the liver (hepatotoxicity). Sometimes the word is more or less synonymous with poisoning in everyday usage.
A central concept of toxicology is that the effects of a toxicant are dose-dependent; even water can lead to water intoxication when taken in too high a dose, whereas for even a very toxic substance such as snake venom there is a dose below which there is no detectable toxic effect. Toxicity is species-specific, making cross-species analysis problematic. Newer paradigms and metrics are evolving to bypass animal testing, while maintaining the concept of toxicity endpoints.
Etymology
In Ancient Greek medical literature, the adjective τοξικόν (meaning "toxic") was used to describe substances which had the ability of "causing death or serious debilitation or exhibiting symptoms of infection." The word draws its origins from the Greek noun τόξον (meaning "arc"), in reference to the use of bows and poisoned arrows as weapons.
English-speaking American culture has adopted several figurative usages for toxicity, often when describing harmful inter-personal relationships or character traits (e.g. "toxic masculinity").
History
Humans have a deeply rooted history of not only being aware of toxicity, but also of taking advantage of it as a tool. Archaeologists studying bone arrows from caves of Southern Africa have noted the likelihood that some, dating to 72,000–80,000 years ago, were dipped in specially prepared poisons to increase their lethality. Although limitations of scientific instrumentation make it difficult to prove concretely, archaeologists hypothesize that the practice of making poison arrows was widespread in cultures as early as the Paleolithic era. The San people of Southern Africa have preserved this practice into the modern era, with the knowledge to form complex mixtures from poisonous beetles and plant-derived extracts, yielding an arrow-tip product with a shelf life of several months to a year.
Types
There are generally five types of toxicities: chemical, biological, physical, radioactive and behavioural.
Disease-causing microorganisms and parasites are toxic in a broad sense but are generally called pathogens rather than toxicants. The biological toxicity of pathogens can be difficult to measure because the threshold dose may be a single organism. Theoretically one virus, bacterium or worm can reproduce to cause a serious infection. If a host has an intact immune system, the inherent toxicity of the organism is balanced by the host's response; the effective toxicity is then a combination. In some cases, e.g. cholera toxin, the disease is chiefly caused by a nonliving substance secreted by the organism, rather than the organism itself. Such nonliving biological toxicants are generally called toxins if produced by a microorganism, plant, or fungus, and venoms if produced by an animal.
Physical toxicants are substances that, due to their physical nature, interfere with biological processes. Examples include coal dust, asbestos fibres or finely divided silicon dioxide, all of which can ultimately be fatal if inhaled. Corrosive chemicals possess physical toxicity because they destroy tissues, but are not directly poisonous unless they interfere directly with biological activity. Water can act as a physical toxicant if taken in extremely high doses because the concentration of vital ions decreases dramatically with too much water in the body. Asphyxiant gases can be considered physical toxicants because they act by displacing oxygen in the environment but they are inert, not chemically toxic gases.
Radiation can have a toxic effect on organisms.
Behavioral toxicity refers to the undesirable effects of essentially therapeutic levels of medication clinically indicated for a given disorder (DiMascio, Soltys and Shader, 1970). These undesirable effects include anticholinergic effects, alpha-adrenergic blockade, and dopaminergic effects, among others.
Measuring
Toxicity can be measured by its effects on the target (organism, organ, tissue or cell). Because individuals typically have different levels of response to the same dose of a toxic substance, a population-level measure of toxicity is often used which relates the probability of an outcome for a given individual in a population. One such measure is the median lethal dose (LD50). When such data do not exist, estimates are made by comparison to known similar toxic things, or to similar exposures in similar organisms. Then, "safety factors" are added to account for uncertainties in data and evaluation processes. For example, if a dose of a toxic substance is safe for a laboratory rat, one might assume that one-tenth that dose would be safe for a human, allowing a safety factor of 10 to allow for interspecies differences between two mammals; if the data are from fish, one might use a factor of 100 to account for the greater difference between two chordate classes (fish and mammals). Similarly, an extra protection factor may be used for individuals believed to be more susceptible to toxic effects, such as in pregnancy or with certain diseases. Or, a newly synthesized and previously unstudied chemical that is believed to be very similar in effect to another compound could be assigned an additional protection factor of 10 to account for possible differences in effects that are probably much smaller. This approach is very approximate, but such protection factors are deliberately very conservative, and the method has been found to be useful in a wide variety of applications.
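As a rough sketch of how such protection factors compound (the NOAEL value and factor choices below are hypothetical illustrations, not regulatory guidance):

import static java.lang.System.out;

public class SafetyFactorSketch {
    public static void main(String[] args) {
        double noaelRat = 50.0;      // hypothetical no-observed-adverse-effect level
                                     // from a rat study, in mg per kg body weight per day
        double interspecies = 10.0;  // factor for rat-to-human extrapolation
        double intraspecies = 10.0;  // factor for variability among humans
                                     // (e.g., pregnancy or pre-existing disease)

        // Each source of uncertainty multiplies the overall protection factor,
        // so the estimated acceptable human dose shrinks accordingly.
        double referenceDose = noaelRat / (interspecies * intraspecies);
        out.println("Reference dose: " + referenceDose + " mg/kg/day");  // prints 0.5
    }
}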
Assessing all aspects of the toxicity of cancer-causing agents involves additional issues, since it is not certain if there is a minimal effective dose for carcinogens, or whether the risk is just too small to see. In addition, it is possible that a single cell transformed into a cancer cell is all it takes to develop the full effect (the "one hit" theory).
It is more difficult to determine the toxicity of chemical mixtures than a pure chemical because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline, cigarette smoke, and industrial waste. Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.
The preclinical toxicity testing on various biological systems reveals the species-, organ- and dose-specific toxic effects of an investigational product. The toxicity of substances can be observed by (a) studying accidental exposures to a substance, (b) in vitro studies using cells/cell lines, or (c) in vivo exposure in experimental animals. Toxicity tests are mostly used to examine specific adverse events or specific endpoints such as cancer, cardiotoxicity, and skin/eye irritation. Toxicity testing also helps calculate the No Observed Adverse Effect Level (NOAEL) dose and is helpful for clinical studies.
Classification
For substances to be regulated and handled appropriately they must be properly classified and labelled. Classification is determined by approved testing measures or calculations and has determined cut-off levels set by governments and scientists (for example, no-observed-adverse-effect levels, threshold limit values, and tolerable daily intake levels). Pesticides provide the example of well-established toxicity class systems and toxicity labels. While currently many countries have different regulations regarding the types of tests, numbers of tests and cut-off levels, the implementation of the Globally Harmonized System has begun unifying these countries.
Global classification looks at three areas: physical hazards (explosions and pyrotechnics), health hazards, and environmental hazards.
Health hazards
Health hazards cover the types of toxicities in which substances may cause lethality to the entire body, lethality to specific organs, major or minor damage, or cancer. These are globally accepted definitions of what toxicity is; anything falling outside of a definition cannot be classified as that type of toxicant.
Acute toxicity
Acute toxicity looks at lethal effects following oral, dermal or inhalation exposure. It is split into five categories of severity, where Category 1 requires the least amount of exposure to be lethal and Category 5 requires the most. The GHS sets upper dose limits for each category and route of exposure; where values are left undefined, they are expected to be roughly equivalent to the Category 5 values for oral and dermal administration.
Other methods of exposure and severity
Skin corrosion and irritation are determined through a skin patch test analysis, similar to an allergic inflammation patch test. This examines the severity of the damage done; when it is incurred and how long it remains; whether it is reversible and how many test subjects were affected.
For a substance to be classified as corrosive to skin, the damage must penetrate through the epidermis into the dermis within four hours of application and must not reverse within 14 days. Skin irritation shows damage less severe than corrosion if: the damage occurs within 72 hours of application; or for three consecutive days after application within a 14-day period; or causes inflammation which lasts for 14 days in two test subjects. Mild skin irritation is minor damage (less severe than irritation) within 72 hours of application or for three consecutive days after application.
Serious eye damage involves tissue damage or degradation of vision which does not fully reverse in 21 days. Eye irritation involves changes to the eye which do fully reverse within 21 days.
Other categories
Respiratory sensitizers cause breathing hypersensitivity when the substance is inhaled.
A substance which is a skin sensitizer causes an allergic response from a dermal application.
Carcinogens induce cancer, or increase the likelihood of cancer occurring.
Neurotoxicity is a form of toxicity in which a biological, chemical, or physical agent produces an adverse effect on the structure or function of the central or peripheral nervous system. It occurs when exposure to a substance – specifically, a neurotoxin or neurotoxicant – alters the normal activity of the nervous system in such a way as to cause permanent or reversible damage to nervous tissue.
Reproductively toxic substances cause adverse effects in either sexual function or fertility to either a parent or the offspring.
Specific-target organ toxins damage only specific organs.
Aspiration hazards are solids or liquids which can cause damage through inhalation.
Environmental hazards
An environmental hazard can be defined as any condition, process, or state adversely affecting the environment. These hazards can be physical or chemical and may be present in air, water, and/or soil. They can cause extensive harm to humans and other organisms within an ecosystem.
Common types of environmental hazards
Water: detergents, fertilizer, raw sewage, prescription medication, pesticides, herbicides, heavy metals, PCBs
Soil: heavy metals, herbicides, pesticides, PCBs
Air: particulate matter, carbon monoxide, sulfur dioxide, nitrogen dioxide, asbestos, ground-level ozone, lead (from aircraft fuel, mining, and industrial processes)
The EPA maintains a list of priority pollutants for testing and regulation.
Occupational hazards
Workers in various occupations may be at a greater level of risk for several types of toxicity, including neurotoxicity. The expression "Mad as a hatter" and the "Mad Hatter" of the book Alice in Wonderland derive from the known occupational toxicity of hatters who used a toxic chemical for controlling the shape of hats. Exposure to chemicals in the workplace environment may be required for evaluation by industrial hygiene professionals.
Hazards for small businesses
Hazards from medical waste and prescription disposal
Hazards in the arts
Hazards in the arts have been an issue for artists for centuries, even though the toxicity of their tools, methods, and materials was not always adequately realized. Lead and cadmium, among other toxic elements, were often incorporated into the names of artist's oil paints and pigments, for example, "lead white" and "cadmium red".
20th-century printmakers and other artists began to be aware of the toxic substances, toxic techniques, and toxic fumes in glues, painting mediums, pigments, and solvents, many of which in their labelling gave no indication of their toxicity. An example was the use of xylol for cleaning silk screens. Painters began to notice the dangers of breathing painting mediums and thinners such as turpentine. Aware of toxicants in studios and workshops, in 1998 printmaker Keith Howard published Non-Toxic Intaglio Printmaking which detailed twelve innovative Intaglio-type printmaking techniques including photo etching, digital imaging, acrylic-resist hand-etching methods, and introducing a new method of non-toxic lithography.
Mapping environmental hazards
There are many environmental health mapping tools. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund programs. TOXMAP is a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Aquatic toxicity
Aquatic toxicity testing subjects key indicator species of fish or crustacea to certain concentrations of a substance in their environment to determine the lethality level. Fish are exposed for 96 hours while crustacea are exposed for 48 hours. While GHS does not define toxicity past 100 mg/L, the EPA currently lists aquatic toxicity as "practically non-toxic" in concentrations greater than 100 ppm.
Note: A Category 4 is established for chronic exposure; it simply contains any toxic substance which is mostly insoluble, or for which no acute toxicity data exist.
Factors influencing toxicity
Toxicity of a substance can be affected by many different factors, such as the pathway of administration (whether the toxicant is applied to the skin, ingested, inhaled, injected), the time of exposure (a brief encounter or long term), the number of exposures (a single dose or multiple doses over time), the physical form of the toxicant (solid, liquid, gas), the concentration of the substance, and in the case of gases, the partial pressure (at high ambient pressure, partial pressure will increase for a given concentration as a gas fraction), the genetic makeup of an individual, an individual's overall health, and many others. Several of the terms used to describe these factors have been included here.
Acute exposure: a single exposure to a toxic substance which may result in severe biological harm or death; acute exposures are usually characterized as lasting no longer than a day.
Chronic exposure: continuous exposure to a toxicant over an extended period of time, often measured in months or years; it can cause irreversible side effects.
Alternatives to dose-response framework
Considering the limitations of the dose-response concept, a novel Drug Toxicity Index (DTI) has recently been proposed. The DTI redefines drug toxicity, identifies hepatotoxic drugs, gives mechanistic insights, predicts clinical outcomes and has potential as a screening tool.
See also
Agency for Toxic Substances and Disease Registry (ATSDR)
Biological activity
Biological warfare
California Proposition 65 (1986)
Carcinogen
Drunkenness
Indicative limit value
List of highly toxic gases
Material safety data sheet (MSDS)
Mutagen
Hepatotoxicity
Nephrotoxicity
Neurotoxicity
Ototoxicity
Paracelsus
Physiologically-based pharmacokinetic modelling
Poison
Reference dose
Registry of Toxic Effects of Chemical Substances (RTECS) – toxicity database
Soil contamination
Teratogen
Toxic tort
Toxication
Toxicophore
Toxin
Toxica, a disambiguation page
References
External links
Agency for Toxic Substances and Disease Registry
Whole Effluent, Aquatic Toxicity Testing FAQ
TOXMAP Environmental Health e-Maps from the United States National Library of Medicine
Toxseek: meta-search engine in toxicology and environmental health
Pharmacology
Toxicology
Chemical hazards | Toxicity | Chemistry,Environmental_science | 3,308 |
331,921 | https://en.wikipedia.org/wiki/Common%20name | In biology, a common name of a taxon or organism (also known as a vernacular name, English name, colloquial name, country name, popular name, or farmer's name) is a name that is based on the normal language of everyday life, and is often contrasted with the scientific name for the same organism, which is often based in Latin. A common name is sometimes frequently used, but that is not always the case.
In chemistry, IUPAC defines a common name as one that, although it unambiguously defines a chemical, does not follow the current systematic naming convention, such as acetone, systematically 2-propanone, while a vernacular name describes one used in a lab, trade or industry that does not unambiguously describe a single chemical, such as copper sulfate, which may refer to either copper(I) sulfate or copper(II) sulfate.
Sometimes common names are created by authorities on one particular subject, in an attempt to make it possible for members of the general public (including such interested parties as fishermen, farmers, etc.) to be able to refer to one particular species of organism without needing to be able to memorise or pronounce the scientific name. Creating an "official" list of common names can also be an attempt to standardize the use of common names, which can sometimes vary a great deal between one part of a country and another, as well as between one country and another country, even where the same language is spoken in both places.
Use as part of folk taxonomy
A common name intrinsically plays a part in a classification of objects, typically an incomplete and informal classification, in which some names are degenerate examples in that they are unique and lack reference to any other name, as is the case with say, ginkgo, okapi, and ratel. Folk taxonomy, which is a classification of objects using common names, has no formal rules and need not be consistent or logical in its assignment of names, so that say, not all flies are called flies (for example Braulidae, the so-called "bee lice") and not every animal called a fly is indeed a fly (such as dragonflies and mayflies). In contrast, scientific or biological nomenclature is a global system that attempts to denote particular organisms or taxa uniquely and definitively, on the assumption that such organisms or taxa are well-defined and generally also have well-defined interrelationships; accordingly the ICZN has formal rules for biological nomenclature and convenes periodic international meetings to further that purpose.
Common names and the binomial system
The form of scientific names for organisms, called binomial nomenclature, is superficially similar to the noun-adjective form of vernacular names or common names which were used by non-modern cultures. A collective name such as owl was made more precise by the addition of an adjective such as screech. Linnaeus himself published a flora of his homeland Sweden, Flora Svecica (1745), and in this, he recorded the Swedish common names, region by region, as well as the scientific names. The Swedish common names were all binomials (e.g. plant no. 84 Råg-losta and plant no. 85 Ren-losta); the vernacular binomial system thus preceded his scientific binomial system.
Linnaean authority William T. Stearn said:
Geographic range of use
The geographic range over which a particularly common name is used varies; some common names have a very local application, while others are virtually universal within a particular language. Some such names even apply across ranges of languages; the word for cat, for instance, is easily recognizable in most Germanic and many Romance languages. Many vernacular names, however, are restricted to a single country and colloquial names to local districts.
Some languages also have more than one common name for the same animal. For example, in Irish, there are many terms that are considered outdated but still well-known for their somewhat humorous and poetic descriptions of animals.
Constraints and problems
Common names are used in the writings of both professionals and laymen. Lay people sometimes object to the use of scientific names over common names, but the use of scientific names can be defended, as it is in these remarks from a book on marine fish:
Because common names often have a very local distribution, the same fish in a single area may have several common names.
Because of ignorance of relevant biological facts among the lay public, a single species of fish may be called by several common names, because individuals in the species differ in appearance depending on their maturity, gender, or can vary in appearance as a morphological response to their natural surroundings, i.e. ecophenotypic variation.
In contrast to common names, formal taxonomic names imply biological relationships between similarly named creatures.
Because of incidental events, contact with other languages, or simple confusion, common names in a given region will sometimes change with time.
In a book that lists over 1200 species of fishes more than half have no widely recognised common name; they either are too nondescript or too rarely seen to have earned any widely accepted common name.
Conversely, a single common name often applies to multiple species of fishes. The lay public might simply not recognise or care about subtle differences in appearance between only very distantly related species.
Many species that are rare, or lack economic importance, do not have a common name.
Coining common names
In scientific binomial nomenclature, names commonly are derived from classical or modern Latin or Greek or Latinised forms of vernacular words or coinages; such names generally are difficult for laymen to learn, remember, and pronounce and so, in such books as field guides, biologists commonly publish lists of coined common names. Many examples of such common names simply are attempts to translate the scientific name into English or some other vernacular. Such translation may be confusing in itself, or confusingly inaccurate, for example, gratiosus does not mean "gracile" and gracilis does not mean "graceful".
The practice of coining common names has long been discouraged; de Candolle's Laws of Botanical Nomenclature, 1868, the non-binding recommendations that form the basis of the modern (now binding) International Code of Nomenclature for algae, fungi, and plants contains the following:
Various bodies and the authors of many technical and semi-technical books do not simply adapt existing common names for various organisms; they try to coin (and put into common use) comprehensive, useful, authoritative, and standardised lists of new names. The purpose typically is:
to create names from scratch where no common names exist
to impose a particular choice of name where there is more than one common name
to improve existing common names
to replace them with names that conform more to the relatedness of the organisms
Other attempts to reconcile differences between widely separated regions, traditions, and languages, by arbitrarily imposing nomenclature, often reflect narrow perspectives and have unfortunate outcomes. For example, members of the genus Burhinus occur in Australia, Southern Africa, Eurasia, and South America. A recent trend in field manuals and bird lists is to use the name "thick-knee" for members of the genus. This is in spite of the fact that the majority of the species occur in non-English-speaking regions and have various common names, not always English. For example, "Dikkop" is the centuries-old South African vernacular name for their two local species: Burhinus capensis is the Cape dikkop (or "gewone dikkop", not to mention the presumably much older Zulu name "umBangaqhwa"); Burhinus vermiculatus is the "water dikkop". The thick joints in question are not even, in fact, the birds' knees, but the intertarsal joints—in lay terms the ankles. Furthermore, not all species in the genus have "thick knees", so the thickness of the "knees" of some species is not of clearly descriptive significance. The family Burhinidae has members that have various common names even in English, including "stone curlews", so the choice of the name "thick-knees" is not easy to defend but is a clear illustration of the hazards of the facile coinage of terminology.
Lists that include common names
Lists of general interest
Plants
Plant by common name
Garden plants
Culinary herbs and spices
Poisonous plants
Plants in the Bible
Vegetables
Useful plants
Animals
Birds by region
Mammals by region
List of fish common names
Plants and animals
Invasive species
Collective nouns
For collective nouns for various subjects, see a list of collective nouns (e.g. a flock of sheep, pack of wolves).
Official lists
Some organizations have created official lists of common names, or guidelines for creating common names, hoping to standardize the use of common names.
For example, the Australian Fish Names List or AFNS was compiled through a process involving work by taxonomic and seafood industry experts, drafted using the CAAB (Codes for Australian Aquatic Biota) taxon management system of the CSIRO, and including input through public and industry consultations by the Australian Fish Names Committee (AFNC). The AFNS has been an official Australian Standard since July 2007 and has existed in draft form (The Australian Fish Names List) since 2001.
Seafood Services Australia (SSA) serves as the Secretariat for the AFNC. SSA is an accredited Standards Australia (Australia's peak non-government standards development organisation) Standards Development Organisation.
The Entomological Society of America maintains a database of official common names of insects, and proposals for new entries must be submitted and reviewed by a formal committee before being added to the listing.
Efforts to standardize English names for the amphibians and reptiles of North America (north of Mexico) began in the mid-1950s. The dynamic nature of taxonomy necessitates periodical updates and changes in the nomenclature of both scientific and common names. The Society for the Study of Amphibians and Reptiles (SSAR) published an updated list in 1978, largely following the previous established examples, and subsequently published eight revised editions ending in 2017. More recently the SSAR switched to an online version with a searchable database. Standardized names for the amphibians and reptiles of Mexico in Spanish and English were first published in 1994, with a revised and updated list published in 2008.
A set of guidelines for the creation of English names for birds was published in The Auk in 1978. It gave rise to Birds of the World: Recommended English Names and its Spanish and French companions.
The Academy of the Hebrew Language publish from time to time short dictionaries of common name in Hebrew for species that occur in Israel or surrounding countries e.g. for Reptilia in 1938, Osteichthyes in 2012, and Odonata in 2015.
See also
Folk taxonomy
List of historical common names
Scientific terminology
Category:Plant common names
Specific name (zoology)
References
Citations
Sources
Stearn, William T. (1959). "The Background of Linnaeus's Contributions to the Nomenclature and Methods of Systematic Biology". Systematic Zoology 8: 4–22.
External links
Plant names
Multilingual, Multiscript Plant Name Database
The use of common names
Chemical Names of Common Substances
Plantas medicinales / Medicinal plants (database)
Biological nomenclature
Common names of organisms
Flora without expected TNC conservation status | Common name | Biology | 2,315 |
894,995 | https://en.wikipedia.org/wiki/Jan%20Ingenhousz | Jan Ingenhousz FRS (8 December 1730 – 7 September 1799) was a Dutch-British physiologist, biologist and chemist.
He is best known for discovering photosynthesis by showing that light is essential to the process by which green plants absorb carbon dioxide and release oxygen. He also discovered that plants, like animals, have cellular respiration. In his lifetime he was known for successfully inoculating the members of the Habsburg family in Vienna against smallpox in 1768 and subsequently being the private counsellor and personal physician to the Austrian Empress Maria Theresa.
Early life
He was born into the patrician Ingen Housz family in Breda in Staats-Brabant in the Dutch Republic. From the age of 16, Ingenhousz studied medicine at the University of Leuven (the Protestant universities were not then open to Catholics like himself), where he obtained his MD in 1753. He studied for two more years at the University of Leiden, where he attended lectures by, among others, Pieter van Musschenbroek, which led Ingenhousz to a lifelong interest in electricity. In 1755 he returned home to Breda, where he started a general medical practice.
Work with smallpox
Following his father's death in July 1764, Ingenhousz intended to travel through Europe for study, starting in England, where he wanted to learn the latest techniques of inoculation against smallpox. Via the physician John Pringle, who had been a family friend since the 1740s, he quickly made many valuable contacts in London, and in due time became a master inoculator. In 1767, he inoculated 700 villagers in a successful effort to combat an epidemic in Hertfordshire. In 1768, Empress Maria Theresa read a letter by Pringle on the success of the fight against smallpox in England, whereas in the Austrian Empire the medical establishment vehemently opposed inoculation. She decided to have her own family inoculated first (a cousin had already died), and requested help via the English royal house. On Pringle's recommendation, Ingenhousz was selected and asked to travel to Austria. He planned to inoculate the royal family by pricking them with a needle and thread coated with smallpox matter taken from the pus of an infected person. The idea of inoculation was that by giving a few germs to a healthy body, the body would develop immunity to smallpox. The inoculation was a success, and he became Maria Theresa's court physician. He settled in Vienna, where in 1775 he married Agatha Maria Jacquin.
Work with photosynthesis
In the 1770s Ingenhousz became interested in gaseous exchanges of plants. He did this after meeting the scientist Joseph Priestley (1733–1804) at his house in Birstall, West Yorkshire, on 23 May 1771. Priestley had found out that plants make and absorb gases. Ingenhousz' travelling party in northern England included Benjamin Franklin. They then stayed at the rectory in Thornhill, West Yorkshire with the polymath and botanist Rev. John Michell.
In 1779, Ingenhousz, working at his rented country house in Southall Green, discovered that, in the presence of light, plants give off bubbles from their green parts while, in the shade, the bubbles eventually stop. He identified the gas as oxygen. He also discovered that, in the dark, plants give off carbon dioxide. He realised as well that the amount of oxygen given off in the light is more than the amount of carbon dioxide given off in the dark. This demonstrated that some of the mass of plants comes from the air, and not only the water and nutrients in the soil.
Other work
In addition to his work in the Netherlands and Vienna, Ingenhousz spent time in France, England, Scotland, and Switzerland, among other places. He carried out research in electricity, heat conduction, and chemistry, and was in close and frequent correspondence with both Benjamin Franklin and Henry Cavendish. In 1785, he described the irregular movement of coal dust on the surface of alcohol and therefore has a claim as discoverer of what came to be known as Brownian motion. Ingenhousz was elected a Fellow of the Royal Society of London in 1769 and a member of the American Philosophical Society in 1786.
In 1799, Ingenhousz died at Bowood House, near Calne in Wiltshire, and was buried in the churchyard of St Mary the Virgin, Calne. His wife died the following year.
Tribute
On 8 December 2017, a Google Doodle commemorated his 287th birthday.
References
Further reading
Norman and Elaine Beale, Echoes of Ingen Housz. The long lost story of the genius who rescued the Habsburgs from smallpox and became the father of photosynthesis. 630 pages, with a foreword by David Bellamy, Hobnob Press, July 2011.
Geerdt Magiels, From sunlight to insight. Jan IngenHousz, the discovery of photosynthesis & science in the light of ecology. VUB Press, 2009.
External links
Experiments upon Vegetables, Discovering their Great Power of Purifying the Common Air in the Sun-shine, and of Injuring it in the Shade and at Night (London, 1779)
Entry at the Catholic Encyclopedia
Ingenhousz's relationship to Brownian motion, see page 1
1730 births
1799 deaths
18th-century Dutch physicians
Court physicians
18th-century Dutch botanists
Dutch physiologists
Fellows of the Royal Society
Leiden University alumni
People from Breda
Researchers of photosynthesis
Smallpox
Old University of Leuven alumni
Dutch Catholics
Members of the American Philosophical Society | Jan Ingenhousz | Chemistry | 1,174 |
76,999,108 | https://en.wikipedia.org/wiki/Gliese%2012 | Gliese 12 (GJ 12) is a red dwarf star in the constellation Pisces. It has about 24% the mass and 26% the radius of the Sun. It is an inactive star and hosts one known exoplanet.
Planetary system
The transiting exoplanet Gliese 12 b was discovered by TESS, and two independent studies confirming it as a planet were published in May 2024. Gliese 12 b is similar in size to Earth and Venus, and completes an orbit around its star every 12.8 days. Its mass is poorly constrained but is known to be less than 4 times that of Earth.
Along with the planets of TRAPPIST-1 and LHS 1140 b, Gliese 12 b is one of the nearest known relatively temperate transiting exoplanets, and so is a promising target for the James Webb Space Telescope to determine whether it has retained an atmosphere. Gliese 12 b orbits slightly closer than the inner edge of its star's habitable zone, with an insolation between those of Earth and Venus. If it has an atmosphere, its surface temperature would be greater than its equilibrium temperature (calculated assuming an albedo of zero).
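For reference, a zero-albedo equilibrium temperature of this kind is conventionally obtained from the standard textbook relation (not a formula specific to the cited studies), where $T_\star$ is the stellar effective temperature, $R_\star$ the stellar radius, $a$ the orbital distance, and $A$ the Bond albedo:

$$T_\mathrm{eq} = T_\star \sqrt{\frac{R_\star}{2a}}\,(1 - A)^{1/4},$$

which for $A = 0$ reduces to $T_\star \sqrt{R_\star/(2a)}$.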
References
Pisces (constellation)
M-type main-sequence stars
Planetary systems with one confirmed planet
J00154919+1333218
0012
52005579
6251 | Gliese 12 | Astronomy | 298 |
541,158 | https://en.wikipedia.org/wiki/Java%20Management%20Extensions | Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean). In the API, classes can be dynamically loaded and instantiated.
Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.
JSR 003 of the Java Community Process defined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn. The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160. An extension of the JMX Remote API for Web Services was being developed under JSR 262.
Adopted early on by the J2EE community, JMX has been a part of J2SE since version 5.0. "JMX" is a trademark of Oracle Corporation.
Architecture
JMX uses a three-level architecture:
The Probe level – also called the Instrumentation level – contains the probes (called MBeans) instrumenting the resources
The Agent level, or MBeanServer – the core of JMX. It acts as an intermediary between the MBean and the applications.
The Remote Management level enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API using various communication (RMI, IIOP, JMS, WS-* …), while an adaptor adapts the API to another protocol (SNMP, …) or to Web-based GUI (HTML/HTTP, WML/HTTP, …).
Applications can be generic consoles (such as JConsole and MC4J) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application.
The Java Platform, Standard Edition ships with one connector, the RMI connector, which uses the Java Remote Method Protocol that is part of the Java remote method invocation API. This is the connector which most management applications use.
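As an illustrative sketch, a minimal client using this RMI connector might look as follows (the host, port and attribute shown are placeholders; it assumes the target JVM was started with remote JMX enabled):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClientSketch {
    public static void main(String[] args) throws Exception {
        // Standard form of an RMI connector address.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Read an attribute of a platform MXBean through the connector.
            Object heap = connection.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            System.out.println("Heap usage: " + heap);
        }
    }
}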
Protocol adapters provide a management view of the JMX agent through a given protocol. Management applications that connect to a protocol adapter are usually specific to the given protocol.
Managed beans
A managed bean – sometimes simply referred to as an MBean – is a type of JavaBean, created with dependency injection. Managed Beans are particularly used in the Java Management Extensions technology – but with Java EE 6 the specification provides for a more detailed meaning of a managed bean.
The MBean represents a resource running in the Java virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). They can be used for collecting statistics on concerns like performance, resources usage, or problems (pull); for getting and setting application configurations or properties (push/pull); and notifying events like faults or state changes (push).
Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification (for example, the JavaServer Faces technology specification), or if it meets all of the following conditions:
It is not a non-static inner class.
It is a concrete class, or is annotated @Decorator.
It is not annotated with an EJB component-defining annotation or declared as an EJB bean class in ejb-jar.xml.
No special declaration, such as an annotation, is required to define a managed bean.
An MBean can notify the MBeanServer of its internal changes (for the attributes) by implementing the javax.management.NotificationEmitter interface. The application interested in the MBean's changes registers a listener (javax.management.NotificationListener) with the MBeanServer. Note that JMX does not guarantee that the listeners will receive all notifications.
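A minimal sketch of this notification mechanism (the Counter names are illustrative; NotificationBroadcasterSupport is a standard helper class that implements NotificationEmitter; the two public types are shown together for brevity, though Java expects one per file):

import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// For a standard MBean, the management interface must be public and
// named after the implementation class, with the suffix "MBean".
public interface CounterMBean {
    long getCount();
    void increment();
}

public class Counter extends NotificationBroadcasterSupport implements CounterMBean {
    private long count;
    private long sequence;

    public long getCount() { return count; }

    public void increment() {
        count++;
        // Emit a notification to all registered listeners;
        // as noted above, delivery is not guaranteed.
        sendNotification(new Notification(
                "com.example.counter.changed",  // notification type (illustrative)
                this, ++sequence, "count is now " + count));
    }
}

// A listener can then be registered with the MBeanServer, for example:
//   mbs.addNotificationListener(name,
//       (notification, handback) -> System.out.println(notification.getMessage()),
//       null, null);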
Types
There are two basic types of MBean:
Standard MBeans implement a business interface containing setters and getters for the attributes and the operations (i.e., methods).
Dynamic MBeans implement the javax.management.DynamicMBean interface that provides a way to list the attributes and operations, and to get and set the attribute values.
Additional types are Open MBeans, Model MBeans and Monitor MBeans. Open MBeans are dynamic MBeans that rely on the basic data types. They are self-explanatory and more user-friendly. Model MBeans are dynamic MBeans that can be configured during runtime. A generic MBean class is also provided for dynamically configuring the resources during program runtime.
An MXBean (Platform MBean) is a special type of MBean that reifies Java virtual machine subsystems such as garbage collection, JIT compilation, memory pools, multi-threading, etc.
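As a sketch of the standard MBean pattern described above (the Hello and com.example names are illustrative; the three public types are shown together for brevity, though Java expects one per file):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Business interface: for a standard MBean, the name must be the
// implementation class name followed by "MBean", and it must be public.
public interface HelloMBean {
    String getMessage();            // exposed as readable attribute "Message"
    void setMessage(String value);  // makes the attribute writable
    void sayHello();                // exposed as an operation
}

public class Hello implements HelloMBean {
    private String message = "hello, world";
    public String getMessage() { return message; }
    public void setMessage(String value) { this.message = value; }
    public void sayHello() { System.out.println(message); }
}

public class Agent {
    public static void main(String[] args) throws Exception {
        // The platform MBeanServer is the agent-level component described above.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Object names take the form "domain:key=value[,key=value]".
        ObjectName name = new ObjectName("com.example:type=Hello");
        mbs.registerMBean(new Hello(), name);
        Thread.sleep(Long.MAX_VALUE);  // keep the JVM alive for e.g. JConsole
    }
}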
An MLet (Management applet) is a utility MBean used to load, instantiate and register MBeans in an MBeanServer from an XML description. The format of the XML descriptor is:
<MLET CODE = class | OBJECT = serfile
  ARCHIVE = "archiveList"
  [CODEBASE = codebaseURL]
  [NAME = objectName]
  [VERSION = version]
>
  [arglist]
</MLET>
Support
JMX is supported at various levels by different vendors:
JMX is supported by Java application servers such as OpenCloud Rhino Application Server, JBoss, JOnAS, WebSphere Application Server, WebLogic, SAP NetWeaver Application Server, Oracle Application Server 10g and Sun Java System Application Server.
JMX is supported by the UnboundID Directory Server, Directory Proxy Server, and Synchronization Server.
Systems management tools that support the protocol include Empirix OneSight, GroundWork Monitor, Hyperic, HP OpenView, IBM Director, ITRS Geneos, Nimsoft NMS, OpenNMS, Zabbix, Zenoss Core, Zyrion, SolarWinds, Uptime Infrastructure Monitor, and LogicMonitor.
JMX is also supported by servlet containers such as Apache Tomcat and Jetty.
MX4J is Open Source JMX for Enterprise Computing.
jManage is an open source enterprise-grade JMX Console with Web and command-line interfaces.
MC4J is an open source visual console for connecting to servers supporting JMX
snmpAdaptor4j is an open source adaptor providing simple access to MBeans via the SNMP protocol.
jvmtop is a lightweight open source JMX monitoring tool for the command-line
Prometheus can ingest JMX data via the JMX exporter which exposes metrics in Prometheus format.
New Relic's on-host infrastructure agent collects JMX data which is shown in various charts in its observability platform's dashboard.
Jolokia is a j2ee application which exposes JMX over HTTP.
See also
Jini
Network management
Simple Network Management Protocol
References
Further reading
Articles
"Enabling Component Architectures with JMX" by Marc Fleury and Juha Lindfors
"Introducing A New Vendor-Neutral J2EE Management API" by Andreas Schaefer
"Java in the management sphere" by Max Goff 1999
Oct 20
Nov 20
Dec 29
JMX/JBoss – The microkernel design
"Manage your JMX-enabled applications with jManage 1.0" by Rakesh Kalra Jan 16, 2006
"Managing J2EE Systems with JMX and JUnit " by Lucas McGregor
Sun Java Overview of Monitoring and Management
The Java EE 6 Tutorial: About managed beans
Books
Benjamin G. Sullins, Mark B. Whipple: JMX in Action, Manning Publications Co., 2002
J. Steven Perry: Java Management Extensions, O'Reilly
Jeff Hanson: Connecting JMX Clients and Servers: Understanding the Java Management Extensions, Apress
Marc Fleury, Juha Lindfors: JMX: Managing J2EE with Java Management Extensions, Sams Publishing
External links
JMX 1.4 (JMX 1.4, part of Java 6)
JMX at JBoss.com
JMX on www.oracle.com
JSR 255 (JMX 2.0)
JSR 3 (JMX 1.0, 1.1, and 1.2)
Java APIs
Management Extensions
Management Extensions
Network management | Java Management Extensions | Engineering | 1,861 |
14,118,911 | https://en.wikipedia.org/wiki/RELB | Transcription factor RelB is a protein that in humans is encoded by the RELB gene.
Interactions
RELB has been shown to interact with NFKB2, NFKB1, and C22orf25.
Activation and function
In resting cells, RelB is sequestered by the NF-κB precursor protein p100 in the cytoplasm. A select set of TNF-R superfamily members, including lymphotoxin β-receptor (LTβR), BAFF-R, CD40 and RANK, activate the non-canonical NF-κB pathway. In this pathway, NIK stimulates the processing of p100 into p52, which in association with RelB appears in the nucleus as RelB:p52 NF-κB heterodimers. RelB:p52 activates the expression of homeostatic lymphokines, which instruct lymphoid organogenesis and determine the trafficking of naive lymphocytes in the secondary lymphoid organs.
Recent studies have suggested that the functional non-canonical NF-κB pathway is modulated by canonical NF-κB signalling. For example, synthesis of the constituents of the non-canonical pathway, viz. RelB and p52, is controlled by canonical IKK2-IκB-RelA:p50 signalling. Moreover, the generation of canonical and non-canonical dimers, viz. RelA:p50 and RelB:p52, within the cellular milieu is mechanistically interlinked. These analyses suggest that an integrated NF-κB system network underlies activation of both RelA- and RelB-containing dimers and that a malfunctioning canonical pathway will lead to an aberrant cellular response also through the non-canonical pathway.
Most intriguingly, a recent study identified that TNF-induced canonical signalling subverts non-canonical RelB:p52 activity in the inflamed lymphoid tissues, limiting lymphocyte ingress. Mechanistically, TNF inactivated NIK in LTβR‐stimulated cells and induced the synthesis of Nfkb2 mRNA encoding p100; together, these potently accumulated unprocessed p100, which attenuated RelB activity. A role of p100/Nfkb2 in dictating lymphocyte ingress in inflamed lymphoid tissue may have broad physiological implications.
See also
NF-κB
References
Further reading
Transcription factors | RELB | Chemistry,Biology | 521 |
24,288,564 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Geometric%20Methods%20in%20Modern%20Physics | The International Journal of Geometric Methods in Modern Physics (IJGMMP) is a peer-reviewed journal, published by World Scientific, covering mathematical physics. It was originally published bimonthly beginning in January 2004; as of 2006 it appears 8 times a year, and as of 2024 it appears 14 times a year. Editorial policy for the journal specifies that "The journal publishes short communications, research and review articles devoted to the application of geometric methods (including differential geometry, algebraic geometry, global analysis and topology) to quantum field theory, non-perturbative quantum gravity, string and brane theory, quantum mechanics, semi-classical approximations in quantum theory, quantum thermodynamics and statistical physics, quantum computation and control theory."
History
IJGMMP was founded in 2003 by Gennadi Sardanashvily, a theoretical physicist at Moscow State University. He served as managing editor of the journal until 2013.
Abstracting and indexing
The journal is indexed in Science Citation Index Expanded, ISI Alerting Services, Inspec, Current Contents/Physical Chemical and Earth Sciences, Mathematical Reviews, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.874.
References
Mathematical physics journals
Physics journals
Academic journals established in 2004
World Scientific academic journals
English-language journals
Geometry journals | International Journal of Geometric Methods in Modern Physics | Mathematics | 280 |
42,787,988 | https://en.wikipedia.org/wiki/Alajuela%20orthobunyavirus | Alajuela orthobunyavirus (ALJV) is a species in the genus Orthobunyavirus in the Gamboa serogroup. It has been isolated from the mosquito Aedeomyia squamipennis. It has not been reported to cause disease in humans.
References
Orthobunyaviruses | Alajuela orthobunyavirus | Biology | 73 |
14,622,190 | https://en.wikipedia.org/wiki/Hachimycin | Hachimycin, also known as trichomycin, is a polyene macrolide antibiotic, antiprotozoal, and antifungal derived from Streptomyces. It was first described in 1950, and in research it has mostly been used for gynecological infections.
References
Antibiotics
Macrolide antibiotics
Polyenes | Hachimycin | Biology | 72 |
30,779,094 | https://en.wikipedia.org/wiki/Augeas%20%28software%29 | Augeas is a free software configuration-management library, written in the C programming language. It is licensed under the terms of the GNU Lesser General Public License.
Augeas uses programs called lenses (in reference to the Harmony Project) to map a filesystem to an XML tree which can then be parsed using an XPath syntax, using a bidirectional transformation. Writing such lenses extends the range of files Augeas can parse.
Bindings
Augeas has bindings for Python, Ruby, OCaml, Perl, Haskell, Java, PHP, and Tcl.
Programs using augeas
Certbot, ACME client
Puppet provides an Augeas module which makes use of the Ruby bindings
SaltStack provides an Augeas module which makes use of the python bindings
References
External links
Configuration management | Augeas (software) | Engineering | 175 |
18,979,417 | https://en.wikipedia.org/wiki/Citizens%27%20Advisory%20Council%20on%20National%20Space%20Policy | The Citizen's Advisory Council on National Space Policy was a group of prominent US citizens concerned with the space policy of the United States of America. It is no longer active.
History
The Council's roots date to 1980 as a group which prepared many of the Reagan Administration Transition Team's space policy papers.
The Council was formally created in 1981 by joint action of the American Astronautical Society and the L5 Society to develop a detailed and technically feasible space policy to further the national interest. Participant Gregory Benford would in 1994 describe the activities of the council:
The Council, a raucous bunch with feisty opinions, met at the spacious home of science fiction author Larry Niven. The men mostly talked hard-edge tech, the women policy. Pournelle stirred the pot and turned up the heat. Amid the buffet meals, saunas and hot tubs, well-stocked open bar, and myriad word processors, fancies simmered and ideas cooked, some emerging better than half-baked...Finally, we settled on recommending a position claiming at least the moral high ground, if not high orbits. Defense was inevitably more stabilizing than relying on hair-trigger offense, we argued. It was also more principled. And eventually, the Soviet Union might not even be the enemy, we said - though we had no idea it would fade so fast. When that happened, defenses would still be useful against any attacker, especially rogue nations bent on a few terrorist attacks. There were plenty of science fiction stories, some many decades old, dealing with that possibility. The Advisory Council met in August of 1984 in a mood of high celebration. Their pioneering work had yielded fruits unimaginable in 1982 - Reagan himself had proposed the Strategic Defense Initiative, suggesting that nuclear weapons be made "impotent and obsolete". The Soviets were clearly staggered by the prospect. (Years later I heard straight from a senior Soviet advisor that the U.S. SDI had been the straw that broke the back of the military's hold on foreign policy. That seems to be the consensus now among the diplomatic community, though politically SDI is a common whipping boy, its funding cut.)Participant David Mitchell added this history update on March 18, 2021:
Dr. Pournelle held many meetings at “Chaos Manor”, his home in Studio City. These were more formal meetings than the annual party/meetings Gregory Benford describes at Larry Niven’s home. As someone who worked closely with Dr. P. on BIX (the Byte Information Exchange), it was a natural follow-on to assist when and where I could on council-related matters. Henry Vanderbilt’s Space Access Society events and meetings helped focus the agenda. I created the Lunar Teleoperations Model I to test telepresence research and generate publicity. The critical council focus was on affordable access to space during the period of the late 1980’s to the late 1990’s. Meetings were held at Chaos Manor, the “Making Orbit 93” conference in Berkeley, and in Las Cruces, NM during DC-X test flight events. I hosted events at various space activist forums and created events such as “Minds In Space”.
The critical path in affordable access to space was (and is) SSTO – single stage to orbit. In meeting with Max Hunter, he was kind enough to provide me a copy of RITA, his “Reusable Interplanetary Transport Approach”. Daniel Graham ruled out queries for mass drivers to move items to LEO, due to treaty violations. I have been discussing with Benford the concept of lunar mass drivers for zero cost transport of materials to Mars. (I favor solar powered mass drivers, he prefers plutonium reactors to prevent downtime during the 14 day lunar night.)
To achieve affordable access, the council focused on 2 paths. The first was X-projects and demonstrators. Peter Diamandis really stepped up to the plate with the X-Prize Foundation. Pournelle, Hunter, and Graham were able to get $60 million of BMDO money allocated to DC-X. Dr. Gaubatz, John Garvey, Andy Karlson and many others made the DC-X happen – over and over again. The DC-X marked the “birth” and pivot point of demonstrating reusability. Pournelle always kept emphasizing “bending metal”, something Elon Musk has embraced and brought to a new level.
The second path was (and still is) creating an environment legally to allow the creation of profit-seeking new space companies. This meant working the beltway on a non-partisan basis (so much easier then than now). Pournelle once told me of a meeting he had with Gingrich in his kitchen at Chaos Manor. Critical laws were enacted moving launch process to the DOT and therefore to the FAA, creating a bit of symmetry between civilian air and the new civilian space domains. Always aware that for a profit-seeking company to be successful, NASA must take the long-term research and exploratory role, with affordable access to space moving incrementally to the private sector to unleash the growth potential.
I don’t think any of us ever thought we would hit a dual jackpot of Elon Musk and Jeff Bezos.
Meetings
November, 1980
July, 1983
May 9–11, 1986
? 1993
? 1995
August 10, 1997
Reports
Spring, 1981
28 September 1983. Substantial portions of this report were later published in the book Mutual Assured Survival (Baen Books, 1984) by Jerry Pournelle and Dean Ing.
Spring, 1986
February 15, 1989
March 20, 1994
Membership
Jerry Pournelle, Chairman
Astronauts
Buzz Aldrin, Gerald Carr, Fred Haise, Phil Chapman, Pete Conrad
Aerospace industry
George Merrick (North American Rockwell, Space Division), George Gould, Gordon Woodcock, Gary Hudson, George Koopman, Maxwell Hunter, Art Dula
Space scientists and engineers
Lowell Wood, G. Harry Stine, Eric Laursen, Chuck Lindley, James Benford, Maxwell Hunter, George Gould
Military officers (retired)
Lt. General Daniel O. Graham, USA Ret'd; Brigadier General Robert Richardson, USAF Ret'd; Major General Stewart Meyer, USA Ret'd; Col. Jack Coakley, USA Ret'd; Col. Francis X. Kane, USAF Ret'd.
Computer scientists
Marvin Minsky, Danny Hillis, John McCarthy, David Mitchell
Science fiction authors and publishers
Poul Anderson, Greg Bear, Robert A. Heinlein, Gregory Benford, Dean Ing, Steven Barnes, Jim Baen, Larry Niven
Others
Stefan T. Possony, Bjo Trimble, Alexander C. Pournelle, James Miller Vaughn, Jr.
References
Notes
Bibliography
Space advocacy organizations
Space organizations
Organizations established in 1980
1997 disestablishments in the United States | Citizens' Advisory Council on National Space Policy | Astronomy | 1,413 |
3,573,165 | https://en.wikipedia.org/wiki/Tsallis%20entropy | In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy.
It is proportional to the expectation of the q-logarithm of a distribution.
History
The concept was introduced in 1988 by Constantino Tsallis as a basis for generalizing the standard statistical mechanics and is identical in form to Havrda–Charvát structural α-entropy, introduced in 1967 within information theory.
Definition
Given a discrete set of probabilities $\{p_i\}$ with the condition $\sum_i p_i = 1$, and $q$ any real number, the Tsallis entropy is defined as

$$S_q(p) = \frac{k}{q-1}\left(1 - \sum_i p_i^q\right),$$

where $q$ is a real parameter sometimes called the entropic index and $k$ a positive constant.
In the limit as $q \to 1$, the usual Boltzmann–Gibbs entropy is recovered, namely

$$S = \lim_{q \to 1} S_q(p) = -k \sum_i p_i \ln p_i,$$

where one identifies the constant $k$ with the Boltzmann constant $k_{\mathrm{B}}$.
For continuous probability distributions, we define the entropy as

$$S_q(p) = \frac{k}{q-1}\left(1 - \int p(x)^q \, dx\right),$$

where $p(x)$ is a probability density function.
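As a quick numerical illustration of the definitions above, here is a minimal Python sketch (the function name and the use of NumPy are our choices, not a standard API) that computes the discrete Tsallis entropy and checks that it approaches the Boltzmann–Gibbs value as $q \to 1$, taking $k = 1$:

```python
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    """Discrete Tsallis entropy S_q = k/(q-1) * (1 - sum_i p_i^q)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability outcomes contribute nothing
    if np.isclose(q, 1.0):
        return -k * np.sum(p * np.log(p))  # Boltzmann-Gibbs limit
    return k / (q - 1.0) * (1.0 - np.sum(p ** q))

p = [0.5, 0.3, 0.2]
for q in (0.5, 0.99, 0.999, 1.001, 2.0):
    print(q, tsallis_entropy(p, q))
print("BG:", tsallis_entropy(p, 1.0))  # -sum_i p_i ln p_i, approx 1.0297
```

For $q$ near 1 the printed values converge on the Boltzmann–Gibbs result, as the limit above requires.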
Cross-entropy
The cross-entropy pendant is the expectation of the negative q-logarithm with respect to a second distribution, $p'(x)$:

$$H_q(p, p') = \mathbb{E}_{x \sim p}\left[-\ln_q p'(x)\right].$$

Using $-\ln_q x = \frac{x^{1-q}-1}{q-1}$, this may be written

$$H_q(p, p') = \frac{1}{q-1}\left(\sum_x p(x)\, p'(x)^{1-q} - 1\right).$$

For $q < 1$, the values $-\ln_q x$ all tend towards the finite bound $\frac{1}{1-q}$ as $x \to 0$.
The limit $q \to 1$ computes the negative of the slope of $s \mapsto x^s$ at $s = 0$, recovering the ordinary $-\ln x$ and hence the standard cross-entropy $H(p, p') = -\sum_x p(x) \ln p'(x)$. So, for fixed $q$ close to 1, lowering this expectation relates to log-likelihood maximization.
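A companion sketch for this cross-entropy (again with illustrative, non-standard function names), built directly on the negative q-logarithm defined above:

```python
import numpy as np

def neg_q_log(x, q):
    """Negative q-logarithm (x^(1-q) - 1)/(q - 1); reduces to -ln(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return -np.log(x)
    return (x ** (1.0 - q) - 1.0) / (q - 1.0)

def tsallis_cross_entropy(p, p2, q):
    """Expectation of -ln_q p2(x) under p(x)."""
    p, p2 = np.asarray(p, dtype=float), np.asarray(p2, dtype=float)
    return np.sum(p * neg_q_log(p2, q))

p  = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.4, 0.4, 0.2])
print(tsallis_cross_entropy(p, p2, 0.999))  # close to the standard value below
print(np.sum(-p * np.log(p2)))              # standard cross-entropy -sum p ln p2
```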
Properties
Identities
A logarithm can be expressed in terms of a slope through

$$\ln x = \left[\frac{d}{dh}\, x^h\right]_{h=0},$$

resulting in the following formula for the standard entropy:

$$S = -k \left[\frac{d}{dh} \sum_x p(x)^h\right]_{h=1}.$$

Likewise, the discrete Tsallis entropy satisfies

$$S_q = -k \left[D_q \sum_x p(x)^h\right]_{h=1},$$

where $D_q$ is the q-derivative with respect to $h$, defined by $D_q f(h) = \frac{f(qh) - f(h)}{qh - h}$.
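Both identities are easy to verify numerically. The sketch below (our own check, with $k = 1$) compares the ordinary slope at $h = 1$ with the Boltzmann–Gibbs entropy, and the q-derivative at $h = 1$ with the Tsallis entropy:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
f = lambda h: np.sum(p ** h)  # f(h) = sum_x p(x)^h

# Standard entropy: S = -[df/dh] at h = 1, via a central difference
eps = 1e-6
S = -(f(1 + eps) - f(1 - eps)) / (2 * eps)
print(S, -np.sum(p * np.log(p)))  # both approx 1.0297

# Tsallis entropy: S_q = -[D_q f](1), with D_q f(h) = (f(q h) - f(h)) / (q h - h)
q = 2.0
S_q = -(f(q * 1.0) - f(1.0)) / (q * 1.0 - 1.0)
print(S_q, (1 - np.sum(p ** q)) / (q - 1))  # both equal 0.62
```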
Non-additivity
Given two independent systems A and B, for which the joint probability density satisfies

$$p(A, B) = p(A)\, p(B),$$

the Tsallis entropy of this system satisfies

$$S_q(A, B) = S_q(A) + S_q(B) + \frac{1-q}{k}\, S_q(A)\, S_q(B).$$

From this result, it is evident that the parameter $|1 - q|$ is a measure of the departure from additivity. In the limit when $q = 1$,

$$S(A, B) = S(A) + S(B),$$

which is what is expected for an additive system. This property is sometimes referred to as "pseudo-additivity".
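Pseudo-additivity can likewise be checked directly on a product distribution; a minimal sketch, assuming the discrete definition above:

```python
import numpy as np

def S_q(p, q, k=1.0):
    """Discrete Tsallis entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return k / (q - 1.0) * (1.0 - np.sum(p ** q))

pA = np.array([0.6, 0.4])
pB = np.array([0.7, 0.2, 0.1])
pAB = np.outer(pA, pB).ravel()  # independent joint: p(A,B) = p(A) p(B)

q, k = 1.7, 1.0
lhs = S_q(pAB, q, k)
rhs = S_q(pA, q, k) + S_q(pB, q, k) + (1 - q) / k * S_q(pA, q, k) * S_q(pB, q, k)
print(lhs, rhs)  # the two sides agree to machine precision
```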
Exponential families
Many common distributions like the normal distribution belong to the statistical exponential families.
The Tsallis entropy of a distribution from an exponential family can be written in closed form in terms of the log-normalizer F and the term k indicating the carrier measure.
For the multivariate normal distribution, the term k is zero, and therefore the Tsallis entropy is available in closed form.
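As an illustrative check of the closed-form claim in the normal case (this sketch uses a Gaussian integral worked out by hand, not the exponential-family formula itself): for a univariate normal density with variance $\sigma^2$, one finds $\int p(x)^q\,dx = (2\pi\sigma^2)^{(1-q)/2}\, q^{-1/2}$, so the continuous Tsallis entropy (with $k = 1$) can be compared against brute-force quadrature:

```python
import numpy as np

q, sigma = 1.5, 2.0

# Closed form of the integral of p(x)^q for N(0, sigma^2)
Iq = (2 * np.pi * sigma**2) ** ((1 - q) / 2) / np.sqrt(q)
S_closed = (1 - Iq) / (q - 1)

# The same quantity by a fine Riemann sum on a wide grid
x = np.linspace(-40, 40, 400001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
S_num = (1 - np.sum(p**q) * dx) / (q - 1)

print(S_closed, S_num)  # agree to many decimal places
```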
Applications
Tsallis entropy has been used along with the principle of maximum entropy to derive the Tsallis distribution.
In the scientific literature, the physical relevance of the Tsallis entropy has been debated. However, from the year 2000 on, an increasingly wide spectrum of natural, artificial and social complex systems has been identified which confirms the predictions and consequences that are derived from this nonadditive entropy, such as nonextensive statistical mechanics, which generalizes the Boltzmann–Gibbs theory.
Among the various experimental verifications and applications presently available in the literature, the following ones deserve a special mention:
The distribution characterizing the motion of cold atoms in dissipative optical lattices, predicted in 2003 and observed in 2006.
The fluctuations of the magnetic field in the solar wind enabled the calculation of the q-triplet (or Tsallis triplet).
The velocity distributions in a driven dissipative dusty plasma.
Spin glass relaxation.
Trapped ion interacting with a classical buffer gas.
High energy collisional experiments at LHC/CERN (CMS, ATLAS and ALICE detectors) and RHIC/Brookhaven (STAR and PHENIX detectors).
Among the various available theoretical results which clarify the physical conditions under which Tsallis entropy and associated statistics apply, the following ones can be selected:
Anomalous diffusion.
Uniqueness theorem.
Sensitivity to initial conditions and entropy production at the edge of chaos.
Probability sets that make the nonadditive Tsallis entropy extensive in the thermodynamical sense.
Strongly quantum entangled systems and thermodynamics.
Thermostatistics of overdamped motion of interacting particles.
Nonlinear generalizations of the Schrödinger, Klein–Gordon and Dirac equations.
Black hole entropy calculation.
For further details a bibliography is available at http://tsallis.cat.cbpf.br/biblio.htm
Generalized entropies
Several interesting physical systems abide by entropic functionals that are more general than the standard Tsallis entropy. Therefore, several physically meaningful generalizations have been introduced. The two most general of these are notably Superstatistics, introduced by C. Beck and E. G. D. Cohen in 2003, and Spectral Statistics, introduced by G. A. Tsekouras and Constantino Tsallis in 2005. Both these entropic forms have Tsallis and Boltzmann–Gibbs statistics as special cases; Spectral Statistics has been proven to contain at least Superstatistics, and it has been conjectured to also cover some additional cases.
See also
Rényi entropy
Tsallis distribution
References
Further reading
External links
Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions
Statistical mechanics
Entropy and information
Thermodynamic entropy
Information theory | Tsallis entropy | Physics,Mathematics,Technology,Engineering | 1,031 |
72,083,500 | https://en.wikipedia.org/wiki/Time%20in%20Panama | Panama observes Eastern Standard Time (UTC−5) year-round.
IANA time zone database
In the IANA time zone database, Panama is given one zone in the file zone.tab, America/Panama, where "PA" refers to the country's ISO 3166-1 alpha-2 country code.
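Because Panama keeps a single fixed offset with no daylight saving, the IANA identifier behaves trivially; a minimal sketch using Python's standard-library zoneinfo module (available from Python 3.9):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

now = datetime.now(ZoneInfo("America/Panama"))
print(now.isoformat())   # local time with a fixed -05:00 offset
print(now.utcoffset())   # timedelta of -5 hours, year-round (no DST)
```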
References
External links
Current time in Panama at Time.is
Time in Panama at TimeAndDate
Time by country
Geography of Panama | Time in Panama | Physics | 118 |
73,769,888 | https://en.wikipedia.org/wiki/2023%20Taichung%20crane%20collapse | On 10 May 2023, a construction crane fell 30 floors from a Highwealth Construction Corp construction site onto a moving Taichung Metro Green Line train south of Feng-le Park metro station, Taichung, Taiwan, killing one passenger and injuring ten others onboard.
The deceased passenger, a 52-year-old legal scholar, was ejected from the train carriage upon impact and crushed under the same train.
One passenger onboard, a Canadian national, claimed that the driverless train was stationary when the crane fell onto the tracks, and that the train then drove straight into the fallen crane.
Investigation
It was revealed that the onboard train attendant followed company procedures to contact the control center about the crane obstructing the track. However, the control center at Taichung Metro would require 20 seconds to activate the emergency brakes remotely, which was insufficient to prevent the collision. The passenger emergency buttons onboard the train were not designed to immediately stop the train.
The Taichung District Prosecutors Office questioned Taichung Metro staff involved in the incident and ten Highwealth Construction Corp personnel who were responsible for the operation of the construction crane.
The operations control center detected a loss of power caused by the fallen crane, but power was automatically restored shortly after. Half of the control center staff were on meal break at the time of the incident.
Timeline of incident
At a press conference, Taichung Metro revealed that based on CCTV footage of the train and the station before the fallen crane, the following events occurred:
12:26:50 - Trainset 03/04 entered the station
12:27:04 - Construction crane fell onto the track, breaching the noise barrier
12:27:14 - Station security reported incident to station supervisor
12:27:22 - Trainset 03/04 doors closed
12:27:26 - Staff onboard trainset 03/04 found the track obstruction, contacted the control center, and attempted to open the manual driving control panel to stop the train
12:27:30 - Trainset 03/04 departed the station
12:27:45 - Trainset 03/04 collided with the fallen construction crane on track
12:27:52 - Trainset 03/04 came to a complete stop
Reactions
Taichung Metro said they intended to seek at least TWD 200 million (USD 6.5 million) in compensation from Highwealth Construction Corp for damage and losses resulting from the collapse.
Other metro operators in Taiwan began to review and secure ongoing construction sites that were situated near the tracks.
Taipei Metro admitted that existing procedures for driverless trains, such as those on the Wenhu Line, were inadequate to stop a train in time under a similar scenario: a fallen crane would not break the track circuit and so would go undetected, and having staff open the manual driving panel or ask the control center to cut off power would take too long. Taipei Metro promised to develop new procedures for such scenarios, and in the meantime metro staff were authorized to deliberately obstruct the platform or train doors from closing in order to prevent the train from moving off.
The acting chairman of Taichung Metro resigned after he was criticized for his performance post-collapse.
Taichung Metro proposed procedural changes to prevent a similar accident. The changes included introducing a new standardized hand signal for staff to indicate that an emergency stop is necessary, encouraging staff and passengers to keep a driverless train from departing after witnessing an incident by obstructing the doors, and relocating the key to the manual driving panel to a separate, more accessible holder so that roving staff can reach the emergency stop button more easily. Taichung Metro also promised updates to emergency devices at stations and to obstruction detection devices, so that a train can stop in time, or be prevented from departing, in similar circumstances.
The Taiwan Transportation Safety Board concluded in June 2024 that the primary reasons for the collapse were a failure to ensure proper operation of a tower crane and a lack of clear measures for restricting or prohibiting construction on either side of Taichung Metro tracks. In August 2024, two workers were indicted on charges of negligent manslaughter.
See also
2021 Hualien train derailment - collision with construction machinery that fell onto the track
References
External links
2023 disasters in Taiwan
Engineering failures
May 2023 events in Taiwan
Railway accident deaths
Railway accidents and incidents in Taiwan
Railway accidents in 2023
2023 crane collapse | 2023 Taichung crane collapse | Technology,Engineering | 894 |
33,787,267 | https://en.wikipedia.org/wiki/Nuclear%20resonance%20vibrational%20spectroscopy | Nuclear resonance vibrational spectroscopy is a synchrotron-based technique that probes vibrational energy levels. The technique, often called NRVS, is specific for samples that contain nuclei that respond to Mössbauer spectroscopy, most commonly iron. The method exploits the high resolution offered by synchrotron light sources, which enables the resolution of vibrational fine structure, especially those vibrations that are coupled to the position of the Fe centre(s). The method is popularly applied to problems in bioinorganic chemistry, materials science, and geophysics. A novel aspect of the method is the ability to determine the 3D-trajectory of iron atoms within vibrational modes, providing a unique appraisal of DFT-prediction accuracy. Other names for this method include nuclear inelastic scattering (NIS), nuclear inelastic absorption (NIA), nuclear resonant inelastic x-ray scattering (NRIXS), and phonon assisted Mössbauer effect.
Experimental set-up
In the experimental setup, X-rays are generated from the particle beam by an undulator; a high-resolution monochromator then produces a beam with a small energy dispersion (typically 1.0 meV). The sample is irradiated with photons whose energy is scanned around the nuclear resonance of the specific Mössbauer isotope. A typical scan runs from 20 meV below the recoil-free resonance energy to 100 meV above it. The number of scans (with each point often recorded for 5 seconds at 0.2 meV intervals) depends on the amount of Mössbauer-active nuclei in the sample. The number of photons absorbed by the sample at each energy is measured by detecting the fluorescence emitted from the excited atoms with an avalanche photodiode detector. The resulting raw spectrum contains a high-intensity resonance that corresponds to the nuclear excited state of the probed nucleus. For bulk samples, the technique detects natural-abundance 57Fe. For many dilute or biological samples, the sample is often enriched in 57Fe.
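To make those scan parameters concrete, a back-of-the-envelope sketch (the numbers are the typical values quoted above, not specifications of any particular beamline):

```python
# Typical NRVS scan: -20 meV to +100 meV in 0.2 meV steps, 5 s per point
e_min, e_max, step = -20.0, 100.0, 0.2  # meV, relative to the resonance
dwell = 5.0                             # seconds per energy point

n_points = int(round((e_max - e_min) / step)) + 1
scan_minutes = n_points * dwell / 60.0
print(n_points, "points per scan,", round(scan_minutes, 1), "minutes per scan")
# -> 601 points per scan, about 50 minutes; dilute samples need many such scans
```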
References
Vibrational spectroscopy
Scientific techniques | Nuclear resonance vibrational spectroscopy | Physics,Chemistry | 429 |
51,276,643 | https://en.wikipedia.org/wiki/Harposporium%20anguillulae | Harposporium anguillulae is a member of the genus Harposporium. It is an endoparasitic nematophagous fungus that attacks nematodes and eelworms; it is commonly isolated from field and agricultural soils and is used as an experimental organism in the laboratory.
History and taxonomy
Harposporium anguillulae was described in the late 1800s as a parasite of nematodes and has since been commonly reported in the literature. This fungus also traps eelworms. Harposporium anguillulae is one of 26 species in the genus Harposporium in the division Ascomycota. It is a pathogen of eelworms and nematodes, notable for its distinct sickle-shaped conidia, produced on conidiophores that pierce out through the host body. The genus Harposporium was initially treated in the Clavicipitaceae and is thought to be closely related to members of the genus Tolypocladium. Both genera occur on nematodes and eelworms but rarely on insects. The two genera can be differentiated morphologically, as members of the genus Tolypocladium produce more complex conidiophores with narrower conidiogenous cells.
Growth and physiology
The invasive apparatus of this species consists of non-adhesive, crescent-shaped conidia that are ingested by hosts and lodge in the esophagus or gut. The sickle shape of the conidia also contributes to the ability of the fungus to pierce through the host cuticle. In the laboratory, cultures of the fungus can be cultivated on agar containing yeast hydrolysate or glucose, though growth is much slower on glucose. The fungus grows rapidly on water-agar and produces chlamydospores, implying an oligotrophic physiology.
Habitat and ecology
Nematophagous fungi occur in a variety of habitats, including leaves entering the decomposition phase, soil containing decomposed leaves, soil from agricultural land, and pasture land. The latter possibly relates to the tendency of this species to occur in cow and sheep dung, where its nematode hosts are abundant. The fungus is commonly found in tropical and warm climates and is more commonly encountered in the spring and fall. It has been isolated from locations including Brazil, China, Florida, New Zealand, and eastern Canada. The fungus tends to be more commonly reported from climate regions subject to monsoons and does not appear to survive cold weather well, though the predilection of this species for warm, damp climates may relate more to the distributions of its hosts.
The fungus is known primarily as a parasite of nematodes and eelworms. During its life cycle, conidia of the fungus are ingested by eelworms or nematodes and lodge in the pharynx or gut. Once inside the host, the conidia germinate and begin to colonize the host digestive tract. During the initial phases of this process the host remains alive, but the fungus spreads from the gut to the surrounding tissues in the later stages of infection, and the death of the host soon follows. Conidial production occurs on nematode cadavers by the eruption of conidiophores and conidia through the host cuticle.
Biological control of nematodes
This fungus has been investigated as a biocontrol agent of agriculturally important nematodes, most notably those responsible for gastrointestinal infections of grazing animals. These parasitic infections are commonly treated with anthelmintic agents including benzimidazoles, levamisole and ivermectin. However, increasing levels of anthelmintic resistance have been observed, driving the search for new treatment and prevention options. Larvae of animal-pathogenic nematodes are found in soil, and treating contaminated soils with nematode-pathogenic fungi such as H. anguillulae has shown potential to reduce nematode populations. However, the fungus does not persist in soil following the elimination of nematode populations, potentially limiting its use as a sustainable biocontrol agent.
References
Ophiocordycipitaceae
Fungi described in 1888
Fungus species
Carnivorous fungi | Harposporium anguillulae | Biology | 874 |