id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
5,607,578 | https://en.wikipedia.org/wiki/EAA%20Aviation%20Museum | The EAA Aviation Museum, formerly the EAA AirVenture Museum (or Air Adventure Museum), is a museum dedicated to the preservation and display of historic and experimental aircraft as well as antiques, classics, and warbirds. The museum is located in Oshkosh, Wisconsin, United States, adjacent to Wittman Regional Airport, home of the museum's sponsoring organization, the Experimental Aircraft Association (EAA), and the organization's EAA AirVenture Oshkosh event (the world's biggest fly-in and airshow) that takes place in late July/early August.
With over 200 aircraft, indoors and outdoors, and other exhibits and activities (including occasional aircraft rides nearby), the AirVenture Museum is a key tourist attraction in Oshkosh and is a center of activity throughout the AirVenture fly-in and airshow each summer. The museum is open year-round with the exception of a few holidays.
History
EAA founder Paul Poberezny proposed the idea of the EAA Air Museum-Air Education center in August 1958. In the late 1970s, his son, EAA president Tom Poberezny, led the campaign to build the current updated EAA museum and headquarters, which was officially opened in 1983.
The EAA library has been open to EAA members since 1985.
The museum opened an Education Center in July 2022. The new building includes a Pilot Proficiency Center.
Features and exhibits
The museum's collection displays more than 200 aircraft and 20,000 artifacts, including civilian and military aircraft of historic importance, and aircraft popular with aviation hobbyists—vintage, homebuilt, racing and stunt aircraft.
Some of the more historic and unusual planes include a Curtiss Pusher, Bleriot XI, Curtiss Jenny, Pitcairn PCA-2 autogyro, Sikorsky S-38 amphibian flying boat, and the Taylor Aerocar flying car, as well as various warbirds and Golden Age aircraft.
Other exhibits include functional replicas of the Wright Flyer and its predecessor, Octave Chanute's hang glider, French and German World War I fighters, Lindbergh's Ryan NYP "Spirit of St. Louis" replica (flown in the movie), and a replica of the historic Laird Super Solution 1931 racer.
A large section on Burt Rutan's aircraft includes several of his homebuilt designs, along with replicas of his globe-circling Rutan Voyager and of SpaceShipOne, the first private spacecraft, built by Rutan's own shop.
The museum has a variety of donated aircraft, including the Church Midwing, Funk B, Monnett Moni, and many homebuilt and kitplane aircraft (some foreign)—many built by the original designers. Notable homebuilts on display include the Van's Aircraft RV-3, designed by Richard VanGrunsven; the Christen Eagle II, designed by Frank Christensen; and Cirrus Aircraft's first model, the Cirrus VK-30, designed by the Klapmeier brothers.
Pioneer Airport
Pioneer Airport is an old grass airstrip immediately behind the museum.
Rides
Aircraft rides are offered through various EAA programs at the museum's Pioneer Airport, or at the adjoining Wittman Field, especially during AirVenture Fly-In and Airshow, typically in late summer.
Ford Tri-Motor rides
A 1920s/1930s vintage Ford Tri-Motor airliner sells rides occasionally at adjoining Wittman Field. One recurring program is the Fall Colors Flights: short flights to view the area's colorful fall foliage.
Boeing B-17 Flying Fortress rides
The EAA's 1940s-vintage Boeing B-17 Flying Fortress World War II bomber, the Aluminum Overcast, sells rides occasionally at adjoining Wittman Field when not on tour.
Helicopter rides
Helicopter rides, typically in Bell 47 ("MASH") helicopters, are available occasionally at adjacent Pioneer Airport, or from adjoining Wittman Field.
Children's section
The museum includes a children's section which provides extensive hands-on aviation-related exhibits and activities, most notably a 1/2-scale F-22 Raptor model, numerous flight simulators, and a "control tower" observation platform overlooking Pioneer Airport.
Location
The EAA Museum is near the northwest corner of the grounds of Wittman Regional Airport, on the southeast side of the interchange connecting Interstate 41 with Wisconsin state highways 44 and 91.
Gallery
The museum has over 200 aircraft on display and several other exhibits and activities. Below are some of the museum's most notable aircraft.
See also
Historic Aircraft Restoration Museum
List of aerospace museums
Mitchell Gallery of Flight
Young Eagles program
References
External links
EAA Young Eagles website
Experimental Aircraft Association
Aerospace museums in Wisconsin
Museums in Oshkosh, Wisconsin
Military and war museums in Wisconsin
Institutions accredited by the American Alliance of Museums | EAA Aviation Museum | [
"Engineering"
] | 995 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
5,607,752 | https://en.wikipedia.org/wiki/Renewable%20Energy%20Corporation | The Renewable Energy Corporation (REC) is a solar power company with headquarters in Singapore. REC produces silicon materials for photovoltaics (PV) applications and multicrystalline wafers, as well as solar cells and modules. It is a wholly-owned subsidiary of Reliance New Solar Energy Limited.
The previous parent company of REC was ChemChina, one of the largest chemical companies and state-owned by the People's Republic of China, which had held its stake since 2015 through Elkem, part of the China National Bluestar Group. The purchase price was 490 million euros. On 10 October 2021, Indian conglomerate Reliance Industries announced that its subsidiary, Reliance New Solar Energy Limited, had acquired complete control of REC from China National Bluestar Group for US$771 million.
History
The predecessor of today's company was established in 1996 under the name Fornybar Energi AS. Today's company is the result of a September 2000 merger of ScanWafer AS, SolEnergy AS and Fornybar Energi AS. In 2002 REC ScanCell started production of multicrystalline solar cells in Narvik for the sister company REC ScanModule in Glava, Arvika. REC Wafer was at the time the world's largest producer of multicrystalline wafers, with factories in Glomfjord and at Herøya.
Immediately after its IPO in 2006, the share price of the company soared, reaching a peak of NOK 262 in November 2007, corresponding to a market capitalization of NOK 174 billion. Based on this value, the company was at the time the largest wholly privately owned company in Norway.
In 2007, REC decided to build a new world-scale integrated solar manufacturing facility in Singapore, planned as the world's largest integrated solar manufacturing complex. When completed, the complex was to incorporate wafer, cell and module production facilities, with a production capacity of up to 1.5 gigawatts (GW).
The development of this site was projected to enable REC to deliver solar products that could compete with traditional energy sources in the sunny areas of the world without government incentives.
In 2008 and 2009, two new factories for multicrystalline wafers were opened at Herøya.
In August 2008, REC made the decision to build a new facility for silicon manufacturing expansion in Bécancour, Quebec, Canada. The decision included a 20-year power contract with Hydro-Québec for the delivery of electricity at a competitive industrial rate.
In 2010, fully automated and integrated production of wafers, cells, and panels began at the company's state-of-the-art factory in Singapore.
Crisis during 2008-2009
During 2008 and 2009, the company faced a crisis with falling income and increasing debt. As of May 2010, the market capitalization was down to 18 billion NOK. The large drop in value has been partially blamed on the financial crisis, which caused a near halving of the price of silicon wafers, as well as on increasing investment costs, in particular due to delays in opening a new factory in Moses Lake, Washington.
Crisis during 2011 and 2012
Due to the continued weak market conditions and the prospect of significant negative cash flow, the board of directors announced in October 2011 that REC would permanently close down the production capacity at the oldest multicrystalline wafer plants at Herøya, the multicrystalline wafer plant in Glomfjord and the solar cell plant in Narvik. The remaining Norwegian plant, at Herøya, was closed down in 2012, and the wafer subsidiary, REC Wafer Norway AS, announced plans to file for insolvency.
All of REC's solar cell and wafer production plants in Norway were shut down permanently in 2011 and 2012 due to weak market conditions and prospects of significant negative cash flow.
Split
In 2013, REC announced that it would establish REC Solar as an independently listed company, while REC itself would continue operating in the polysilicon business. REC changed its name to REC Silicon in October 2013. REC Silicon consisted of the silicon manufacturing facilities at Moses Lake, Washington, and Butte, Montana, and is based in the United States.
At these plants, REC Silicon produces polysilicon and silane gas for the solar industry and the electronics industry. REC Silicon produced 21,405 MT of polysilicon in 2012, and targeted production of 20,000 MT of polysilicon in 2013.
Present day
In 2023, REC announced the REC@NUS Corporate R&D Laboratory for Next Generation Photovoltaics project with the National University of Singapore.
Operations
Solar
REC produces multicrystalline solar cells and solar panels.
Commercial agreements
REC entered into a significant long-term agreement for the supply of mono-crystalline silicon wafers to Suniva, Inc. Under the agreement, REC was to deliver wafers worth more than US$300 million until 2013.
REC also entered into a significant long-term agreement for the supply of mono-crystalline silicon wafers to China Sunergy Co. Ltd. Under the agreement, REC was to deliver wafers worth more than US$400 million until 2015. It was structured as a take-or-pay contract with pre-determined prices and volumes for the entire contract period. However, in 2009, this contract became the cause of a legal battle between REC and China Sunergy. As spot prices for wafers fell dramatically in 2009, China Sunergy found itself bound to prices well below spot, prompting a stall in purchasing and leading REC to terminate the contract.
Listings
The shares have been listed on the Photovoltaik Global 30 Index since that stock index's inception in 2009. Over the past few years, REC's stock price has gradually declined as a result of difficult conditions in the solar industry, caused by overcapacity and heavy price pressure on solar products.
See also
Economic Development Board (EDB)
Low cost solar cell
Take-or-pay contract
References
External links
Solar energy companies
Solar power in Norway
Engineering companies of Norway
Electric power companies of Norway
Photovoltaics manufacturers
Manufacturing companies of Norway
Companies based in Oslo
Energy companies established in 1996
Manufacturing companies established in 1996
Renewable resource companies established in 1996
Companies listed on the Oslo Stock Exchange
Norwegian brands
Norwegian companies established in 1996
Reliance Industries | Renewable Energy Corporation | [
"Engineering"
] | 1,297 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
14,637,814 | https://en.wikipedia.org/wiki/Polymerase%20stuttering | Polymerase stuttering is the process by which a polymerase transcribes a nucleotide several times without progressing further on the mRNA chain. It is often used in addition of poly A tails or capping mRNA chains by less complex organisms such as viruses.
Process
A polymerase may undergo stuttering as a probability-controlled event; it is not explicitly controlled by any mechanism in the transcription process. Generally, it results from many short, repeated frameshifts on a slippery sequence of nucleotides on the RNA strand. However, the frameshift is restricted to one (in some cases two) nucleotides by a pseudoknot or choke points on both sides of the sequence.
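The probabilistic picture above can be illustrated with a minimal simulation sketch, assuming a toy model in which the polymerase copies a sequence one base at a time and, while on a slippery homopolymer run, has a fixed per-step chance of slipping back and copying the same base again. The sequence, slip probability, and function name are illustrative assumptions, not taken from any particular study, and base complementarity is ignored for simplicity:

```python
import random

def transcribe_with_stuttering(template, slippery_base="U", slip_prob=0.3, seed=None):
    """Toy model of polymerase stuttering on a slippery homopolymer run.

    The polymerase copies the template one base at a time; while it sits on a
    run of `slippery_base`, each step has probability `slip_prob` of re-copying
    the same position (a one-nucleotide backward slip), which lengthens the
    corresponding run in the product -- the way a poly(A) tail can be grown.
    Base complementarity is ignored for simplicity.
    """
    rng = random.Random(seed)
    product = []
    i = 0
    while i < len(template):
        product.append(template[i])
        if template[i] == slippery_base and rng.random() < slip_prob:
            continue   # slip: stay on the same template base and copy it again
        i += 1         # normal progression to the next template base
    return "".join(product)

# A short slippery poly(U) tract flanked by ordinary sequence.
print(transcribe_with_stuttering("GCAUUUUUUUGC", seed=1))
```

On average the slippery run comes out longer in the product than in the template, while the flanking sequence is copied exactly once.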
Examples
A polymerase that exhibits this behavior is RNA-dependent RNA polymerase, present in many RNA viruses. Reverse transcriptase has also been observed to undergo this polymerase stuttering.
Literature
Genetics | Polymerase stuttering | [
"Biology"
] | 185 | [
"Genetics"
] |
14,638,086 | https://en.wikipedia.org/wiki/Ilya%20Zbarsky | Ilya Borisovich Zbarsky (; 8 November 1913 – 9 November 2007) was a Soviet and Russian biochemist who served as the head of Lenin's Mausoleum from 1956 to 1989. He was appointed as Advisor at the Direction of the Institute in 1989 due to his age. He was the son of Boris Zbarsky, who helped mummify Lenin's body in 1924. Zbarsky was a member of the Russian Academy of Medical Sciences.
With Samuel Hutchinson, he was the author of the book Lenin's Embalmers.
He died on 9 November 2007, in Moscow.
References and sources
Ilya Borisovich Zbarsky biography
References
1913 births
2007 deaths
20th-century Russian chemists
People from Kamianets-Podilskyi
Academicians of the Russian Academy of Medical Sciences
Academicians of the USSR Academy of Medical Sciences
Moscow State University alumni
Recipients of the Order of Friendship of Peoples
Recipients of the Order of the Red Banner of Labour
Molecular biologists
Russian biochemists
Russian Jews
Soviet biochemists
Soviet Jews | Ilya Zbarsky | [
"Chemistry"
] | 210 | [
"Molecular biologists",
"Biochemists",
"Molecular biology"
] |
14,638,435 | https://en.wikipedia.org/wiki/Parts%20of%20a%20theatre | There are different types of theatres, but they all have three major parts in common. Theatres are divided into two main sections, the house and the stage; there is also a backstage area in many theatres. The house is the seating area for guests watching a performance and the stage is where the actual performance is given. The backstage area is usually restricted to people who are producing or in the performance.
Types of theatres
Arena: A large open space with seating capacity for very large groups. Seating layouts are typically similar to theatre in the round or to a proscenium layout (though the stage will not have a proscenium arch). In almost all cases the playing space is made of temporary staging (risers) and is elevated a few feet higher than the first rows of audience.
Black box theatre: An unadorned space with no defined playing area. Often the seating is not fixed allowing the room to be re-configured for the demands of a specific production. Typically the seating and performance space are on the same level.
Proscenium: The audience directly faces the playing area which is separated by a portal called the proscenium arch.
Theatre in the round: The playing area is surrounded by audience seating on all sides.
Thrust: The playing area protrudes out into the house with the audience seating on 3 sides.
Traverse: The elongated playing area is surrounded by audience seating on two sides. Similar in design to a fashion show runway.
Stage
The area of the theatre in which the performance takes place is referred to as the stage.
Stage directions or stage positions
In order to keep track of how performers and set pieces move around the space, the stage is divided into sections oriented from the performer's perspective when facing the audience. Movement is choreographed through blocking: organized movement on stage created by the director to coordinate the actors' positions and movement using these stage positions.
Upstage: The area of the stage furthest from the audience.
Downstage: The area of the stage closest to the audience.
Stage Left: The area of the stage to the performer's left, when facing downstage (i.e. towards the audience).
Stage Right: The area of the stage to the performer's right, when facing downstage (i.e. towards the audience).
Center Stage: The center of the playing (performance) area.
Center Line: An imaginary reference line on the playing area that indicates the exact center of the stage, running from upstage to downstage.
Onstage: The portion of the playing area visible to the audience.
Offstage: The area surrounding the playing space not visible to the audience. Typically this refers to spaces accessible to the performers but not the audience, such as the wings, crossovers, and voms.
Note that for performance spaces with audiences in more than one orientation, typically one direction is arbitrarily denoted as "downstage" and all other directions reference that point.
Stage components
Apron: The area of the stage in front of the proscenium arch, which may be small or, in a thrust stage, large.
Backstage: Areas of the theatre adjacent to the stage accessible only to performers and technicians, including the wings, crossover, and dressing rooms. Typically this refers to areas directly accessible from the stage and does not include spaces such as the control booth or orchestra pit.
Crossover: The area used by performers and technicians to travel between sides of the stage out of sight of the audience; sometimes created onstage with flats, or masking and drapery.
Plaster Line: An imaginary reference line on the playing area that indicates where the proscenium arch is. Typically, the plaster line runs across the stage at the back face (upstage face) of the proscenium wall.
Portal or Proscenium Arch: An open frame on a proscenium stage that divides the audience from the stage in traditional Western theatres.
Prompt corner: Area just to one side of the proscenium where the stage manager stands to cue the show and prompt performers.
Rake: A slope in the performance space (stage), rising away from the audience.
Safety curtain: A heavy fireproof curtain, in fiberglass, iron or similar material placed immediately behind the proscenium.
Shell: A hard, often removable surface, designed to reflect sound out into the audience for musical performances.
Smoke Pocket: Vertical channels against the proscenium designed to contain the safety curtain.
Thrust stage: A performance space projecting well in front of the proscenium arch, usually with the audience on three sides.
Wings: Areas that are part of a stage deck but offstage (out of sight of the audience). The wings are typically masked with legs. The wing space is used for performers preparing to enter, storage of sets for scenery changes and as a stagehand work area. Wings also contain technical equipment, such as the fly system.
In the dressing room there is a makeup bench, chairs and mirrors.
House
The house can refer to any area which is not considered playing space or backstage area. Outside the theatre itself this includes the lobby, coat check, ticketing counters, and restrooms. More specifically, the house refers to any area in the theatre where the audience is seated. This can also include aisles, the orchestra pit, control booth, balconies and boxes.
Orchestra or Orchestra Pit: In productions where live music is required, such as ballet, folk-dance groups, opera, and musicals, the orchestra is positioned in front of and below the stage in a pit. The pit is usually a large opening whose width, length and depth vary with the venue. Some orchestra pits have lifts or elevators that can raise the floor of the pit up to the same height as the stage. This allows for easier movement of instruments among other things. Often an orchestra pit will be equipped with a removable pit cover which provides safety by eliminating the steep drop-off and also increases the available acting area above. In most cases, some sort of lattice or sound port is built into the front of the orchestra pit, to allow audience members in the front rows to hear the music while still having a wall to keep them separated from the orchestra. The orchestra pit is the area closest to the audience.
Auditorium: The section of the theatre designated for the viewing of a performance. Includes the patrons' main seating area, balconies, boxes, and entrances from the lobby. Typically the control booth is located in the back of the auditorium, although for some types of performance an audio mixing position is located closer to the stage within the seating.
Vomitorium: A passage situated below or behind a tier of seats.
Control booth: The section of the theatre designated for the operation of technical equipment, followspots, lighting and sound boards, and is sometimes the location of the stage manager's station. The control booth is located in the theatre in such a way that there is a good, unobstructed view of the playing area without causing any (or minimal) distraction to the audience (i.e. preventing distracting light leak or noise), and is generally an enclosed space.
Catwalks: A catwalk is a section of the house hidden in the ceiling from which many of the technical functions of a theatre, such as lighting and sound, may be manipulated.
Front of house
Lobby: The lobby is a room in a theatre which is used for public entry to the building from the outside. Ticket counters, coat check, concessions and restrooms are all usually located in, or just off the lobby.
Box office: A place where tickets are sold to the public for admission to a venue
Marquee: Signage stating either the name of the establishment or the play and the artist(s) appearing at that venue.
Backstage or offstage
The areas of a theatre that are not part of the house or stage are considered part of backstage. These areas include dressing rooms, green rooms, offstage areas (i.e. wings), cross-overs, fly rails or linesets, dimmer rooms, shops and storage areas.
Dressing rooms: Rooms where cast members apply wigs, make-up and change into costumes. Depending on the size of the theatre, there may be only a male and female dressing room, or there might be many (i.e. one for each member of the cast). Often in larger spaces, cast members in lead roles have their own dressing room, those in supporting roles share with one or two others and those in the background or "chorus" roles share with up to 10 or 15 other people. Dressing rooms generally feature a large number of switchable outlets for accessories like hair dryers, straightening irons, and curlers. They also feature mirrors, which are often lit. Sinks are present for the removal of makeup and sometimes a dressing room will have showers and restrooms attached. Lockers, or costume racks are generally used for storage of costumes. In some performances, dressing rooms are used as a secondary green room because of space limitation or noise, especially by performers with long breaks between stage appearances.
Green room: The lounge backstage. This is the room where actors and other performers wait in when they are not needed onstage or in their dressing rooms.
Crossover: A crossover is a hallway, room, or catwalk designed to allow actors in a theater to move from wings on one side of a stage to wings on the other side without being seen by the audience. Sometimes this is built as a part of the theater, sometimes exiting the building is required, and still other times the set includes a false wall to create a temporary crossover. A trap room, orchestra pit, or even the front of house can be used as crossovers.
Fly system: A fly system is a system of ropes, counterweights, pulleys, and other such tools designed to allow a technical crew to quickly move set pieces, lights, and microphones on and off stage quickly by "flying" them in from a large opening above the stage known as a fly tower/flyspace.
Catwalk: A catwalk is an elevated platform from which many of the technical functions of a theatre, such as lighting and sound, may be manipulated.
Dimmer room: The room backstage which contains the dimmer racks which power the lighting rig in the theatre. Often dimmer racks may not be housed in dedicated room, instead they may be in a mechanical room, control booth, or catwalk, or even on the side of the stage as is often the case on Broadway, touring shows, or at corporate events. When the dimmers are stored onstage, this area of the stage is known as the "Dimmer Beach". In the UK it is known as "Dimmer City".
Shops and storage areas: Depending on the space available a theatre may have its own storage areas for old scenic and costume elements as well as lighting and sound equipment. The theatre may also include its own lighting, scenic, costume and sound shops. In these shops each element of the show is constructed and prepared for each production.
Call board: Literally a backstage bulletin board which contains information about a theatrical production including contact sheets, schedules, rehearsal time changes, etc.
Trap room: A large open space under the stage of many large theatres. The trap room allows the stage floor to be leveled, extra electrical equipment to be attached, and most importantly, the placement of trap doors onto the stage (hence the name). It is usually unfinished and often doubles as a storage area. It is often also used as a substitute for a crossover.
References
Sanders, T. (2018). An introduction to technical theatre - commonknowledge. Pacific University Press. Retrieved 5 August 2023. https://commons.pacificu.edu/work/sc/b2e02743-2b36-4018-821c-55daa5305cf6
External links | Parts of a theatre | [
"Technology"
] | 2,422 | [
"Parts of a theatre",
"Components"
] |
14,638,490 | https://en.wikipedia.org/wiki/Severe%20combined%20immunodeficiency%20%28non-human%29 | The severe combined immunodeficiency (SCID) is a severe immunodeficiency genetic disorder that is characterized by the complete inability of the adaptive immune system to mount, coordinate, and sustain an appropriate immune response, usually due to absent or atypical T and B lymphocytes. In humans, SCID is colloquially known as "bubble boy" disease, as victims may require complete clinical isolation to prevent lethal infection from environmental microbes.
Several forms of SCID occur in animal species. Not all forms of SCID have the same cause; different genes and modes of inheritance have been implicated in different species.
Horses
Equine SCID is an autosomal recessive disorder that affects the Arabian horse. Similar to the "bubble boy" condition in humans, an affected foal is born with no immune system, and thus generally dies of an opportunistic infection, usually within the first four to six months of life. There is a DNA test that can detect healthy horses who are carriers of the gene causing SCID, thus testing and careful, planned matings can now eliminate the possibility of an affected foal ever being born.
SCID is one of six genetic diseases known to affect horses of Arabian bloodlines, and the only one of the six for which there is a DNA test to determine if a given horse is a carrier of the allele. The only known form of horse SCID involves mutation in DNA-PKcs.
Unlike SCID in humans, which can be treated, for horses, to date, the condition remains a fatal disease. When a horse is heterozygous for the gene, it is a carrier, but perfectly healthy and has no symptoms at all. If two carriers are bred together, however, classic Mendelian genetics indicate that there is a 50% chance of any given mating producing a foal that is a carrier heterozygous for the gene, and a 25% risk of producing a foal affected by the disease. If a horse is found to carry the gene, the breeder can choose to geld a male or spay a female horse so that they cannot reproduce, or they can choose to breed the known carrier only to horses that have been tested and found to be "clear" of the gene. In either case, careful breeding practices can avoid ever producing an SCID-affected foal.
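As a quick check of the quoted ratios, the carrier-by-carrier cross can be enumerated directly. The sketch below uses hypothetical allele labels ("N" for the normal allele, "n" for the recessive SCID allele) and simply counts the four equally likely allele combinations:

```python
from itertools import product
from collections import Counter

# Each carrier parent (Nn) passes one allele at random: "N" (normal) or "n" (SCID).
parent_alleles = ["N", "n"]

offspring = Counter("".join(sorted(pair)) for pair in product(parent_alleles, repeat=2))
total = sum(offspring.values())

labels = {"NN": "clear", "Nn": "carrier", "nn": "affected"}
for genotype in ("NN", "Nn", "nn"):
    count = offspring[genotype]
    print(f"{genotype} ({labels[genotype]}): {count}/{total} = {count/total:.0%}")
# Expected output: NN (clear) 25%, Nn (carrier) 50%, nn (affected) 25%
```

This reproduces the 25% affected and 50% carrier figures quoted above for a carrier-to-carrier mating.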
Dogs
There are two known types of SCID in dogs, an X chromosome-linked form that is very similar to X-SCID in humans, and an autosomal recessive form that is similar to the disease in Arabian horses and SCID mice.
X-SCID in dogs (caused by IL2RG mutation) is seen in Basset Hounds and Cardigan Welsh Corgis. Because it is an X-linked disease, females are carriers only and disease is seen in males exclusively. It is caused by a mutation in the gene for the cytokine receptor common gamma chain. Recurring infections are seen and affected animals usually do not live beyond three to four months. Characteristics include a poorly developed thymus gland, decreased T-lymphocytes and IgG, absent IgA, and normal quantities of IgM. A common cause of death is canine distemper, which develops following vaccination with a modified live distemper virus vaccine. Due to its similarity to X-SCID in humans, breeding colonies of affected dogs have been created in order to study the disease and test treatments, particularly bone marrow transplantation and gene therapy.
The autosomal recessive form of SCID has been identified in one line of Jack Russell Terriers. It is caused by a loss of the DNA-dependent protein kinase catalytic subunit (DNA-PKcs, encoded by PRKDC), which leads to faulty V(D)J recombination. V(D)J recombination is necessary for recognition of a diverse range of antigens from bacteria, viruses, and parasites. The disease is characterized by nonfunctional T and B lymphocytes and a complete lack of gammaglobulins. Death is secondary to infection. Differences between this disease and the form found in Bassets and Corgis include a complete lack of IgM and the presence of the disease in females.
Mice
SCID mice are routinely used as model organisms for research into the basic biology of the immune system, cell transplantation strategies, and the effects of disease on mammalian systems. They have been extensively used as hosts for normal and malignant tissue transplants. In addition, they are useful for testing the safety of new vaccines or therapeutic agents in immunocompromised individuals.
The condition is due to a rare recessive mutation on Chromosome 16 responsible for deficient activity of an enzyme involved in DNA repair (Prkdc or "protein kinase, DNA activated, catalytic polypeptide"). Because V(D)J recombination does not occur, the humoral and cellular immune systems fail to mature. As a result, SCID mice have an impaired ability to make T or B lymphocytes, may not activate some components of the complement system, and cannot efficiently fight infections, nor reject tumors and transplants.
In addition to the natural mutation form, SCID in mice can also be created by a targeted knockout of Prkdc. Other human forms of SCID can similarly be mimicked by mutation in genes such as IL2RG (creating a form similar to X-linked SCID). By crossing SCID mice with these other mice, more severely immunocompromised strains can be created to further aid research (e.g. by being less likely to reject transplants). The degree to which the various components of the immune system are compromised varies according to what other mutations the mice carry along with the SCID mutation.
Artificial models
In addition to the natural mutations above, humans have also engineered model organisms to have SCID.
Two laboratory rat models were created in 2022, one having Prkdc knocked out, the other having both Prkdc and Rag2 knocked out.
See also
Severe combined immunodeficiency, for a detailed overview of the condition in humans and an in-depth scientific explanation of the disease
Foal immunodeficiency syndrome
Animal testing on rodents
References
Mammal diseases
Severe combined
Combined T and B–cell immunodeficiencies
Horse diseases | Severe combined immunodeficiency (non-human) | [
"Biology"
] | 1,308 | [
"Model organisms",
"Animal models"
] |
14,639,061 | https://en.wikipedia.org/wiki/Intellifont | Intellifont is a scalable font technology developed by Tom Hawkins at Compugraphic in Wilmington, Massachusetts during the late 1980s, the patent for which was granted to Hawkins in 1987. Intellifont fonts were hinted on a Digital Equipment Corporation VAX mainframe computer using Ikarus software. In 1990, printer and computing system manufacturer Hewlett-Packard adopted Intellifont scaling as part of its PCL 5 printer control protocol, and Intellifont technology was shipped with HP LaserJet III and 4 printers. In 1991, Commodore released AmigaOS 2.04, which included a version of diskfont.library that contained the Bullet font scaling engine (which in Workbench 2.1 became a separate library called bullet.library), with native support for the format. Intellifont technology became part of Agfa-Gevaert's Universal Font Scaling Technology (UFST), which allows OEMs to produce printers capable of printing on either the Adobe systems PostScript or HP PCL language.
See also
PCL
References
Further reading
(NB. FAIS = Font Access Interchange Standard.)
External links
Font archive
Font engine's API
Method for construction of a scalable font database
Typesetting
Digital typography
Font formats
Wilmington, Massachusetts
AmigaOS | Intellifont | [
"Technology"
] | 266 | [
"AmigaOS",
"Computing platforms"
] |
14,640,471 | https://en.wikipedia.org/wiki/Mars | Mars is the fourth planet from the Sun. The surface of Mars is orange-red because it is covered in iron(III) oxide dust, giving it the nickname "the Red Planet". Mars is among the brightest objects in Earth's sky, and its high-contrast albedo features have made it a common subject for telescope viewing. It is classified as a terrestrial planet and is the second smallest of the Solar System's planets with a diameter of . In terms of orbital motion, a Martian solar day (sol) is equal to 24.6 hours, and a Martian solar year is equal to 1.88 Earth years (687 Earth days). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos.
The relatively flat plains in northern parts of Mars strongly contrast with the cratered terrain in the southern highlands – this terrain observation is known as the Martian dichotomy. Mars hosts many enormous extinct volcanoes (the tallest being Olympus Mons) and one of the largest canyons in the Solar System (Valles Marineris). Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, dust devils sweeping across the landscape, and cirrus clouds. Carbon dioxide is substantially present in Mars's polar ice caps and thin atmosphere. Over the course of a year the surface undergoes large temperature swings, and Mars has seasons similar to Earth's, as both planets have significant axial tilt.
Mars was formed approximately 4.5 billion years ago. During the Noachian period (4.5 to 3.5 billion years ago), Mars's surface was marked by meteor impacts, valley formation, erosion, and the possible presence of water oceans. The Hesperian period (3.5 to 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, has been marked by the wind as a dominant influence on geological processes. Due to Mars's geological history, the possibility of past or present life on Mars remains of great scientific interest.
Since the late 20th century, Mars has been explored by uncrewed spacecraft and rovers, with the first flyby by the Mariner 4 probe in 1965, the first orbit by the Mars 2 probe in 1971, and the first landing by the Viking 1 probe in 1976. As of 2023, there are at least 11 active probes orbiting Mars or on the Martian surface. Mars is an attractive target for future human exploration missions, though in the 2020s no such mission is planned.
Natural history
Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind.
After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet.
A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 billion years to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; and Phobos would be a remnant of that ring.
The geological history of Mars can be split into many periods, but the following are the three primary periods:
Noachian period: Formation of the oldest extant surfaces of Mars, 4.5 to 3.5 billion years ago. Noachian age surfaces are scarred by many large impact craters. The Tharsis bulge, a volcanic upland, is thought to have formed during this period, with extensive flooding by liquid water late in the period. Named after Noachis Terra.
Hesperian period: 3.5 to between 3.3 and 2.9 billion years ago. The Hesperian period is marked by the formation of extensive lava plains. Named after Hesperia Planum.
Amazonian period: between 3.3 and 2.9 billion years ago to the present. Amazonian regions have few meteorite impact craters but are otherwise quite varied. Olympus Mons formed during this period, with lava flows elsewhere on Mars. Named after Amazonis Planitia.
Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches.
Physical characteristics
Mars is approximately half the diameter of Earth, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's hot deserts. The red-orange appearance of the Martian surface is caused by rust. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present.
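These ratios are mutually consistent: surface gravity scales as mass divided by the square of the radius, and volume scales as the cube of the radius. A quick check using commonly quoted approximate values (about 0.11 for the mass ratio and 0.53 for the radius ratio, close to the figures in the text):

```python
mass_ratio = 0.107     # Mars / Earth mass (≈ 11%)
radius_ratio = 0.532   # Mars / Earth radius (≈ half the diameter)

gravity_ratio = mass_ratio / radius_ratio**2   # g ∝ M / R²
volume_ratio = radius_ratio**3                 # V ∝ R³

print(f"surface gravity ≈ {gravity_ratio:.0%} of Earth's")  # ≈ 38%
print(f"volume ≈ {volume_ratio:.0%} of Earth's")            # ≈ 15%
```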
Internal structure
Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is thinnest in Isidis Planitia and thickest in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminium, calcium, and potassium. Mars is confirmed to be seismically active; in 2019 it was reported that InSight had detected and recorded over 450 marsquakes and related events.
Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick.
Mars's iron and nickel core is completely molten, with no solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen.
Surface geology
Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust.
Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded.
The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth. They are necessary for growth of plants. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans.
Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms.
Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day (22 millirads per day) experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth.
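For a sense of scale, the daily figures above can be accumulated over a hypothetical mission profile. The transit and surface-stay durations below are illustrative assumptions, not values from the text:

```python
transit_dose_per_day = 1.84   # mSv/day in transit (from the text)
surface_dose_per_day = 0.64   # mSv/day on the Martian surface (from the text)

transit_days = 2 * 180        # assumed out-and-back transit time (hypothetical)
surface_days = 500            # assumed surface stay (hypothetical)

total_dose_mSv = transit_days * transit_dose_per_day + surface_days * surface_dose_per_day
print(f"total mission dose ≈ {total_dose_mSv:.0f} mSv under these assumptions")  # ≈ 982 mSv
```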
Geography and features
Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars.
Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers.
Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe.
Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection.
Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined as the elevation at which the atmospheric pressure equals the triple-point pressure of water, which is about 0.6% of the sea-level surface pressure on Earth (0.006 atm).
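The two statements in this definition agree numerically: 0.6% of Earth's standard sea-level pressure is close to the triple-point pressure of water. A quick arithmetic check (the constants used are standard reference values, not taken from the text):

```python
earth_sea_level_pa = 101_325       # standard atmosphere, in pascals
fraction_of_earth = 0.006          # 0.6% of Earth's sea-level pressure (from the text)
water_triple_point_pa = 611.7      # triple-point pressure of water, in pascals

mars_reference_pa = earth_sea_level_pa * fraction_of_earth
print(f"0.6% of 1 atm ≈ {mars_reference_pa:.0f} Pa; "
      f"triple point of water ≈ {water_triple_point_pa:.0f} Pa")
```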
For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023.
Volcanoes
The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is hundreds of kilometres across, and because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is a little over twice the height of Mauna Kea as measured from its base on the ocean floor, and the total elevation change from the plains of Amazonis Planitia to the northwest up to the summit is roughly three times the height of Mount Everest. Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta.
Impact topography
The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater roughly the combined area of Europe, Asia, and Australia, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System.
Mars is scarred by a number of impact craters: in total, some 43,000 craters above a minimum catalogued diameter have been observed. The largest exposed crater is Hellas, a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre and Isidis. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact.
Tectonic sites
The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) is comparable in length to Europe and extends across one-fifth of the circumference of Mars, dwarfing Earth's Grand Canyon in both length and depth. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where a large amount of transverse motion has occurred, making Mars a planet with possibly a two-tectonic-plate arrangement.
Holes and caves
Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". The cave entrances vary in width, and the caves are estimated to extend far below the surface. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and its depth has been measured directly. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface.
Atmosphere
Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure at the surface varies with elevation, from its lowest atop Olympus Mons to its highest in Hellas Planitia; the resulting mean surface pressure is only 0.6% of Earth's. Even the highest atmospheric density on Mars is comparable to that found far above Earth's surface. The scale height of the Martian atmosphere is greater than Earth's because the surface gravity of Mars is only about 38% of Earth's.
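The claim that lower gravity gives Mars a larger scale height can be checked with the isothermal scale-height formula H = RT/(Mg). The temperatures and compositions below are rough illustrative assumptions (a CO2-dominated atmosphere for Mars, an N2/O2 mixture for Earth), so the results are only indicative:

```python
R = 8.314  # J/(mol·K), universal gas constant

def scale_height_km(temperature_k, molar_mass_kg_per_mol, gravity_m_s2):
    """Isothermal scale height H = R*T / (M*g), returned in kilometres."""
    return R * temperature_k / (molar_mass_kg_per_mol * gravity_m_s2) / 1000.0

# Rough representative values, used here purely for illustration.
mars_h = scale_height_km(210, 0.044, 3.71)    # cold CO2 atmosphere, Mars gravity
earth_h = scale_height_km(288, 0.029, 9.81)   # temperate N2/O2 atmosphere, Earth gravity

print(f"Mars scale height  ≈ {mars_h:.1f} km")   # ≈ 10.7 km
print(f"Earth scale height ≈ {earth_h:.1f} km")  # ≈ 8.4 km
```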
The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. The concentration of methane in the Martian atmosphere fluctuates from about 0.24 ppb during the northern winter to about 0.65 ppb during the summer. Estimates of its lifetime range from 0.6 to 4 years, so its presence indicates that an active source of the gas must be present. Methane could be produced by non-biological process such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life.
Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above.
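The quoted speeds are roughly what the ideal-gas expression c = sqrt(γRT/M) predicts for a cold, CO2-dominated atmosphere. The temperature and heat-capacity ratio below are illustrative assumptions, not measured values from the text:

```python
from math import sqrt

R = 8.314           # J/(mol·K)
gamma_co2 = 1.3     # approximate heat-capacity ratio of CO2
molar_mass = 0.044  # kg/mol for CO2
temperature = 240   # K, an assumed representative near-surface temperature

speed_of_sound = sqrt(gamma_co2 * R * temperature / molar_mass)
print(f"speed of sound ≈ {speed_of_sound:.0f} m/s")  # ≈ 243 m/s, near the quoted range
```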
Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month.
Climate
Mars has seasons that alternate between its northern and southern hemispheres, similar to those on Earth. Additionally, the orbit of Mars has a larger eccentricity than Earth's, and the planet approaches perihelion when it is summer in the southern hemisphere and winter in the northern, and aphelion when it is winter in the southern hemisphere and summer in the northern. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case. The summer temperatures in the south can be substantially warmer than the equivalent summer temperatures in the north.
Martian surface temperatures vary widely, from frigid lows in winter and at night to highs above freezing in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight.
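The 43% figure follows directly from the inverse-square law applied to the 1.52 ratio of orbital distances quoted above:

```python
distance_ratio = 1.52   # Mars's mean distance from the Sun relative to Earth's (from the text)

relative_insolation = 1 / distance_ratio**2
print(f"sunlight per unit area ≈ {relative_insolation:.0%} of Earth's")  # ≈ 43%
```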
Mars has the largest dust storms in the Solar System, with winds that can reach high speeds. These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature.
The seasons also produce deposits of dry ice that cover the polar ice caps and large areas of the surrounding polar regions of Mars.
Hydrology
Mars contains water in substantial amounts, but most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet.
Water in its liquid form cannot persist on the surface of Mars because of the low atmospheric pressure, which is less than 1% of Earth's; only at the lowest elevations are pressure and temperature high enough for water to remain liquid for short periods.
The amount of water in the atmosphere is small, but it is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow.
Past hydrosphere
Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is much longer than the Grand Canyon and is broad and deep in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago.
Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases.
Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water.
History of observations and findings of water evidence
In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet.
On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive.
Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = 9.3 ± 1.7 × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the Korolev Crater, which the Mars Express orbiter found to be filled with water ice.
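Dividing the quoted ratios confirms the stated multiple:

$$\frac{9.3\times10^{-4}}{1.56\times10^{-4}} \approx 6.0,$$

squarely within the five-to-seven-fold range.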
In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
Orbital motion
Mars's average distance from the Sun is roughly 1.5 times that of Earth, and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second-lowest of any planet relative to Earth.
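As a consistency check on the figures above (taking an Earth year as 365.25 days):

$$\frac{687\ \text{days}}{365.25\ \text{days}} \approx 1.881\ \text{Earth years},$$

in agreement with the 1.8809-year value quoted.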
The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb.
Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years.
Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days (about 2.1 years), although the interval between successive oppositions can range from 764 to 812 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Solar System and form a straight line crossing the Sun. The distance at close approach varies because of the planets' elliptical orbits, which causes comparable variation in angular size; when farthest apart, Mars and Earth are more than seven times as distant as at their closest approach. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition.
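The synodic period itself follows from combining the two orbital periods; using the 687-day Martian year quoted above and a 365.25-day Earth year,

$$\frac{1}{T_{\text{syn}}} = \frac{1}{365.25\ \text{d}} - \frac{1}{687\ \text{d}} \quad\Longrightarrow\quad T_{\text{syn}} \approx 780\ \text{days}.$$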
The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Because of Earth's atmosphere, optical ground-based telescopes are limited in the surface detail they can resolve even when Earth and Mars are closest.
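On the astronomical magnitude scale, a difference of $\Delta m$ magnitudes corresponds to a brightness ratio of $100^{\Delta m/5}$; between the extremes quoted above,

$$100^{(1.86-(-3.0))/5} = 100^{0.972} \approx 88,$$

so Mars at its brightest appears nearly ninety times brighter than at its faintest.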
As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval.
Moons
Mars has two natural moons, Phobos and Deimos, which are small compared to Earth's Moon and orbit close to the planet. The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit.
Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης).
From the surface of Mars, the motions of Phobos and Deimos appear different from that of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit (where the orbital period would match the planet's period of rotation), rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet.
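The 11-hour figure can be reproduced from Phobos's sidereal orbital period of roughly 7.66 hours (a standard value, assumed here rather than stated in the text) together with the 24.66-hour Martian solar day given in the orbital-motion section: the apparent interval between successive risings is

$$\left(\frac{1}{7.66\ \text{h}} - \frac{1}{24.66\ \text{h}}\right)^{-1} \approx 11.1\ \text{hours}.$$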
The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered small moons, and a dust ring is predicted to exist between Phobos and Deimos.
A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite.
Human observations and exploration
The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth.
Ancient and medieval observations
The ancient Sumerians named Mars Nergal, the god of war and plague. During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet came to be known commonly by the name of the god Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake.
In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" based on the Wuxing system.
In the late sixteenth century, Tycho Brahe measured the diurnal parallax of Mars, which Johannes Kepler used to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. When the telescope became available, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. In 1610, Mars was viewed by Italian astronomer Galileo Galilei, who was first to see it via telescope. The first person to draw a map of Mars that displayed any terrain features was the Dutch astronomer Christiaan Huygens.
Martian "canals"
By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals".
Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time.
The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations by Antoniadi in 1909, irregular patterns were observed, but no canali were seen.
Robotic exploration
Dozens of crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars by the Soviet Union, the United States, Europe, India, the United Arab Emirates, and China to study the planet's surface, climate, and geology. NASA's Mariner 4 was the first spacecraft to visit Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space.
Once spacecraft visited the planet during NASA's Mariner missions in the 1960s and 1970s, many previous conceptions of Mars were overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made, and the Mars Global Surveyor mission, which launched in 1996 and operated until late 2006, produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. These maps are available online at websites including Google Mars. Both the Mars Reconnaissance Orbiter and Mars Express continued exploring with new instruments and supporting lander missions. NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity.
Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover.
Planned missions to Mars include:
NASA's EscaPADE spacecraft, planned to launch in 2025.
The Rosalind Franklin rover mission, designed to search for evidence of past life, which was intended to be launched in 2018 but has been repeatedly delayed, with a launch date pushed to 2028 at the earliest. The project was restarted in 2024 with additional funding.
A current concept for a joint NASA-ESA mission to return samples from Mars would launch in 2026.
China's Tianwen-3, a sample return mission, scheduled to launch in 2030.
Debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components.
In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging.
Habitability and the search for life
During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars.
The current understanding of planetary habitability (the ability of a world to develop environmental conditions favorable to the emergence of life) favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life.
The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet.
Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive.
Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found in impact craters on Mars; such glass could likewise have preserved signs of Martian life, if any existed at those sites.
The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find does not allow a definitive determination of a biological or abiotic origin with the data currently available.
Human mission proposals
Several plans for a human mission to Mars have been proposed throughout the 20th and 21st centuries, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, as of 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and sustained initially by resupply from Earth and by in-situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars.
In culture
Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "the bringer of war". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri.
The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". High-resolution mapping of the surface of Mars revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears."
The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
See also
Astronomy on Mars
Notes
References
Further reading
External links
Mars Trek, an integrated map browser of maps and datasets for Mars
Google Mars and Google Mars 3D, interactive maps of the planet
First TV image of Mars (15 July 1965), CNN News; 15 July 2023
Articles containing video clips
Astronomical objects known since antiquity
Planets of the Solar System
Terrestrial planets
Solar System | Mars | [
"Astronomy"
] | 10,604 | [
"Outer space",
"Solar System"
] |
14,640,482 | https://en.wikipedia.org/wiki/Turtle%20Island%20%28book%29 | Turtle Island is a book of poems and essays written by Gary Snyder and published by New Directions in 1974. The writings express Snyder's vision for humans to live in harmony with the earth and all its creatures. The book was awarded the Pulitzer Prize for Poetry in 1975. "Turtle Island" is a name for the continent of North America used by many Native American tribes.
Background
By the late 1950s, Snyder had established himself as one of the major American poets of his generation. He was associated with both the Beat Generation and the regional San Francisco Renaissance. He spent much of the 1960s traveling between California and Japan, where he studied Zen. In 1966, he met Masa Uehara while in Osaka. They married the following year and had their first child, Kai, in April 1968; by December, Snyder and his new family moved to California. His return coincided with the highest crest of 1960s counterculture, as well as the nascent environmental movement. He was received as an elder statesman by both the hippies and the environmentalists, and he became a public intellectual who gave public lectures, made television appearances, and published new writing.
Many of the poems and essays in the book had been previously published. The essay "Four Changes" first appeared in The Environmental Handbook, a collection published by David Brower and Friends of the Earth for the first Earth Day in 1970. "Four Changes" was initially published anonymously with no copyright notice, and consequently it was widely reproduced. One of the poems, "The Hudsonian Curlew", was first published in the November 1969 issue of Poetry magazine. Some of the poems were published in 1972 as a limited-edition collection titled Manzanita.
Many of the poems in Turtle Island are political in nature, like much of Snyder's poetry of the late 1960s, albeit with a different focus than that of his earlier writings. With American military involvement in the Vietnam War coming to a close, Snyder's attention had turned from matters of war and peace to environmental and ecological concerns. In 1973 several of Snyder's friends, interested in his new direction, gathered in Berkeley, California to hear him read his new work. At the reading, Snyder asked whether these political poems could "succeed as poetry"; his friends "reportedly refused to pass judgment" on the question. Later, the poet's UC Davis colleague Jack Hicks related words from a female graduate student who took one of Snyder's classes in the late 1980s: "there are two kinds of political poetry: Suckers—rare—seduce you to the point. Whackers assault you with the message.... I cited Turtle Island as a blatant whacker, and Gary defended it strongly. But first he listened."
Contents
Turtle Island is split into four sections. The first three—Manzanita, Magpie's Song, and For the Children—include a total of almost 60 poems, while the fourth section, Plain Talk, includes five prose essays. The collection includes many of Snyder's most commonly quoted and anthologized poems. There is also an introduction, in which Snyder explains the significance of the book's title.
Reception
In his review of Turtle Island in Poetry magazine, critic Richard Howard commented that the book describes "where we are and where he wants us to be," although the difference between those two is "so vast that the largely good-humored resonance of the poems attests to Snyder's forbearance, his enforced detachment." He praised the book's poems for their meditative quality and their lack of preachiness or invective. He described the poems as "transitory, elliptical, extraterritorial" works, in which "the world becomes largely a matter of contours and traces to be guessed at, marveled over, left alone."
In Library Journal, James McKenzie wrote:
Writing for the Christian Science Monitor, Victor Howes praised the book's "gentle, uncomplicated love-lyrics to planet earth" and said it would be equally appealing to poetry readers and to conservationists. Herbert Leibowitz, writing for the New York Times Book Review, was less enthusiastic. While Leibowitz found merit in a select few poems and praised Snyder's prose as "vigorous and persuasive", he found the collection "flat, humorless ... uneventful ... [and] oddly egotistical". In his view, it was "a textbook example of the limits of Imagism." Still, the critic said he was "reluctant to mention these doubts" because he found Snyder's fundamental environmentalist message to be so laudatory, even "on the side of the gods."
The printing of the first American edition was limited to 2,000 copies. As of 2005, the book had been reprinted roughly once a year in the United States, placing it among a handful of Snyder's books that have never gone out-of-print. It has sold more than 100,000 copies. The book has been translated into Swedish (by Reidar Ekner in 1974), French (by Brice Matthieussent in 1977), Japanese (by Nanao Sakaki in 1978), and German (by Ronald Steckel in 1980).
Pulitzer Prize
Snyder received the Pulitzer Prize for Poetry for Turtle Island in May 1975. Because of Snyder's remoteness at Kitkitdizze, news of the award took some time to reach him. It was the first time a Pulitzer had been given to a poet from the West Coast. The prestigious award helped to legitimize Snyder's idiosyncratic worldview in the intellectual mainstream.
Along with the award itself, Snyder received a check for $1,000. According to his friend Steve Sanfield, Snyder quietly donated the money to a local volunteer organization that was building a new school in the San Juan Ridge area. Snyder maintained that the best perk of winning the Pulitzer Prize was that people no longer introduced him as "a Beat poet".
Citations
References
Bibliography
Journal and web articles
External links
Turtle Island at the Internet Archive; a digital copy of the book can be borrowed for 14 days with registration
Several poems from Turtle Island have been published online by the Poetry Foundation:
"The Bath"
"The Hudsonian Curlew"
"I Went into the Maverick Bar"
1974 in the environment
1974 non-fiction books
1974 poetry books
American essay collections
American poetry collections
Books about California
Books about environmentalism
Books about North America
Deep ecology
Ecology books
English-language books
Environmental non-fiction books
Essays about poetry
New Directions Publishing books
Pulitzer Prize for Poetry–winning works
Simple living | Turtle Island (book) | [
"Biology",
"Environmental_science"
] | 1,361 | [
"Biological hypotheses",
"Deep ecology",
"Biophilia hypothesis",
"Environmental ethics"
] |
14,640,519 | https://en.wikipedia.org/wiki/Fungal%20mating%20pheromone%20receptors | Fungal pheromone mating factor receptors form a distinct family of G-protein-coupled receptors.
Function
Mating factor receptors STE2 and STE3 are integral membrane proteins that may be involved in the response to mating factors on the cell membrane. The amino acid sequences of both receptors contain high proportions of hydrophobic residues grouped into 7 domains, in a manner reminiscent of the rhodopsins and other receptors believed to interact with G-proteins.
References
G protein-coupled receptors
Protein domains
Protein families
Membrane proteins | Fungal mating pheromone receptors | [
"Chemistry",
"Biology"
] | 103 | [
"Protein classification",
"Signal transduction",
"G protein-coupled receptors",
"Protein domains",
"Membrane proteins",
"Protein families"
] |
14,640,617 | https://en.wikipedia.org/wiki/Cyclic%20AMP%20receptors | Cyclic AMP receptors from slime molds are a distinct family of
G-protein coupled receptors. These receptors control development in
Dictyostelium discoideum.
In D. discoideum, the cyclic AMP receptors coordinate aggregation of individual cells into a multicellular organism, and regulate the expression of a large number of developmentally-regulated genes. The amino acid sequences of the receptors contain high proportions of hydrophobic residues grouped into 7 domains, in a manner reminiscent of the rhodopsins and other receptors believed to interact with G-proteins. However, while a similar 3D framework has been proposed to account for this, there is no significant sequence similarity between these families: the cAMP receptors thus bear their own unique '7TM' signature.
See also
cAMP receptor protein
References
G protein-coupled receptors
Protein domains
Protein families
Membrane proteins | Cyclic AMP receptors | [
"Chemistry",
"Biology"
] | 169 | [
"Protein classification",
"Signal transduction",
"G protein-coupled receptors",
"Protein domains",
"Membrane proteins",
"Protein families"
] |
14,640,744 | https://en.wikipedia.org/wiki/Turtle%20Island | Turtle Island is a name for Earth or North America, used by some American Indigenous peoples, as well as by some Indigenous rights activists. The name is based on a creation myth common to several indigenous peoples of the Northeastern Woodlands of North America.
A number of contemporary works continue to use and/or tell the Turtle Island creation story.
Lenape
The Lenape story of the "Great Turtle" was first recorded by Europeans between 1678 and 1680 by Jasper Danckaerts. The story is shared by other Northeastern Woodlands tribes, notably the Iroquois peoples.
The Lenape believe that before creation there was nothing, an empty dark space. However, in this emptiness, there existed a spirit of their creator, Kishelamàkânk. Eventually in that emptiness, he fell asleep. While he slept, he dreamt of the world as we know it today, the Earth with mountains, forests, and animals. He also dreamt up man, and he saw the ceremonies man would perform. Then he woke up from his dream to the same nothingness he was living in before. Kishelamàkânk then started to create the Earth as he had dreamt it.
First, he created helper spirits, the Grandfathers of the North, East, and West, and the Grandmother of the South. Together, they created the Earth just as Kishelamàkânk had dreamt it. One of their final acts was creating a special tree. From the roots of this tree came the first man, and when the tree bent down and kissed the ground, woman sprang from it.
All the animals and humans did their jobs on the Earth, until a problem eventually arose. There was a tooth of a giant bear that could give the owner magical powers, and the humans started to fight over it. Eventually, the wars got so bad that people moved away, and made new tribes and new languages. Kishelamàkânk saw this fighting and decided to send down a spirit, Nanapush, to bring everyone back together. He went on top of a mountain and started the first Sacred Fire, which gave off a smoke that caused all the people of the world to come investigate what it was. When they all came, Nanapush created a pipe with a sumac branch and a soapstone bowl, and the creator gave him Tobacco to smoke with. Nanapush then told the people that whenever they fought with each other, to sit down and smoke tobacco in the pipe, and they would make decisions that were good for everyone.
The same bear tooth later caused a fight between two evil spirits, a giant toad and an evil snake. The toad was in charge of all the waters, and amidst the fighting he ate the tooth and the snake. The snake then proceeded to bite his side, releasing a great flood upon the Earth. Nanapush saw this destruction and began climbing a mountain to avoid the flood, all the while grabbing animals that he saw and sticking them in his sash. At the top of the mountain there was a cedar tree that he started to climb, and as he climbed he broke off limbs of the tree. When he got to the top of the tree, he pulled out his bow, played it and sang a song that made the waters stop. Nanapush then asked which animal he could put the rest of the animals on top of in the water. The turtle volunteered saying he'd float and they could all stay on him, and that's why they call the land Turtle Island.
Nanapush then decided the turtle needed to be bigger for everyone to live on, so he asked the animals if one of them would dive down into the water to get some of the old Earth. The beaver tried first, but came up dead and Nanapush had to revive him. The loon tried second, but its attempt ended with the same fate. Lastly, the muskrat tried. He stayed down the longest, and came up dead as well, but he had some Earth on his nose that Nanapush put on the Turtles back. Because of his accomplishment, Nanapush told the muskrat he was blessed and his kind would always thrive in the land.
Nanapush then took out his bow and again sang, and the turtle started to grow. It kept growing, and Nanapush sent out animals to try to get to the edge to see how long it had grown. First, he sent the bear, and the bear returned in two days saying he had reached the end. Next, he sent out the deer, who came back in two weeks saying he had reached the end. Finally, he sent the wolf, and the wolf never returned because the land had gotten so big. Lenape tradition said wolves howl to call their ancestor back home.
Haudenosaunee
According to the oral tradition of the Haudenosaunee (or "Iroquois"), "the earth was the thought of [a ruler of] a great island which floats in space [and] is a place of eternal peace." Sky Woman fell down to the earth when it was covered with water, or more specifically, when there was a "great cloud sea". Various animals tried to swim to the bottom of the ocean to bring back dirt to create land. Muskrat succeeded in gathering dirt, which was placed on the back of a turtle. This dirt began to multiply and also caused the turtle to grow bigger. The turtle continued to grow bigger and bigger and the dirt continued to multiply until it became a huge expanse of land. Thus, when Iroquois cultures refer to the earth, they often call it Turtle Island.
According to Converse and Parker, the Iroquois faith shared with other religions the "belief that the Earth is supported by a gigantic turtle." In the Seneca language, the mythical turtle is called Hah-nu-nah, while the name for an everyday turtle is ha-no-wa.
In Susan M. Hill's version of the story, the muskrat or other animals die in their search for land for the Sky Woman (named Mature Flower in Hill's telling). This is a representation of the Haudenosaunee beliefs of death and chaos as forces of creation, as we all give our bodies to the land to become soil, which in turn continues to support life. This concept plays out again when Mature Flower's daughter dies during childbirth, becoming the first person to be buried on the turtle's back and whose burial plot helped grow various plants such as corn and strawberries. This, according to Hill, also shows how soil, and the land itself, has the ability to act and shape creation. Some tellings do not include this expanded version as part of the Creation Story; however, these differences are important to note when considering Haudenosaunee traditions and relationships.
Indigenous rights activism and environmentalism
The name Turtle Island has been used by many Indigenous cultures in North America, and both native and non-native activists, especially since the 1970s when the term came into wider usage. American author and ecologist Gary Snyder uses the term to refer to North America, writing that it synthesizes both indigenous and colonizer cultures, by translating the indigenous name into the colonizer's languages (the Spanish "Isla Tortuga" being proposed as a name as well). Snyder argues that understanding North America under the name of Turtle Island will help shift conceptions of the continent. Turtle Island has been used by writers and musicians, including Snyder for his Pulitzer Prize-winning book of poetry, Turtle Island; the Turtle Island Quartet jazz string quartet; Tofurky manufacturer Turtle Island Foods; and the Turtle Island Research Cooperative in Boise, Idaho.
The Canadian Association of University Teachers has put into practice the acknowledgment of indigenous territory and claims, particularly at institutions located within unceded land or covered by perpetual decrees such as the Haldimand Tract. At Canadian universities, many courses, student and academic meetings, as well as convocation and other celebrations begin with a spoken acknowledgement of the traditional Indigenous territories, sometimes including reference to Turtle Island, in which they are taking place.
Contemporary works
There are a number of contemporary works which continue to use and/or tell the story of the Turtle Island creation story.
The Truth About Stories by Thomas King
Thomas King's book tells us that "the truth about stories is they're all we are." King's book explores the power of story both in native lives and in the lives of every person on this planet. Every chapter opens with a telling of the story of the world on the back of a turtle in space, and in each chapter it is slightly altered to show how stories change through tellers and audiences. This fluidity is itself a characteristic of stories as they travel through time.
King provides us with his own telling of the story using a woman named Charm as his Sky Woman. Charm is from a different planet and is described as being curious to a fault, often asking the animals of her planet questions they deem to be too nosy. When she becomes pregnant, she develops a craving for Red Fern Root, which can only be found underneath the oldest tree. While digging for the Red Fern Root she digs so deep she makes a hole in the planet, and in her curiosity falls through all the way to earth. King tells us that this is a young Earth from before land was created, and in order to save Charm from falling hard and fast into the water and upsetting the stillness of the water, all the water birds fly up to catch her. With no land to set her on they offer her the back of the turtle. When Charm is almost ready to give birth the animals fear that the turtle will be too crowded, so she asks the animals to dive down to find mud so that she can use its magic to build dry land. Many animals try but most fail, until the otter dives down for days before finally surfacing, passed out from exhaustion, clutching mud in its paws. Charm creates land from the mud, magic, and the turtle's back and gives birth to twins which keep the earth in balance. One twin flattened out the land, created light, and created woman, while the other made valleys and mountains, shadows, and man.
King emphasizes that the Turtle Island creation story creates "a world in which creation is a shared activity...a world that begins in chaos and moves toward harmony." He explains that understanding and continuing to tell this story creates a world that values these ideas and relationships with nature. Without that understanding, we fail to uphold the relationships forged by Charm, the twins, and the animals that created the earth.
Braiding Sweetgrass by Robin Wall Kimmerer
Robin Wall Kimmerer's book, Braiding Sweetgrass, addresses the need for us to understand our reciprocal relationships with nature in order for us to understand and use ecology as a means to save the earth. The version of the story from Kimmerer starts off with the Sky Woman falling from a hole in the sky, cradling something tightly in her hands. Geese rise up to soften her landing and place her on the back of a turtle so that she does not drown. All the animals congregate to help find dirt for the sky woman so that she can build her habitat, some giving their lives in the search. Finally, the muskrat surfaces, dead but clutching a handful of soil for the Sky Woman, who takes the offering gratefully and uses seeds from The Tree of Life to begin her garden using her gratitude and the gifts from the animals, thus creating Turtle Island as we know it. Through the Sky Woman story, Kimmerer tells us that we cannot "begin to move toward ecological and cultural sustainability if we cannot even imagine what the path feels like."
Cherokee Stories of the Turtle Island Liars' Club by Christopher B. Teuton
Christopher B. Teuton's book provides a comprehensive look into Cherokee oral traditions and art to bring them into the contemporary moment. He put together his collection with three friends, also master storytellers, who get together to swap stories from around the 14 Cherokee states. The first chapter of the book, Beginnings, starts with a telling of the Sky Woman story. Notably, this telling of Turtle Island has the water beetle dive for the earth necessary for the Sky Woman, where often you will see a muskrat or otter. Turtle Island is a running theme throughout the book, as it is the beginning of life and story.
We Are Water Protectors by Carole Lindstrom
We Are Water Protectors is a children's storybook written by Carole Lindstrom in 2020 in response to the building of the Dakota Access Pipeline, represented as a large black snake in the book. The book says that water is the source of all life, and that it is the duty of all of us to protect our water sources so that we can preserve not only ourselves but also animals and the environment. The story draws important meanings from the Turtle Island creation story, such as water as the origin of life, and closes with a drawing of the main character returning the turtle to the water, saying "We are stewards of the earth. Our spirits are not to be broken."
See also
Geographical renaming – the practice of political renaming
Abya Yala – a similar name used by the Guna people and others to refer to the Americas as a whole
Aotearoa – the Māori name for New Zealand
Aztlán – the legendary ancestral home of the Aztec peoples
Anahuac – Nahuatl name for the historical and cultural region of Mexico
Cemanahuac – Nahuatl name used by the Mexica to refer to the larger region beyond their empire, between the Pacific and Atlantic Ocean
Turtles in North American Indigenous Mythology
World Turtle
Discworld
Turtle Island (Lake Erie)
References
Specific
Bibliography
External links
Geography of North America
Iroquois legendary creatures
Legendary creatures of the indigenous peoples of North America
Legendary turtles
Mythological islands
Native American toponymy
Creation myths | Turtle Island | [
"Astronomy"
] | 2,836 | [
"Cosmogony",
"Creation myths"
] |
14,641,162 | https://en.wikipedia.org/wiki/Tektin | Tektins are cytoskeletal proteins found in cilia and flagella as structural components of outer doublet microtubules. They are also present in centrioles and basal bodies. They are polymeric in nature, and form filaments.
They include TEKT1, TEKT2, TEKT3, TEKT4, TEKT5.
Structure
Tektin filaments are 2 to 3 nm in diameter, with two alpha-helical segments. They have the consensus amino acid sequence RPNVELCRD. Different types of tektins, designated A (53 kDa), B (51 kDa), and C (47 kDa), form dimers, trimers and oligomers in various combinations and are also associated with tubulin in the microtubule. Tektins A and B form heteropolymeric protofilaments, whereas tektin C forms homodimers. Tektin filaments are present in a supercoiled state. This structure suggests that tektins are evolutionarily related to intermediate filaments.
The proteins are predicted to form extended rods composed of 2 alpha- helical segments (~180 residues long) capable of forming coiled coils, interrupted by non-helical linkers. The 2 segments are similar in sequence, indicating a gene duplication event. Along each tektin rod, cysteine residues occur with a periodicity of ~8 nm, coincident with the axial repeat of tubulin dimers in microtubules. It is proposed that the assembly of tektin heteropolymers produces filaments with repeats of 8, 16, 24, 32, 40, 48 and 96 nm, generating the basis for the complex spatial arrangements of axonemal components.
Function
Tektins as integral components of microtubules are essential for their structural integrity. A mutation in the tektin-t genes may lead to defects in flagellar activity which could manifest, for instance, as immotility of sperm leading to male infertility. Tektins are thought to be involved in the assembly of the basal body.
The study of tektins has also been found to be useful in phylogeny, to establish evolutionary relationship between organisms.
Amino acid sequences of tektins are well conserved, with significant similarity between mouse and human homologs.
See also
Tubulin
Microtubule
References
Protein families | Tektin | [
"Biology"
] | 503 | [
"Protein families",
"Protein classification"
] |
14,641,222 | https://en.wikipedia.org/wiki/Longitudinal%20stability | In flight dynamics, longitudinal stability is the stability of an aircraft in the longitudinal, or pitching, plane. This characteristic is important in determining whether an aircraft pilot will be able to control the aircraft in the pitching plane without requiring excessive attention or excessive strength.
The longitudinal stability of an aircraft, also called pitch stability, refers to the aircraft's stability in its plane of symmetry about the lateral axis (the axis along the wingspan). It is an important aspect of the handling qualities of the aircraft, and one of the main factors determining the ease with which the pilot is able to maintain level flight.
Longitudinal static stability refers to the aircraft's initial tendency on pitching. Dynamic stability refers to whether oscillations tend to increase, decrease or stay constant.
Static stability
If an aircraft is longitudinally statically stable, a small increase in angle of attack will create a nose-down pitching moment on the aircraft, so that the angle of attack decreases. Similarly, a small decrease in angle of attack will create a nose-up pitching moment so that the angle of attack increases. This means the aircraft will self-correct longitudinal (pitch) disturbances without pilot input.
If an aircraft is longitudinally statically unstable, a small increase in angle of attack will create a nose-up pitching moment on the aircraft, promoting a further increase in the angle of attack.
If the aircraft has zero longitudinal static stability it is said to be statically neutral, and the position of its center of gravity is called the neutral point.
The longitudinal static stability of an aircraft depends on the location of its center of gravity relative to the neutral point. As the center of gravity moves increasingly forward, the pitching moment arm is increased, increasing stability. The distance between the center of gravity and the neutral point is defined as "static margin". It is usually given as a percentage of the mean aerodynamic chord. If the center of gravity is forward of the neutral point, the static margin is positive. If the center of gravity is aft of the neutral point, the static margin is negative. The greater the static margin, the more stable the aircraft will be.
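Expressed symbolically (a standard formulation; the symbols $x_{np}$ and $x_{cg}$ for the neutral-point and center-of-gravity positions, and $\bar{c}$ for the mean aerodynamic chord, are chosen here for illustration):

$$\text{static margin} = \frac{x_{np} - x_{cg}}{\bar{c}},$$

quoted as a percentage of $\bar{c}$, and positive when the center of gravity lies ahead of the neutral point.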
Most conventional aircraft have positive longitudinal stability, providing the aircraft's center of gravity lies within the approved range. The operating handbook for every airplane specifies a range over which the center of gravity is permitted to move. If the center of gravity is too far aft, the aircraft will be unstable. If it is too far forward, the aircraft will be excessively stable, which makes the aircraft "stiff" in pitch and hard for the pilot to bring the nose up for landing. Required control forces will be greater.
Some aircraft have low stability to reduce trim drag. This has the benefit of reducing fuel consumption. Some aerobatic and fighter aircraft may have low or even negative stability to provide high manoeuvrability. Low or negative stability is called relaxed stability. An aircraft with low or negative static stability will typically have fly-by-wire controls with computer augmentation to assist the pilot. Otherwise, an aircraft with negative longitudinal stability will be more difficult to fly. It will be necessary for the pilot to devote more effort, make more frequent inputs to the elevator control, and make larger inputs, in an attempt to maintain the desired pitch attitude.
For an aircraft to possess positive static stability, it is not necessary for its speed and orientation to return exactly to what they were before the upset. It is sufficient that the speed and orientation do not continue to diverge but undergo at least a small change back towards their original values.
The deployment of flaps will increase longitudinal stability.
Unlike motion about the other two axes, and in the other degrees of freedom of the aircraft (sideslip translation, rotation in roll, rotation in yaw), which are usually heavily coupled, motion in the longitudinal plane does not typically cause a roll or yaw.
A larger horizontal stabilizer, and a greater moment arm of the horizontal stabilizer about the neutral point, will increase longitudinal stability.
Tailless aircraft
For a tailless aircraft, the neutral point coincides with the aerodynamic center, and so for such aircraft to have longitudinal static stability, the center of gravity must lie ahead of the aerodynamic center.
For missiles with symmetric airfoils, the neutral point and the center of pressure are coincident and the term neutral point is not used.
An unguided rocket must have a large positive static margin so the rocket shows minimum tendency to diverge from the direction of flight given to it at launch. In contrast, guided missiles usually have a negative static margin for increased maneuverability.
Dynamic stability
Longitudinal dynamic stability of a statically stable aircraft refers to whether the aircraft will continue to oscillate after a disturbance, or whether the oscillations are damped. A dynamically stable aircraft will experience oscillations that reduce to nil. A dynamically neutral aircraft will continue to oscillate around its original level, and a dynamically unstable aircraft will experience increasing oscillations and displacement from its original level.
Dynamic stability is caused by damping. If damping is too great, the aircraft will be less responsive and less manoeuvrable.
Decreasing phugoid (long-period) oscillations can be achieved by building a smaller stabilizer on a longer tail, and by shifting the center of gravity to the rear.
An aircraft that is not statically stable cannot be dynamically stable.
Analysis
Near the cruise condition most of the lift force is generated by the wings, with ideally only a small amount generated by the fuselage and tail. We may analyse the longitudinal static stability by considering the aircraft in equilibrium under wing lift, tail force, and weight. The moment equilibrium condition is called trim, and we are generally interested in the longitudinal stability of the aircraft about this trim condition.
Equating forces in the vertical direction:

$W = L_w + L_t$

where W is the weight, $L_w$ is the wing lift and $L_t$ is the tail force.

For a thin airfoil at low angle of attack, the wing lift is proportional to the angle of attack:

$L_w = q S_w \frac{\partial C_L}{\partial \alpha}\,(\alpha + \alpha_0)$

where $S_w$ is the wing area, $C_L$ is the (wing) lift coefficient and $\alpha$ is the angle of attack. The term $\alpha_0$ is included to account for camber, which results in lift at zero angle of attack. Finally $q$ is the dynamic pressure:

$q = \tfrac{1}{2}\,\rho\, v^2$

where $\rho$ is the air density and $v$ is the speed.
Trim
The force from the tail-plane is proportional to its angle of attack, including the effects of any elevator deflection and any adjustment the pilot has made to trim-out any stick force. In addition, the tail is located in the flow field of the main wing, and consequently experiences downwash, reducing its angle of attack.
In a statically stable aircraft of conventional (tail in rear) configuration, the tail-plane force may act upward or downward depending on the design and the flight conditions. In a typical canard aircraft both fore and aft planes are lifting surfaces. The fundamental requirement for static stability is that the aft surface must have greater authority (leverage) in restoring a disturbance than the forward surface has in exacerbating it. This leverage is a product of moment arm from the center of gravity and surface area. Correctly balanced in this way, the partial derivative of pitching moment with respect to changes in angle of attack will be negative: a momentary pitch up to a larger angle of attack makes the resultant pitching moment tend to pitch the aircraft back down. (Here, pitch is used casually for the angle between the nose and the direction of the airflow; angle of attack.) This is the "stability derivative" d(M)/d(alpha), described below.
The tail force is, therefore:

$L_t = q S_t \left( \frac{\partial C_l}{\partial \alpha}\,(\alpha - \epsilon) + \frac{\partial C_l}{\partial \eta}\,\eta \right)$

where $S_t$ is the tail area, $C_l$ is the tail force coefficient, $\eta$ is the elevator deflection, and $\epsilon$ is the downwash angle.
A canard aircraft may have its foreplane rigged at a high angle of incidence, which can be seen in a canard catapult glider from a toy store; the design puts the c.g. well forward, requiring nose-up lift.
Violations of the basic principle are exploited in some high performance "relaxed static stability" combat aircraft to enhance agility; artificial stability is supplied by active electronic means.
There are a few classical cases where this favorable response was not achieved, notably in T-tail configurations. A T-tail airplane has a higher horizontal tail that passes through the wake of the wing later (at a higher angle of attack) than a lower tail would, and at this point the wing has already stalled and has a much larger separated wake. Inside the separated wake, the tail sees little to no freestream and loses effectiveness. Elevator control power is also heavily reduced or even lost, and the pilot is unable to easily escape the stall. This phenomenon is known as 'deep stall'.
Taking moments about the center of gravity, the net nose-up moment is:

$M = L_w x_g - (l_t - x_g)\, L_t$

where $x_g$ is the location of the center of gravity behind the aerodynamic center of the main wing, and $l_t$ is the tail moment arm.
For trim, this moment must be zero. For a given maximum elevator deflection, there is a corresponding limit on center of gravity position at which the aircraft can be kept in equilibrium. When limited by control deflection this is known as a 'trim limit'. In principle trim limits could determine the permissible forwards and rearwards shift of the center of gravity, but usually it is only the forward cg limit which is determined by the available control; the aft limit is usually dictated by stability.
In a missile context 'trim limit' more usually refers to the maximum angle of attack, and hence lateral acceleration which can be generated.
Static stability
The nature of stability may be examined by considering the increment in pitching moment with change in angle of attack at the trim condition. If this is nose up, the aircraft is longitudinally unstable; if nose down it is stable. Differentiating the moment equation with respect to $\alpha$:

$\frac{\partial M}{\partial \alpha} = x_g \frac{\partial L_w}{\partial \alpha} - (l_t - x_g)\,\frac{\partial L_t}{\partial \alpha}$

Note: $\frac{\partial M}{\partial \alpha}$ is a stability derivative.
It is convenient to treat total lift as acting at a distance h ahead of the centre of gravity, so that the moment equation may be written:

$M = h\,(L_w + L_t)$

Applying the increment in angle of attack:

$\frac{\partial M}{\partial \alpha} = h \left( \frac{\partial L_w}{\partial \alpha} + \frac{\partial L_t}{\partial \alpha} \right)$

Equating the two expressions for moment increment:

$h = \frac{x_g \dfrac{\partial L_w}{\partial \alpha} - (l_t - x_g)\,\dfrac{\partial L_t}{\partial \alpha}}{\dfrac{\partial L_w}{\partial \alpha} + \dfrac{\partial L_t}{\partial \alpha}}$

The total lift is the sum of $L_w$ and $L_t$, so the sum in the denominator can be simplified and written as the derivative of the total lift due to angle of attack, yielding:

$h = x_g - l_t\,\frac{\partial L_t / \partial \alpha}{\partial L / \partial \alpha}$

Where c is the mean aerodynamic chord of the main wing. The term:

$V = \frac{l_t\, S_t}{c\, S_w}$

is known as the tail volume ratio. Its coefficient, the ratio of the two lift derivatives, has values in the range of 0.50 to 0.65 for typical configurations. Hence the expression for h may be written more compactly, though somewhat approximately, as:

$h \approx x_g - c\,V\,\frac{\partial C_l / \partial \alpha}{\partial C_L / \partial \alpha}$

$h$ is known as the static margin. For stability it must be negative. (However, for consistency of language, the static margin is sometimes taken as $-h$, so that positive stability is associated with positive static margin.)
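As a rough illustration of the expressions above, the following Python sketch evaluates the tail volume ratio and the compact expression for the static margin. All numerical values are made-up assumptions for a light aircraft, not data from any handbook.

```python
# Illustrative only: evaluating the static-margin expression with assumed numbers.
S_w, S_t = 16.0, 2.5      # wing and tail areas, m^2 (assumed)
l_t      = 4.5            # tail moment arm, m (assumed)
c        = 1.5            # mean aerodynamic chord, m (assumed)
x_g      = 0.10           # c.g. position behind the wing aerodynamic centre, m (assumed)
a_w, a_t = 5.0, 4.0       # lift-curve slopes dC_L/dalpha and dC_l/dalpha, per rad (assumed)

V = l_t * S_t / (c * S_w)          # tail volume ratio
h = x_g - c * V * (a_t / a_w)      # approximate static margin; negative means stable here
print(f"V = {V:.3f}, h = {h:.3f} m")
```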
See also
Directional stability
Flight dynamics
Handling qualities
Phugoid
Yaw damper
References
Aerospace engineering
Aircraft aerodynamics
Flight control systems
Aviation science | Longitudinal stability | [
"Engineering"
] | 2,227 | [
"Aerospace engineering"
] |
14,642,717 | https://en.wikipedia.org/wiki/Tilt%20test%20%28vehicle%20safety%20test%29 | The tilt test is a type of safety test that certain government vehicle certification bodies require new vehicle designs to pass before being allowed on the road or rail track.
The test is an assessment of the weight distribution and hence the position of the centre of gravity of the vehicle, and can be carried out in a laden or unladen state, i.e. with or without passengers or freight. The test can be applied to automobiles, trucks, buses and rail vehicles.
The test involves tilting the vehicle in the notional direction of the side of the vehicle, on a movable platform. In order to pass the test, the vehicle must not tip over before a specified angle of tilt is reached by the table.
In the United Kingdom, double-decker buses have to: "be capable of leaning, fully laden on top, at an angle of 28 deg without toppling over before they are allowed on the road."
The same 28-degree requirement is in place in Hong Kong for double-decker buses. For single-deckers the requirement is 35 degrees.
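As a rough illustration (not part of any regulation), an idealised rigid vehicle on a tilt table tips over when the centre of gravity passes over the line of the downhill wheel contacts, i.e. at roughly arctan(half track width / centre-of-gravity height). The Python sketch below uses assumed dimensions to compare against the 28-degree requirement.

```python
# Simplified rigid-body estimate of the tip-over angle (all dimensions assumed).
import math

half_track = 1.05   # half the distance between the wheel contact lines, m (assumed)
cg_height  = 1.80   # laden centre-of-gravity height above the platform, m (assumed)

tip_angle = math.degrees(math.atan2(half_track, cg_height))
print(f"Estimated tip-over angle: {tip_angle:.1f} deg")
print("passes 28 deg requirement:", tip_angle > 28.0)
```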
See also
Vehicle metrics
Weight distribution
Moose test
References
Product safety
Vehicle design | Tilt test (vehicle safety test) | [
"Engineering"
] | 230 | [
"Vehicle design",
"Design"
] |
14,642,741 | https://en.wikipedia.org/wiki/Digital%20image%20correlation%20and%20tracking | Digital image correlation and tracking is an optical method that employs tracking and image registration techniques for accurate 2D and 3D measurements of changes in images. This method is often used to measure full-field displacement and strains, and it is widely applied in many areas of science and engineering. Compared to strain gauges and extensometers, digital image correlation methods provide finer details about deformation, due to the ability to provide both local and average data.
Overview
Digital image correlation (DIC) techniques have been increasing in popularity, especially in micro- and nano-scale mechanical testing applications due to their relative ease of implementation and use. Advances in computer technology and digital cameras have been the enabling technologies for this method and while white-light optics has been the predominant approach, DIC can be and has been extended to almost any imaging technology.
The concept of using cross-correlation to measure shifts in datasets has been known for a long time, and it has been applied to digital images since at least the early 1970s. The present-day applications are almost innumerable, including image analysis, image compression, velocimetry, and strain estimation. Much early work in DIC in the field of mechanics was led by researchers at the University of South Carolina in the early 1980s and has been optimized and improved in recent years. Commonly, DIC relies on finding the maximum of the correlation array between pixel intensity array subsets on two or more corresponding images, which gives the integer translational shift between them. It is also possible to estimate shifts to a finer resolution than the resolution of the original images, which is often called "sub-pixel" registration because the measured shift is smaller than an integer pixel unit. For sub-pixel interpolation of the shift, other methods do not simply maximize the correlation coefficient. An iterative approach can also be used to maximize the interpolated correlation coefficient by using non-linear optimization techniques. The non-linear optimization approach tends to be conceptually simpler and can handle large deformations more accurately, but as with most nonlinear optimization techniques, it is slower.
The two-dimensional discrete cross correlation can be defined in several ways, one possibility being:

$r_{ij} = \frac{\sum_m \sum_n \left[ f(m, n) - \bar{f}\, \right]\left[ g(m + i, n + j) - \bar{g} \right]}{\sqrt{\sum_m \sum_n \left[ f(m, n) - \bar{f}\, \right]^2 \; \sum_m \sum_n \left[ g(m + i, n + j) - \bar{g} \right]^2}}$

Here f(m, n) is the pixel intensity or the gray-scale value at a point (m, n) in the original image, g(m, n) is the gray-scale value at a point (m, n) in the translated image, and $\bar{f}$ and $\bar{g}$ are mean values of the intensity matrices f and g respectively.
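As a minimal illustration of this definition, the following NumPy sketch evaluates the normalised correlation coefficient of two equally sized subsets for a single offset (scanning the offset gives the full correlation array). The data are synthetic; this is not code from any DIC package.

```python
# Direct (non-FFT) normalised cross-correlation of two subsets for one offset.
import numpy as np

def ncc(f, g):
    f = f - f.mean()                  # subtract mean intensities, as in the definition
    g = g - g.mean()
    return (f * g).sum() / np.sqrt((f**2).sum() * (g**2).sum())

rng = np.random.default_rng(0)
subset = rng.random((21, 21))
print(ncc(subset, subset))                 # 1.0 for identical subsets
print(ncc(subset, rng.random((21, 21))))   # near 0 for unrelated subsets
```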
However, in practical applications, the correlation array is usually computed using Fourier-transform methods, since the fast Fourier transform is a much faster method than directly computing the correlation.
Writing $F = \mathcal{F}\{f\}$ and $G = \mathcal{F}\{g\}$ for the Fourier transforms of the two images, then taking the complex conjugate of the second result and multiplying the Fourier transforms together elementwise, we obtain the Fourier transform of the correlogram:

$R = F \circ G^{*}$

where $\circ$ is the Hadamard product (entry-wise product). It is also fairly common to normalize the magnitudes to unity at this point, which results in a variation called phase correlation.

Then the cross-correlation is obtained by applying the inverse Fourier transform:

$r = \mathcal{F}^{-1}\{R\}$

At this point, the coordinates of the maximum of $r$ give the integer shift:

$(\Delta x, \Delta y) = \arg\max_{(i,\,j)} \; r(i, j)$
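A short NumPy sketch of this Fourier-transform route, applied to synthetic data with a known translation (illustrative only; the image sizes and shift are arbitrary choices):

```python
# Recover an integer translation from the peak of the FFT-based cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known translation of (5, -3)

F = np.fft.fft2(ref)
G = np.fft.fft2(shifted)
R = F * np.conj(G)            # elementwise product with the complex conjugate
# R = R / np.abs(R)           # uncomment to normalise magnitudes (phase correlation)
corr = np.real(np.fft.ifft2(R))

peak = np.unravel_index(np.argmax(corr), corr.shape)
# Map the circular peak position back to a signed shift in each axis.
shift = [int(-(p if p < n // 2 else p - n)) for p, n in zip(peak, corr.shape)]
print(shift)                  # recovers [5, -3]
```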
Deformation mapping
For deformation mapping, the mapping function that relates the images can be derived from comparing a set of subwindow pairs over the whole images. The coordinates or grid points (xi, yj) and (xi*, yj*) are related by the translations that occur between the two images. If the deformation is small and perpendicular to the optical axis of the camera, then the relation between (xi, yj) and (xi*, yj*) can be approximated by a 2D affine transformation such as:

$x^* = x + u + \frac{\partial u}{\partial x}\,\Delta x + \frac{\partial u}{\partial y}\,\Delta y$

$y^* = y + v + \frac{\partial v}{\partial x}\,\Delta x + \frac{\partial v}{\partial y}\,\Delta y$

Here u and v are translations of the center of the sub-image in the X and Y directions respectively. The distances from the center of the sub-image to the point (x, y) are denoted by $\Delta x$ and $\Delta y$. Thus, the correlation coefficient rij is a function of displacement components (u, v) and displacement gradients $\frac{\partial u}{\partial x}$, $\frac{\partial u}{\partial y}$, $\frac{\partial v}{\partial x}$ and $\frac{\partial v}{\partial y}$.
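Purely as an illustration of this subset shape function, the following NumPy sketch maps the grid points of one subset to their deformed positions; the displacements (u, v), the displacement gradients, the subset size and the subset centre are all assumed values.

```python
# Apply the first-order subset shape function to a 21 x 21 grid of subset points.
import numpy as np

u, v = 0.80, -0.30                 # subset-centre displacements, pixels (assumed)
du_dx, du_dy = 0.01, 0.00          # displacement gradients (assumed)
dv_dx, dv_dy = 0.00, 0.02

xc, yc = 50.0, 50.0                # subset centre in the reference image (assumed)
dx, dy = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))  # distances from the centre

x_star = xc + dx + u + du_dx * dx + du_dy * dy
y_star = yc + dy + v + dv_dx * dx + dv_dy * dy
print(x_star[0, 0], y_star[0, 0])  # deformed position of one subset corner
```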
DIC has proven to be very effective at mapping deformation in macroscopic mechanical testing, where the application of specular markers (e.g. paint, toner powder) or surface finishes from machining and polishing provide the needed contrast to correlate images well. However, these methods for applying surface contrast do not extend to the application of free-standing thin films for several reasons. First, vapor deposition at normal temperatures on semiconductor grade substrates results in mirror-finish quality films with RMS roughnesses that are typically on the order of several nanometers. No subsequent polishing or finishing steps are required, and unless electron imaging techniques are employed that can resolve microstructural features, the films do not possess enough useful surface contrast to adequately correlate images. Typically this challenge can be circumvented by applying paint that results in a random speckle pattern on the surface, although the large and turbulent forces resulting from either spraying or applying paint to the surface of a free-standing thin film are too high and would break the specimens. In addition, the sizes of individual paint particles are on the order of μms, while the film thickness is only several hundred nanometers, which would be analogous to supporting a large boulder on a thin sheet of paper.
μDIC
Advances in pattern application and deposition at reduced length scales have exploited small-scale synthesis methods including nano-scale chemical surface restructuring and photolithography of computer-generated random specular patterns to produce suitable surface contrast for DIC. The application of very fine powder particles that electrostatically adhere to the surface of the specimen and can be digitally tracked is one approach. For Al thin films, fine alumina abrasive polishing powder was initially used since the particle sizes are relatively well controlled, although the adhesion to Al films was not very good and the particles tended to agglomerate excessively. The candidate that worked most effectively was a silica powder designed for a high temperature adhesive compound (Aremco, inc.), which was applied through a plastic syringe.
A light blanket of powder would coat the gage section of the tensile sample and the larger particles could be blown away gently. The remaining particles would be those with the best adhesion to the surface. While the resulting surface contrast is not ideal for DIC, the high intensity ratio between the particles and the background provide a unique opportunity to track the particles between consecutive digital images taken during deformation. This can be achieved quite straightforwardly using digital image processing techniques. Sub-pixel tracking can be achieved by a number of correlation techniques, or by fitting to the known intensity profiles of particles.
Photolithography and Electron Beam Lithography can be used to create micro tooling for micro speckle stamps, and the stamps can print speckle patterns onto the surface of the specimen. Stamp inks can be chosen which are appropriate for optical DIC, SEM-DIC, and simultaneous SEM-DIC/EBSD studies (the ink can be transparent to EBSD).
Digital volume correlation
Digital Volume Correlation (DVC, and sometimes called Volumetric-DIC) extends the 2D-DIC algorithms into three dimensions to calculate the full-field 3D deformation from a pair of 3D images. This technique is distinct from 3D-DIC, which only calculates the 3D deformation of an exterior surface using conventional optical images. The DVC algorithm is able to track full-field displacement information in the form of voxels instead of pixels. The theory is similar to above except that another dimension is added: the z-dimension. The displacement is calculated from the correlation of 3D subsets of the reference and deformed volumetric images, which is analogous to the correlation of 2D subsets described above.
DVC can be performed using volumetric image datasets. These images can be obtained using confocal microscopy, X-ray computed tomography, Magnetic Resonance Imaging or other techniques. Similar to the other DIC techniques, the images must exhibit a distinct, high-contrast 3D "speckle pattern" to ensure accurate displacement measurement.
DVC was first developed in 1999 to study the deformation of trabecular bone using X-ray computed tomography images. Since then, applications of DVC have grown to include granular materials, metals, foams, composites and biological materials. To date it has been used with images acquired by MRI, computed tomography (CT), micro-CT, confocal microscopy, and lightsheet microscopy. DVC is currently considered to be ideal in the research world for 3D quantification of local displacements, strains, and stress in biological specimens. It is preferred over traditional experimental methods because it is non-invasive.
Two of the key challenges are improving the speed and reliability of the DVC measurement. The 3D imaging techniques produce noisier images than conventional 2D optical images, which reduces the quality of the displacement measurement. Computational speed is restricted by the file sizes of 3D images, which are significantly larger than 2D images. For example, an 8-bit (1024x1024) pixel 2D image has a file size of 1 MB, while an 8-bit (1024x1024x1024) voxel 3D image has a file size of 1 GB. This can be partially offset using parallel computing.
Applications
Digital image correlation has demonstrated uses in the following industries:
Automotive
Aerospace
Biological
Industrial
Research and Education
Government and Military
Biomechanics
Robotics
Electronics
It has also been used for mapping earthquake deformation.
DIC Standardization
The International Digital Image Correlation Society (iDICs) is a body composed of members from academia, government, and industry, and is involved in training and educating end-users about DIC systems and the standardization of DIC practice for general applications. Created in 2015, the iDICs has been focused on creating standards for DIC users.
See also
Optical flow
Stress
Strain
Displacement vector
Particle Image Velocimetry
Digital Image Correlation for Electronics
References
External links
Mathematica ImageCorrelate function
Using Digital Image Correlation to Measure Strain on a Turbine Blade
Image Systems DIC
DIC in Electronic Design
DIC Applications in Aerospace
3D Optical Strain Measurements
The International Digital Image Correlation Society (iDICs)
Continuum mechanics
Materials science
Optical metrology
Image processing | Digital image correlation and tracking | [
"Physics",
"Materials_science",
"Engineering"
] | 2,103 | [
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"nan"
] |
14,642,898 | https://en.wikipedia.org/wiki/Class%20C%20GPCR | The class C G-protein-coupled receptors () are a class of G-protein coupled receptors that include the metabotropic glutamate receptors () and several additional receptors.
Structurally they are composed of four elements; an N-terminal signal sequence; a large hydrophilic extracellular agonist-binding region containing several conserved cysteine residues which could be involved in disulphide bonds; a shorter region containing seven transmembrane domains; and a C-terminal cytoplasmic domain of variable length. This protein family includes metabotropic glutamate receptors, the extracellular calcium-sensing receptors, the gamma-amino-butyric acid (GABA) type B receptors, and the vomeronasal type-2 receptors.
Subfamilies
Calcium-sensing receptor-related
extracellular calcium-sensing receptor-related
Calcium-sensing receptor
GPRC6A
GABAB receptors
GABAB receptor (gamma-aminobutyric acid)
GABAB receptor 1
GABAB receptor 2
Metabotropic glutamate receptors
Metabotropic glutamate receptors (mGluR)
mGluR1
mGluR2
mGluR3
mGluR4
mGluR5
mGluR6
mGluR7
mGluR8
RAIG
Retinoic acid-inducible orphan G protein-coupled receptors (RAIG)
RAIG1
RAIG2
RAIG3
RAIG4
Taste receptors
Taste receptor
Taste receptor, type 1, member 1
Taste receptor, type 1, member 2
Taste receptor, type 1, member 3
Orphan
Class C Orphan receptors
GPR158
GPR179
GPR156
Other
Bride of sevenless protein
Vomeronasal receptor, type 2
References
G protein-coupled receptors | Class C GPCR | [
"Chemistry"
] | 409 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,643,140 | https://en.wikipedia.org/wiki/Debt%20service%20ratio | In economics and government finance, a country’s debt service ratio is the ratio of its debt service payments (principal + interest) to its export earnings. A country's international finances are healthier when this ratio is low. For most countries the ratio is between 0 and 20%.
In contrast to the debt service coverage ratio, which is calculated as income divided by debt service, this ratio is inverted: it is calculated as debt service divided by the country's income from international trade, i.e., exports.
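A toy calculation of the ratio (both figures are invented for illustration):

```python
# Debt service ratio = debt service payments / export earnings.
debt_service    = 12.5   # principal + interest due this year, billions USD (invented)
export_earnings = 90.0   # export earnings this year, billions USD (invented)

dsr = debt_service / export_earnings
print(f"Debt service ratio: {dsr:.1%}")   # about 13.9%, within the typical 0-20% band
```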
References
Macroeconomic indicators
Financial ratios | Debt service ratio | [
"Mathematics"
] | 110 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
14,643,464 | https://en.wikipedia.org/wiki/K-approximation%20of%20k-hitting%20set | In computer science, k-approximation of k-hitting set is an approximation algorithm for weighted hitting set. The input is a collection S of subsets of some universe T and a mapping W from T to non-negative numbers called the weights of the elements of T. In k-hitting set the size of the sets in S cannot be larger than k. That is, . The problem is now to pick some subset T' of T such that every set in S contains some element of T', and such that the total weight of all elements in T' is as small as possible.
The algorithm
For each set $s$ in S a price $p_s$ is maintained, which is initially 0. For an element a in T, let S(a) be the collection of sets from S containing a. During the algorithm the following invariant is kept:

$\sum_{s \in S(a)} p_s \le W(a) \quad \text{for every element } a \text{ in } T$

We say that an element, a, from T is tight if $\sum_{s \in S(a)} p_s = W(a)$. The main part of the algorithm consists of a loop: As long as there is a set in S that contains no element from T which is tight, the price of this set is increased as much as possible without violating the invariant above. When this loop exits, all sets contain some tight element. Pick all the tight elements to be the hitting set.
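The loop just described can be sketched directly in Python. The following is an illustrative implementation only; the variable names and the single-pass formulation (each set is raised at most once, which suffices because tightness is never undone) are choices made for this sketch, not taken from the article's references.

```python
# Pricing (primal-dual) sketch of the k-approximation for weighted k-hitting set.
def k_hitting_set(sets, weight):
    """sets: list of collections of elements; weight: dict element -> non-negative weight."""
    price = [0.0] * len(sets)                 # one price per set, initially 0
    paid = {a: 0.0 for a in weight}           # sum of prices of sets containing element a

    def tight(a):
        return paid[a] >= weight[a]

    for i, s in enumerate(sets):
        if any(tight(a) for a in s):
            continue                          # this set already contains a tight element
        # Raise this set's price as much as the invariant allows: for every element a,
        # the total price paid by sets containing a must stay <= weight[a].
        slack = min(weight[a] - paid[a] for a in s)
        price[i] += slack
        for a in s:
            paid[a] += slack                  # at least one element of s becomes tight

    return {a for a in weight if tight(a)}    # all tight elements form the hitting set

# Example: universe {1, 2, 3}, unit weights, sets of size <= 2 (so k = 2).
print(k_hitting_set([{1, 2}, {2, 3}, {1, 3}], {1: 1.0, 2: 1.0, 3: 1.0}))
```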
Correctness
The algorithm always terminates because in each iteration of the loop the price of some set in S is increased enough to make one more element from T tight. If it cannot increase any price, it exits. It runs in polynomial time because the loop will not make more iterations than the number of elements in the union of all the sets of S. It returns a hitting set, because when the loop exits, all sets in S contain a tight element from T, and the set of these tight elements is returned.
Note that for any hitting set T* and any prices where the invariant from the algorithm is true, the total weight of the hitting set is an upper limit to the sum over all members of T* of the sum of the prices of sets containing this element, that is:

$\sum_{a \in T^*} \sum_{s \in S(a)} p_s \le \sum_{a \in T^*} W(a) = W(T^*)$

This follows from the invariant property. Further, since the price of every set must occur at least once on the left hand side, we get $\sum_{s \in S} p_s \le W(T^*)$. Especially, this property is true for the optimal hitting set.

Further, for the hitting set H returned from the algorithm above, we have $\sum_{a \in H} \sum_{s \in S(a)} p_s = W(H)$, since every element of H is tight. Since any price can appear at most k times in the left-hand side (since subsets of S can contain no more than k elements from T) we get $W(H) \le k \sum_{s \in S} p_s$. Combined with the previous paragraph we get $W(H) \le k \cdot W(T^*)$, where T* is the optimal hitting set. This is exactly the guarantee that it is a k-approximation algorithm.
Relation to linear programming
This algorithm is an instance of the primal-dual method, also called the pricing method. The intuition is that it is dual to a linear programming algorithm. For an explanation see http://algo.inria.fr/seminars/sem00-01/vazirani.html.
References
Approximation algorithms | K-approximation of k-hitting set | [
"Mathematics"
] | 604 | [
"Mathematical relations",
"Approximations",
"Approximation algorithms"
] |
14,643,727 | https://en.wikipedia.org/wiki/Pirani%20gauge | The Pirani gauge is a robust thermal conductivity gauge used for the measurement of the pressures in vacuum systems. It was invented in 1906 by Marcello Pirani.
Marcello Stefano Pirani was a German physicist working for Siemens & Halske which was involved in the vacuum lamp industry. In 1905 their product was tantalum lamps which required a high vacuum environment for the filaments. The gauges that Pirani was using in the production environment were some fifty McLeod gauges, each filled with 2 kg of mercury in glass tubes.
Pirani was aware of the gas thermal conductivity investigations of Kundt and Warburg (1875) published thirty years earlier and the work of Marian Smoluchowski (1898). In 1906 he described his "directly indicating vacuum gauge" that used a heated wire to measure vacuum by monitoring the heat transfer from the wire by the vacuum environment.
Structure
The Pirani gauge consists of a metal sensor wire (usually gold plated tungsten or platinum) suspended in a tube which is connected to the system whose vacuum is to be measured. The wire is usually coiled to make the gauge more compact. The connection is usually made either by a ground glass joint or a flanged metal connector, sealed with an o-ring. The sensor wire is connected to an electrical circuit from which, after calibration, a pressure reading may be taken.
Mode of operation
In order to understand the technology, consider that in a gas filled system there are four ways that a heated wire transfers heat to its surroundings.
Gas conduction at high pressure (r representing the distance from the heated wire)
Gas transport at low pressure
Thermal radiation
End losses through the support structures
A heated metal wire (sensor wire, or simply sensor) suspended in a gas will lose heat to the gas as its molecules collide with the wire and remove heat. If the gas pressure is reduced, the number of molecules present will fall proportionately and the wire will lose heat more slowly. Measuring the heat loss is an indirect indication of pressure.
There are three possible measurement schemes:
Keep the bridge voltage constant and measure the change in resistance as a function of pressure
Keep the current constant and measure the change in resistance as a function of pressure
Keep the temperature of the sensor wire constant and measure the voltage as a function of pressure
Note that keeping the temperature constant implies that the end losses (4.) and the thermal radiation losses (3.) are constant.
The electrical resistance of a wire varies with its temperature, so the resistance indicates the temperature of wire. In many systems, the wire is maintained at a constant resistance R by controlling the current I through the wire. The resistance can be set using a bridge circuit. The current required to achieve this balance is therefore a measure of the vacuum.
The gauge may be used for pressures between 0.5 Torr and 1×10−4 Torr. Below 5×10−4 Torr, a Pirani gauge has only one significant digit of resolution. The thermal conductivity and heat capacity of the gas affect the readout from the meter, and therefore the apparatus may need calibrating before accurate readings are obtainable. For lower pressure measurement, the thermal conductivity of the gas becomes increasingly smaller and more difficult to measure accurately, and other instruments such as a Penning gauge or Bayard–Alpert gauge are used instead.
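As an illustration of how a constant-temperature reading is turned into a pressure, the sketch below interpolates a calibration table in Python. Every number in the table is invented for the example; a real gauge is supplied with (or is re-measured against) its own calibration data.

```python
# Hypothetical sketch: convert a bridge heating-power reading into pressure
# by interpolating a previously measured calibration curve.
import numpy as np

# Calibration pairs (pressure in Torr, bridge power in mW) - values invented
cal_pressure = np.array([1e-4, 1e-3, 1e-2, 1e-1, 0.5])
cal_power    = np.array([1.02, 1.10, 1.65, 3.40, 5.10])

def pressure_from_power(power_mw):
    # Interpolate on a log-pressure axis, since the response spans several decades
    return 10 ** np.interp(power_mw, cal_power, np.log10(cal_pressure))

print(pressure_from_power(2.0))   # pressure estimate in Torr for a 2.0 mW reading
```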
Pulsed Pirani gauge
A special form of the Pirani gauge is the pulsed Pirani vacuum gauge where the sensor wire is not operated at a constant temperature, but is cyclically heated up to a certain temperature threshold by an increasing voltage ramp. When the threshold is reached, the heating voltage is switched off and the sensor cools down again. The required heat-up time is used as a measure of pressure.
For adequately low pressure, the following first-order dynamic thermal response model relating supplied heating power and sensor temperature T(t) applies:

$P(t) = c\, m\, \frac{dT(t)}{dt} + \varepsilon\, \sigma\, A \left( T(t)^4 - T_0^4 \right) + \left( k_1 + k_2\, p \right) \left( T(t) - T_0 \right)$

where $c$ and $\varepsilon$ are specific heat and emissivity of the sensor wire (material properties), $A$ and $m$ are surface area and mass of the sensor wire, $\sigma$ is the Stefan–Boltzmann constant, $T_0$ is the ambient temperature, $p$ is the pressure, and $k_1$ and $k_2$ are constants determined for each sensor in calibration.
Advantages and disadvantages of the pulsed gauge
Advantages
Significantly better resolution in the range above 75 Torr.
The power consumption is drastically reduced compared to continuously operated Pirani gauges.
The gauge's thermal influence on the real measurement is lowered considerably due to the low temperature threshold of 80 °C and the ramp heating in pulsed mode.
The pulsed mode can be efficiently implemented using modern microprocessors.
Disadvantages
Increased calibration effort
Longer heat-up phase
Alternative
An alternative to the Pirani gauge is the thermocouple gauge, which works on the same principle of detecting thermal conductivity of the gas by a change in temperature. In the thermocouple gauge, the temperature is sensed by a thermocouple rather than by the change in resistance of the heated wire.
References
External links
http://homepages.thm.de/~hg8831/vakuumlabor/litera.htm
Vacuum gauges
Pressure gauges | Pirani gauge | [
"Physics",
"Technology",
"Engineering"
] | 1,034 | [
"Vacuum",
"Measuring instruments",
"Vacuum gauges",
"Vacuum systems",
"Pressure gauges",
"Matter"
] |
14,644,044 | https://en.wikipedia.org/wiki/Okimate%2010 | The Okimate 10 by Oki Electric Industry was a low-cost 1980s color printer with interface "plug 'n print" modules for Commodore, Atari, IBM PC, and Apple Inc. home computers.
Unlike direct thermal printers, which require heat-sensitive thermal paper, the Okimate used thermal transfer technology and was advertised as being able to print on any type of paper. In practice, however, printing on common printer/copier paper did not produce adequate results. Best results were obtained by printing on special "thermal transfer paper", which looks like ordinary copier paper but is actually ultra-smooth so that the wax transfer can adhere to it.
A thermal transfer printer contains a ribbon cartridge that uses a wax ink. When the heating elements in the print head heat up, they melt the wax and transfer it to the paper, thus the need for the paper to be really smooth. This also means that the ribbon cannot be reused after the head runs over it, since the wax transfers off the ribbon to the paper.
The Okimate 10 had two interchangeable wax-ink cartridges, a black one and a color one. The black cartridge was used for text printing, and the color was used for graphics. The color ribbon had three primary colors which were overlaid and dithered on top of each other to create secondary colors. Thus to print a graphic, the printer typically needed to make three passes over the same line before advancing.
It was one of the first low-cost color printers available to consumers and became a popular printer for printing computer art drawn with software packages such as KoalaPad, Deluxe Paint, Doodle! and NEOchrome but was criticized for its slowness and high cost of operation, as the wax-coated ribbon only lasted for one pass, unlike an ink ribbon. The Okimate 10 was succeeded by the Okimate 20.
Reception
Ahoy! favorably reviewed the Okimate 10 with the Commodore 64 interface, calling the color output "impressive enough" given the slow speed. It concluded that "for the home user for whom it is intended, it represents an excellent value".
References
External links
Contemporary review of the Okimate 20
RUN Magazine Dec, 1986
Non-impact printing | Okimate 10 | [
"Technology"
] | 445 | [
"Computing stubs",
"Computer hardware stubs"
] |
14,644,098 | https://en.wikipedia.org/wiki/J.%20Peter%20May | Jon Peter May (born September 16, 1939, in New York) is an American mathematician working in the fields of algebraic topology, category theory, homotopy theory, and the foundational aspects of spectra. He is known, in particular, for the May spectral sequence and for coining the term operad.
Education and career
May received a Bachelor of Arts degree from Swarthmore College in 1960 and a Doctor of Philosophy degree from Princeton University in 1964. His thesis, written under the direction of John Moore, was titled The cohomology of restricted Lie algebras and of Hopf algebras: Application to the Steenrod algebra.
From 1964 to 1967, May taught at Yale University. He has been a faculty member at the University of Chicago since 1967, and a professor since 1970.
The word "operad" was created by May as a portmanteau of "operations" and "monad".
Awards
In 2012 he became a fellow of the American Mathematical Society. He has advised over 60 doctoral students, including Mark Behrens, Andrew Blumberg, Frederick Cohen, Ib Madsen, Emily Riehl, Michael Shulman, and Zhouli Xu.
References
Notes
May himself has stated that he was partially inspired by his mother's opera singing when coining the term.
External links
May's homepage at the University of Chicago
Jon Peter May at the Mathematics Genealogy Project
20th-century American mathematicians
21st-century American mathematicians
American topologists
University of Chicago faculty
Yale University faculty
Princeton University alumni
Swarthmore College alumni
Fellows of the American Mathematical Society
1939 births
Living people
Mathematicians from New York (state) | J. Peter May | [
"Mathematics"
] | 329 | [
"Mathematical structures",
"Category theory",
"Category theory stubs"
] |
14,644,243 | https://en.wikipedia.org/wiki/Taxonomic%20rank | In biology, taxonomic rank (which some authors prefer to call nomenclatural rank because ranking is part of nomenclature rather than taxonomy proper, according to some definitions of these terms) is the relative or absolute level of a group of organisms (a taxon) in a hierarchy that reflects evolutionary relationships. Thus, the most inclusive clades (such as Eukarya and Opisthokonta) have the highest ranks, whereas the least inclusive ones (such as Homo sapiens or Bufo bufo) have the lowest ranks. Ranks can be either relative and be denoted by an indented taxonomy in which the level of indentation reflects the rank, or absolute, in which various terms, such as species, genus, family, order, class, phylum, kingdom, and domain designate rank. This page emphasizes absolute ranks and the rank-based codes (the Zoological Code, the Botanical Code, the Code for Cultivated Plants, the Prokaryotic Code, and the Code for Viruses) require them. However, absolute ranks are not required in all nomenclatural systems for taxonomists; for instance, the PhyloCode, the code of phylogenetic nomenclature, does not require absolute ranks.
Taxa are hierarchical groups of organisms, and their ranks describe their positions in this hierarchy. High-ranking taxa (e.g. those considered to be domains or kingdoms) include more sub-taxa than low-ranking taxa (e.g. those considered genera, species or subspecies). The rank of these taxa reflects inheritance of traits or molecular features from common ancestors. The names of the species and genus are basic, which means that to identify a particular organism, it is usually not necessary to specify names at ranks other than these first two, within a set of taxa covered by a given rank-based code. However, this is not true globally because most rank-based codes are independent from each other, so there are many inter-code homonyms (the same name used for different organisms, often for an animal and for a taxon covered by the botanical code). For this reason, attempts were made at creating a BioCode that would regulate all taxon names, but this attempt has so far failed because of firmly entrenched traditions in each community.
Consider a particular species, the red fox, Vulpes vulpes: in the context of the Zoological Code, the specific epithet vulpes (small v) identifies a particular species in the genus Vulpes (capital V) which comprises all the "true" foxes. Their close relatives are all in the family Canidae, which includes dogs, wolves, jackals, and all foxes; the next higher major taxon, Carnivora (considered an order), includes caniforms (bears, seals, weasels, skunks, raccoons and all those mentioned above), and feliforms (cats, civets, hyenas, mongooses). Carnivorans are one group of the hairy, warm-blooded, nursing members of the class Mammalia, which are classified among animals with notochords in the phylum Chordata, and with them among all animals in the kingdom Animalia. Finally, at the highest rank all of these are grouped together with all other organisms possessing cell nuclei in the domain Eukarya.
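As a small illustration (not part of the article), the nesting of ranks in this example can be written down as a simple mapping in Python, from most to least inclusive:

```python
# The major ranks for the red fox example above, from most to least inclusive.
red_fox = {
    "domain":  "Eukarya",
    "kingdom": "Animalia",
    "phylum":  "Chordata",
    "class":   "Mammalia",
    "order":   "Carnivora",
    "family":  "Canidae",
    "genus":   "Vulpes",
    "species": "Vulpes vulpes",
}

# Higher ranks are more inclusive: every member of Canidae is also in Carnivora, Mammalia, ...
for rank, taxon in red_fox.items():
    print(f"{rank:>8}: {taxon}")
```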
The International Code of Zoological Nomenclature defines rank as: "The level, for nomenclatural purposes, of a taxon in a taxonomic hierarchy (e.g. all families are for nomenclatural purposes at the same rank, which lies between superfamily and subfamily)." Note that the discussions on this page generally assume that taxa are clades (monophyletic groups of organisms), but this is required neither by the International Code of Zoological Nomenclature nor by the Botanical Code, and some experts on biological nomenclature do not think that this should be required, and in that case, the hierarchy of taxa (hence, their ranks) does not necessarily reflect the hierarchy of clades.
History
While older approaches to taxonomic classification were phenomenological, forming groups on the basis of similarities in appearance, organic structure and behavior, two important new methods developed in the second half of the 20th century drastically changed taxonomic practice. One is the advent of cladistics, which stemmed from the works of the German entomologist Willi Hennig. Cladistics is a method of classification of life forms according to the proportion of characteristics that they have in common (called synapomorphies). It is assumed that the higher the proportion of characteristics that two organisms share, the more recently they both came from a common ancestor. The second is molecular systematics, based on genetic analysis, which can provide much additional data that prove especially useful when few phenotypic characters can resolve relationships, as, for instance, in many viruses, bacteria and archaea, or to resolve relationships between taxa that arose in a fast evolutionary radiation that occurred long ago, such as the main taxa of placental mammals.
Main ranks
In his landmark publications, such as the Systema Naturae, Carl Linnaeus used a ranking scale limited to kingdom, class, order, genus, species, and one rank below species. Today, the nomenclature is regulated by the nomenclature codes. There are seven main taxonomic ranks: kingdom, phylum or division, class, order, family, genus, and species. In addition, domain (proposed by Carl Woese) is now widely used as a fundamental rank, although it is not mentioned in any of the nomenclature codes, and is a synonym for dominion, introduced by Moore in 1974.
A taxon is usually assigned a rank when it is given its formal name. The basic ranks are species and genus. When an organism is given a species name it is assigned to a genus, and the genus name is part of the species name.
The species name is also called a binomial, that is, a two-term name. For example, the zoological name for the human species is Homo sapiens. This is usually italicized in print or underlined when italics are not available. In this case, Homo is the generic name and it is capitalized; sapiens indicates the species and it is not capitalized. While not always used, some species include a subspecific epithet. For instance, modern humans are Homo sapiens sapiens, or H. sapiens sapiens.
In zoological nomenclature, higher taxon names are normally not italicized, but the Botanical Code, the Prokaryotic Code, the Code for Viruses, the draft BioCode and the PhyloCode all recommend italicizing all taxon names (of all ranks).
Ranks in zoology
There are rules applying to the following taxonomic ranks in the International Code of Zoological Nomenclature: superfamily, family, subfamily, tribe, subtribe, genus, subgenus, species, subspecies.
The International Code of Zoological Nomenclature divides names into "family-group names", "genus-group names" and "species-group names". The Code explicitly mentions the following ranks for these categories:
Family-groups
Superfamily (-oidea)
Family (-idae)
Subfamily (-inae)
Tribe (-ini)
Subtribe (-ina)
Genus-groups
Genus
Subgenus
Species-groups
Species
Subspecies
The rules in the Code apply to the ranks of superfamily to subspecies, and only to some extent to those above the rank of superfamily. Among "genus-group names" and "species-group names" no further ranks are officially allowed, which creates problems when naming taxa in these groups in speciose clades, such as Rana. Zoologists sometimes use additional terms such as species group, species subgroup, species complex and superspecies for convenience as extra, but unofficial, ranks between the subgenus and species levels in taxa with many species, e.g. the genus Drosophila. (Note the potentially confusing use of "species group" as both a category of ranks as well as an unofficial rank itself. For this reason, Alain Dubois has been using the alternative expressions "nominal-series", "family-series", "genus-series" and "species-series" (among others) at least since 2000.)
At higher ranks (family and above) a lower level may be denoted by adding the prefix "infra", meaning lower, to the rank. For example, infraorder (below suborder) or infrafamily (below subfamily).
Names of zoological taxa
A taxon above the rank of species has a scientific name in one part (a uninominal name).
A species has a name typically composed of two parts (a binomial name or binomen): generic name + specific name; for example Canis lupus. Sometimes the name of a subgenus (in parentheses) can be intercalated between the genus name and the specific epithet, which yields a trinomial name that should not be confused with that of a subspecies. An example is Lithobates (Aquarana) catesbeianus, which designates a species that belongs to the genus Lithobates and the subgenus Aquarana.
A subspecies has a name composed of three parts (a trinomial name or trinomen): generic name + specific name + subspecific name; for example Canis lupus italicus. As there is only one possible rank below that of species, no connecting term to indicate rank is needed or used.
Ranks in botany
Botanical ranks categorize organisms, often based on their relationships (monophyly is not required by the botanical code, which does not even mention this word, nor the word "clade"). They start with Kingdom, then move to Division (or Phylum), Class, Order, Family, Genus, and Species. Taxa at each rank generally possess shared characteristics and evolutionary history. Understanding these ranks aids taxonomy and the study of biodiversity.
There are definitions of the following taxonomic categories in the International Code of Nomenclature for Cultivated Plants: cultivar group, cultivar, grex.
The rules in the ICN apply primarily to the ranks of family and below, and only to some extent to those above the rank of family.
Names of botanical taxa
Taxa at the rank of genus and above have a botanical name in one part (unitary name); those at the rank of species and above (but below genus) have a botanical name in two parts (binary name); all taxa below the rank of species have a botanical name in three parts (an infraspecific name). To indicate the rank of the infraspecific name, a "connecting term" is needed. Thus Poa secunda subsp. juncifolia, where "subsp". is an abbreviation for "subspecies", is the name of a subspecies of Poa secunda.
Hybrids can be specified either by a "hybrid formula" that specifies the parentage, or may be given a name. For hybrids receiving a hybrid name, the same ranks apply, prefixed with notho (Greek: 'bastard'), with nothogenus as the highest permitted rank.
Outdated names for botanical ranks
If a different term for the rank was used in an old publication, but the intention is clear, botanical nomenclature specifies certain substitutions:
If names were "intended as names of orders, but published with their rank denoted by a term such as": "cohors" [Latin for "cohort"; see also cohort study for the use of the term in ecology], "nixus", "alliance", or "Reihe" instead of "order" (Article 17.2), they are treated as names of orders.
"Family" is substituted for "order" (ordo) or "natural order" (ordo naturalis) under certain conditions where the modern meaning of "order" was not intended. (Article 18.2)
"Subfamily" is substituted for "suborder" (subordo) under certain conditions where the modern meaning of "suborder" was not intended. (Article 19.2)
In a publication prior to 1 January 1890, if only one infraspecific rank is used, it is considered to be that of variety. (Article 37.4) This commonly applies to publications that labelled infraspecific taxa with Greek letters, α, β, γ, ...
Examples
Classifications of five species follow: the fruit fly familiar in genetics laboratories (Drosophila melanogaster), humans (Homo sapiens), the peas used by Gregor Mendel in his discovery of genetics (Pisum sativum), the "fly agaric" mushroom Amanita muscaria, and the bacterium Escherichia coli. The eight major ranks are given in bold; a selection of minor ranks are given as well.
Table notes
In order to keep the table compact and avoid disputed technicalities, some common and uncommon intermediate ranks are omitted. For example, the mammals of Europe, Africa, and upper North America are in class Mammalia, legion Cladotheria, sublegion Zatheria, infralegion Tribosphenida, subclass Theria, clade Eutheria, clade Placentalia – but only Mammalia and Theria are in the table. Legitimate arguments might arise if the commonly used clades Eutheria and Placentalia were both included, over which is the rank "infraclass" and what the other's rank should be, or whether the two names are synonyms.
The ranks of higher taxa, especially intermediate ranks, are prone to revision as new information about relationships is discovered. For example, the flowering plants have been downgraded from a division (Magnoliophyta) to a subclass (Magnoliidae), and the superorder has become the rank that distinguishes the major groups of flowering plants. The traditional classification of primates (class Mammalia, subclass Theria, infraclass Eutheria, order Primates) has been modified by new classifications such as McKenna and Bell (class Mammalia, subclass Theriformes, infraclass Holotheria) with Theria and Eutheria assigned lower ranks between infraclass and the order Primates. These differences arise because there are few available ranks and many branching points in the fossil record.
Within species further units may be recognised. Animals may be classified into subspecies (for example, Homo sapiens sapiens, modern humans) or morphs (for example Corvus corax varius morpha leucophaeus, the pied raven). Plants may be classified into subspecies (for example, Pisum sativum subsp. sativum, the garden pea) or varieties (for example, Pisum sativum var. macrocarpon, snow pea), with cultivated plants getting a cultivar name (for example, Pisum sativum var. macrocarpon 'Snowbird'). Bacteria may be classified by strains (for example Escherichia coli O157:H7, a strain that can cause food poisoning).
Terminations of names
Taxa above the genus level are often given names based on the type genus, with a standard termination. The terminations used in forming these names depend on the kingdom (and sometimes the phylum and class) as set out in the table below.
Pronunciations given are the most Anglicized. More Latinate pronunciations are also common, particularly /ɑː/ rather than /eɪ/ for stressed a.
Table notes
In botany and mycology names at the rank of family and below are based on the name of a genus, sometimes called the type genus of that taxon, with a standard ending. For example, the rose family, Rosaceae, is named after the genus Rosa, with the standard ending "-aceae" for a family. Names above the rank of family are also formed from a generic name, or are descriptive (like Gymnospermae or Fungi).
For animals, there are standard suffixes for taxa only up to the rank of superfamily. A uniform suffix has been suggested (but not recommended) in AAAS, for example -ida for orders; protozoologists seem to have adopted this system. Many metazoan (higher animal) orders also carry such a suffix, e.g. Hyolithida and Nectaspida (Naraoiida).
Forming a name based on a generic name may not be straightforward. For example, the Latin homo has the genitive hominis, thus the genus Homo (human) is in the Hominidae, not "Homidae".
The ranks of epifamily, infrafamily and infratribe (in animals) are used where the complexities of phyletic branching require finer-than-usual distinctions. Although they fall below the rank of superfamily, they are not regulated under the International Code of Zoological Nomenclature and hence do not have formal standard endings. The suffixes listed here are regular, but informal.
In virology, the formal endings for taxa of viroids, of satellite nucleic acids, and of viriforms are similar to viruses, only -vir- is replaced by -viroid-, -satellit- and -viriform-. The extra levels of realm and subrealm end with -viria and -vira respectively.
All ranks
There is an indeterminate number of ranks, as a taxonomist may invent a new rank at will, at any time, if they feel this is necessary. In doing so, there are some restrictions, which will vary with the nomenclature code that applies.
The following is an artificial synthesis, solely for purposes of demonstration of absolute rank (but see notes), from most general to most specific:
Superdomain
Domain or Empire
Subdomain (biology)
Realm (in virology)
Subrealm (in virology)
Hyperkingdom
Superkingdom
Kingdom
Subkingdom
Infrakingdom
Parvkingdom
Superphylum, or superdivision (in botany)
Phylum, or division (in botany)
Subphylum, or subdivision (in botany)
Infraphylum, or infradivision (in botany)
Microphylum
Superclass
Class
Subclass
Infraclass
Subterclass
Parvclass
Superdivision (in zoology)
Division (in zoology)
Subdivision (in zoology)
Infradivision (in zoology)
Superlegion (in zoology)
Legion (in zoology)
Sublegion (in zoology)
Infralegion (in zoology)
Supercohort (in zoology)
Cohort (in zoology)
Subcohort (in zoology)
Infracohort (in zoology)
Gigaorder (in zoology)
Magnorder or megaorder (in zoology)
Grandorder or capaxorder (in zoology)
Mirorder or hyperorder (in zoology)
Superorder
Series (in ichthyology)
Order
Parvorder (position in some zoological classifications)
Nanorder (in zoology)
Hypoorder (in zoology)
Minorder (in zoology)
Suborder
Infraorder
Parvorder (usual position), or microorder (in zoology)
Section (in zoology)
Subsection (in zoology)
Gigafamily (in zoology)
Megafamily (in zoology)
Grandfamily (in zoology)
Hyperfamily (in zoology)
Superfamily
Epifamily (in zoology)
Series (for Lepidoptera)
Group (for Lepidoptera)
Family
Subfamily
Infrafamily
Supertribe
Tribe
Subtribe
Infratribe
Supergenus
Genus
Subgenus
Section (in botany)
Subsection (in botany)
Series (in botany)
Subseries (in botany)
Species complex
Species
Subspecies, or forma specialis (for fungi), or pathovar (for bacteria))
Variety or varietas (in botany); or form or morph (in zoology) or aberration (in lepidopterology)
Subvariety (in botany)
Form or forma (in botany)
Subform or subforma (in botany)
Significance and problems
Ranks are assigned based on subjective dissimilarity, and do not fully reflect the gradational nature of variation within nature. These problems were already identified by Willi Hennig, who advocated dropping them in 1969, and this position gathered support from Graham C. D. Griffiths only a few years later. In fact, these ranks were proposed in a fixist context and the advent of evolution sapped the foundations of this system, as was recognised long ago; the introduction of The Code of Nomenclature and Check-list of North American Birds Adopted by the American Ornithologists' Union published in 1886 states "No one appears to have suspected, in 1842 [when the Strickland code was drafted], that the Linnaean system was not the permanent heritage of science, or that in a few years a theory of evolution was to sap its very foundations, by radically changing men's conceptions of those things to which names were to be furnished." Such ranks are used simply because they are required by the rank-based codes; because of this, some systematists prefer to call them nomenclatural ranks. In most cases, higher taxonomic groupings arise further back in time, simply because the most inclusive taxa necessarily appeared first. Furthermore, the diversity in some major taxa (such as vertebrates and angiosperms) is better known than that of others (such as fungi, arthropods and nematodes) not because they are more diverse than other taxa, but because they are more easily sampled and studied than other taxa, or because they attract more interest and funding for research.
Of these many ranks, many systematists consider that the most basic (or important) is the species, but this opinion is not universally shared. Thus, species are not necessarily more sharply defined than taxa at any other rank, and in fact, given the phenotypic gaps created by extinction, in practice, the reverse is often the case. Ideally, a taxon is intended to represent a clade, that is, the phylogeny of the organisms under discussion, but this is not a requirement of the zoological and botanical codes.
A classification in which all taxa have formal ranks cannot adequately reflect knowledge about phylogeny. Since taxon names are dependent on ranks in rank-based (Linnaean) nomenclature, taxa without ranks cannot be given names. Alternative approaches, such as phylogenetic nomenclature, as implemented under the PhyloCode and supported by the International Society for Phylogenetic Nomenclature, or using circumscriptional names, avoid this problem. The theoretical difficulty with superimposing taxonomic ranks over evolutionary trees is manifested as the boundary paradox which may be illustrated by Darwinian evolutionary models.
There are no rules for how many species should make a genus, a family, or any other higher taxon (that is, a taxon in a category above the species level). It should be a natural group (that is, non-artificial, non-polyphyletic), as judged by a biologist, using all the information available to them. Equally ranked higher taxa in different phyla are not necessarily equivalent in terms of time of origin, phenotypic distinctiveness or number of lower-ranking included taxa (e.g., it is incorrect to assume that families of insects are in some way evolutionarily comparable to families of mollusks). Of all criteria that have been advocated to rank taxa, age of origin has been the most frequently advocated. Willi Hennig proposed it in 1966, but he concluded in 1969 that this system was unworkable and suggested dropping absolute ranks. However, the idea of ranking taxa using the age of origin (either as the sole criterion, or as one of the main ones) persists under the name of time banding, and is still advocated by several authors. For animals, at least the phylum rank is usually associated with a certain body plan, which is also, however, an arbitrary criterion.
Enigmatic taxa
Enigmatic taxa are taxonomic groups whose broader relationships are unknown or undefined.
Mnemonic
There are several acronyms intended to help memorise the taxonomic hierarchy, such as "King Phillip came over for great spaghetti".
See also
Breed
Catalogue of Life (a database)
Cladistics
Landrace
Tree of life (biology)
Alliance (taxonomy)
Strain (biology)
Footnotes
References
Bibliography
Botanical nomenclature
Plant taxonomy
R
Biology terminology | Taxonomic rank | [
"Biology"
] | 5,038 | [
"Zoological nomenclature",
"Botanical nomenclature",
"Plants",
"Botanical terminology",
"Biological nomenclature",
"Plant taxonomy",
"nan"
] |
14,644,287 | https://en.wikipedia.org/wiki/Pfaffian%20function | In mathematics, Pfaffian functions are a certain class of functions whose derivative can be written in terms of the original function. They were originally introduced by Askold Khovanskii in the 1970s, but are named after German mathematician Johann Pfaff.
Basic definition
Some functions, when differentiated, give a result which can be written in terms of the original function. Perhaps the simplest example is the exponential function, $f(x) = e^x$. If we differentiate this function we get $e^x$ again, that is

$f'(x) = e^x = f(x).$

Another example of a function like this is the reciprocal function, $g(x) = 1/x$. If we differentiate this function we will see that

$g'(x) = -\frac{1}{x^2} = -g(x)^2.$

Other functions may not have the above property, but their derivative may be written in terms of functions like those above. For example, if we take the function $h(x) = e^x \log x$ then we see

$h'(x) = e^x \log x + \frac{e^x}{x} = h(x) + f(x)\,g(x).$
Functions like these form the links in a so-called Pfaffian chain. Such a chain is a sequence of functions, say f1, f2, f3, etc., with the property that if we differentiate any of the functions in this chain then the result can be written in terms of the function itself and all the functions preceding it in the chain (specifically as a polynomial in those functions and the variables involved). So with the functions above we have that f, g, h is a Pfaffian chain.
A Pfaffian function is then just a polynomial in the functions appearing in a Pfaffian chain and the function argument. So with the Pfaffian chain just mentioned, functions such as $F(x) = x^3 f(x)^2 - 2 g(x) h(x)$ are Pfaffian.
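The chain property can be checked symbolically; the following SymPy sketch (not part of the original text) verifies that the derivatives of f, g and h above are expressible as polynomials in x and in the earlier functions of the chain:

```python
# Check that f, g, h from the text satisfy the stated Pfaffian-chain relations.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)                  # f' = f
g = 1 / x                      # g' = -g**2
h = sp.exp(x) * sp.log(x)      # h' = h + f*g

assert sp.simplify(sp.diff(f, x) - f) == 0
assert sp.simplify(sp.diff(g, x) + g**2) == 0
assert sp.simplify(sp.diff(h, x) - (h + f * g)) == 0
print("all chain relations verified")
```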
Rigorous definition
Let U be an open domain in R^n. A Pfaffian chain of order r ≥ 0 and degree α ≥ 1 in U is a sequence of real analytic functions f1, ..., fr in U satisfying differential equations

$\frac{\partial f_i}{\partial x_j}(\mathbf{x}) = P_{i,j}\bigl(\mathbf{x}, f_1(\mathbf{x}), \ldots, f_i(\mathbf{x})\bigr)$

for i = 1, ..., r and j = 1, ..., n, where Pi, j ∈ R[x1, ..., xn, y1, ..., yi] are polynomials of degree ≤ α. A function f on U is called a Pfaffian function of order r and degree (α, β) if

$f(\mathbf{x}) = P\bigl(\mathbf{x}, f_1(\mathbf{x}), \ldots, f_r(\mathbf{x})\bigr)$

where P ∈ R[x1, ..., xn, y1, ..., yr] is a polynomial of degree at most β ≥ 1. The numbers r, α, and β are collectively known as the format of the Pfaffian function, and give a useful measure of its complexity.
Examples
The most trivial examples of Pfaffian functions are the polynomial functions. Such a function will be a polynomial in a Pfaffian chain of order r = 0, that is the chain with no functions. Such a function will have α = 0 and β equal to the degree of the polynomial.
Perhaps the simplest nontrivial Pfaffian function is f(x) = ex. This is Pfaffian with order r = 1 and α = β = 1 due to the differential equation f′ = f.
Recursively, one may define f1(x) = exp(x) and fm+1(x) = exp(fm(x)) for 1 ≤ m < r. Then fm′ = f1f2···fm. So this is a Pfaffian chain of order r and degree α = r.
All of the algebraic functions are Pfaffian on suitable domains, as are the hyperbolic functions. The trigonometric functions on bounded intervals are Pfaffian, but they must be formed indirectly. For example, the function cos(x) is a polynomial in the Pfaffian chain tan(x/2), cos2(x/2) on the interval (−π, π).
In fact all the elementary functions and Liouvillian functions are Pfaffian.
In model theory
Consider the structure R = (R, +, −, ·, <, 0, 1), the ordered field of real numbers. In the 1960s Andrei Gabrielov proved that the structure obtained by starting with R and adding a function symbol for every analytic function restricted to the unit box [0, 1]m is model complete. That is, any set definable in this structure Ran was just the projection of some higher-dimensional set defined by identities and inequalities involving these restricted analytic functions.
In the 1990s, Alex Wilkie showed that one has the same result if instead of adding every restricted analytic function, one just adds the unrestricted exponential function to R to get the ordered real field with exponentiation, Rexp, a result known as Wilkie's theorem. Wilkie also tackled the question of which finite sets of analytic functions could be added to R to get a model-completeness result. It turned out that adding any Pfaffian chain restricted to the box [0, 1]m would give the same result. In particular one may add all Pfaffian functions to R to get the structure RPfaff as a variant of Gabrielov's result. The result on exponentiation is not a special case of this result (even though exp is a Pfaffian chain by itself), as it applies to the unrestricted exponential function.
This result of Wilkie's proved that the structure RPfaff is an o-minimal structure.
Noetherian functions
The equations above that define a Pfaffian chain are said to satisfy a triangular condition, since the derivative of each successive function in the chain is a polynomial in one extra variable. Thus if they are written out in turn a triangular shape appears:
f1′ = P1(x, f1),
f2′ = P2(x, f1, f2),
f3′ = P3(x, f1, f2, f3),
and so on. If this triangularity condition is relaxed so that the derivative of each function in the chain is a polynomial in all the other functions in the chain, then the chain of functions is known as a Noetherian chain, and a function constructed as a polynomial in this chain is called a Noetherian function. So, for example, a Noetherian chain of order three is composed of three functions f1, f2, f3, satisfying the equations
f1′ = P1(x, f1, f2, f3),
f2′ = P2(x, f1, f2, f3),
f3′ = P3(x, f1, f2, f3).
The name stems from the fact that the ring generated by the functions in such a chain is Noetherian.
Any Pfaffian chain is also a Noetherian chain (the extra variables in each polynomial are simply redundant in this case), but not every Noetherian chain is Pfaffian; for example, if we take f1(x) = sin x and f2(x) = cos x then we have the equations f1′(x) = f2(x) and f2′(x) = −f1(x),
and these hold for all real numbers x, so f1, f2 is a Noetherian chain on all of R. But there is no polynomial P(x, y) such that the derivative of sin x can be written as P(x, sin x), and so this chain is not Pfaffian.
Notes
References
Functions and mappings
Types of functions | Pfaffian function | [
"Mathematics"
] | 1,442 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
14,644,488 | https://en.wikipedia.org/wiki/Carinderia | Carinderia (sometimes spelled as Karinderya) is a common type of eatery in the Philippines that serves affordable and locally-inspired dishes. These food establishments, also known as turo-turo (meaning "point-point" in Filipino), play a significant role in Filipino cuisine and provide a convenient and economical dining option for people from all walks of life.
Carinderias are known for their affordability, making them accessible to a wide range of customers, from students and office workers to taxi drivers and construction workers. The low cost of meals is one of the main reasons why carinderias are popular among Filipinos.
History and evolution
The concept of the carinderia can be traced back to the early 1800s when it emerged as a native food shop and a convenient stop for travelers. Prior to 1764, there was no specific Filipino term to describe a commercial establishment selling cooked food. However, with the growth of busy crossroads, carinderias developed into a quick food service option for locals and travelers in need of sustenance. Over time, carinderias have adapted and evolved to meet the needs and preferences of Filipinos. Today, variations of carinderias can be found, including traveling carinderias and high-class carinderias, each offering its own unique dining experience and menu options.
Influence of British Sepoys
According to Filipino food historian Felice Prudente-Sta. Maria, carinderias and "karihans" (a term used interchangeably with carinderias) in the Philippines were influenced by the presence of British Sepoys. British Sepoys were Indian natives who deserted British General William Draper's fleet around 1764 during the British occupation of Manila. These Sepoys integrated into the local community, marrying Filipina wives and settling in towns in the province of Tondo such as Taytay and Cainta, which were located along the Maytime Pilgrimage route to Antipolo Church.
Role in pilgrimage routes and tourism
Carinderias played a crucial role in providing sustenance to travelers and pilgrims along pilgrimage routes. As tourist transportation options emerged, such as the inauguration of the Philippine railway in 1892, towns like Cainta and Taytay became important stops for pilgrims embarking on the trek to the Antipolo town shrine. These towns witnessed an increase in the number of carinderias, offering a diverse menu that often included dishes like curry. The term "carinderia" has been linked by Spanish authority Wenceslao E. Retana to the Tagalog word for curry, "kari," which is also the root word for the native dish called Kare-kare.
Cuisine
Carinderias offer a wide range of Filipino dishes, including traditional home-cooked meals and popular local favorites. The menu can vary from day to day, depending on the availability of ingredients and the cook's expertise. Common offerings may include adobo (marinated meat stew), sinigang (sour soup), tinola (chicken stew), kare-kare (oxtail stew in peanut sauce), and a variety of vegetable and seafood dishes. Rice, the staple food of Filipinos, is usually included or available as a side dish.
Some carinderias may display raw meats, such as chicken neck, chicken livers, chicken gizzards, strips of marinated pork or chicken meat, pork belly or other foods, which the customer can purchase and they will grill the meat over charcoal while the customer waits. They are typically basted in some type of sauce. The raw food are usually displayed and served on a bamboo stick, which makes the handling easier.
See also
Nasi campur, a similar dining concept where rice is paired with a variety of side dishes in Indonesia, Malaysia, Brunei and Singapore
Khao kaeng, a Thai dish consisting of rice served with a variety of curries, soups, or stir-fries, typically served in a casual setting
References
Filipino cuisine
Restaurants in the Philippines
Street food
Food and drink in the Philippines
Culture of the Philippines
Tourism in the Philippines
Infrastructure
Building types
Buildings and structures by type
Urban studies and planning terminology
Restaurants by type | Carinderia | [
"Engineering"
] | 856 | [
"Construction",
"Buildings and structures by type",
"Infrastructure",
"Architecture"
] |
14,644,608 | https://en.wikipedia.org/wiki/Conservation%20Geoportal | The Conservation Geoportal was an online geoportal, intended to provide a comprehensive listing of geographic information systems (GIS) datasets and web map service relevant to biodiversity conservation. It is currently defunct. The site, its contents and functionality were free for anyone to use and contribute to. The Conservation Geoportal was launched on June 28, 2006 at the joint Society for Conservation Biology and Society for Conservation GIS Conference in San Jose, California, USA. As of October 2007, it included metadata for over 3,667 GIS records.
History
The Conservation Geoportal was conceived when representatives from a group of conservation-minded organizations met at the National Geographic Society in March 2005 to define a vision for a World Conservation Base Map. Initially the focus on developing an inventory or catalog of datasets and maps in the form of a metadata database was to be mined to develop the Conservation Base Map and Atlas.
Overview
The Conservation Geoportal constitutes a collaborative effort by and for the conservation community to facilitate the discovery and publishing of GIS data and maps, to support conservation decision-making and education. It does not actually store maps and data, but rather the descriptions and links to those data resources. These descriptions are known as metadata. It was intended to provide an efficient point of access for people interested in a full range of conservation-related GIS data. Capabilities of the Conservation Geoportal included:
Search for data and maps by keyword, category, geography, or time period
Save search queries for future use
Use the built-in Map Viewer to display, manipulate, and combine live map services
Map viewer supports OpenGIS standards (WMS, WFS, WCS) and ArcIMS services
Create, save, and email custom maps using data from various web map services
Publish metadata for maps and data so others can find them
Featured Map section
Content in designated thematic data channels
Share information with other geoportals
Status
Sponsored by The Nature Conservancy, National Geographic Society and UNEP-World Conservation Monitoring Centre
~2,000 visitors per month at its peak
~3,667 metadata records & 515 registered users
Data channels
The Conservation Geoportal included Data Channels and Sub-channels to organize and facilitate access to metadata describing data and maps in a given topic or theme. Channels provided quick access (2 clicks to content) to key data resources that experts consider important to the larger user community. Channels were managed by organizations and experts (channel stewards) knowledgeable about that theme, including:
Conservation areas: Conservation areas can include existing legally protected areas, as well as areas of ecological or cultural significance identified through assessment and planning efforts. They represent areas where conservation activities are currently taking place or where one or more organizations intend to take action
Species: Species distributions including amphibian, birds, fish, mammals and many others
Habitats: Habitats and ecosystems
Threats: Threats to biodiversity
Environmental factors: Physical environmental factors including soils, geology, land cover/land use and oceanography
Socioeconomics: Factors including population, economy, policy, culture, indigenous rights, ecosystem services
Base map layers: Layers including roads, political boundaries, and satellite imagery
Geoportal consortium
The Conservation Geoportal was designed and maintained collaboratively by a consortium of institutions including (in alphabetical order):
American Museum of Natural History
Conservation International
Environmental Systems Research Institute
IUCN - The World Conservation Union
NASA
National Geographic Society
NatureServe
Smithsonian Institution
The Nature Conservancy
UNEP - World Conservation Monitoring Centre
University of Maryland - Global Land Cover Facility
USGS - National Biological Information Infrastructure
Waterborne Environmental, Inc.
Wildlife Conservation Society
World Resources Institute
World Wildlife Fund
Technology
The Conservation Geoportal was based on ESRI's GIS Portal Toolkit (Version 3.0) and ArcWeb Services technologies. Currently the site is maintained and hosted by ESRI. Although the underlying technology was proprietary, the Conservation Geoportal supported several metadata standards and OpenGIS standards, such as:
FGDC and ISO 19115 metadata standards
Harvesting from ArcIMS, Z39.50, OAI, and WAF based metadata repositories
OpenGIS WFS, WMS, and WCS services through the map viewer
ArcIMS Image and ArcGIS Image Server
OpenLS geocoder
Mashup capabilities
The Map Viewer let users overlay or mashup data layers from different map servers, which may be hosted by different organizations using different protocols (e.g., ArcIMS, WMS). For example, by searching the catalog, users could discover three different map services, hosted by the Nature Conservancy, Conservation International, and World Wildlife Fund, delineating conservation priority areas. Then, with a click, these live maps can be overlaid together in the Map Viewer along with a satellite image backdrop from NASA. Users could then zoom and pan to their area of interest, turn layers on and off, adjust transparencies, and save that map view to a URL, which they can e-mail to their colleagues to show how various priority maps compare. When their colleagues click the link, exactly the same map view opens, allowing them to work with the live map, perhaps adding map services or posting the link to the map view on their Web site.
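For illustration, a live map layer of the kind described above is fetched from a WMS server with a standard OGC GetMap call. The sketch below builds such a request in Python; the server URL and layer name are placeholders, not actual Conservation Geoportal endpoints.

```python
from urllib.parse import urlencode

# Parameters of a standard OGC WMS 1.1.1 GetMap request.
# "example.org" and "protected_areas" are hypothetical, for illustration only.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "protected_areas",   # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",            # WMS 1.1.1 uses SRS (1.3.0 uses CRS)
    "BBOX": "-180,-90,180,90",
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}
url = "https://example.org/wms?" + urlencode(params)
print(url)
```

Because every overlay in such a mashup is just another GetMap request against a different server, layers hosted by different organizations can be stacked in a single viewer without copying any data.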
Parent project
The Conservation Geoportal was intended to support the principles and objectives of the Conservation Commons. At its simplest, it encourages organizations and individuals alike to ensure open access to data, information, expertise and knowledge related to the conservation of biodiversity. The Conservation Commons is the expression of a cooperative effort of non-governmental organizations, international and multi-lateral organizations, governments, academia, and the private sector, to improve open access to and unrestricted use of, data, information and knowledge related to the conservation of biodiversity with the belief that this will contribute to improving conservation outcomes.
References
Sources
Biasi, F. 2007. New Conservation GeoPortal Taps into a World of Maps and Data, ArcWatch (May)
ArcNews Online Conservation GeoPortal to Support Worldwide Data Sharing and Discovery. (Summer 2006)
ConserveOnline (2006) (.doc format)
External links
Data Basin
Conservation Commons
Web portals
Geographic information systems
Nature conservation organizations | Conservation Geoportal | [
"Technology"
] | 1,250 | [
"Information systems",
"Geographic information systems"
] |
14,645,059 | https://en.wikipedia.org/wiki/Flow%20injection%20analysis | Flow injection analysis (FIA) is an approach to chemical analysis. It is accomplished by injecting a plug of sample into a flowing carrier stream. The principle is similar to that of Segmented Flow Analysis (SFA) but no air is injected into the sample or reagent streams.
Overview
FIA is an automated method of chemical analysis in which a sample is injected into a flowing carrier solution that mixes with reagents before reaching a detector. Over the past 30 years, FIA techniques have developed into a wide array of applications using spectrophotometry, fluorescence spectroscopy, atomic absorption spectroscopy, mass spectrometry, and other methods of instrumental analysis for detection.
Automated sample processing, high repeatability, adaptability to micro-miniaturization, containment of chemicals, waste reduction, and reagent economy in a system that operates at microliter levels are all valuable assets that contribute to the application of flow injection to real-world assays. The main assets of flow injection are the well defined concentration gradient that forms when an analyte is injected into the reagent stream (which offers an infinite number of well-reproduced analyte/reagent ratios) and the exact timing of fluidic manipulations (which provide exquisite control over the reaction conditions).
Based on computer control, FIA evolved into Sequential Injection and Bead Injection which are novel techniques based on flow programming. FIA literature comprises over 22,000 scientific papers and 22 monographs.
History
Flow injection analysis (FIA) was first described by Ruzicka and Hansen in Denmark in 1974 and by Stewart and coworkers in the United States in 1979. FIA is a popular, simple, rapid, and versatile technique that has a well-established position in modern analytical chemistry and widespread application in quantitative chemical analysis.
Principles of operation
A sample (analyte) is injected into a flowing carrier solution stream that is driven by a peristaltic pump. The injection of the sample is done under controlled dispersion in known volumes. The carrier solution and sample then meet at mixing points with reagents and react. The reaction time is controlled by a pump and reaction coil. The reaction product then flows through a detector. Most often, the detector is a spectrophotometer as the reactions usually produce a colored product. One can then determine the amount of an unknown material in the sample, as it is proportional to the absorbance measured by the spectrophotometer. After moving through the detector, the sample then flows to waste.
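As a simple illustration of this proportionality, the snippet below sketches how the peak absorbance recorded by an FIA detector might be converted to a concentration through an external calibration curve; the standards and readings are invented illustrative numbers, not measured data.

```python
import numpy as np

# Calibration standards: known concentrations (mg/L) and the peak
# absorbance each produced at the FIA detector (illustrative values only).
conc_std = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
abs_std = np.array([0.002, 0.051, 0.099, 0.248, 0.501])

# Linear fit: absorbance assumed proportional to concentration.
slope, intercept = np.polyfit(conc_std, abs_std, 1)

# Concentration of an unknown sample from its measured peak absorbance.
abs_unknown = 0.180
conc_unknown = (abs_unknown - intercept) / slope
print(f"estimated concentration: {conc_unknown:.2f} mg/L")
```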
Detail of sample dispersion
When a sample is injected into the carrier stream it has a rectangular flow profile. As the sample is carried through the mixing and reaction zone, the width of the flow profile increases as the sample disperses into the carrier stream. Dispersion results from two processes: convection due to the flow of the carrier stream and diffusion due to a concentration gradient between the sample and the carrier stream. Convection of the sample occurs by laminar flow, in which the linear velocity of the sample at the tube's walls is zero, while the sample at the center of the tube moves with a linear velocity twice that of the carrier stream. The result is a parabolic flow profile before the sample passes through the detector to a waste container.
Detectors
A flow-through detector is located downstream from the sample injector and records a chemical physical parameter. Many types of detector can be used such as:
spectrophotometer
fluorimeter
ion-selective electrode
biosensors
mass spectrometer
Marine applications
Flow injection techniques have proven very useful in marine science for both organic and inorganic analytes in marine animal samples/seafood. Flow injection methods have been applied to the determination of amino acids (histidine, L-lysine and tyrosine), DNA/RNA, formaldehyde, histamine, hypoxanthine, polycyclic aromatic hydrocarbons, diarrheic shellfish poisoning, paralytic shellfish poisoning, succinate/glutamate, trimethylamine/total volatile basic nitrogen, total lipid hydroperoxides, total volatile acids, uric acid, vitamin B12, silver, aluminium, arsenic, boron, calcium, cadmium, cobalt, chromium, copper, iron, gallium, mercury, indium, lithium, manganese, molybdenum, nickel, lead, antimony, selenium, tin, strontium, thallium, vanadium, zinc, nitrate/nitrite, phosphorus/phosphate and silicate.
See also
AutoAnalyzer
Colorimetric analysis
References
Medical equipment
Laboratory equipment
Analytical chemistry | Flow injection analysis | [
"Chemistry",
"Biology"
] | 945 | [
"Medical equipment",
"nan",
"Medical technology"
] |
14,645,977 | https://en.wikipedia.org/wiki/Upwind%20scheme | In computational physics, the term advection scheme refers to a class of numerical discretization methods for solving hyperbolic partial differential equations. In the so-called upwind schemes typically, the so-called upstream variables are used to calculate the derivatives in a flow field. That is, derivatives are estimated using a set of data points biased to be more "upwind" of the query point, with respect to the direction of the flow. Historically, the origin of upwind methods can be traced back to the work of Courant, Isaacson, and Rees who proposed the CIR method.
Model equation
To illustrate the method, consider the following one-dimensional linear advection equation
∂u/∂t + a ∂u/∂x = 0,
which describes a wave propagating along the x-axis with a velocity a. This equation is also a mathematical model for one-dimensional linear advection. Consider a typical grid point i in the domain. In a one-dimensional domain, there are only two directions associated with point i – left (towards negative infinity) and right (towards positive infinity). If a is positive, the traveling wave solution of the equation above propagates towards the right; the left side is called the upwind side and the right side is the downwind side. Similarly, if a is negative the traveling wave solution propagates towards the left; the left side is called the downwind side and the right side is the upwind side. If the finite difference scheme for the spatial derivative contains more points on the upwind side, the scheme is called an upwind-biased or simply an upwind scheme.
First-order upwind scheme
The simplest upwind scheme possible is the first-order upwind scheme. It is given by
(u_i^(n+1) − u_i^n)/Δt + a (u_i^n − u_(i−1)^n)/Δx = 0   for a > 0,   (1)
(u_i^(n+1) − u_i^n)/Δt + a (u_(i+1)^n − u_i^n)/Δx = 0   for a < 0,   (2)
where the superscript n refers to the time dimension and the subscript i refers to the space dimension. (By comparison, a central difference scheme in this scenario would look like
(u_i^(n+1) − u_i^n)/Δt + a (u_(i+1)^n − u_(i−1)^n)/(2Δx) = 0,
regardless of the sign of a.)
Compact form
Defining
a+ = max(a, 0), a− = min(a, 0)
and
u_x− = (u_i^n − u_(i−1)^n)/Δx, u_x+ = (u_(i+1)^n − u_i^n)/Δx,
the two conditional equations (1) and (2) can be combined and written in a compact form as
u_i^(n+1) = u_i^n − Δt [a+ u_x− + a− u_x+].   (3)
Equation (3) is a general way of writing any upwind-type schemes.
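The compact form (3) translates directly into a few lines of array code. The following is an illustrative sketch in Python/NumPy of the first-order upwind update on a periodic grid; the velocity, resolution, pulse shape and Courant number are arbitrary choices made for the example, not prescribed by the scheme.

```python
import numpy as np

a = 1.0                      # advection velocity
nx = 200
dx = 1.0 / nx
dt = 0.4 * dx / abs(a)       # respects the CFL condition |a| dt / dx <= 1

x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial pulse (illustrative)

a_plus, a_minus = max(a, 0.0), min(a, 0.0)
for _ in range(200):
    dudx_minus = (u - np.roll(u, 1)) / dx     # backward difference u_x-
    dudx_plus = (np.roll(u, -1) - u) / dx     # forward difference u_x+
    # Compact upwind update, equation (3):
    u = u - dt * (a_plus * dudx_minus + a_minus * dudx_plus)
```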
Stability
The upwind scheme is stable if the following Courant–Friedrichs–Lewy condition (CFL) is satisfied:
c = |a| Δt/Δx ≤ 1.
A Taylor series analysis of the upwind scheme discussed above will show that it is first-order accurate in space and time. Modified wavenumber analysis shows that the first-order upwind scheme introduces severe numerical diffusion/dissipation in the solution where large gradients exist due to necessity of high wavenumbers to represent sharp gradients.
Second-order upwind scheme
The spatial accuracy of the first-order upwind scheme can be improved by including 3 data points instead of just 2, which offers a more accurate finite difference stencil for the approximation of the spatial derivative. For the second-order upwind scheme, u_x− in equation (3) becomes the 3-point backward difference, defined as
u_x− = (3u_i^n − 4u_(i−1)^n + u_(i−2)^n)/(2Δx),
and u_x+ is the 3-point forward difference, defined as
u_x+ = (−u_(i+2)^n + 4u_(i+1)^n − 3u_i^n)/(2Δx).
This scheme is less diffusive compared to the first-order accurate scheme and is called the linear upwind differencing (LUD) scheme.
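As an illustration only, the two stencils above could be implemented as follows, assuming a periodic domain; the function name lud_derivative is ours, not from any particular library.

```python
import numpy as np

def lud_derivative(u, dx, a):
    """Second-order (linear upwind / LUD) approximation of du/dx on a
    periodic grid; sketch only."""
    if a > 0:   # 3-point backward difference
        return (3 * u - 4 * np.roll(u, 1) + np.roll(u, 2)) / (2 * dx)
    else:       # 3-point forward difference
        return (-np.roll(u, -2) + 4 * np.roll(u, -1) - 3 * u) / (2 * dx)
```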
See also
Finite difference method
Upwind differencing scheme for convection
Godunov's scheme
References
Computational fluid dynamics
Numerical differential equations | Upwind scheme | [
"Physics",
"Chemistry"
] | 651 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
14,646,055 | https://en.wikipedia.org/wiki/Steering%20engine | A steering engine is a power steering device for ships.
History
Prior to the invention of the steering engine, large steam-powered warships with manual steering needed huge crews to turn the rudder rapidly. The Royal Navy once used 78 men hauling on block and tackle gear to manually turn the rudder on HMS Minotaur, in a test of manual vs. steam powered steering.
The first steering engine with feedback was installed on Isambard Kingdom Brunel's Great Eastern in 1866.
Designed by Scottish engineer John McFarlane Gray and built by George Forrester and Company, this was a steam-powered mechanical amplifier used to drive the rudder position to match the wheel position. The size of Great Eastern, by far the largest ship of her day, made power steering a necessity. Steam-powered steering engines were employed on large steamships thereafter.
The Mississippi River style steamboat Belle of Louisville, (originally Idlewild and oldest in her class), is fitted with a steering engine. Original equipment when the boat was launched at Pittsburgh in 1915, the engine consists of a single double-acting steam cylinder mounted aft of and above the engines, coupled to the rudders, with the motion of travel abeam. The steam valves of the engine are controlled by mechanical linkages which extend up to levers mounted either side of the engine order telegraph, just aft of the pilot wheel in the pilot house above. The steering engine is open to public view. A functional description is given in the 1965 book Str. Belle of Louisville, by Alan L. Bates, the marine architect who supervised the restoration of the boat, who comments that when in use, the steering engine causes the pilot wheel to whirl "as fast as an electric fan." The same source also describes the functional need for steering hard-to in vessels of its type, whose combination of shallow draft and high above-water profile require rapid changes in rudder under shifting wind conditions, a need which is addressed by the steering engine.
See also
Power steering
Servomechanism
Ship's wheel
References
Control devices
Mechanical amplifiers
Watercraft components | Steering engine | [
"Technology",
"Engineering"
] | 421 | [
"Control devices",
"Control engineering",
"Mechanical amplifiers",
"Amplifiers"
] |
14,646,394 | https://en.wikipedia.org/wiki/Federal%20modernism | Federal modernism is an architectural style which emerged in the twentieth century encompassing various styles of modern architecture used in the design of federal buildings in the United States. Federal buildings in this style shunned ornamentation, focusing instead on functional efficiency and low costs. There is no universally accepted start date for federal modernism, with some early variants of modernism emerging as early as the 1920s, but the term is most often associated with the buildings built by the U.S. General Services Administration (GSA) in the 1950s through 1970s. Prominent architects associated with federal modernism include Ludwig Mies van der Rohe, Marcel Breuer, Walter Gropius, and Victor Lundy. Federal modernism has been criticized by some architects and politicians such as Donald Trump, either because they believe it lacks "authority" or due to a perceived lack of beauty.
History
Prior to the American Revolution, colonial America derived its public buildings from architectural styles and practices of Great Britain. After gaining independence, the American republic was influenced and inspired by classical Roman and Greek forms, representing the democratic ideals of law and citizenship of a new nation.
In 1852, after the population tripled in numbers, the Office of Construction and Office of the Supervising Architect were established under the Treasury Department to oversee federal design and construction and make the process more efficient and timely. Designs in this period moved away from classicism toward other styles like Renaissance Revival, and emphasized centralization and standardization.
While there is no universally accepted start date for federal modernism, in the early twentieth century the materials and building methods used in federal buildings changed, and reflected the styles of early modernism. These buildings utilized clean lines, flat surfaces, and simple geometric shapes, lacking the ornamentation prevalent in classical architecture. While classicism asserted permanence and authority, modernism celebrated innovation and freedom with its steel and glass materials.
During the New Deal, approximately 1,300 federally funded buildings were constructed nationwide in a simplified classic style. Sometimes referred to as "modern classic" or "stripped classic" mode, the "style was so named because the basic form and symmetry of classicism were retained, but much of the ornamentation and motifs were reduced or removed."
After the General Services Administration (GSA) was established in 1949 to “provide the resources needed by U.S. agencies to accomplish their missions,” federal buildings reflected an emphasis on functionalism rather than ornamentation. Federal modernism is most closely associated with the GSA buildings constructed between the 1950s and 1970s, which embodied this philosophy. Trends of functionalism included individual offices becoming less common while large open “universal” spaces became more common.
Characteristics
Lacking the ornamentation and ceremonial spaces of earlier styles, federal modernism instead incorporated sharp edges and emphasized functionality and efficiency. It often involved the use of many prefabricated elements, and inexpensive materials such as aluminum, concrete, and plastic. The lower cost of building in the modernist style helped it become widespread. With these changes, federal buildings began to resemble private office buildings, and it became challenging to differentiate between public and private structures in communities.
Modernist philosophy and the rapid pace of technological advancement led to buildings being constructed with intended lifespans of only 20-30 years, instead of centuries like their predecessors, due to “economics” and the “increasing requirements of comfort demanded by people”. This has led to many questions over whether it makes sense for the GSA to continually maintain and reinvest in these buildings as they age.
From the 1950s to 1970s, various styles of modern architecture were commonly used in federal buildings. These include International Style, New Formalism, Brutalism, and Expressionism.
Architects
Private architecture firms, along with government architects, produced designs in the federal modernist style for office buildings, courthouses, post offices, border stations, and museums. As a result of the inclusion of private firms, the demarcation between government and private architecture diminished.
Architects associated with federal modernism include prominent American modernist architects of the mid-twentieth century including Ludwig Mies van der Rohe, Marcel Breuer, Walter Gropius, and Victor Lundy.
Mies van der Rohe
Ludwig Mies van der Rohe was the chief designer of the Chicago Federal Center (also known as the Chicago Federal Complex) in Chicago, Illinois. He worked alongside the architects of the firms of Schmidt, Garden and Erikson, C.F. Murphy Associates, and A. Epstein and Sons. The construction took place between 1960-1974. The complex includes three buildings: the 45-story John C. Kluczynski Federal Building, the Everett McKinley Dirksen United States Courthouse, and a post office between these two towers.
A sculpture by Alexander Calder, "Flamingo," is installed in the complex's central plaza.
Walter Gropius and The Architects Collaborative
Walter Gropius, founder of the Bauhaus School, along with The Architects Collaborative, designed the John F. Kennedy Federal Building located at 15 Sudbury Street, Boston, Massachusetts.
Marcel Breuer
Marcel Breuer, who worked with Mies van der Rohe and Gropius as part of the Bauhaus School, designed the Robert C. Weaver Federal Building in Washington D.C. located at 451 7th Street, SW. It houses the headquarters of the U.S. Department of Housing and Urban Development.
Victor Lundy
In 1965, Victor Lundy designed the U.S. Tax Court Building located at 400 2nd Street NW, Washington, D.C., and it is listed on the National Register of Historic Places.
Reception
In 2007, some architects invited by the GSA to a forum complained that modernist courthouses did not have as much “gravitas, order and authority” as those built in the classical style.
Responses to federal modernism became subject to partisan bickering; in 2020, Donald Trump signed an executive order disapproving of modernism in federal buildings due to its perceived lack of beauty. Joe Biden subsequently overturned that executive order in 2021. And in response to Biden's executive order, Republicans in Congress introduced legislation in 2023 that would discourage the use of modernist architecture and instead favor classicism in federal building design.
References
External links
Federal Modernism on the General Services Administration website.
GSA modern building poster galleries.
Modernist architecture in the United States
American architectural styles | Federal modernism | [
"Engineering"
] | 1,273 | [
"Architecture stubs",
"Architecture"
] |
14,646,684 | https://en.wikipedia.org/wiki/Ammunition%20Design%20Group | Ammunition is a San Francisco, CA, design studio founded in 2007 by Robert Brunner. The current managing partners are Robert Brunner and Matt Rolandson. Ammunition was formed after it parted ways with Pentagram (design firm). The company designs hardware, software, and graphic identities for many companies, including Adobe Systems, Beats by Dre, Polaroid Corporation, and Square Inc.
Notable projects
Barnes & Noble Nook
Ammunition developed the industrial design, user interface and accessory system for the Barnes & Noble Nook e-readers.
Smartisan T1 smartphone
In May 2014, the company designed the Smartisan T1 and T2 smartphones for China-based Smartisan Technology Co. Ltd. The company won several awards for their design.
Awards
Ammunition has been recognized with numerous international design awards from the Industrial Designers Society of America, Red Dot, Core77 Design Awards, D&AD, and the Good Design Awards (Chicago).
In 2014, Ammunition won a Good Design award in the "Smartphone & Accessory" category for the Smartisan T1 smartphone.
In 2014, Ammunition announced that their work was recognised in the Product and Graphic categories of the Spark Awards and won 10 awards.
In 2015, Ammunition won an iF Gold Award for the Smartisan T1 smartphone.
In 2016, Ammunition won the Cooper Hewitt Product Design award for noteworthy projects including Beats By Dr Dre, the Lyft glowstache and the UNICEF Kid Power Band.
In 2016, Ammunition won an iF Product Design Award for the Smartisan T2 smartphone.
Ammunition won gold at the 2016 IDSA International Design Excellence Awards.
Ammunition and Eargo won Gold at the 2018 International Design Excellence Awards.
In 2020, Ammunition x Gantri won the AD Cleverest Award for the Signal Floor Light.
Ammunition was runner-up for the Consumer Technology Award in the Core77 Design Awards 2024.
In 2021, Ammunition designed the all-new trophy for the 10th anniversary of the Innovation by Design Awards.
See also
Pentagram Design
Product design
References
External links
Official site
Wired news article on PC design
Product design
Industrial design firms
Companies based in San Francisco | Ammunition Design Group | [
"Engineering"
] | 427 | [
"Design stubs",
"Product design",
"Design"
] |
14,646,706 | https://en.wikipedia.org/wiki/Charge%20ordering | Charge ordering (CO) is a (first- or second-order) phase transition occurring mostly in strongly correlated materials such as transition metal oxides or organic conductors. Due to the strong interaction between electrons, charges are localized on different sites leading to a disproportionation and an ordered superlattice. It appears in different patterns ranging from vertical to horizontal stripes to a checkerboard-like pattern, and it is not limited to the two-dimensional case. The charge order transition is accompanied by symmetry breaking and may lead to ferroelectricity. It is often found in close proximity to superconductivity and colossal magnetoresistance.
This long range order phenomena was first discovered in magnetite (Fe3O4) by Verwey in 1939.
He observed an increase of the electrical resistivity by two orders of magnitude at TCO=120K, suggesting a phase transition which is now well known as the Verwey transition. He was the first to propose the idea of an ordering process in this context. The charge ordered structure of magnetite was solved in 2011 by a group led by Paul Attfield with the results published in Nature. Periodic lattice distortions associated with charge order were later mapped in the manganite lattice to reveal striped domains containing topological disorder.
Theoretical description
The extended one-dimensional Hubbard model delivers a good description of the charge order transition with the on-site and nearest neighbor Coulomb repulsion U and V. It emerged that V is a crucial parameter and important for developing the charge order state. Further model calculations try to take the temperature and an interchain interaction into account.
The extended Hubbard model for a single chain, including the inter-site and on-site interactions V and U as well as a small dimerization parameter δd typically found in the (TMTTF)2X compounds, is presented as follows:
H = −Σi,σ [t + (−1)^i δd] (c†i,σ ci+1,σ + h.c.) + U Σi ni,↑ ni,↓ + V Σi ni ni+1,
where t describes the transfer integral or the kinetic energy of the electron, and c†i,σ and ci,σ are the creation and annihilation operators, respectively, for an electron with spin σ at the ith or (i+1)th site. ni denotes the density operator. For non-dimerized systems, δd can be set to zero. Normally, the on-site Coulomb repulsion U stays unchanged; only t and V can vary with pressure.
Examples
Organic conductors
Organic conductors consist of donor and acceptor molecules building separated planar sheets or columns. The difference between the ionization energy of the donor and the electron affinity of the acceptor leads to a charge transfer and consequently to free carriers whose number is normally fixed. The carriers are delocalized throughout the crystal due to the overlap of the molecular orbitals, which is also responsible for the highly anisotropic conductivity. That is why one distinguishes between organic conductors of different dimensionality. They possess a huge variety of ground states, for instance charge ordering, spin-Peierls, spin-density wave, antiferromagnetic state, superconductivity, and charge-density wave, to name only some of them.
Quasi-one-dimensional organic conductors
The model systems of one-dimensional conductors are the Bechgaard–Fabre salt families, (TMTTF)2X and (TMTSF)2X; in the latter, sulfur is substituted by selenium, which leads to a more metallic behavior over a wide temperature range and to the absence of charge order. The TMTTF compounds, depending on the counterion X, show the conductivity of a semiconductor at room temperature and are expected to be more one-dimensional than (TMTSF)2X.
The transition temperature TCO for the TMTTF subfamily was registered over two orders of magnitude for the centrosymmetric anions X = Br, PF6, AsF6, SbF6 and the non-centrosymmetric anions X = BF4 and ReO4.
In the middle of the eighties, a new "structureless transition" was discovered by Coulon et al. conducting transport and thermopower measurements. They observed a sudden rise of the resistivity and the thermopower at TCO, while x-ray measurements showed no evidence for a change in the crystal symmetry or a formation of a superstructure. The transition was later confirmed by 13C-NMR and dielectric measurements.
Different measurements under pressure reveal a decrease of the transition temperature TCO with increasing pressure. According to the phase diagram of that family, an increasing pressure applied to the TMTTF compounds can be understood as a shift from the semiconducting state (at room temperature) to a higher-dimensional and metallic state, as found for the TMTSF compounds, which show no charge order state.
Quasi-two-dimensional organic conductors
A dimensional crossover can be induced not only by applying pressure, but also by substituting the donor molecules with other ones. From a historical point of view, the main aim was to synthesize an organic superconductor with a high TC. The key to reaching that aim was to increase the orbital overlap in two dimensions. With BEDT-TTF and its huge π-electron system, a new family of quasi-two-dimensional organic conductors was created, also exhibiting a great variety of phase diagrams and crystal structure arrangements.
At the turn of the millennium, first NMR measurements on the θ-(BEDT-TTF)2RbZn(SCN)4 compound uncovered the known metal-to-insulator transition at TCO = 195 K as a charge order transition.
Transition metal oxides
The most prominent transition metal oxide revealing a CO transition is magnetite Fe3O4, a mixed-valence oxide in which the iron atoms have a statistical distribution of Fe3+ and Fe2+ above the transition temperature. Below 122 K, the combination of 2+ and 3+ species arrange themselves in a regular pattern, whereas above that transition temperature (also referred to as the Verwey temperature in this case) the thermal energy is large enough to destroy the order.
Alkali metal oxides
The alkali metal oxides rubidium sesquioxide (Rb4O6) and caesium sesquioxide (Cs4O6) display charge ordering.
Detection of charge order
NMR spectroscopy is a powerful tool to measure charge disproportionation. To apply this method to a certain system, it has to be enriched with NMR-active nuclei, for instance 13C, as is the case for the TMTTF compounds. The local probe nuclei are very sensitive to the charge on the molecule, which is observable in the Knight shift K and the chemical shift D. The Knight shift K is proportional to the spin susceptibility χSp on the molecule. The charge order or charge disproportionation appears as a splitting or broadening of certain features in the spectrum.
The X-ray diffraction technique allows the atomic positions to be determined, but the extinction effect hinders obtaining a high-resolution spectrum. In the case of the organic conductors, the charge per molecule is measured via the change of the bond length of the C=C double bonds in the TTF molecule. A further problem arising from irradiating the organic conductors with X-rays is the destruction of the CO state.
In organic molecules like TMTTF, TMTSF or BEDT-TTF, there are charge-sensitive modes that change their frequency depending on the local charge. The C=C double bonds are especially sensitive to the charge. Whether a vibrational mode is infrared active or only visible in the Raman spectrum depends on its symmetry. In the case of BEDT-TTF, the most sensitive modes are the Raman-active ν3 and ν2 and the infrared out-of-phase mode ν27. Their frequency is linearly related to the charge per molecule, giving the opportunity to determine the degree of disproportionation.
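As an illustration of how such a linear relation is used in practice, the sketch below converts two split mode frequencies into site charges; the calibration constants and frequencies are placeholders, not literature values for any specific compound.

```python
# Hypothetical linear calibration nu(rho) = nu_0 - k * rho relating the
# frequency of a charge-sensitive mode to the charge rho per molecule.
nu_0 = 1500.0   # frequency of the neutral molecule, cm^-1 (placeholder)
k = 100.0       # frequency softening per unit charge, cm^-1/e (placeholder)

def charge_from_frequency(nu):
    return (nu_0 - nu) / k

# In the charge-ordered state the mode splits into two branches,
# one from charge-rich and one from charge-poor sites:
nu_rich, nu_poor = 1420.0, 1480.0
rho_rich = charge_from_frequency(nu_rich)
rho_poor = charge_from_frequency(nu_poor)
print(f"charge disproportionation: {rho_rich:.2f}e / {rho_poor:.2f}e")
```

For a 2:1 salt the two site charges obtained this way should average to the nominal +0.5e per molecule, which is one consistency check on the analysis.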
The charge order transition is also a metal-to-insulator transition, observable in transport measurements as a sharp rise in the resistivity. Transport measurements are therefore a good tool to get first evidence of a possible charge order transition.
References
Electric and magnetic fields in matter
Phase transitions | Charge ordering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,665 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
14,647,099 | https://en.wikipedia.org/wiki/Two-component%20regulatory%20system | In molecular biology, a two-component regulatory system serves as a basic stimulus-response coupling mechanism to allow organisms to sense and respond to changes in many different environmental conditions. Two-component systems typically consist of a membrane-bound histidine kinase that senses a specific environmental stimulus, and a corresponding response regulator that mediates the cellular response, mostly through differential expression of target genes. Although two-component signaling systems are found in all domains of life, they are most common by far in bacteria, particularly in Gram-negative and cyanobacteria; both histidine kinases and response regulators are among the largest gene families in bacteria. They are much less common in archaea and eukaryotes; although they do appear in yeasts, filamentous fungi, and slime molds, and are common in plants, two-component systems have been described as "conspicuously absent" from animals.
Mechanism
Two-component systems accomplish signal transduction through the phosphorylation of a response regulator (RR) by a histidine kinase (HK). Histidine kinases are typically homodimeric transmembrane proteins containing a histidine phosphotransfer domain and an ATP binding domain, though there are reported examples of histidine kinases in the atypical HWE and HisKA2 families that are not homodimers. Response regulators may consist only of a receiver domain, but usually are multi-domain proteins with a receiver domain and at least one effector or output domain, often involved in DNA binding. Upon detecting a particular change in the extracellular environment, the HK performs an autophosphorylation reaction, transferring a phosphoryl group from adenosine triphosphate (ATP) to a specific histidine residue. The cognate response regulator (RR) then catalyzes the transfer of the phosphoryl group to an aspartate residue on the response regulator's receiver domain. This typically triggers a conformational change that activates the RR's effector domain, which in turn produces the cellular response to the signal, usually by stimulating (or repressing) expression of target genes.
Many HKs are bifunctional and possess phosphatase activity against their cognate response regulators, so that their signaling output reflects a balance between their kinase and phosphatase activities. Many response regulators also auto-dephosphorylate, and the relatively labile phosphoaspartate can also be hydrolyzed non-enzymatically. The overall level of phosphorylation of the response regulator ultimately controls its activity.
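As a rough illustration of how the kinase/phosphatase balance sets the phosphorylation level, the toy model below integrates a single rate equation for the phosphorylated response regulator; the rate constants and totals are arbitrary illustrative values, not measurements for any real two-component system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: a histidine kinase phosphorylates the response regulator (RR)
# at rate k_kin, while phosphatase activity and hydrolysis remove the
# phosphoryl group at rate k_phos. All numbers are illustrative.
RR_total = 1.0   # total response regulator (arbitrary units)
k_kin = 0.5      # effective phosphotransfer rate, 1/s (placeholder)
k_phos = 0.2     # effective dephosphorylation rate, 1/s (placeholder)

def rhs(t, y):
    rr_p = y[0]
    return [k_kin * (RR_total - rr_p) - k_phos * rr_p]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0])
print("steady-state RR~P fraction ~", sol.y[0, -1] / RR_total)
# Analytic steady state: k_kin / (k_kin + k_phos) ≈ 0.71
```

Shifting either rate (for example, a stimulus that raises k_kin) moves the steady-state phosphorylation level, which is the sense in which signaling output reflects the balance between kinase and phosphatase activities.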
Phosphorelays
Some histidine kinases are hybrids that contain an internal receiver domain. In these cases, a hybrid HK autophosphorylates and then transfers the phosphoryl group to its own internal receiver domain, rather than to a separate RR protein. The phosphoryl group is then shuttled to histidine phosphotransferase (HPT) and subsequently to a terminal RR, which can evoke the desired response. This system is called a phosphorelay. Almost 25% of bacterial HKs are of the hybrid type, as are the large majority of eukaryotic HKs.
Function
Two-component signal transduction systems enable bacteria to sense, respond, and adapt to a wide range of environments, stressors, and growth conditions. These pathways have been adapted to respond to a wide variety of stimuli, including nutrients, cellular redox state, changes in osmolarity, quorum signals, antibiotics, temperature, chemoattractants, pH and more. The average number of two-component systems in a bacterial genome has been estimated as around 30, or about 1–2% of a prokaryote's genome. A few bacteria have none at all – typically endosymbionts and pathogens – and others contain over 200. All such systems must be closely regulated to prevent cross-talk, which is rare in vivo.
In Escherichia coli, the osmoregulatory EnvZ/OmpR two-component system controls the differential expression of the outer membrane porin proteins OmpF and OmpC. The KdpD sensor kinase proteins regulate the kdpFABC operon responsible for potassium transport in bacteria including E. coli and Clostridium acetobutylicum. The N-terminal domain of this protein forms part of the cytoplasmic region of the protein, which may be the sensor domain responsible for sensing turgor pressure.
Histidine kinases
Signal transducing histidine kinases are the key elements in two-component signal transduction systems. Examples of histidine kinases are EnvZ, which plays a central role in osmoregulation, and CheA, which plays a central role in the chemotaxis system. Histidine kinases usually have an N-terminal ligand-binding domain and a C-terminal kinase domain, but other domains may also be present. The kinase domain is responsible for the autophosphorylation of the histidine with ATP, the phosphotransfer from the kinase to an aspartate of the response regulator, and (with bifunctional enzymes) the phosphotransfer from aspartyl phosphate to water. The kinase core has a unique fold, distinct from that of the Ser/Thr/Tyr kinase superfamily.
HKs can be roughly divided into two classes: orthodox and hybrid kinases. Most orthodox HKs, typified by the E. coli EnvZ protein, function as periplasmic membrane receptors and have a signal peptide and transmembrane segment(s) that separate the protein into a periplasmic N-terminal sensing domain and a highly conserved cytoplasmic C-terminal kinase core. Members of this family, however, have an integral membrane sensor domain. Not all orthodox kinases are membrane bound, e.g., the nitrogen regulatory kinase NtrB (GlnL) is a soluble cytoplasmic HK. Hybrid kinases contain multiple phosphodonor and phosphoacceptor sites and use multi-step phospho-relay schemes instead of promoting a single phosphoryl transfer. In addition to the sensor domain and kinase core, they contain a CheY-like receiver domain and a His-containing phosphotransfer (HPt) domain.
Evolution
The number of two-component systems present in a bacterial genome is highly correlated with genome size as well as ecological niche; bacteria that occupy niches with frequent environmental fluctuations possess more histidine kinases and response regulators. New two-component systems may arise by gene duplication or by lateral gene transfer, and the relative rates of each process vary dramatically across bacterial species. In most cases, response regulator genes are located in the same operon as their cognate histidine kinase; lateral gene transfers are more likely to preserve operon structure than gene duplications.
In eukaryotes
Two-component systems are rare in eukaryotes. They appear in yeasts, filamentous fungi, and slime molds, and are relatively common in plants, but have been described as "conspicuously absent" from animals. Two-component systems in eukaryotes likely originate from lateral gene transfer, often from endosymbiotic organelles, and are typically of the hybrid kinase phosphorelay type. For example, in the yeast Candida albicans, genes found in the nuclear genome likely originated from endosymbiosis and remain targeted to the mitochondria. Two-component systems are well-integrated into developmental signaling pathways in plants, but the genes probably originated from lateral gene transfer from chloroplasts. An example is the chloroplast sensor kinase (CSK) gene in Arabidopsis thaliana, derived from chloroplasts but now integrated into the nuclear genome. CSK function provides a redox-based regulatory system that couples photosynthesis to chloroplast gene expression; this observation has been described as a key prediction of the CoRR hypothesis, which aims to explain the retention of genes encoded by endosymbiotic organelles.
It is unclear why canonical two-component systems are rare in eukaryotes, with many similar functions having been taken over by signaling systems based on serine, threonine, or tyrosine kinases; it has been speculated that the chemical instability of phosphoaspartate is responsible, and that increased stability is needed to transduce signals in the more complex eukaryotic cell. Notably, cross-talk between signaling mechanisms is very common in eukaryotic signaling systems but rare in bacterial two-component systems.
Bioinformatics
Because of their sequence similarity and operon structure, many two-component systems – particularly histidine kinases – are relatively easy to identify through bioinformatics analysis. (By contrast, eukaryotic kinases are typically easily identified, but they are not easily paired with their substrates.) A database of prokaryotic two-component systems called P2CS has been compiled to document and classify known examples, and in some cases to make predictions about the cognates of "orphan" histidine kinase or response regulator proteins that are genetically unlinked to a partner.
References
External links
http://www.p2cs.org: The Prokaryotic 2-Component Systems Database
Protein domains
Signal transduction | Two-component regulatory system | [
"Chemistry",
"Biology"
] | 1,994 | [
"Protein classification",
"Signal transduction",
"Protein domains",
"Biochemistry",
"Neurochemistry"
] |
14,647,723 | https://en.wikipedia.org/wiki/Disclination | In crystallography, a disclination is a line defect in which there is compensation of an angular gap. They were first discussed by Vito Volterra in 1907, who provided an analysis of the elastic strains of a wedge disclination. By analogy to dislocations in crystals, the term, disinclination, was first used by Frederick Charles Frank and since then has been modified to its current usage, disclination. They have since been analyzed in some detail particularly by Roland deWit.
Disclinations are characterized by an angular vector (called a Frank vector), and the line of the disclination. When the vector and the line are the same they are sometimes called wedge disclinations which are common in fiveling nanoparticles. When the Frank vector and the line of the disclination are at right angles they are called twist disclinations. As pointed out by John D. Eshelby there is an intricate connection between disclinations and dislocations, with dislocation motion moving the position of a disclination.
Disclinations occur in many different materials, ranging from liquid crystals to nanoparticles and in elastically distorted materials.
Example in two dimensions
In 2D, disclinations and dislocations are point defects instead of line defects as in 3D. They are topological defects and play a central role in melting of 2D crystals within the KTHNY theory, based on two Kosterlitz–Thouless transitions.
Equally sized discs (spheres, particles, atoms) form a hexagonal crystal as dense packing in two dimensions. In such a crystal, each particle has six nearest neighbors. Local strain and twist (for example induced by thermal motion) can cause configurations where discs (or particles) have a coordination number different of six, typically five or seven. Disclinations are topological defects, therefore (starting from a hexagonal array) they can only be created in pairs. Ignoring surface/border effects, this implies that there are always as many 5-folded as 7-folded disclinations present in a perfectly plane 2D crystal. A "bound" pair of 5-7-folded disclinations is a dislocation. If myriad dislocations are thermally dissociated into isolated disclinations, then the monolayer of particles becomes an isotropic fluid in two dimensions. A 2D crystal is free of disclinations.
To transform a section of a hexagonal array into a 5-folded disclination (colored green in the figure), a triangular wedge of hexagonal elements (blue triangle) has to be removed; to create a 7-folded disclination (orange), an identical wedge must be inserted. The figure illustrates how disclinations destroy orientational order, while dislocations only destroy translational order in the far field (portions of the crystal far from the center of the disclination).
Disclinations are topological defects because they cannot be created locally by an affine transformation without cutting the hexagonal array outwards to infinity (or the border of a finite crystal). The undisturbed hexagonal crystal has a 60° symmetry, but when a wedge is removed to create a 5-folded disclination, the crystal symmetry is stretched to 72° – for a 7-folded disclination, it is compressed to about 51.4°. Thus, disclinations store elastic energy by disturbing the director field.
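One common way to locate 5- and 7-fold disclinations in a simulated or imaged 2D packing is to count Delaunay neighbours of each particle. The following is an illustrative sketch using SciPy; the lattice size and displacement amplitude are arbitrary choices, and larger "thermal" displacements produce more 5-/7-fold sites.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hexagonal 2D packing with random displacements (illustrative).
rng = np.random.default_rng(0)
nx, ny = 20, 20
pts = np.array([(i + 0.5 * (j % 2), j * np.sqrt(3) / 2)
                for i in range(nx) for j in range(ny)], dtype=float)
pts += 0.12 * rng.standard_normal(pts.shape)

# Coordination number = number of distinct Delaunay neighbours per particle.
tri = Delaunay(pts)
neighbours = [set() for _ in range(len(pts))]
for simplex in tri.simplices:
    for a in simplex:
        for b in simplex:
            if a != b:
                neighbours[a].add(b)
coord = np.array([len(s) for s in neighbours])

# Ignore particles near the border, whose counts are distorted by the hull.
bulk = coord[(pts[:, 0] > 2) & (pts[:, 0] < nx - 2) &
             (pts[:, 1] > 2) & (pts[:, 1] < ny * 0.75)]
print("5-fold sites:", np.sum(bulk == 5), "  7-fold sites:", np.sum(bulk == 7))
```

Away from the border the two counts should be roughly equal, reflecting the fact that disclinations can only be created in pairs.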
See also
References
Further reading
Crystallographic defects
Mechanics
Materials science
Condensed matter physics | Disclination | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 726 | [
"Applied and interdisciplinary physics",
"Crystallographic defects",
"Phases of matter",
"Materials science",
"Crystallography",
"Mechanics",
"Condensed matter physics",
"nan",
"Mechanical engineering",
"Materials degradation",
"Matter"
] |
14,648,893 | https://en.wikipedia.org/wiki/Andr%C3%A9%20Couder | André Couder (27 November 1897 – 16 January 1979) was a French optician and astronomer.
Information
From 1925, he worked in the optics laboratory of the Paris Observatory. Between 1952 and 1958 he was vice-president of the International Astronomical Union. A lunar crater, Couder, is named for him. He was awarded the Valz Prize in 1936, and the Janssen Medal from the French Academy of Sciences in 1952.
Couder was the President of the Société astronomique de France (SAF), the French astronomical society, from 1955-1957.
References
1897 births
1979 deaths
20th-century French astronomers
Members of the French Academy of Sciences
People from Alençon
Opticians | André Couder | [
"Astronomy"
] | 141 | [
"Opticians",
"History of astronomy"
] |
14,649,282 | https://en.wikipedia.org/wiki/Black%20oxide | Black oxide or blackening is a conversion coating for ferrous materials, stainless steel, copper and copper based alloys, zinc, powdered metals, and silver solder. It is used to add mild corrosion resistance, for appearance, and to minimize light reflection. To achieve maximal corrosion resistance the black oxide must be impregnated with oil or wax. Dual target magnetron sputtering (DMS) is used for preparing black oxide coatings. One of its advantages over other coatings is its minimal buildup.
Ferrous material
A standard black oxide is magnetite (Fe3O4), which is more mechanically stable on the surface and provides better corrosion protection than red oxide (rust, Fe2O3). Modern industrial approaches to forming black oxide include the hot and mid-temperature processes described below. Traditional methods are described in the article on bluing. They are of interest historically, and are also useful for hobbyists to form black oxide safely with little equipment and without toxic chemicals.
Low temperature oxide, also described below, is not a conversion coating—the low-temperature process does not oxidize the iron, but deposits a copper selenium compound.
Hot black oxide
Hot baths of sodium hydroxide (NaOH), nitrates such as sodium nitrate (NaNO3), and/or nitrites such as sodium nitrite (NaNO2), operated near the boiling point of the concentrated solution, are used to convert the surface of the material into magnetite (Fe3O4). Water must be periodically added to the bath, with proper controls to prevent a steam explosion.
Hot blackening involves dipping the part into various tanks. The workpiece is usually dipped by automated part carriers for transportation between tanks. These tanks contain, in order, alkaline detergent, water, hot sodium hydroxide (the blackening compound), and finally the sealant, which is usually oil.
The NaOH (caustic soda) and elevated temperature cause Fe3O4 (black oxide) to form on the surface of the metal instead of Fe2O3 (red oxide; rust). While it is physically denser than red oxide, the fresh black oxide is porous, so oil is then applied as a post-treatment to the heated part, which seals it by "sinking" into it. The combination prevents corrosion of the workpiece. There are many advantages of blackening, including:
Blackening can be done in large batches, which is ideal for small parts.
There is no significant dimensional impact. The blackening process creates a layer about 1 μm thick.
It is far cheaper than similar corrosion protection systems, such as paint and electroplating.
The oldest and most widely used specification for hot black oxide is MIL-DTL-13924, which covers four classes of processes for different substrates. Alternate specifications include AMS 2485, ASTM D769, and ISO 11408.
Iron(III) chloride (FeCl3) may also be used for steel blackening by dipping a piece of steel into a hot bath of 50% FeCl3 solution and then into boiling water. The process is usually repeated several times.
Mid-temperature black oxide
Like hot black oxide, mid-temperature black oxide converts the surface of the metal to magnetite (Fe3O4). However, mid-temperature black oxide blackens at a significantly lower temperature than hot black oxide. This is advantageous because the bath operates below the solution's boiling point, so no caustic fumes are produced.
Since mid-temperature black oxide is most comparable to hot black oxide, it also can meet the military specification MIL-DTL-13924, as well as AMS 2485.
Cold black oxide
Cold black oxide, also known as room temperature black oxide, is applied at or near room temperature. It is not an oxide conversion coating, but rather a deposited copper selenide (Cu2Se) compound. Cold black oxide is convenient for in-house blackening. This coating produces a similar color to the one the oxide conversion does, but it tends to rub off easily and offers less abrasion resistance. The application of oil, wax, or lacquer brings the corrosion resistance up to par with the hot and mid-temperature processes. Applications for the cold black oxide process include tooling and architectural finishing on steel. It is also known as cold bluing.
Copper
Black oxide for copper, sometimes known by the trade name Ebonol C, converts the copper surface to cupric oxide. For the process to work the surface must contain at least 65% copper; copper surfaces with less than 90% copper must first be pretreated with an activating treatment. The finished coating is chemically stable and very adherent, and remains so up to a limiting temperature, above which the coating degrades due to oxidation of the base copper. To increase corrosion resistance, the surface may be oiled, lacquered, or waxed. It is also used as a pre-treatment for painting or enamelling. The surface finish is usually satin, but it can be turned glossy by coating in a clear high-gloss enamel.
On a microscopic scale dendrites form on the surface finish, which trap light and increase absorptivity. Because of this property the coating is used in aerospace, microscopy and other optical applications to minimise light reflection.
In printed circuit boards (PCBs), the use of black oxide provides better adhesion for the fiberglass laminate layers. The PCB is dipped in a bath containing hydroxide, hypochlorite, and cuprate, which becomes depleted in all three components. This indicates that the black copper oxide comes partially from the cuprate and partially from the PCB copper circuitry. Under microscopic examination, there is no copper(I) oxide layer.
An applicable U.S. military specification is MIL-F-495E.
Stainless steel
Hot black oxide for stainless steel is a mixture of caustic, oxidizing, and sulfur salts. It blackens 300 and 400 series and the precipitation-hardened 17-4 PH stainless steel alloys. The solution can be used on cast iron and mild low-carbon steel. The resulting finish complies with military specification MIL-DTL–13924D Class 4 and offers abrasion resistance. Black oxide finish is used on surgical instruments in light-intensive environments to reduce eye fatigue.
Room-temperature blackening for stainless steel occurs by auto-catalytic reaction of copper-selenide depositing on the stainless-steel surface. It offers less abrasion resistance and the same corrosion protection as the hot blackening process.
Zinc
Black oxide for zinc is also known by the trade name Ebonol Z.
See also
Chemical coloring of metals
Electrochemical coloring of metals
Phosphate conversion coating
References
Chemical mixtures
Coatings
Corrosion prevention | Black oxide | [
"Chemistry"
] | 1,379 | [
"Corrosion prevention",
"Coatings",
"Corrosion",
"Chemical mixtures",
"nan"
] |
14,649,921 | https://en.wikipedia.org/wiki/List%20of%20largest%20cities | The United Nations (UN) uses three definitions for what constitutes a city, as not all cities in all jurisdictions are classified using the same criteria. Cities may be defined as the cities proper, the extent of their urban area, or their metropolitan regions.
Definitions
City proper (administrative)
A city can be defined by its administrative boundaries, otherwise known as city proper. UNICEF defines city proper as, "the population living within the administrative boundaries of a city or controlled directly from the city by a single authority." A city proper is a locality defined according to legal or political boundaries and an administratively recognised urban status that is usually characterised by some form of local government. Cities proper and their boundaries and population data may not include suburbs.
The use of city proper as defined by administrative boundaries may not include suburban areas where an important proportion of the population working or studying in the city lives. Because of this definition, the city proper population figure may differ greatly from the urban area population figure, as many cities are amalgamations of smaller municipalities (Australia), and conversely, many Chinese cities govern territories that extend well beyond the core urban area into suburban and rural areas. The Chinese municipality of Chongqing, which is the largest city proper in the world by population, comprises a huge administrative area of 82,403 km2, around the size of Austria. However, more than 70% of its 30-million population are agricultural workers living in a rural setting.
Urban area
A city can be defined as a conditionally contiguous urban area, without regard to territorial or other boundaries inside an urban area. UNICEF defines urban area as follows:
According to Demographia, an urban area is a continuously built up land mass of urban development that is within a labor market (metropolitan area or metropolitan region) and contains no rural land.
Metropolitan area
A city can be defined by the inhabitants of its demographic population, as by metropolitan area, or labour market area. UNICEF defines metropolitan area as follows:
In many countries, metropolitan areas are established either with an official organisation or only for statistical purposes. In the United States, metropolitan statistical area (MSA) is defined by the U.S. Office of Management and Budget (OMB). In the Philippines, metropolitan areas have an official agency, such as Metropolitan Manila Development Authority (MMDA), which manages Manila metropolitan area. Similar agencies exist in Indonesia, such as Jabodetabekjur Development Cooperation Agency for Jakarta metropolitan area.
List
There are 81 cities in the world with a population exceeding 5 million people, according to 2018 estimates by the United Nations. The UN figures include a mixture of city proper, metropolitan area, and urban area.
See also
Historical urban community sizes
List of largest cities throughout history
List of cities with over one million inhabitants
List of towns and cities with 100,000 or more inhabitants
List of largest cities by area
Notes
References
External links
UNSD Demographics Statistics – City population by sex, city and city type
Nordpil World Database of Large Urban Areas, 1950–2050
Cities-related lists of superlatives
Largest things
Urban geography | List of largest cities | [
"Physics",
"Mathematics"
] | 617 | [
"Quantity",
"Largest things",
"Physical quantities",
"Size"
] |
14,650,127 | https://en.wikipedia.org/wiki/Zygmunt%20Zawirski | Zygmunt Zawirski (29 July 1882 – 2 April 1948) was a Polish philosopher and logician.
His main fields of study were the philosophy of physics, the history of science, multi-valued logic, and the relation of multi-valued logic to the calculus of probability.
Biography
Zawirski was born on 29 July 1882 in the village of Berezowica Mała (Mala Berezovytsia) near Zbarazh (now Ukraine). In 1928 he became a professor of the Adam Mickiewicz University in Poznań and in 1937 professor of the Jagiellonian University in Kraków. In 1936 he became an editor of Kwartalnik Filozoficzny ("Philosophical Quarterly"). After 1945, he was president of the Krakowskie Towarzystwo Filozoficzne ("Kraków Philosophical Society").
He died on 2 April 1948 in Końskie, Poland.
Notable works
References
Further reading
1882 births
1948 deaths
Academic staff of Jagiellonian University
Historians of science
Mathematical logicians
Academic staff of Adam Mickiewicz University in Poznań
20th-century Polish historians
Polish male non-fiction writers
Polish logicians
People from Ternopil Oblast
20th-century Polish philosophers
Philosophers of physics | Zygmunt Zawirski | [
"Mathematics"
] | 253 | [
"Mathematical logic",
"Mathematical logicians"
] |
14,650,251 | https://en.wikipedia.org/wiki/Phthalimidopropiophenone | Phthalimidopropiophenone is a chemical intermediate used in the synthesis of cathinone. It has been found to be sold on the illicit market as a controlled substance analogue, but little is currently known about its pharmacology or toxicology.
Phthalimidopropiophenone is not an active stimulant, but it is believed to be potentially capable of acting as a prodrug for cathinone when ingested, as similar N-substituted cathinone derivatives have been encountered by law enforcement and were found to form cathinone in vivo by initial hydroxylation of the pyrrolidine ring, followed by dehydrogenation to the corresponding lactam and then double dealkylation of the pyrrolidine ring to form the primary amine. It is unclear, however, how rapidly or to what extent this metabolic pathway is followed in humans, and the phthalimido-substituted cathinones encountered may have been produced merely as a more stable form for storage than the relatively unstable primary amine cathinone derivatives.
References
Stimulants
Ketones
Phthalimides
Cathinones | Phthalimidopropiophenone | [
"Chemistry"
] | 237 | [
"Ketones",
"Functional groups"
] |
14,650,318 | https://en.wikipedia.org/wiki/Israeli%20Cassini%20Soldner | Israeli Cassini Soldner (ICS), commonly known as the Old Israeli Grid (OIG; Reshet Yisra'el Ha-Yeshana) is the old geographic coordinate system for Israel. The name is derived from the Cassini Soldner projection it uses and the fact that it is optimized for Israel. ICS has been mostly replaced by the new coordinate system Israeli Transverse Mercator (ITM), also known as the New Israeli Grid (NIG), but still referenced by older books and navigation software.
History
The Cassini Soldner projection was used by the British Mandate of Palestine, when it was called the Palestine grid. The Palestine grid extended only as far south as Beer-Sheba. To avoid negative coordinates in the southern Negev, the False Northing of ICS was increased by 1,000,000. As a result, coordinates in the south of Israel are higher than 800,000.
Examples
An ICS coordinate is generally given as a pair of two numbers (excluding any digits behind a decimal point which may be used in very precise surveying). The first number is always the Easting and the second is the Northing. The easting and northing are in metres from the false origin. The easting is always a 6 digit number while the northing has 6 or 7 digits.
The ICS coordinate for the Western Wall at Jerusalem is:
E 172249 m
N 1131586 m
The first figure is the easting and means that the location is 172,249 meters east from the false origin (along the X axis). The second figure is the northing and puts the location 1,131,586 meters north of the false origin (along the Y axis). Also notice how the easting in this example is indicated with an “E” and likewise an “N” for the northing. The fact that the coordinate is in meters is indicated by the lowercase m.
The table below shows the same coordinate in 3 different grids:
Grid parameters
The ICS coordinate system is defined by the following parameters (a conversion sketch using them follows the list):
Projection: Cassini Soldner
Reference ellipsoid: Clarke 80 Modified
a(m): 6378300.789
1/f: 293.466
Main datum point values:
Latitude of origin (D-M-S): 31 44 2.748999999990644
Longitude of origin (D-M-S): 35 12 43.490000000012970
Main point grid values:
False Easting (m): 170251.5549999999
False Northing(m): 1126867.909
Grid scale factor: 1
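The parameters above are enough to set up the projection in standard GIS software. Below is a minimal sketch, assuming the pyproj library: it builds the Cassini-Soldner projection from the listed values and converts a WGS84 longitude/latitude pair into approximate ICS easting/northing. The datum shift between WGS84 and the underlying Israeli datum is omitted, so the result is only approximate, and the sample coordinates are illustrative.

```python
# Minimal sketch (approximate): build the ICS Cassini-Soldner projection from the
# parameters listed above with pyproj, ignoring the datum shift to WGS84.
from pyproj import CRS, Transformer

ics = CRS.from_proj4(
    "+proj=cass "
    "+lat_0=31.734096944444442 +lon_0=35.21208055555556 "   # origin, converted from the D-M-S values above
    "+x_0=170251.555 +y_0=1126867.909 "                      # false easting / false northing
    "+a=6378300.789 +rf=293.466 "                            # Clarke 80 Modified ellipsoid
    "+units=m +no_defs"
)

to_ics = Transformer.from_crs(CRS.from_epsg(4326), ics, always_xy=True)

# Illustrative point near Jerusalem (longitude, latitude); not an authoritative coordinate.
easting, northing = to_ics.transform(35.2345, 31.7767)
print(f"E {easting:.0f} m, N {northing:.0f} m")
```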
References
Sources
Official ICS Grid Definition
MAPI (Mapping Center of Israel) official website (Hebrew).
Geography educational website of Haifa's university.
External links
MAPI (Mapping Center of Israel) official website (Hebrew).
Geography educational website of Haifa's university.
Geographic coordinate systems
Geography of Israel
Land surveying systems
Geodesy | Israeli Cassini Soldner | [
"Mathematics"
] | 624 | [
"Geographic coordinate systems",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
14,650,395 | https://en.wikipedia.org/wiki/Monadic%20second-order%20logic | In mathematical logic, monadic second-order logic (MSO) is the fragment of second-order logic where the second-order quantification is limited to quantification over sets. It is particularly important in the logic of graphs, because of Courcelle's theorem, which provides algorithms for evaluating monadic second-order formulas over graphs of bounded treewidth. It is also of fundamental importance in automata theory, where the Büchi–Elgot–Trakhtenbrot theorem gives a logical characterization of the regular languages.
Second-order logic allows quantification over predicates. However, MSO is the fragment in which second-order quantification is limited to monadic predicates (predicates having a single argument). This is often described as quantification over "sets" because monadic predicates are equivalent in expressive power to sets (the set of elements for which the predicate is true).
Variants
Monadic second-order logic comes in two variants. In the variant considered over structures such as graphs and in Courcelle's theorem, the formula may involve non-monadic predicates (in this case the binary edge predicate), but quantification is restricted to be over monadic predicates only. In the variant considered in automata theory and the Büchi–Elgot–Trakhtenbrot theorem, all predicates, including those in the formula itself, must be monadic, with the exceptions of the equality (=) and ordering (<) relations.
Computational complexity of evaluation
Existential monadic second-order logic (EMSO) is the fragment of MSO in which all quantifiers over sets must be existential quantifiers, outside of any other part of the formula. The first-order quantifiers are not restricted. By analogy to Fagin's theorem, according to which existential (non-monadic) second-order logic captures precisely the descriptive complexity of the complexity class NP, the class of problems that may be expressed in existential monadic second-order logic has been called monadic NP. The restriction to monadic logic makes it possible to prove separations in this logic that remain unproven for non-monadic second-order logic. For instance, in the logic of graphs, testing whether a graph is disconnected belongs to monadic NP, as the test can be represented by a formula that describes the existence of a proper subset of vertices with no edges connecting them to the rest of the graph; however, the complementary problem, testing whether a graph is connected, does not belong to monadic NP. The existence of an analogous pair of complementary problems, only one of which has an existential second-order formula (without the restriction to monadic formulas) is equivalent to the inequality of NP and coNP, an open question in computational complexity.
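As a concrete illustration of the disconnectedness test described above, one possible existential MSO sentence (among several equivalent renderings, assuming a symmetric edge relation written E) asserts the existence of a nonempty proper vertex set S with no edges leaving it:

```latex
% One possible EMSO sentence expressing "the graph is disconnected",
% assuming a symmetric edge predicate E(x, y):
\exists S\,\Bigl( \exists x\, S(x) \;\land\; \exists y\, \neg S(y)
  \;\land\; \forall x\,\forall y\,\bigl( S(x) \land \neg S(y) \rightarrow \neg E(x,y) \bigr) \Bigr)
```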
By contrast, when we wish to check whether a Boolean MSO formula is satisfied by an input finite tree, this problem can be solved in linear time in the tree, by translating the Boolean MSO formula to a tree automaton and evaluating the automaton on the tree. In terms of the query, however, the complexity of this process is generally nonelementary. Thanks to Courcelle's theorem, we can also evaluate a Boolean MSO formula in linear time on an input graph if the treewidth of the graph is bounded by a constant.
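To make the automaton-based evaluation concrete, here is a minimal sketch, in Python, of a bottom-up deterministic tree automaton for one fixed MSO-expressible property of binary trees ("the number of leaves is even"). The automaton is written by hand for this single property; the general translation from an arbitrary MSO formula to such an automaton is what incurs the nonelementary blow-up mentioned above. The nested-tuple tree encoding is an illustrative assumption.

```python
# Minimal sketch: evaluate one fixed MSO-expressible property ("even number of
# leaves") on a binary tree by running a hand-built bottom-up tree automaton.
# States: 0 = even number of leaves below this node, 1 = odd.

def run_automaton(tree):
    """Return the automaton state at the root. A leaf is (); an inner node is (left, right)."""
    if tree == ():                       # leaf: contributes one leaf, so parity is odd
        return 1
    left, right = tree
    # Transition: the parity of a subtree is the XOR of its children's parities.
    return run_automaton(left) ^ run_automaton(right)

def has_even_number_of_leaves(tree):
    # The accepting state of the automaton is 0 (even parity).
    return run_automaton(tree) == 0

# Usage: a tree with 3 leaves (odd) and one with 4 leaves (even).
t3 = ((), ((), ()))
t4 = (((), ()), ((), ()))
print(has_even_number_of_leaves(t3))  # False
print(has_even_number_of_leaves(t4))  # True
```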
For MSO formulas that have free variables, when the input data is a tree or has bounded treewidth, there are efficient enumeration algorithms to produce the set of all solutions, ensuring that the input data is preprocessed in linear time and that each solution is then produced in a delay linear in the size of each solution, i.e., constant-delay in the common case where all free variables of the query are first-order variables (i.e., they do not represent sets). There are also efficient algorithms for counting the number of solutions of the MSO formula in that case.
Decidability and complexity of satisfiability
The satisfiability problem for monadic second-order logic is undecidable in general because this logic subsumes first-order logic.
The monadic second-order theory of the infinite complete binary tree, called S2S, is decidable. As a consequence of this result, the following theories are decidable:
The monadic second-order theory of trees.
The monadic second-order theory of the natural numbers under successor (S1S).
WS2S and WS1S, which restrict quantification to finite subsets (weak monadic second-order logic). Note that for binary numbers (represented by subsets), addition is definable even in WS1S.
For each of these theories (S2S, S1S, WS2S, WS1S), the complexity of the decision problem is nonelementary.
Use of satisfiability of MSO on trees in verification
Monadic second-order logic of trees has applications in formal verification. Decision procedures for MSO satisfiability have been used to prove properties of programs manipulating linked data structures, as a form of shape analysis, and for symbolic reasoning in hardware verification.
See also
Descriptive complexity theory
Monadic predicate calculus
Second-order logic
References
Mathematical logic | Monadic second-order logic | [
"Mathematics"
] | 1,122 | [
"Mathematical logic"
] |
169,319 | https://en.wikipedia.org/wiki/Necessity%20and%20sufficiency | In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement: "If then ", is necessary for , because the truth of is guaranteed by the truth of . (Equivalently, it is impossible to have without , or the falsity of ensures the falsity of .) Similarly, is sufficient for , because being true always implies that is true, but not being true does not always imply that is not true.
In general, a necessary condition is one (possibly one of several conditions) that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition. The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false.
In ordinary English (also natural language), "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being a man is a necessary condition for being a brother, but it is not sufficient, while being a male sibling is a necessary and sufficient condition for being a brother.
Any conditional statement consists of at least one sufficient condition and at least one necessary condition.
In data analytics, necessity and sufficiency can refer to different causal logics, where necessary condition analysis and qualitative comparative analysis can be used as analytical techniques for examining necessity and sufficiency of conditions for a particular outcome of interest.
Definitions
In the conditional statement, "if S, then N", the expression represented by S is called the antecedent, and the expression represented by N is called the consequent. This conditional statement may be written in several equivalent ways, such as "N if S", "S only if N", "S implies N", "N is implied by S", , and "N whenever S".
In the above situation of "N whenever S," N is said to be a necessary condition for S. In common language, this is equivalent to saying that if the conditional statement is a true statement, then the consequent N must be true—if S is to be true (see third column of "truth table" immediately below). In other words, the antecedent S cannot be true without N being true. For example, in order for someone to be called Socrates, it is necessary for that someone to be Named. Similarly, in order for human beings to live, it is necessary that they have air.
One can also say S is a sufficient condition for N (refer again to the third column of the truth table immediately below). If the conditional statement is true, then if S is true, N must be true; whereas if the conditional statement is true and N is true, then S may be true or be false. In common terms, "the truth of S guarantees the truth of N". For example, carrying on from the previous example, one can say that knowing that someone is called Socrates is sufficient to know that someone has a Name.
A necessary and sufficient condition requires that both of the implications S ⇒ N and N ⇒ S (the latter of which can also be written as S ⇐ N) hold. The first implication suggests that S is a sufficient condition for N, while the second implication suggests that S is a necessary condition for N. This is expressed as "S is necessary and sufficient for N", "S if and only if N", or S ⇔ N.
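This equivalence can be checked mechanically. Below is a minimal sketch in Python, enumerating the four truth assignments; the helper function name is illustrative.

```python
# Minimal sketch: verify by truth table that "(S implies N) and (N implies S)"
# has the same truth value as "S if and only if N" in every case.
from itertools import product

def implies(p, q):
    return (not p) or q

for s, n in product([False, True], repeat=2):
    necessary_and_sufficient = implies(s, n) and implies(n, s)
    biconditional = (s == n)
    assert necessary_and_sufficient == biconditional
    print(f"S={s!s:5} N={n!s:5} -> S if and only if N is {biconditional}")
```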
Necessity
The assertion that Q is necessary for P is colloquially equivalent to "P cannot be true unless Q is true" or "if Q is false, then P is false". By contraposition, this is the same thing as "whenever P is true, so is Q".
The logical relation between P and Q is expressed as "if P, then Q" and denoted "P ⇒ Q" (P implies Q). It may also be expressed as any of "P only if Q", "Q, if P", "Q whenever P", and "Q when P". One often finds, in mathematical prose for instance, several necessary conditions that, taken together, constitute a sufficient condition (i.e., individually necessary and jointly sufficient), as shown in Example 5.
Example 1 For it to be true that "John is a bachelor", it is necessary that it be also true that he is
unmarried,
male,
adult,
since to state "John is a bachelor" implies John has each of those three additional predicates.
Example 2 For the whole numbers greater than two, being odd is necessary to being prime, since two is the only whole number that is both even and prime.
Example 3Consider thunder, the sound caused by lightning. One says that thunder is necessary for lightning, since lightning never occurs without thunder. Whenever there is lightning, there is thunder. The thunder does not cause the lightning (since lightning causes thunder), but because lightning always comes with thunder, we say that thunder is necessary for lightning. (That is, in its formal sense, necessity doesn't imply causality.)
Example 4Being at least 30 years old is necessary for serving in the U.S. Senate. If you are under 30 years old, then it is impossible for you to be a senator. That is, if you are a senator, it follows that you must be at least 30 years old.
Example 5 In algebra, for some set S together with an operation ⋆ to form a group, it is necessary that ⋆ be associative. It is also necessary that S include a special element e such that for every x in S, it is the case that e ⋆ x and x ⋆ e both equal x. It is also necessary that for every x in S there exist a corresponding element x″, such that both x ⋆ x″ and x″ ⋆ x equal the special element e. None of these three necessary conditions by itself is sufficient, but the conjunction of the three is.
Sufficiency
If P is sufficient for Q, then knowing P to be true is adequate grounds to conclude that Q is true; however, knowing P to be false does not, by itself, allow one to conclude that Q is false.
The logical relation is, as before, expressed as "if P, then Q" or "P ⇒ Q". This can also be expressed as "P only if Q", "P implies Q" or several other variants. It may be the case that several sufficient conditions, when taken together, constitute a single necessary condition (i.e., individually sufficient and jointly necessary), as illustrated in example 5.
Example 1"John is a king" implies that John is male. So knowing that John is a king is sufficient to knowing that he is a male.
Example 2A number's being divisible by 4 is sufficient (but not necessary) for it to be even, but being divisible by 2 is both sufficient and necessary for it to be even.
Example 3 An occurrence of thunder is a sufficient condition for the occurrence of lightning in the sense that hearing thunder, and unambiguously recognizing it as such, justifies concluding that there has been a lightning bolt.
Example 4If the U.S. Congress passes a bill, the president's signing of the bill is sufficient to make it law. Note that the case whereby the president did not sign the bill, e.g. through exercising a presidential veto, does not mean that the bill has not become a law (for example, it could still have become a law through a congressional override).
Example 5That the center of a playing card should be marked with a single large spade (♠) is sufficient for the card to be an ace. Three other sufficient conditions are that the center of the card be marked with a single diamond (♦), heart (♥), or club (♣). None of these conditions is necessary to the card's being an ace, but their disjunction is, since no card can be an ace without fulfilling at least (in fact, exactly) one of these conditions.
Relationship between necessity and sufficiency
A condition can be either necessary or sufficient without being the other. For instance, being a mammal (N) is necessary but not sufficient to being human (S), and that a number is rational (S) is sufficient but not necessary to being a real number (N) (since there are real numbers that are not rational).
A condition can be both necessary and sufficient. For example, at present, "today is the Fourth of July" is a necessary and sufficient condition for "today is Independence Day in the United States". Similarly, a necessary and sufficient condition for invertibility of a matrix M is that M has a nonzero determinant.
Mathematically speaking, necessity and sufficiency are dual to one another. For any statements S and N, the assertion that "N is necessary for S" is equivalent to the assertion that "S is sufficient for N". Another facet of this duality is that, as illustrated above, conjunctions (using "and") of necessary conditions may achieve sufficiency, while disjunctions (using "or") of sufficient conditions may achieve necessity. For a third facet, identify every mathematical predicate N with the set T(N) of objects, events, or statements for which N holds true; then asserting the necessity of N for S is equivalent to claiming that T(N) is a superset of T(S), while asserting the sufficiency of S for N is equivalent to claiming that T(S) is a subset of T(N).
Psychologically speaking, necessity and sufficiency are both key aspects of the classical view of concepts. Under the classical theory of concepts, how human minds represent a category X, gives rise to a set of individually necessary conditions that define X. Together, these individually necessary conditions are sufficient to be X. This contrasts with the probabilistic theory of concepts which states that no defining feature is necessary or sufficient, rather that categories resemble a family tree structure.
Simultaneous necessity and sufficiency
To say that P is necessary and sufficient for Q is to say two things:
that P is necessary for Q, Q ⇒ P, and that P is sufficient for Q, P ⇒ Q.
equivalently, it may be understood to say that P and Q are each necessary for the other, which can also be stated as each is sufficient for or implies the other.
One may summarize any, and thus all, of these cases by the statement "P if and only if Q", which is denoted by P ⇔ Q; the cases above tell us that P ⇔ Q is identical to (P ⇒ Q) ∧ (Q ⇒ P).
For example, in graph theory a graph G is called bipartite if it is possible to assign to each of its vertices the color black or white in such a way that every edge of G has one endpoint of each color. And for any graph to be bipartite, it is a necessary and sufficient condition that it contain no odd-length cycles. Thus, discovering whether a graph has any odd cycles tells one whether it is bipartite and conversely. A philosopher might characterize this state of affairs thus: "Although the concepts of bipartiteness and absence of odd cycles differ in intension, they have identical extension."
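The bipartiteness example can be made concrete. Below is a minimal sketch, in Python, that 2-colors a graph by breadth-first search: a successful coloring exhibits the bipartition, while a failure finds an edge joining two vertices of the same parity, which closes an odd cycle, matching the necessary-and-sufficient condition stated above. The adjacency-list graphs and vertex names are illustrative.

```python
# Minimal sketch: a graph is bipartite if and only if it contains no odd cycle.
# BFS 2-coloring either produces a valid bipartition or finds an edge whose
# endpoints have equal parity, which closes an odd-length cycle.
from collections import deque

def is_bipartite(adj):
    """adj: dict mapping each vertex to an iterable of its neighbors (undirected graph)."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False      # same-parity edge: odd cycle, not bipartite
    return True

square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}    # 4-cycle: bipartite
triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}             # 3-cycle: odd cycle
print(is_bipartite(square), is_bipartite(triangle))      # True False
```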
In mathematics, theorems are often stated in the form "P is true if and only if Q is true".
Because, as explained in the previous section, the necessity of one for the other is equivalent to the sufficiency of the other for the first one (e.g., P ⇐ Q is equivalent to Q ⇒ P), if P is necessary and sufficient for Q, then Q is necessary and sufficient for P. We can write P ⇔ Q and equivalently Q ⇔ P, and say that the statements "P is true if and only if Q is true" and "Q is true if and only if P is true" are equivalent.
See also
References
External links
Critical thinking web tutorial: Necessary and Sufficient Conditions
Simon Fraser University: Concepts with examples
Concepts in logic
Metaphysical properties
Mathematical terminology | Necessity and sufficiency | [
"Mathematics"
] | 2,474 | [
"nan"
] |
169,320 | https://en.wikipedia.org/wiki/Radio-frequency%20identification | Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder called a tag, a radio receiver, and a transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.
Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line, RFID-tagged pharmaceuticals can be tracked through warehouses, and implanting RFID microchips in livestock and pets enables positive identification of animals. Tags can also be used in shops to expedite checkout, and to prevent theft by customers and employees.
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information without consent has raised serious privacy concerns. These concerns resulted in standard specifications development addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.
History
In 1945, Leon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with the added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, which modulated the reflected radio frequency. Even though this device was a covert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.
Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft. An early work exploring RFID is the landmark 1948 paper by Harry Stockman, who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID, as it was a passive radio transponder with memory. The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16 bit memory for use as a toll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system, electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).
In 1973, an early demonstration of reflected power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at the Los Alamos National Laboratory. The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.
In 1983, the first patent to be associated with the abbreviation RFID was granted to Charles Walton.
In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.
Design
A radio-frequency identification system uses tags, or labels attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.
Tags
RFID tags are made out of three pieces:
a micro chip (an integrated circuit which stores and processes information and modulates and demodulates radio-frequency (RF) signals)
an antenna for receiving and transmitting the signal
a substrate
The tag information is stored in a non-volatile memory. The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data, respectively.
RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal. A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than an active tag for signal transmission.
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
Readers
RFID systems can be classified by the type of tag and reader. There are 3 types:
A Passive Reader Active Tag (PRAT) system has a passive reader which only receives radio signals from active tags (battery operated, transmit only). The reception range of a PRAT system reader can be adjusted over a wide range, allowing flexibility in applications such as asset protection and supervision.
An Active Reader Passive Tag (ARPT) system has an active reader, which transmits interrogator signals and also receives authentication replies from passive tags.
An Active Reader Active Tag (ARAT) system uses active tags activated with an interrogator signal from the active reader. A variation of this system could also use a Battery-Assisted Passive (BAP) tag which acts like a passive tag but has a small battery to power the tag's return reporting signal.
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Frequencies
Signaling
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In this near field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach. The tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.
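The near-field versus far-field distinction above is set by the radio wavelength. Below is a minimal sketch computing the free-space wavelength and the conventional reactive near-field boundary (wavelength divided by 2π) for typical LF, HF, and UHF carrier frequencies; the listed frequencies are common band values used here only for illustration.

```python
# Minimal sketch: wavelength and approximate near-field boundary (lambda / 2*pi)
# for typical RFID carrier frequencies.
import math

C = 299_792_458.0  # speed of light, m/s

for band, freq_hz in [("LF 125 kHz", 125e3), ("HF 13.56 MHz", 13.56e6), ("UHF 915 MHz", 915e6)]:
    wavelength = C / freq_hz
    near_field_boundary = wavelength / (2 * math.pi)
    print(f"{band:>13}: wavelength {wavelength:10.2f} m, near-field boundary {near_field_boundary:8.2f} m")
```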
An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.
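The 96-bit layout described above can be unpacked with simple bit arithmetic. Below is a minimal sketch in Python, assuming the tag data is available as a 24-character hexadecimal string; the example value is made up for illustration and is not a real issued EPC.

```python
# Minimal sketch: split a 96-bit EPC into the header, manager, object class and
# serial fields described above (8 + 28 + 24 + 36 bits, from the top down).
def parse_epc(hex_string):
    value = int(hex_string, 16)                     # 24 hex characters = 96 bits
    header       = value >> 88                      # top 8 bits
    manager      = (value >> 60) & ((1 << 28) - 1)  # next 28 bits
    object_class = (value >> 36) & ((1 << 24) - 1)  # next 24 bits
    serial       = value & ((1 << 36) - 1)          # last 36 bits
    return {"header": header, "manager": manager,
            "object_class": object_class, "serial": serial}

# Hypothetical tag value, purely for illustration (not a real issued EPC).
print(parse_epc("301234567890ABCDEF012345"))
```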
Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.
Both methods have drawbacks when used with many tags or with multiple overlapping readers.
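A minimal sketch of the slotted-Aloha style singulation loop described above, in Python: in each round, the unread tags pick a random slot from the advertised frame, slots chosen by exactly one tag yield a successful read, and collided tags retry in the next round. The tag IDs, frame size, and collision model are simplified assumptions for illustration; the adaptive binary tree protocol is not shown.

```python
# Minimal sketch: slotted-Aloha singulation. In each round the reader announces
# a frame of `slots`; every unread tag answers in one random slot, and only
# slots with exactly one responder produce a successful read.
import random
from collections import defaultdict

def singulate(tag_ids, slots=8, seed=1):
    rng = random.Random(seed)
    unread = set(tag_ids)
    rounds = 0
    while unread:
        rounds += 1
        responses = defaultdict(list)
        for tag in unread:
            responses[rng.randrange(slots)].append(tag)
        for slot_tags in responses.values():
            if len(slot_tags) == 1:          # no collision: tag is read and silenced
                unread.discard(slot_tags[0])
            # len > 1: collision, those tags retry in the next round
    return rounds

print(singulate(range(20)))   # number of rounds needed to read 20 tags
```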
Bulk reading
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupled HF RFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet) suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is a fuzzy method for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.
Miniaturization
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior. This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip. Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit Read Only Memory (ROM). A major challenge is the attachment of antennas, thus limiting read range to only millimeters.
TFID (Terahertz Frequency Identification)
In early 2020, MIT researchers demonstrated a terahertz frequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially a piece of silicon that are inexpensive, small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.
Uses
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or use of barcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each. Battery-Assisted Passive (BAP) tags were in the US$3–10 range.
RFID can be used in a variety of applications, such as:
Access management
Tracking of goods
Tracking of persons and animals
Toll collection and contactless payment
Machine readable travel documents
Smartdust (for massively distributed sensor networks)
Locating lost airport baggage
Timing sporting events
Tracking and billing processes
Monitoring the physical state of perishable goods
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.
Commerce
RFID provides a way for organizations to identify and manage stock, tools, and equipment (asset tracking), etc., without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management. Warehouse management systems incorporate this technology to speed up the receiving and delivery of products and reduce the cost of labor needed in warehouses.
Retail
RFID is used for item-level tagging in retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done at Lululemon, though physically locating items in stores requires more expensive technology. RFID tags can be used at checkout; for example, at some stores of the French retailer Decathlon, customers perform self-checkout by either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner. Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms at Chanel and the "Color Bar" at Kendra Scott stores.
Item tagging can also provide protection against theft by customers and employees by using electronic article surveillance (EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made. On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item, and identifying what it is.
Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen.
Access control
RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.
Advertising
In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at the PGA Golf Championships, and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.
Promotion tracking
To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.
Transportation and logistics
Yard management, shipping and freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.
Some countries are using RFID for vehicle registration and enforcement. RFID can help detect and retrieve stolen cars.
RFID is used in intelligent transportation systems. In New York City, RFID readers are deployed at intersections to track E-ZPass tags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used in adaptive traffic control of the traffic lights.
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.
Infrastructure management and protection
At least one company has introduced RFID to identify and locate underground infrastructure assets such as gas pipelines, sewer lines, electrical cables, communication cables, etc.
Passports
The first RFID passports ("E-passport") were issued by Malaysia in 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.
Other countries that insert RFID in passports include Norway (2005), Japan (March 1, 2006), most EU countries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015) and Israel (2017).
Standards for RFID passports are determined by the International Civil Aviation Organization (ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to the ISO/IEC 14443 RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in new United States passports store the same information that is printed within the passport, and include a digital picture of the owner. The United States Department of State initially stated the chips could only be read from a short distance, but after widespread criticism and a clear demonstration that special equipment can read the test passports from much farther away, the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers to skim information when the passport is closed. The department will also implement Basic Access Control (BAC), which functions as a personal identification number (PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.
Transportation payments
In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways.
Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.
The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification.
In Singapore, RFID replaces paper Season Parking Ticket (SPT).
Animal identification
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, since the outbreak of mad-cow disease, RFID has become crucial in animal identification management. An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals. The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.
Human implantation
Biocompatible microchip implants that use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artist Eduardo Kac in 1997. Kac implanted the microchip live on television (and also live on the Internet) in the context of his artwork Time Capsule.
A year later, British professor of cybernetics Kevin Warwick had an RFID chip implanted in his arm by his general practitioner, George Boulos. In 2004, the 'Baja Beach Club' operated by Conrad Chase in Barcelona and Rotterdam offered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009, British scientist Mark Gasson had an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.
The Food and Drug Administration in the United States approved the use of RFID chips in humans in 2004.
There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms, and to the emergence of an "ultimate panopticon", a society where all citizens behave in a socially accepted manner because others might be watching.
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.
The UFO religion Universe People is notorious online for their vocal opposition to human RFID chipping, which they claim is a saurian attempt to enslave the human race; one of their web domains is "dont-get-chipped".
Institutions
Hospitals and healthcare
Adoption of RFID in the medical industry has been widespread and very effective. Hospitals are among the first users to combine both active and passive RFID. Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification. Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices. The U.S. Department of Veterans Affairs (VA) recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.
Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.
The use of RFID to prevent mix-ups between sperm and ova in IVF clinics is also being considered.
In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp. can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning Letter that spelled out health risks. According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as a security device, taking the place of the more traditional electromagnetic security strip.
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in the Vatican Library in Rome.
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.
However, as of 2008 this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: €12,500 each; detection porches: €10,000 each; tags: €0.36 each). Because RFID takes a large burden off staff, it could also mean that fewer staff are needed, with some being laid off, but that has so far not happened in North America, where recent surveys have not found a single library that cut staff because of adding RFID. In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size. Also, the tasks that RFID takes over are largely not the primary tasks of librarians. A finding in the Netherlands is that borrowers are pleased that staff are now more available to answer questions.
Privacy concerns have been raised surrounding library use of RFID. Because some RFID tags can be read from a distance, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information, and the tags used in the majority of libraries use a frequency that is only readable from close range. Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In the future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.
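As a rough illustration of the two mitigations just mentioned, the sketch below models a tag that carries only an opaque code resolved through the library's own database, and reassigns that code whenever the item is returned; the class and field names are invented for the example and do not describe any particular library system.

```python
import secrets
from typing import Optional

class Catalog:
    """Toy model: tag codes are meaningless outside the library's database."""

    def __init__(self) -> None:
        self._code_to_item: dict[str, str] = {}   # opaque tag code -> title

    def register(self, item: str) -> str:
        code = secrets.token_hex(8)               # opaque code written to the tag
        self._code_to_item[code] = item
        return code

    def resolve(self, code: str) -> Optional[str]:
        # Only the library can turn a code back into a title.
        return self._code_to_item.get(code)

    def rotate_on_return(self, old_code: str) -> str:
        # Give the item a fresh code each time it is returned, so codes
        # observed outside the library quickly become stale.
        item = self._code_to_item.pop(old_code)
        return self.register(item)

catalog = Catalog()
code = catalog.register("Introduction to RFID")
code = catalog.rotate_on_return(code)             # new opaque code after return
print(catalog.resolve(code))                      # 'Introduction to RFID'
```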
Museums
RFID technologies are now also implemented in end-user applications in museums. An example was the custom-designed temporary research application, "eXspot", at the Exploratorium, a science museum in San Francisco, California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.
Schools and universities
In 2004, school authorities in the Japanese city of Osaka made a decision to start chipping children's clothing, backpacks, and student IDs in a primary school. Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms. St Charles Sixth Form College in west London, England, has since 2008 used an RFID card system to check in and out of the main gate, both to track attendance and to prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, as of 2012, some schools already used RFID in IDs for borrowing books. Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.
Sports
RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error, lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.
The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or strapped to the ankle. These chips must pass within about 400 mm of the mat, which gives very good temporal resolution. Alternatively, a chip plus a much larger (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest.
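A minimal sketch of how mat reads might be turned into race times is shown below; the data layout and function names are assumptions for illustration, not any particular timing vendor's format.

```python
from dataclasses import dataclass

@dataclass
class Read:
    tag_id: str       # chip worn by the runner
    mat: str          # "start" or "finish"
    timestamp: float  # seconds since the gun

def net_times(reads: list[Read]) -> dict[str, float]:
    """Net time = finish-mat read minus start-mat read, per tag."""
    start: dict[str, float] = {}
    finish: dict[str, float] = {}
    for r in reads:
        bucket = start if r.mat == "start" else finish
        # Keep the first crossing of each mat for each tag.
        bucket.setdefault(r.tag_id, r.timestamp)
    return {tag: finish[tag] - start[tag] for tag in finish if tag in start}

reads = [
    Read("A1", "start", 12.4),    # runner A1 crosses the start mat 12.4 s after the gun
    Read("B7", "start", 45.0),
    Read("A1", "finish", 1812.4),
    Read("B7", "finish", 1905.0),
]
print(net_times(reads))   # {'A1': 1800.0, 'B7': 1860.0}
```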
Passive and active RFID systems are used in off-road events such as Orienteering, Enduro and Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver which is connected to a computer and log their lap time.
RFID is being adapted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where the candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.
The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chips provide new insight into these simultaneous plays. The chips allow a player's position to be triangulated to within six inches and are used to digitally broadcast replays. They also make individual player information accessible to the public; the data became available via the NFL 2015 app. The RFID chips are manufactured by Zebra Technologies, which tested them in 18 stadiums during the previous season to track vector data.
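The sketch below shows the kind of computation such position data enables: deriving per-player distance, average speed and overall heading from successive location fixes. The sampling format is an assumption for illustration and is not Zebra's actual data feed.

```python
import math

def motion_stats(samples: list[tuple[float, float, float]]) -> dict[str, float]:
    """samples: (t_seconds, x_metres, y_metres) fixes for one player."""
    distance = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        distance += math.hypot(x1 - x0, y1 - y0)
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    elapsed = t1 - t0
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))  # overall direction of travel
    return {
        "distance_m": round(distance, 2),
        "avg_speed_mps": round(distance / elapsed, 2) if elapsed else 0.0,
        "heading_deg": round(heading, 1),
    }

# Hypothetical fixes for one player over two seconds of play.
print(motion_stats([(0.0, 0.0, 0.0), (1.0, 4.0, 1.0), (2.0, 9.0, 2.5)]))
```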
Complement to barcode
RFID tags are often a complement, but not a substitute, for Universal Product Code (UPC) or European Article Number (EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient. An example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require many terabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at package level with UPC or EAN from unique barcodes.
A unique identity is a mandatory requirement for RFID tags, regardless of the choice of numbering scheme. RFID tag data capacity is large enough that each individual tag can carry a unique code, while current barcodes are limited to a single type code for a particular product. The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but it also contributes to concern about tracking and profiling of persons after the sale.
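To make the contrast concrete, the sketch below models a tag identity that carries a per-item serial number in addition to the product code that a barcode would typically hold; the field layout is a simplification inspired by EPC-style identifiers, not an exact EPC encoding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TagIdentity:
    product_code: str   # a barcode would typically carry only this field
    serial: int         # unique per physical item, enabling item-level tracing

def trace(events: list[tuple[TagIdentity, str]]) -> dict[TagIdentity, list[str]]:
    """Build a per-item movement history from (identity, location) reads."""
    history: dict[TagIdentity, list[str]] = {}
    for identity, location in events:
        history.setdefault(identity, []).append(location)
    return history

item_a = TagIdentity("0614141-012345", serial=1001)
item_b = TagIdentity("0614141-012345", serial=1002)   # same product, different item
print(trace([(item_a, "factory"), (item_a, "warehouse"), (item_b, "factory")]))
```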
Waste management
Since around 2007, there has been increasing development in the use of RFID in the waste management industry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification. The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks. RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT) municipal solid waste usage-pricing models.
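A minimal sketch of the billing link described above is shown below, assuming a simple mapping from cart tag to customer account; the registry, names and rates are invented for the example.

```python
from collections import defaultdict

CART_TO_ACCOUNT = {"CART-0001": "ACCT-42", "CART-0002": "ACCT-77"}  # assumed registry
PRICE_PER_LIFT = 2.50  # hypothetical charge per serviced cart

def bill_route(tag_reads: list[str]) -> dict[str, float]:
    """Turn the truck's tag reads for one route into per-account charges."""
    charges: dict[str, float] = defaultdict(float)
    for tag in tag_reads:
        account = CART_TO_ACCOUNT.get(tag)
        if account is None:
            continue                      # unknown cart: would be flagged for follow-up
        charges[account] += PRICE_PER_LIFT
    return dict(charges)

print(bill_route(["CART-0001", "CART-0002", "CART-0001"]))
# {'ACCT-42': 5.0, 'ACCT-77': 2.5}
```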
Telemetry
Active RFID tags have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station. Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level monitoring.
Passive RFID tags can also report sensor data. For example, the Wireless Identification and Sensing Platform is a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.
Regulation and standardization
To avoid injuries to humans and animals, RF transmission needs to be controlled.
A number of organizations have set standards for RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal.
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT Assets with RFID, the Computer Technology Industry Association CompTIA for certifying RFID engineers, and the International Air Transport Association (IATA) for luggage in airports.
Every country can set its own rules for frequency allocation for RFID tags, and not all radio bands are available in all countries. The frequencies commonly used are known as the ISM bands (Industrial, Scientific and Medical bands). Even within these bands, the return signal of the tag may still cause interference for other radio users.
Low-frequency (LF: 125–134.2 kHz and 140–148.5 kHz) (LowFID) tags and high-frequency (HF: 13.56 MHz) (HighFID) tags can be used globally without a license.
Ultra-high-frequency (UHF: 865–928 MHz) (Ultra-HighFID or UHFID) tags cannot be used globally as there is no single global standard, and regulations differ from country to country.
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power. In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz. Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current research. The North American UHF standard is not accepted in France as it interferes with its military bands. On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.
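A small sketch of how these regional differences might be captured in software is shown below; the band table only restates the figures quoted above (with an assumed width around Japan's 920 MHz band) and is illustrative rather than a regulatory reference.

```python
# Approximate UHF RFID bands quoted in the text above (MHz); illustrative only.
UHF_BANDS = {
    "North America": (902.0, 928.0),
    "Europe": (865.0, 868.0),
    "Japan (since 2012)": (916.0, 921.0),   # assumed width around the 920 MHz band
}

def regions_allowing(freq_mhz: float) -> list[str]:
    """Which of the listed regions admit a given UHF carrier frequency."""
    return [region for region, (lo, hi) in UHF_BANDS.items() if lo <= freq_mhz <= hi]

print(regions_allowing(915.0))   # ['North America']
print(regions_allowing(866.5))   # ['Europe']
```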
In some countries, a site license is needed, which needs to be applied for at the local authorities, and can be revoked.
As of 31 October 2014, regulations are in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.
Standards that have been made regarding RFID include:
ISO 11784/11785 – Animal identification. Uses 134.2 kHz.
ISO 14223 – Radiofrequency identification of animals – Advanced transponders
ISO/IEC 14443: This standard is a popular HF (13.56 MHz) standard for HighFIDs which is being used as the basis of RFID-enabled passports under ICAO 9303. The Near Field Communication standard that lets mobile devices act as RFID readers/transponders is also based on ISO/IEC 14443.
ISO/IEC 15693: This is also a popular HF (13.56 MHz) standard for HighFIDs widely used for non-contact smart payment and credit cards.
ISO/IEC 18000: Information technology—Radio frequency identification for item management:
ISO/IEC 18092 Information technology—Telecommunications and information exchange between systems—Near Field Communication—Interface and Protocol (NFCIP-1)
ISO 18185: This is the industry standard for electronic seals or "e-seals" for tracking cargo containers using the 433 MHz and 2.4 GHz frequencies.
ISO/IEC 21481 Information technology—Telecommunications and information exchange between systems—Near Field Communication Interface and Protocol −2 (NFCIP-2)
ASTM D7434, Standard Test Method for Determining the Performance of Passive Radio Frequency Identification (RFID) Transponders on Palletized or Unitized Loads
ASTM D7435, Standard Test Method for Determining the Performance of Passive Radio Frequency Identification (RFID) Transponders on Loaded Containers
ASTM D7580, Standard Test Method for Rotary Stretch Wrapper Method for Determining the Readability of Passive RFID Transponders on Homogenous Palletized or Unitized Loads
ISO 28560-2— specifies encoding standards and data model to be used within libraries.
In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance and interoperability tests.
EPC Gen2
EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention from Intermec that the standard may infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty free. The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.
Problems and concerns
Data flooding
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data to significant processed data.
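A toy version of this middleware filtering step is sketched below: raw reads are deduplicated per tag and reader within a time window, so only genuinely new observations are passed on. The window length and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RawRead:
    tag_id: str
    reader_id: str
    timestamp: float   # seconds

def filter_events(reads: list[RawRead], window: float = 5.0) -> list[RawRead]:
    """Drop repeat reads of the same tag at the same reader within `window` seconds."""
    last_seen: dict[tuple[str, str], float] = {}
    events: list[RawRead] = []
    for r in sorted(reads, key=lambda r: r.timestamp):
        key = (r.tag_id, r.reader_id)
        if key not in last_seen or r.timestamp - last_seen[key] > window:
            events.append(r)             # a genuinely new observation
        last_seen[key] = r.timestamp     # refresh even for suppressed duplicates
    return events

reads = [RawRead("T1", "dock-door", t) for t in (0.0, 0.4, 0.9, 1.3, 30.0)]
print(len(filter_events(reads)))   # 2: one pass at t≈0, another at t=30
```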
Global standardization
The frequencies used for UHF RFID in the USA are as of 2007 incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as the barcode. To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
Security concerns
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to the United States Department of Defense's recent adoption of RFID tags for supply chain management. More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly as a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control, payment and eID (e-passport) systems operate at a shorter range than EPC RFID systems but are also vulnerable to skimming and eavesdropping, albeit at shorter distances.
Another method of prevention is the use of cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil monitoring and replay of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission. Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public-key cryptography.
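The sketch below illustrates challenge–response authentication in its simplest keyed form, with the reader issuing a random challenge and the tag answering with a keyed MAC. Real low-cost tags typically use much lighter primitives than HMAC-SHA-256, so this is a conceptual model only.

```python
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(16)   # provisioned into both tag and reader

def tag_respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # The tag proves knowledge of the key without ever transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_authenticate() -> bool:
    challenge = secrets.token_bytes(8)          # fresh nonce, so recorded replies fail
    response = tag_respond(challenge)           # sent over the air in a real system
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(reader_authenticate())   # True for a genuine tag holding the key
```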
While a variety of secure protocols have been suggested for RFID tags, in order to support long read ranges at low cost, many RFID tags have barely enough power available, so they can support only very low-power and therefore simple security protocols such as cover-coding.
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy. Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package. Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption, as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002. There are also concerns that the database structure of Object Naming Service may be susceptible to infiltration, similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.
Health
Microchip-induced tumours have been noted during animal trials.
Shielding
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S. General Services Administration (GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves. For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program. The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder. Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use of EMV chips rather than RFID makes this sort of theft rare.
There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating a Faraday cage, does work. Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.
Shielding effectiveness depends on the frequency being used. Low-frequency LowFID tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency HighFID tags (13.56 MHz; smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. UHF Ultra-HighFID tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.
Privacy
The use of RFID has engendered considerable controversy and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are as follows:
As the owner of an item may not necessarily be aware of the presence of an RFID tag and the tag can be read at a distance without the knowledge of the individual, sensitive data may be acquired without consent.
If a tagged item is paid for by credit card or in conjunction with use of a loyalty card, then it would be possible to indirectly deduce the identity of the purchaser by reading the globally unique ID of that item contained in the RFID tag. This is a possibility if the person watching also had access to the loyalty card and credit card data, and the person with the equipment knows where the purchaser is going to be.
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used for surveillance and other purposes unrelated to their supply chain inventory functions.
The RFID Network responded to these fears in the first episode of their syndicated cable TV series, arguing that they are unfounded and letting RF engineers demonstrate how RFID works. They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of the Clipped Tag. The Clipped Tag is an RFID tag designed to increase privacy for the purchaser of an item. The Clipped Tag has been suggested by IBM researchers Paul Moskowitz and Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.
In January 2004, privacy advocates from CASPIAN and the German privacy group FoeBuD were invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customer loyalty cards contained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.
During the UN World Summit on the Information Society (WSIS) in November 2005, Richard Stallman, the founder of the free software movement, protested the use of RFID security cards by covering his card with aluminum foil.
In 2004–2005, the Federal Trade Commission staff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.
RFID was one of the main topics of the 2006 Chaos Communication Congress (organized by the Chaos Computer Club in Berlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The group monochrom staged a "Hack RFID" song.
Government control
Some individuals have grown to fear the loss of rights due to RFID human implantation.
By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from a US passport card by using only $250 worth of equipment. This suggests that with the information captured, it would be possible to clone such cards.
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy. In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".
Deliberate destruction in clothing and other items
According to an RSA Laboratories FAQ, RFID tags can be destroyed by a standard microwave oven; however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However, the time required is extremely short (a second or two of radiation), and the method works on many other non-electronic and inanimate items, long before heat or fire become a concern.
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person that wants to "kill" the tag.
UHF RFID tags that comply with the EPC Gen2 Class 1 standard usually support this mechanism while protecting the kill command with a password. Guessing or cracking the required 32-bit kill password would not be difficult for a determined attacker.
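As a back-of-the-envelope illustration of why a 32-bit password offers limited protection, the sketch below estimates how long an exhaustive search would take at an assumed over-the-air attempt rate; the rate is purely hypothetical.

```python
def brute_force_years(bits: int = 32, attempts_per_second: float = 1000.0) -> float:
    """Worst-case exhaustive-search time for a password of `bits` bits."""
    keyspace = 2 ** bits
    seconds = keyspace / attempts_per_second
    return seconds / (3600 * 24 * 365)

# At a hypothetical 1,000 kill attempts per second, sweeping the whole 32-bit
# space takes roughly 50 days; offline cracking of a captured password
# exchange would be far faster still.
print(round(brute_force_years(), 3))   # ~0.136 years
```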
See also
AS5678
Balise
Bin bug
Campus card
Chipless RFID
FASTag
Internet of Things
Mass surveillance
Microchip implant (human)
Mobile RFID
Near Field Communication (NFC)
PositiveID
Privacy by design
Proximity card
Resonant inductive coupling
RFdump
RFID in schools
RFID Journal
RFID on metal
RSA blocker tag
Smart label
Speedpass
TecTile
Tracking system
References
External links
An open source RFID library used as door opener
What is RFID? Educational video by The RFID Network
What is RFID? – animated explanation
IEEE Council on RFID
Automatic identification and data capture
Privacy
Ubiquitous computing
Wireless
Radio frequency interfaces | Radio-frequency identification | [
"Technology",
"Engineering"
] | 11,177 | [
"Radio electronics",
"Telecommunications engineering",
"Wireless",
"Data",
"Automatic identification and data capture",
"Radio-frequency identification"
] |
169,324 | https://en.wikipedia.org/wiki/Logical%20equivalence | In logic and mathematics, statements p and q are said to be logically equivalent if they have the same truth value in every model. The logical equivalence of p and q is sometimes expressed as p ≡ q, p :: q, Epq, or p ⟺ q, depending on the notation being used.
However, these symbols are also used for material equivalence, so proper interpretation would depend on the context. Logical equivalence is different from material equivalence, although the two concepts are intrinsically related.
Logical equivalences
In logic, many common logical equivalences exist and are often listed as laws or properties. The following tables illustrate some of these.
General logical equivalences
Logical equivalences involving conditional statements
Logical equivalences involving biconditionals
Where ⊕ represents XOR.
Examples
In logic
The following statements are logically equivalent:
If Lisa is in Denmark, then she is in Europe (a statement of the form p → q).
If Lisa is not in Europe, then she is not in Denmark (a statement of the form ¬q → ¬p).
Syntactically, (1) and (2) are derivable from each other via the rules of contraposition and double negation. Semantically, (1) and (2) are true in exactly the same models (interpretations, valuations); namely, those in which either Lisa is in Denmark is false or Lisa is in Europe is true.
(Note that in this example, classical logic is assumed. Some non-classical logics do not deem (1) and (2) to be logically equivalent.)
Relation to material equivalence
Logical equivalence is different from material equivalence. Formulas p and q are logically equivalent if and only if the statement of their material equivalence (p ↔ q) is a tautology.
The material equivalence of p and q (often written as p ↔ q) is itself another statement in the same object language as p and q. This statement expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.
On the other hand, the claim that two formulas are logically equivalent is a statement in the metalanguage, which expresses a relationship between two statements p and q. The statements are logically equivalent if, in every model, they have the same truth value.
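For propositional formulas over finitely many variables, logical equivalence can be checked mechanically by comparing truth tables over all valuations. The small sketch below does this for the contraposition example above; the function names and representation are illustrative.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def logically_equivalent(f, g, num_vars: int) -> bool:
    """True when f and g agree on every valuation (every 'model')."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=num_vars))

# (1) p -> q   versus   (2) not q -> not p   (contraposition)
f = lambda p, q: implies(p, q)
g = lambda p, q: implies(not q, not p)
print(logically_equivalent(f, g, 2))   # True

# Equivalently, their material equivalence "f <-> g" is a tautology:
biconditional = lambda p, q: f(p, q) == g(p, q)
print(all(biconditional(p, q) for p, q in product([False, True], repeat=2)))   # True
```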
See also
Entailment
Equisatisfiability
If and only if
Logical biconditional
Logical equality
≡ the iff symbol (U+2261 IDENTICAL TO)
∷ the a is to b as c is to d symbol (U+2237 PROPORTION)
⇔ the double struck biconditional (U+21D4 LEFT RIGHT DOUBLE ARROW)
↔ the bidirectional arrow (U+2194 LEFT RIGHT ARROW)
References
Mathematical logic
Metalogic
Logical consequence
Equivalence (mathematics) | Logical equivalence | [
"Mathematics"
] | 534 | [
"Mathematical logic"
] |
169,358 | https://en.wikipedia.org/wiki/Foundations%20of%20mathematics | Foundations of mathematics are the logical and mathematical framework that allows the development of mathematics without generating self-contradictory theories, and, in particular, to have reliable concepts of theorems, proofs, algorithms, etc. This may also include the philosophical study of the relation of this framework with reality.
The term "foundations of mathematics" was not coined before the end of the 19th century, although foundations were first established by the ancient Greek philosophers under the name of Aristotle's logic and systematically applied in Euclid's Elements. A mathematical assertion is considered as truth only if it is a theorem that is proved from true premises by means of a sequence of syllogisms (inference rules), the premises being either already proved theorems or self-evident assertions called axioms or postulates.
These foundations were tacitly assumed to be definitive until the introduction of infinitesimal calculus by Isaac Newton and Gottfried Wilhelm Leibniz in the 17th century. This new area of mathematics involved new methods of reasoning and new basic concepts (continuous functions, derivatives, limits) that were not well founded, but had astonishing consequences, such as the deduction from Newton's law of gravitation that the orbits of the planets are ellipses.
During the 19th century, progress was made towards elaborating precise definitions of the basic concepts of infinitesimal calculus, notably the natural and real numbers. This led, near the end of the 19th century, to a series of seemingly paradoxical mathematical results that challenged the general confidence in the reliability and truth of mathematical results. This has been called the foundational crisis of mathematics.
The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic that includes set theory, model theory, proof theory, computability and computational complexity theory, and more recently, parts of computer science. Subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on a systematic use of axiomatic method and on set theory, specifically ZFC, the Zermelo–Fraenkel set theory with the axiom of choice.
It results from this that the basic mathematical concepts, such as numbers, points, lines, and geometrical spaces are not defined as abstractions from reality but from basic properties (axioms). Their adequation with their physical origins does not belong to mathematics anymore, although their relation with reality is still used for guiding mathematical intuition: physical reality is still used by mathematicians to choose axioms, find which theorems are interesting to prove, and obtain indications of possible proofs.
Ancient Greece
Most civilisations developed some mathematics, mainly for practical purposes, such as counting (merchants), surveying (delimitation of fields), prosody, astronomy, and astrology. It seems that ancient Greek philosophers were the first to study the nature of mathematics and its relation with the real world.
Zeno of Elea (c. 490 – c. 430 BC) produced several paradoxes he used to support his thesis that movement does not exist. These paradoxes involve mathematical infinity, a concept that was outside the mathematical foundations of that time and was not well understood before the end of the 19th century.
The Pythagorean school of mathematics originally insisted that the only numbers are natural numbers and ratios of natural numbers. The discovery (around 5th century BC) that the ratio of the diagonal of a square to its side is not the ratio of two natural numbers was a shock to them which they only reluctantly accepted. A testimony of this is the modern terminology of irrational number for referring to a number that is not the quotient of two integers, since "irrational" means originally "not reasonable" or "not accessible with reason".
The fact that length ratios are not represented by rational numbers was resolved by Eudoxus of Cnidus (408–355 BC), a student of Plato, who reduced the comparison of two irrational ratios to comparisons of integer multiples of the magnitudes involved. His method anticipated that of Dedekind cuts in the modern definition of real numbers by Richard Dedekind (1831–1916).
In the Posterior Analytics, Aristotle (384–322 BC) laid down the logic for organizing a field of knowledge by means of primitive concepts, axioms, postulates, definitions, and theorems. Aristotle took a majority of his examples for this from arithmetic and from geometry, and his logic served as the foundation of mathematics for centuries. This method resembles the modern axiomatic method but with a big philosophical difference: axioms and postulates were supposed to be true, being either self-evident or resulting from experiments, while no other truth than the correctness of the proof is involved in the axiomatic method. So, for Aristotle, a proved theorem is true, while in the axiomatic methods, the proof says only that the axioms imply the statement of the theorem.
Aristotle's logic reached its high point with Euclid's Elements (300 BC), a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates).
Aristotle's syllogistic logic, together with its exemplification by Euclid's Elements, are recognized as scientific achievements of ancient Greece, and remained as the foundations of mathematics for centuries.
Before infinitesimal calculus
During the Middle Ages, Euclid's Elements stood as a perfectly solid foundation for mathematics, and philosophy of mathematics concentrated on the ontological status of mathematical concepts; the question was whether they exist independently of perception (realism) or within the mind only (conceptualism); or even whether they are simply names of collections of individual objects (nominalism).
In the Elements, the only numbers that are considered are natural numbers and ratios of lengths. This geometrical view of non-integer numbers remained dominant until the end of the Middle Ages, although the rise of algebra led to them being considered independently of geometry, which implicitly implies that there are foundational primitives of mathematics. For example, the transformations of equations introduced by Al-Khwarizmi and the cubic and quartic formulas discovered in the 16th century result from algebraic manipulations that have no geometric counterpart.
Nevertheless, this did not challenge the classical foundations of mathematics since all properties of numbers that were used can be deduced from their geometrical definition.
In 1637, René Descartes published La Géométrie, in which he showed that geometry can be reduced to algebra by means of coordinates, which are numbers determining the position of a point. This gave the numbers that he called real numbers a more foundational role (before him, numbers were defined as the ratio of two lengths). Descartes' book became famous after 1649 and paved the way to infinitesimal calculus.
Infinitesimal calculus
Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus for dealing with mobile points (such as planets in the sky) and variable quantities.
This required the introduction of new concepts such as continuous functions, derivatives and limits. For dealing with these concepts in a logical way, they were defined in terms of infinitesimals, hypothetical numbers that are infinitely close to zero. The strong implications of infinitesimal calculus for the foundations of mathematics are illustrated by a pamphlet of the Protestant philosopher George Berkeley (1685–1753), who wrote "[Infinitesimals] are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?".
Also, a lack of rigor has been frequently invoked, because infinitesimals and the associated concepts were not formally defined (lines and planes were not formally defined either, but people were more accustomed to them). Real numbers, continuous functions and derivatives were not formally defined before the 19th century, nor was Euclidean geometry. It is only in the 20th century that a formal definition of infinitesimals was given, with the proof that the whole of infinitesimal calculus can be deduced from them.
Despite its lack of firm logical foundations, infinitesimal calculus was quickly adopted by mathematicians and validated by its numerous applications, in particular the fact that planetary trajectories can be deduced from Newton's law of gravitation.
19th century
In the 19th century, mathematics developed quickly in many directions. Several of the problems that were considered led to questions on the foundations of mathematics. Frequently, the proposed solutions led to further questions that were often simultaneously of philosophical and mathematical nature. All these questions led, at the end of the 19th century and the beginning of the 20th century, to debates which have been called the foundational crisis of mathematics. The following subsections describe the main such foundational problems revealed during the 19th century.
Real analysis
Augustin-Louis Cauchy (1789–1857) started the project of giving rigorous bases to infinitesimal calculus. In particular, he rejected the heuristic principle that he called the generality of algebra, which consisted of applying properties of algebraic operations to infinite sequences without proper proofs. In his Cours d'Analyse (1821), he considered very small quantities, which could presently be called "sufficiently small quantities"; that is, a sentence such as "if x is very small" must be understood as "there is a (sufficiently large) natural number n such that x is smaller than 1/n". In his proofs he used this in a way that predated the modern (ε, δ)-definition of limit.
The modern (ε, δ)-definition of limits and continuous functions was first developed by Bolzano in 1817, but remained relatively unknown, and Cauchy probably did not know of Bolzano's work.
Karl Weierstrass (1815–1897) formalized and popularized the (ε, δ)-definition of limits, and discovered some pathological functions that seemed paradoxical at this time, such as continuous, nowhere-differentiable functions. Indeed, such functions contradict previous conceptions of a function as a rule for computation or a smooth graph.
At this point, the program of arithmetization of analysis (reduction of mathematical analysis to arithmetic and algebraic operations) advocated by Weierstrass was essentially completed, except for two points.
Firstly, a formal definition of real numbers was still lacking. Indeed, beginning with Richard Dedekind in 1858, several mathematicians worked on the definition of the real numbers, including Hermann Hankel, Charles Méray, and Eduard Heine, but it was only in 1872 that two independent complete definitions of real numbers were published: one by Dedekind, by means of Dedekind cuts; the other by Georg Cantor, as equivalence classes of Cauchy sequences.
Several problems were left open by these definitions, which contributed to the foundational crisis of mathematics. Firstly, both definitions suppose that rational numbers, and thus natural numbers, are rigorously defined; this was done a few years later with the Peano axioms. Secondly, both definitions involve infinite sets (Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor's set theory was published several years later.
The third problem is more subtle and is related to the foundations of logic: classical logic is a first-order logic; that is, quantifiers apply to variables representing individual elements, not to variables representing (infinite) sets of elements. The basic completeness property of the real numbers that is required for defining and using real numbers involves a quantification over infinite sets. Indeed, this property may be expressed either as "for every infinite sequence of real numbers, if it is a Cauchy sequence, it has a limit that is a real number", or as "every subset of the real numbers that is bounded above has a least upper bound that is a real number". This need for quantification over infinite sets is one of the motivations for the development of higher-order logics during the first half of the 20th century.
Non-Euclidean geometries
Before the 19th century, there were many failed attempts to derive the parallel postulate from other axioms of geometry. In an attempt to prove that its negation leads to a contradiction, Johann Heinrich Lambert (1728–1777) started to build hyperbolic geometry and introduced the hyperbolic functions and computed the area of a hyperbolic triangle (where the sum of angles is less than 180°).
Continuing the construction of this new geometry, several mathematicians proved independently that if it is inconsistent, then Euclidean geometry is also inconsistent and thus that the parallel postulate cannot be proved. This was proved by Nikolai Lobachevsky in 1826, János Bolyai (1802–1860) in 1832 and Carl Friedrich Gauss (unpublished).
Later in the 19th century, the German mathematician Bernhard Riemann developed elliptic geometry, another non-Euclidean geometry where no parallel can be found and the sum of angles in a triangle is more than 180°. It was proved consistent by defining points as pairs of antipodal points on a sphere (or hypersphere), and lines as great circles on the sphere.
These proofs of the unprovability of the parallel postulate led to several philosophical problems, the main one being that before this discovery, the parallel postulate and all its consequences were considered as true. So, the non-Euclidean geometries challenged the concept of mathematical truth.
Synthetic vs. analytic geometry
Since the introduction of analytic geometry by René Descartes in the 17th century, there were two approaches to geometry, the old one called synthetic geometry, and the new one, where everything is specified in terms of real numbers called coordinates.
Mathematicians did not worry much about the contradiction between these two approaches before the mid-nineteenth century, when there was "an acrimonious controversy between the proponents of synthetic and analytic methods in projective geometry, the two sides accusing each other of mixing projective and metric concepts". Indeed, there is no concept of distance in a projective space, and the cross-ratio, which is a number, is a basic concept of synthetic projective geometry.
Karl von Staudt developed a purely geometric approach to this problem by introducing "throws" that form what is presently called a field, in which the cross ratio can be expressed.
Apparently, the problem of the equivalence between the analytic and synthetic approaches was completely solved only with Emil Artin's book Geometric Algebra, published in 1957. It was well known that, given a field F, one may define affine and projective spaces over F in terms of F-vector spaces. In these spaces, the Pappus hexagon theorem holds. Conversely, if the Pappus hexagon theorem is included in the axioms of a plane geometry, then one can define a field F such that the geometry is the same as the affine or projective geometry over F.
Natural numbers
The work of making real analysis rigorous and of defining the real numbers consisted of reducing everything to rational numbers and thus to natural numbers, since positive rational numbers are fractions of natural numbers. There was therefore a need for a formal definition of the natural numbers, which implies an axiomatic theory of arithmetic. This was started by Charles Sanders Peirce in 1881 and Richard Dedekind in 1888, who defined a natural number as the cardinality of a finite set. However, this involves set theory, which was not formalized at this time.
Giuseppe Peano provided in 1889 a complete axiomatisation based on the ordinal property of the natural numbers. The last of Peano's axioms is the only one that induces logical difficulties, as it begins with either "if S is a set then" or "if φ is a predicate then". So, Peano's axioms induce a quantification over infinite sets, and this means that Peano arithmetic is what is presently called second-order logic.
This was not well understood at that time, but the fact that infinity occurred in the definition of the natural numbers was a problem for many mathematicians of the time. For example, Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act. This applies in particular to the use of the last Peano axiom for showing that the successor function generates all natural numbers. Also, Leopold Kronecker said "God made the integers, all else is the work of man". This may be interpreted as "the integers cannot be mathematically defined".
Infinite sets
Before the second half of the 19th century, infinity was a philosophical concept that did not belong to mathematics. However, with the rise of infinitesimal calculus, mathematicians became accustomed to infinity, mainly through potential infinity, that is, as the result of an endless process, such as the definition of an infinite sequence, an infinite series or a limit. The possibility of an actual infinity was the subject of many philosophical disputes.
Sets, and more specifically infinite sets, were not considered as a mathematical concept; in particular, there was no fixed term for them. A dramatic change arose with the work of Georg Cantor, who was the first mathematician to systematically study infinite sets. In particular, he introduced cardinal numbers that measure the size of infinite sets, and ordinal numbers that, roughly speaking, allow one to continue to count after having reached infinity. One of his major results is the discovery that there are strictly more real numbers than natural numbers (the cardinality of the continuum of the real numbers is greater than that of the natural numbers).
These results were rejected by many mathematicians and philosophers, and led to debates that are a part of the foundational crisis of mathematics.
The crisis was amplified by Russell's paradox, which asserts that the phrase "the set of all sets that do not contain themselves" is self-contradictory. This contradiction introduced a doubt about the consistency of all of mathematics.
With the introduction of the Zermelo–Fraenkel set theory () and its adoption by the mathematical community, the doubt about the consistency was essentially removed, although consistency of set theory cannot be proved because of Gödel's incompleteness theorem.
Mathematical logic
In 1847, De Morgan published his laws and George Boole devised an algebra, now called Boolean algebra, that allows expressing Aristotle's logic in terms of formulas and algebraic operations. Boolean algebra is the starting point of the mathematization of logic and the basis of propositional calculus.
Independently, in the 1870s, Charles Sanders Peirce and Gottlob Frege extended propositional calculus by introducing quantifiers, thus building predicate logic.
Frege pointed out three desired properties of a logical theory: consistency (impossibility of proving contradictory statements), completeness (any statement is either provable or refutable, that is, its negation is provable), and decidability (there is a decision procedure to test every statement).
Near the turn of the century, Bertrand Russell popularized Frege's work and discovered Russell's paradox, which implies that the phrase "the set of all sets that do not contain themselves" is self-contradictory. This paradox seemed to make the whole of mathematics inconsistent and is one of the major causes of the foundational crisis of mathematics.
Foundational crisis
The foundational crisis of mathematics arose at the end of the 19th century and the beginning of the 20th century with the discovery of several paradoxes or counter-intuitive results.
The first one was the proof that the parallel postulate cannot be proved. This results from a construction of a non-Euclidean geometry inside Euclidean geometry, whose inconsistency would imply the inconsistency of Euclidean geometry. A well known paradox is Russell's paradox, which shows that the phrase "the set of all sets that do not contain themselves" is self-contradictory. Other philosophical problems were the proof of the existence of mathematical objects that cannot be computed or explicitly described, and the proof of the existence of theorems of arithmetic that cannot be proved with Peano arithmetic.
Several schools of philosophy of mathematics were challenged with these problems in the 20th century, and are described below.
These problems were also studied by mathematicians, and this led to establish mathematical logic as a new area of mathematics, consisting of providing mathematical definitions to logics (sets of inference rules), mathematical and logical theories, theorems, and proofs, and of using mathematical methods to prove theorems about these concepts.
This led to unexpected results, such as Gödel's incompleteness theorems, which, roughly speaking, assert that, if a theory contains the standard arithmetic, it cannot be used to prove that it itself is not self-contradictory; and, if it is not self-contradictory, there are theorems that cannot be proved inside the theory, but are nevertheless true in some technical sense.
Zermelo–Fraenkel set theory with the axiom of choice (ZFC) is a logical theory established by Ernst Zermelo and Abraham Fraenkel. It became the standard foundation of modern mathematics, and, unless the contrary is explicitly specified, it is used in all modern mathematical texts, generally implicitly.
Simultaneously, the axiomatic method became a de facto standard: the proof of a theorem must result from explicit axioms and previously proved theorems by the application of clearly defined inference rules. The axioms need not correspond to some reality. Nevertheless, it is an open philosophical problem to explain why the axiom systems that lead to rich and useful theories are those resulting from abstraction from the physical reality or other mathematical theory.
In summary, the foundational crisis is essentially resolved, although this opens new philosophical problems. In particular, it cannot be proved that the new foundation (ZFC) is not self-contradictory. There is a general consensus that, if this were to happen, the problem could be solved by a mild modification of ZFC.
Philosophical views
When the foundational crisis arose, there was much debate among mathematicians and logicians about what should be done for restoring confidence in mathematics. This involved philosophical questions about mathematical truth, the relationship of mathematics with reality, the reality of mathematical objects, and the nature of mathematics.
For the problem of foundations, there were two main options for trying to avoid paradoxes. The first led to intuitionism and constructivism, and consisted of restricting the logical rules so as to remain closer to intuition, while the second, which has been called formalism, considers that a theorem is true if it can be deduced from axioms by applying inference rules (formal proof), and that no "trueness" of the axioms is needed for the validity of a theorem.
Formalism
It has been claimed that formalists, such as David Hilbert (1862–1943), hold that mathematics is only a language and a series of games. Hilbert insisted that formalism, called "formula game" by him, is a fundamental part of mathematics, but that mathematics must not be reduced to formalism. Indeed, he used the words "formula game" in his 1927 response to L. E. J. Brouwer's criticisms.
Thus Hilbert is insisting that mathematics is not an arbitrary game with arbitrary rules; rather it must agree with how our thinking, and then our speaking and writing, proceeds.
The foundational philosophy of formalism, as exemplified by David Hilbert, is a response to the paradoxes of set theory, and is based on formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, in this view, is represented by the fact that the statement can be derived from the axioms of set theory using the rules of formal logic.
Merely the use of formalism alone does not explain several issues: why we should use the axioms we do and not some others, why we should employ the logical rules we do and not some others, why "true" mathematical statements (e.g., the laws of arithmetic) appear to be true, and so on. Hermann Weyl posed these very questions to Hilbert.
In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory. As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this arguably has already been settled with several proofs of consistency, but there is debate over whether or not they are sufficiently finitary to be meaningful. Gödel's second incompleteness theorem establishes that logical systems of arithmetic can never contain a valid proof of their own consistency. What Hilbert wanted to do was prove a logical system S was consistent, based on principles P that only made up a small part of S. But Gödel proved that the principles P could not even prove P to be consistent, let alone S.
Intuitionism
Intuitionists, such as L. E. J. Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy tale characters, are merely mental entities, which would not exist if there were never any human minds to think about them.
The foundational philosophy of intuitionism or constructivism, as exemplified in the extreme by Brouwer and Stephen Kleene, requires proofs to be "constructive" in nature: the existence of an object must be demonstrated rather than inferred from a demonstration of the impossibility of its non-existence. As a consequence, the form of proof known as reductio ad absurdum is suspect.
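A textbook illustration of the difference (given here only as an example, not taken from the sources discussed in this article) is the classical proof that there exist irrational numbers a and b with a^b rational. Let x = \sqrt{2}^{\sqrt{2}}. If x is rational, take a = b = \sqrt{2}; if x is irrational, then

\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2,

so take a = \sqrt{2}^{\sqrt{2}} and b = \sqrt{2}. The argument uses the law of excluded middle (x is rational or it is not) without deciding which case actually holds, so it exhibits no specific pair (a, b); intuitionists therefore do not accept it as a proof of existence.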
Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective outside construct. The matter remains controversial.
Logicism
Logicism is a school of thought, and research programme, in the philosophy of mathematics, based on the thesis that mathematics is an extension of logic or that some or all mathematics may be derived in a suitable formal system whose axioms and rules of inference are 'logical' in nature. Bertrand Russell and Alfred North Whitehead championed this theory initiated by Gottlob Frege and influenced by Richard Dedekind.
Set-theoretic Platonism
Many researchers in axiomatic set theory have subscribed to what is known as set-theoretic Platonism, exemplified by Kurt Gödel.
Several set theorists followed this approach and actively searched for axioms that may be considered as true for heuristic reasons and that would decide the continuum hypothesis. Many large cardinal axioms were studied, but the hypothesis always remained independent from them and it is now considered unlikely that CH can be resolved by a new large cardinal axiom. Other types of axioms were considered, but none of them has reached consensus on the continuum hypothesis yet. Recent work by Hamkins proposes a more flexible alternative: a set-theoretic multiverse allowing free passage between set-theoretic universes that satisfy the continuum hypothesis and other universes that do not.
Indispensability argument for realism
This argument by Willard Quine and Hilary Putnam holds that, since mathematics is indispensable to our best scientific theories, one should accept the existence of the mathematical entities those theories quantify over.
However, Putnam was not a Platonist.
Rough-and-ready realism
Few mathematicians are typically concerned, on a daily working basis, with logicism, formalism or any other philosophical position. Instead, their primary concern is that the mathematical enterprise as a whole always remains productive. Typically, they see this as ensured by remaining open-minded, practical and busy, and as potentially threatened by becoming overly ideological, fanatically reductionistic, or lazy.
Such a view has also been expressed by some well-known physicists, for example the Physics Nobel laureate Richard Feynman and Steven Weinberg.
Weinberg believed that any undecidability in mathematics, such as the continuum hypothesis, could be potentially resolved despite the incompleteness theorem, by finding suitable further axioms to add to set theory.
Philosophical consequences of Gödel's completeness theorem
Gödel's completeness theorem establishes an equivalence in first-order logic between the formal provability of a formula and its truth in all possible models. Precisely, for any consistent first-order theory it gives an "explicit construction" of a model described by the theory; this model will be countable if the language of the theory is countable. However this "explicit construction" is not algorithmic. It is based on an iterative process of completion of the theory, where each step of the iteration consists in adding a formula to the axioms if it keeps the theory consistent; but this consistency question is only semi-decidable (an algorithm is available to find any contradiction but if there is none this consistency fact can remain unprovable).
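Stated compactly (standard notation, added for illustration): for a first-order theory T and a sentence \varphi in its language,

T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi,

that is, \varphi is formally provable from T exactly when \varphi holds in every model of T. The model-existence form used in the construction described above is equivalent: every consistent first-order theory has a model, which can be taken to be countable when the language is countable.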
More paradoxes
The following lists some notable results in metamathematics. Zermelo–Fraenkel set theory is the most widely studied axiomatization of set theory. It is abbreviated ZFC when it includes the axiom of choice and ZF when the axiom of choice is excluded.
1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property.
1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements.
1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. It showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system (such as one able to axiomatize the elementary theory of arithmetic on the infinite set of natural numbers), a statement that formally expresses its own unprovability, which he then proved equivalent to the claim of the consistency of the theory; so that (assuming the consistency is true) the system is not powerful enough to prove its own consistency, let alone for a simpler system to do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a weaker system than the system whose consistency it was supposed to prove).
1936: Alfred Tarski proved his truth undefinability theorem.
1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis.
1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem).
1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.
1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
1964: Inspired by the fundamental randomness in physics, Gregory Chaitin starts publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics).
1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.
1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.
1971: Suslin's problem is proven to be independent from ZFC.
Toward resolution of the crisis
Starting in 1935, the Bourbaki group of French mathematicians began publishing a series of books to formalize many areas of mathematics on the new foundation of set theory.
The intuitionistic school did not attract many adherents, and it was not until Bishop's work in 1967 that constructive mathematics was placed on a sounder footing.
One may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, satisfying ourselves with lower requirements than Hilbert's original ambitions. His ambitions were expressed in a time when nothing was clear: it was not clear whether mathematics could have a rigorous foundation at all.
There are many possible variants of set theory, which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we do not have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF.
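A standard concrete instance of this pattern (stated here for illustration): since an inaccessible cardinal \kappa yields a model V_\kappa of ZFC, one has

\mathrm{ZFC} + \text{"there exists an inaccessible cardinal"} \;\vdash\; \mathrm{Con}(\mathrm{ZFC}),

while, by Gödel's second incompleteness theorem, this stronger theory still cannot prove its own consistency (assuming it is in fact consistent).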
In practice, most mathematicians either do not work from axiomatic systems, or if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway, and in those branches in which they do or whose formalization attempts would run the risk of forming inconsistent theories (such as logic and category theory), they may be treated carefully.
The development of category theory in the middle of the 20th century showed the usefulness of set theories guaranteeing the existence of larger classes than does ZFC, such as Von Neumann–Bernays–Gödel set theory or Tarski–Grothendieck set theory, albeit that in very many cases the use of large cardinal axioms or Grothendieck universes is formally eliminable.
One goal of the reverse mathematics program is to identify whether there are areas of "core mathematics" in which foundational issues may again provoke a crisis.
See also
Aristotelian realist philosophy of mathematics
Mathematical logic
Brouwer–Hilbert controversy
Church–Turing thesis
Controversy over Cantor's theory
Epistemology
Euclid's Elements
Hilbert's problems
Implementation of mathematics in set theory
Liar paradox
New Foundations
Philosophy of mathematics
Principia Mathematica
Quasi-empiricism in mathematics
Mathematical thought of Charles Peirce
Notes
References
Avigad, Jeremy (2003) Number theory and elementary arithmetic, Philosophia Mathematica Vol. 11, pp. 257–284
Eves, Howard (1990), Foundations and Fundamental Concepts of Mathematics, Third Edition, Dover Publications, Inc., Mineola, NY; cf. §9.5 Philosophies of Mathematics, pp. 266–271. Eves lists the three with short descriptions prefaced by a brief introduction.
Goodman, N.D. (1979), "Mathematics as an Objective Science", in Tymoczko (ed., 1986).
Hart, W.D. (ed., 1996), The Philosophy of Mathematics, Oxford University Press, Oxford, UK.
Hersh, R. (1979), "Some Proposals for Reviving the Philosophy of Mathematics", in (Tymoczko 1986).
Hilbert, D. (1922), "Neubegründung der Mathematik. Erste Mitteilung", Hamburger Mathematische Seminarabhandlungen 1, 157–177. Translated, "The New Grounding of Mathematics. First Report", in (Mancosu 1998).
Katz, Robert (1964), Axiomatic Analysis, D. C. Heath and Company.
Kleene, S. C. (1952), Introduction to Metamathematics. In Chapter III, A Critique of Mathematical Reasoning, §11 The paradoxes, Kleene discusses Intuitionism and Formalism in depth. Throughout the rest of the book he treats, and compares, both Formalist (classical) and Intuitionist logics with an emphasis on the former. Extraordinary writing by an extraordinary mathematician.
Mancosu, P. (ed., 1998), From Hilbert to Brouwer. The Debate on the Foundations of Mathematics in the 1920s, Oxford University Press, Oxford, UK.
Putnam, Hilary (1967), "Mathematics Without Foundations", Journal of Philosophy 64/1, 5–22. Reprinted, pp. 168–184 in W.D. Hart (ed., 1996).
—, "What is Mathematical Truth?", in Tymoczko (ed., 1986).
Troelstra, A. S. (no date but later than 1990), "A History of Constructivism in the 20th Century", A detailed survey for specialists: §1 Introduction, §2 Finitism & §2.2 Actualism, §3 Predicativism and Semi-Intuitionism, §4 Brouwerian Intuitionism, §5 Intuitionistic Logic and Arithmetic, §6 Intuitionistic Analysis and Stronger Theories, §7 Constructive Recursive Mathematics, §8 Bishop's Constructivism, §9 Concluding Remarks. Approximately 80 references.
Tymoczko, T. (1986), "Challenging Foundations", in Tymoczko (ed., 1986).
—,(ed., 1986), New Directions in the Philosophy of Mathematics, 1986. Revised edition, 1998.
van Dalen D. (2008), "Brouwer, Luitzen Egbertus Jan (1881–1966)", in Biografisch Woordenboek van Nederland. URL:http://www.inghist.nl/Onderzoek/Projecten/BWN/lemmata/bwn2/brouwerle [2008-03-13]
Weyl, H. (1921), "Über die neue Grundlagenkrise der Mathematik", Mathematische Zeitschrift 10, 39–79. Translated, "On the New Foundational Crisis of Mathematics", in (Mancosu 1998).
Wilder, Raymond L. (1952), Introduction to the Foundations of Mathematics, John Wiley and Sons, New York, NY.
External links
Logic and Mathematics
Foundations of Mathematics: past, present, and future, May 31, 2000, 8 pages.
A Century of Controversy over the Foundations of Mathematics by Gregory Chaitin.
Mathematical logic
History of mathematics
Philosophy of mathematics | Foundations of mathematics | ["Mathematics"] | 7,947 | ["Mathematical logic", "nan", "Foundations of mathematics"] |
169,398 | https://en.wikipedia.org/wiki/The%20Atomic%20Cafe | The Atomic Cafe is a 1982 American documentary film directed by Kevin Rafferty, Jayne Loader and Pierce Rafferty. It is a compilation of clips from newsreels, military training films, and other footage produced in the United States early in the Cold War on the subject of nuclear warfare. Without any narration, the footage is edited and presented in a manner to demonstrate how misinformation and propaganda was used by the U.S. government and popular culture to ease fears about nuclear weapons among the American public.
In 2016, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant."
Synopsis
The film covers the beginnings of the era of nuclear warfare, created from a broad range of archival material from the 1940s, 1950s and early 1960s including newsreel clips, television news footage, U.S. government-produced films (including military training films), advertisements, television and radio programs. News footage reflected the prevailing understanding of the media and public. The film covers both the impact of the atomic bomb on popular culture and daily life, as well as documents the military's increasing fascination with carrying out more and more dangerous tests. The film opens with footage of the Trinity Test and concludes with a montage of stock footage simulating a nuclear attack on the United States.
Though the topic of atomic holocaust is a grave matter, much of the humor derives from the modern audience's reaction to old training films, such as the Duck and Cover film shown in schools. Another sequence involves footage of US Army training maneuvers in which soldiers are instructed to walk into a mushroom cloud as part of an exercise to study how efficiently the armed forces could kill the survivors of a nuclear bomb strike if Soviet Soldiers ever made it to US soil; prior to the beginning of the exercise, the soldiers are informed, "Viewed from a safe distance, the atomic bomb is one of the most beautiful sights ever seen by man."
People shown
The following people are shown in excerpts from speeches, interviews and news reports, along with several unnamed actors, civilians, members of the armed forces and narrators: Lloyd Bentsen, William H. P. Blandy, Owen Brewster, Frank Gallop, Lyndon Johnson, Maurice Joyce, Nikita Khrushchev, Brien McMahon, Seymour Melman, George Molan, Richard Nixon, Robert E. Stripling, Val Peterson, George Portell, Bill Burrud, George Putnam, Ronald Reagan, Dwight D. Eisenhower, Joseph Stalin, Douglas MacArthur,
Ethel Rosenberg, Julius Rosenberg, Mario Salvadori, Lewis Strauss, Paul Tibbets, Kermit Beahan, Harry S. Truman, and James E. Van Zandt.
Historical context
The Atomic Cafe, referred to as a "compilation verite" with no "voice of God" narration or any recently shot footage, was released at the height of nostalgia and cynicism in America. By 1982, Americans had lost much of their faith in their government following the Vietnam War and the Watergate scandal of the previous decade, alongside the seemingly never-ending arms race with the Soviet Union. The Atomic Cafe reflects and reinforces this mood as it exposes how the atomic bomb's dangers were downplayed and how the government used films to shape public opinion. Loader, who grew up in 1950s–60s Fort Worth, Texas, living across the street from E.O. "Soapy" Gillam, better known as the "bomb shelter king of North Texas", and who remembered that one of her friends used her family's bunker as a clubhouse and secret party spot, felt compelled to revisit the era that formed her childhood.
The Atomic Cafe was also released during the Reagan administration's civil defense revival. Barry Posen and Stephen Van Evera explain this revival in their article "Defense Policy and the Reagan Administration: Departure from Containment" published in International Security. They argue that in 1981–82 the Reagan administration was moving from an essentially defensive grand strategy of containment to a more offensive strategy. Due to the greater demands of its more offensive strategy "the Reagan Administration ... proposed the biggest military buildup since the Korean War." Of key relevance to The Atomic Cafe, the Reagan move toward offense included the adoption of a more aggressive nuclear strategy that required a large U.S. nuclear buildup. Containment only required that U.S. strategic nuclear forces be capable of one mission: inflicting unacceptable damage on the Soviet Union even after absorbing an all-out Soviet surprise attack. To this "assured destruction" mission the Reagan administration added a second "counterforce" mission, which required the capacity to launch a nuclear first strike against Soviet strategic nuclear forces that would leave the Soviets unable to inflict unacceptable damage on the U.S. in retaliation. The U.S. had always invested in counterforce but the Reagan administration put even greater emphasis on it. The counterforce mission was far more demanding than the assured destruction mission, and required a vast expansion of U.S. nuclear forces to fulfill. Civil defense was a component of a counterforce strategy, as it reduced Soviet retaliatory capacity, hence civil defense was a candidate for more spending under Reagan's counterforce nuclear strategy. Posen and Van Evera argue that this counterforce strategy was a warrant for an open-ended U.S. nuclear buildup.
Bob Mielke, in "Rhetoric and Ideology in the Nuclear Test Documentary" (Film Quarterly) discusses the release of The Atomic Cafe: "This satire feature was released at the height of the nuclear freeze movement (which was in turn responding to the Reagan administration's surreal handling of the arms race.)"
In "Atomic Café" (Film Quarterly), Fred Glass points out that the technical and cultural background needed to create the film was not available in 1955. The film's themes, critical of government propaganda and the nuclear arms race, would have been seen as unpatriotic during the McCarthy era, and obtaining the necessary permits and funding to make Atomic Café would have been quite difficult.
Patricia Aufderheide, in Documentary Film: A Very Short Introduction touches on the significance of The Atomic Cafe as a window into the past of government propaganda and disinformation during the years following the advent of the atomic bomb. Propaganda, also known as disinformation, public diplomacy, and strategic communication, continues to be an important tool for governments. But stand-alone documentary is no longer an important part of public relations campaigns aimed at the general public.
It has also been described as a postmodernist film.
Production
The Atomic Cafe was produced over a five-year period through the collaborative efforts of three directors: Jayne Loader and brothers Kevin and Pierce Rafferty. For this film, the Rafferty brothers and Loader formed a production company called The Archives Project. The filmmakers opted not to use narration. Instead, they deployed carefully constructed sequences of film clips to make their points. Jayne Loader has referred to The Atomic Cafe as "compilation verite": a compilation film with no "Voice of God" narration and no new footage added by the filmmakers. The soundtrack utilizes atomic-themed songs from the Cold War era to underscore the themes of the film.
The film cost $300,000 to make. The group did receive some financial support from outside sources, including the Film Fund, a New York City based non-profit. Grants comprised a nominal amount of the team's budget, and the film was largely funded by the filmmakers themselves. Jayne Loader stated in an interview, "Had we relied on grants, we would have starved." Pierce Rafferty helped to support the team and the film financially by working as a consultant and researcher on several other documentary films including the Oscar-nominated El Salvador: Another Vietnam, the Oscar-nominated With Babies and Banners, and The Life and Times of Rosie the Riveter (which also was inducted into the National Film Registry). The Rafferty brothers had also received an inheritance that they used to support the team during the five years it took to make the film. About 75% of the film is made up of government materials that were in the public domain. Though they could use those public domain materials for free, they had to make copies of the films at their own expense. This along with the newsreel and commercial stock footage that comprises the other 25% of the film (along with the music royalties) represents the bulk of the trio's expenditures.
Release
The film was released on March 17, 1982, in New York, New York. In August 1982, a tie-in companion book of the same name, written by Kevin Rafferty, Jayne Loader and Pierce Rafferty was released by Bantam Books. A 4K digital restoration of the film, created by IndieCollect, premiered at SXSW in 2018.
Home media
The 20th Anniversary Edition of the film was released in DVD format in Region 1 on March 26, 2002, by Docudrama. A 4K restored version was released on Blu-ray on December 4, 2018, by Kino Lorber.
In 1995, Jayne Loader's Public Shelter, an educational CD-ROM and website – with clips from The Atomic Cafe, plus additional material from declassified films, audio, photographs, and text files that archive the history, technology, and culture of the Atomic Age – was released by EJL Productions, a company formed by Jayne Loader and her first husband, Eric Schwaab. Though it garnered positive national reviews and awards, the self-distributed Public Shelter CD-ROM sold only 500 copies and failed to find a national publisher. Loader and Schwaab divorced. The website folded in 1999.
Reception and legacy
Critical response
When The Atomic Cafe was released, film critic Roger Ebert discussed the style and methods the filmmakers used, writing, "The makers of The Atomic Cafe sifted through thousands of feet of Army films, newsreels, government propaganda films and old television broadcasts to come up with the material in their film, which is presented without any narration, as a record of some of the ways in which the bomb entered American folklore. There are songs, speeches by politicians, and frightening documentary footage of guinea-pig American troops shielding themselves from an atomic blast and then exposing themselves to radiation neither they nor their officers understood." He also reviewed it with Gene Siskel who saw it more as a piece of Americana and a curiosity.
Critic Vincent Canby of the New York Times praised the film, calling the film "a devastating collage-film that examines official and unofficial United States attitudes toward the atomic age" and a film that "deserves national attention." Canby was so taken by The Atomic Cafe that he mentioned it in a subsequent article – comparing it, favorably, to the 1981 blockbuster Porky's.
Critic Glenn Erickson discussed the editorial message of the film's producers: The makers of The Atomic Cafe clearly have a message to get across, and to achieve that goal they use the inherent absurdity of their source material in creative ways. But they're careful to make sure they leave them essentially untransformed. When we see Nixon and J. Edgar Hoover posing with a strip of microfilm, we know we're watching a newsreel. The content isn't cheated. Except in wrapup montages, narration from one source isn't used over another. When raw footage is available, candid moments are seen of speechmakers (including President Truman) when they don't know the cameras are rolling. Caught laughing incongruously before a solemn report on an atom threat, Truman comes off as callously flip ...
On Rotten Tomatoes the film has an approval rating of 93% based on reviews from 29 critics.
Deirdre Boyle, an Associate Professor and Academic Coordinator of the Graduate Certificate in Documentary Media Studies at The New School and an author of Subject to Change: Guerrilla Television Revisited, claimed that "By compiling propaganda or fictions denying 'nuclear-truth', The Atomic Cafe reveals the American public's lack of resistance to the fear generated by the government propaganda films and the misinformation they generated. Whether Americans of the time lacked the ability to resist or reject this misinformation about the atomic bomb is a debatable truth."
The Oxford Handbook of Science Fiction described it, on account of its editing, as a "mockumentary" and called it "the most powerful satire of the official treatments of the atomic age".
Influences
In 2016, The Atomic Cafe was one of the 25 films selected for preservation in the annual United States' National Film Registry of the Library of Congress for being deemed "culturally, historically, or aesthetically significant". The press release for the Registry stated that "The influential film compilation 'Atomic Cafe' provocatively documents the post-World War II threat of nuclear war as depicted in a wide assortment of archival footage from the period ..."
Controversial documentary filmmaker Michael Moore was so inspired by the film that he tweeted:
"This is the movie that told me that a documentary about a deadly serious subject could be very funny. Then I asked the people who made it to teach me how to do it. They did. That movie became my first – 'Roger & Me'."
Accolades
Wins
Boston Society of Film Critics: BSFC Award, Best Documentary; 1983.
Nomination
British Academy Film Awards: Flaherty Documentary Award, Kevin Rafferty, Jayne Loader and Pierce Rafferty; 1983.
Soundtrack
Atomic Cafe: Radioactive Rock 'n Roll, Blues, Country & Gospel is the soundtrack to the 1982 film The Atomic Cafe. A vinyl LP record was released in 1982 by Rounder Records. Some of the credits for the record include: co-produced by Charles Wolfe, The Archives Project (Jayne Loader, Kevin Rafferty and Pierce Rafferty), album cover artwork by Dennis Pohl, cover design by Mel Green, and booklet text by Charles Wolfe.
Track listing
Featured in the film but not the soundtrack were "13 Women" by Bill Haley and His Comets, Glenn Miller's version of "Flying Home", a couple of themes from Miklos Rozsa, Arthur Fiedler's take on Franz Liszt's Hungarian Rhapsody No. 2, Charles Mackerras's interpretation of "The Old Castle" from Pictures at an Exhibition and Floyd Tillman's original 1948 version of "This Cold War with You" that was heard during the credits.
See also
Atomic Age
Bruce Conner – an experimental collage filmmaker whose similar work inspired the filmmakers
Emile de Antonio – documentary filmmaker (which also inspired the co-directors) known for Point of Order, a 1964 study on Joseph McCarthy and the Army–McCarthy hearings
Culture during the Cold War
Duck and cover
How to Photograph an Atomic Bomb
List of films about nuclear issues
Nuclear weapons in popular culture
Dr. Strangelove – the 1964 Stanley Kubrick classic to which critics compared The Atomic Cafe.
Reefer Madness – the 1936 cult classic to which critics also compared it.
Fallout – the video game series featuring Atomic Age aesthetics
United States in the 1950s
References
Notes
External links
"The Atomic Cafe", an essay by John Mills on the National Film Registry site
Interview with filmmaker Jayne Loader about The Atomic Cafe
Homepage
1982 films
1982 documentary films
American documentary films
Cold War films
American collage films
Documentary films about nuclear war and weapons
1982 independent films
American independent films
Nuclear warfare
Compilation films
United States National Film Registry films
1980s English-language films
1980s American films
Postmodern films
Documentary films about the atomic bombings of Hiroshima and Nagasaki
1980s satirical films
English-language documentary films
English-language independent films | The Atomic Cafe | ["Chemistry"] | 3,187 | ["Radioactivity", "Nuclear warfare"] |
169,448 | https://en.wikipedia.org/wiki/Membrane%20topology | Topology of a transmembrane protein refers to locations of N- and C-termini of membrane-spanning polypeptide chain with respect to the inner or outer sides of the biological membrane occupied by the protein.
Several databases provide experimentally determined topologies of membrane proteins. They include Uniprot, TOPDB, OPM, and ExTopoDB. There is also a database of domains located conservatively on a certain side of membranes, TOPDOM.
Several computational methods were developed, with limited success, for predicting transmembrane alpha-helices and their topology. Early methods utilized the fact that membrane-spanning regions contain more hydrophobic residues than other parts of the protein; however, applying different hydrophobicity scales altered the prediction results. Later, several statistical methods were developed to improve the topography prediction, and a special alignment method was introduced. According to the positive-inside rule, cytosolic loops near the lipid bilayer contain more positively charged amino acids. Applying this rule resulted in the first topology prediction methods. There is also a negative-outside rule in transmembrane alpha-helices from single-pass proteins, although negatively charged residues are rarer than positively charged residues in transmembrane segments of proteins. As more structures were determined, machine learning algorithms appeared. Supervised learning methods are trained on a set of experimentally determined structures; however, these methods depend strongly on the training set. Unsupervised learning methods are based on the principle that topology depends on the maximum divergence of the amino acid distributions in different structural parts. It was also shown that locking a segment location based on prior knowledge about the structure improves the prediction accuracy, and this feature has been added to some of the existing prediction methods. The most recent methods use consensus prediction (i.e. they use several algorithms to determine the final topology) and automatically incorporate previously determined experimental information. The HTP database provides a collection of computationally predicted topologies for human transmembrane proteins.
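The hydrophobicity-based approach can be illustrated with a minimal sliding-window hydropathy scan (an illustrative sketch only, not one of the published predictors; the function name, the 19-residue window, and the cutoff of 1.6 are conventional but arbitrary choices here, and the dictionary holds the commonly quoted Kyte–Doolittle hydropathy values):

# Minimal sliding-window hydropathy scan for candidate transmembrane helices.
# Illustrative only; real predictors use statistics, alignments or machine learning.
KD = {  # approximate Kyte-Doolittle hydropathy values
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}

def candidate_tm_segments(seq, window=19, cutoff=1.6):
    """Return (start, end) ranges whose mean hydropathy exceeds the cutoff."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD.get(aa, 0.0) for aa in seq[i:i + window]) / window
        if score >= cutoff:
            hits.append((i, i + window))
    merged = []  # merge overlapping windows into contiguous candidate helices
    for start, end in hits:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return merged

# Hypothetical sequence with a hydrophobic stretch flanked by charged residues.
print(candidate_tm_segments("MKKTA" + "LLIVAVLLLGAVLLAIAGL" + "RRKDE"))

A real predictor would additionally orient the candidate helices, for example by applying the positive-inside rule to the flanking loops.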
Discrimination between signal peptides and transmembrane segments is an additional problem in topology prediction, treated with limited success by different methods. Both signal peptides and transmembrane segments contain hydrophobic regions which form α-helices. This causes cross-prediction between them, which is a weakness of many transmembrane topology predictors. By predicting signal peptides and transmembrane helices simultaneously (Phobius), the errors caused by cross-prediction are reduced and the performance is substantially increased. Another feature used to increase prediction accuracy is homology (PolyPhobius).
It is also possible to predict beta-barrel membrane proteins' topology.
See also
Endomembrane system
Integral membrane protein
Protein topology
Structural biology
Transmembrane domain
References
Integral membrane proteins
Membrane biology
Molecular topology | Membrane topology | ["Chemistry", "Mathematics"] | 577 | ["Membrane biology", "Molecular topology", "Topology", "Molecular biology"] |
169,493 | https://en.wikipedia.org/wiki/Petal | Petals are modified leaves that form an inner whorl surrounding the reproductive parts of flowers. They are often brightly coloured or unusually shaped to attract pollinators. All of the petals of a flower are collectively known as the corolla. Petals are usually surrounded by an outer whorl of modified leaves called sepals, that collectively form the calyx and lie just beneath the corolla. The calyx and the corolla together make up the perianth, the non-reproductive portion of a flower. When the petals and sepals of a flower are difficult to distinguish, they are collectively called tepals. Examples of plants in which the term tepal is appropriate include genera such as Aloe and Tulipa. Conversely, genera such as Rosa and Phaseolus have well-distinguished sepals and petals. When the undifferentiated tepals resemble petals, they are referred to as "petaloid", as in petaloid monocots, orders of monocots with brightly coloured tepals. Since they include Liliales, an alternative name is lilioid monocots.
Although petals are usually the most conspicuous parts of animal-pollinated flowers, wind-pollinated species, such as the grasses, either have very small petals or lack them entirely (apetalous).
Corolla
The collection of all petals in a flower is referred to as the corolla. The role of the corolla in plant evolution has been studied extensively since Charles Darwin postulated a theory of the origin of elongated corollae and corolla tubes.
A corolla of separate petals, without fusion of individual segments, is apopetalous. If the petals are free from one another in the corolla, the plant is polypetalous or choripetalous; while if the petals are at least partially fused, it is gamopetalous or sympetalous. In the case of fused tepals, the term is syntepalous. The corolla in some plants forms a tube.
Variations
Petals can differ dramatically in different species. The number of petals in a flower may hold clues to a plant's classification. For example, flowers on eudicots (the largest group of dicots) most frequently have four or five petals while flowers on monocots have three or six petals, although there are many exceptions to this rule.
The petal whorl or corolla may be either radially or bilaterally symmetrical. If all of the petals are essentially identical in size and shape, the flower is said to be regular or actinomorphic (meaning "ray-formed"). Many flowers are symmetrical in only one plane (i.e., symmetry is bilateral) and are termed irregular or zygomorphic (meaning "yoke-" or "pair-formed"). In irregular flowers, other floral parts may be modified from the regular form, but the petals show the greatest deviation from radial symmetry. Examples of zygomorphic flowers may be seen in orchids and members of the pea family.
In many plants of the aster family such as the sunflower, Helianthus annuus, the circumference of the flower head is composed of ray florets. Each ray floret is anatomically an individual flower with a single large petal. Florets in the centre of the disc typically have no or very reduced petals. In some plants such as Narcissus, the lower part of the petals or tepals are fused to form a floral cup (hypanthium) above the ovary, and from which the petals proper extend.
A petal often consists of two parts: the upper broader part, similar to a leaf blade, also called the blade; and the lower narrower part, similar to a leaf petiole, called the claw, separated from each other at the limb. Claws are distinctly developed in petals of some flowers of the family Brassicaceae, such as Erysimum cheiri.
The inception and further development of petals show a great variety of patterns. Petals of different species of plants vary greatly in colour or colour pattern, both in visible light and in ultraviolet. Such patterns often function as guides to pollinators and are variously known as nectar guides, pollen guides, and floral guides.
Genetics
The genetics behind the formation of petals, in accordance with the ABC model of flower development, are that sepals, petals, stamens, and carpels are modified versions of each other. It appears that the mechanisms to form petals evolved very few times (perhaps only once), rather than evolving repeatedly from stamens.
Significance of pollination
Pollination is an important step in the sexual reproduction of higher plants. Pollen is produced by the male flower or by the male organs of hermaphroditic flowers.
Pollen does not move on its own and thus requires wind or animal pollinators to disperse it to the stigma of the same or nearby flowers. However, pollinators are rather selective about which flowers they choose to pollinate. This creates competition between flowers, and as a result flowers must provide incentives to appeal to pollinators (unless the flower self-pollinates or is wind-pollinated). Petals play a major role in competing to attract pollinators; in this way pollen dispersal can occur and the survival of many species of flowers is sustained.
Functions and purposes
Petals have various functions and purposes depending on the type of plant. In general, petals operate to protect some parts of the flower and attract/repel specific pollinators.
Function
The petals are positioned on the flower in the whorl called the corolla; for example, the buttercup has shiny yellow petals which carry guidelines that direct the pollinator towards the nectar. Pollinators are able to single out specific flowers to pollinate. Using incentives, flowers draw pollinators into a mutual relationship, in which the pollinators remember to guard and pollinate these flowers (unless the incentives are not consistently met and competition prevails).
Scent
The petals may produce different scents to attract desirable pollinators or repel undesirable ones. Some flowers also mimic the scents produced by materials such as decaying meat, to attract pollinators to them.
Colour
Petals use various colour traits to attract pollinators that have a poor sense of smell, or that only come out at certain times of day. Some flowers can change the colour of their petals as a signal to their pollinators to approach or keep away.
Shape and size
Furthermore, the shape and size of the flower and its petals are important in selecting the type of pollinators the plant needs. For example, large petals and flowers will attract pollinators from a greater distance, or pollinators that are themselves large.
Collectively, the scent, colour, and shape of petals all play a role in attracting/repelling specific pollinators and providing suitable conditions for pollinating. Some pollinators include insects, birds, bats, and wind.
In some petals, a distinction can be made between a lower narrowed, stalk-like basal part referred to as the claw, and a wider distal part referred to as the blade (or limb). Often, the claw and blade are at an angle with one another.
Types of pollination
Wind pollination
Wind-pollinated flowers often have small, dull petals and produce little or no scent. Some of these flowers will often have no petals at all. Flowers that depend on wind pollination will produce large amounts of pollen because most of the pollen scattered by the wind tends to not reach other flowers.
Attracting insects
Flowers have various regulatory mechanisms to attract insects. One such helpful mechanism is the use of colour guide marks. Insects such as bees and butterflies can see the ultraviolet marks on these flowers, which act as an attractive mechanism not visible to the human eye. Many flowers have shapes that aid the landing of the visiting insect and also cause the insect to brush against anthers and stigmas (parts of the flower). One example is the pohutukawa (Metrosideros excelsa), which uses colour in a different way: it has small petals and large clusters of bright red stamens.
Another attractive mechanism for flowers is the use of scent, including scents that are highly attractive to humans. One such example is the rose. On the other hand, some flowers produce the smell of rotting meat and are attractive to insects such as flies. Darkness is another factor that flowers have adapted to, as night-time conditions limit vision and colour perception. Fragrance can be especially useful for flowers that are pollinated at night by moths and other flying insects.
Attracting birds
Flowers are also pollinated by birds and must be large and colourful to be visible against natural scenery. In New Zealand, such bird-pollinated native plants include kowhai (Sophora species), flax (Phormium tenax) and kaka beak (Clianthus puniceus). Some flowers change the colour of their petals as a signal to birds on when to visit. An example is the tree fuchsia (Fuchsia excorticata), whose flowers are green when they need to be pollinated and turn red as a signal to the birds to stop coming and pollinating the flower.
Bat-pollinated flowers
Flowers can be pollinated by short-tailed bats. An example of this is the dactylanthus (Dactylanthus taylorii). This plant lives underground, acting as a parasite on the roots of forest trees. Only its flowers reach the surface; they lack colour, but have the advantage of containing much nectar and a strong scent, which act as a useful mechanism in attracting the bats.
References
Bibliography
Plant morphology
Plant reproductive system
Pollination | Petal | ["Biology"] | 2,054 | ["Plant morphology", "Plants"] |
169,552 | https://en.wikipedia.org/wiki/Fifth%20force | In physics, a fifth force refers to a hypothetical fundamental interaction (also known as fundamental force) beyond the four known interactions in nature: gravitational, electromagnetic, strong nuclear, and weak nuclear forces.
Some speculative theories have proposed a fifth force to explain various anomalous observations that do not fit existing theories. The specific characteristics of a putative fifth force depend on which hypothesis is being advanced. No evidence to support these models has been found.
The term is also used as "the Fifth force" when referring to a specific theory advanced by Ephraim Fischbach in 1971 to explain experimental deviations in the theory of gravity. Later analysis failed to reproduce those deviations.
History
The term fifth force originates in a 1986 paper by Ephraim Fischbach et al. who reanalyzed the data from the Eötvös experiment of Loránd Eötvös from earlier in the century; the reanalysis found a distance dependence to gravity that deviates from the inverse square law.
The reanalysis was sparked by theoretical work in 1971 by Fujii, who proposed a model that modifies the distance dependence of gravity with a Yukawa-potential-like term, commonly written as

V(r) = -\frac{G\, m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),

where the parameters \alpha and \lambda characterize the strength and the range of the interaction, respectively. Fischbach's paper found a strength around 1% of gravity and a range of a few hundred meters.
The effect of this potential can be described equivalently as the exchange of vector and/or scalar bosons, that is, as predicting as-yet-undetected new particles.
However, many subsequent attempts to reproduce the deviations have failed.
Theory
Theoretical proposals for a fifth force are driven by inconsistencies between the existing models of general relativity and quantum field theory, and also by the hierarchy problem and the cosmological constant problem. Both issues suggest the possibility of small corrections to the gravitational potential at short distances.
The accelerating expansion of the universe has been attributed to a form of energy called dark energy. Some physicists speculate that a form of dark energy called quintessence could be a fifth force.
Experimental approaches
There are at least three kinds of searches that can be undertaken, which depend on the kind of force being considered, and its range.
Equivalence principle
One way to search for a fifth force is with tests of the strong equivalence principle, one of the most powerful tests of general relativity, also known as Einstein's theory of gravity. Alternative theories of gravity, such as Brans–Dicke theory, postulate a fifth force, possibly one with infinite range. This is because gravitational interactions, in theories other than general relativity, have degrees of freedom other than the "metric", which dictates the curvature of space, and different kinds of degrees of freedom produce different effects. For example, a scalar field cannot produce the bending of light rays.
The fifth force would manifest itself in an effect on solar system orbits, called the Nordtvedt effect. This is tested with Lunar Laser Ranging experiment and very-long-baseline interferometry.
Extra dimensions
Another kind of fifth force, which arises in Kaluza–Klein theory, where the universe has extra dimensions, or in supergravity or string theory, is the Yukawa force, which is transmitted by a light scalar field (i.e. a scalar field with a long Compton wavelength, which determines the range). This has prompted much recent interest, as a theory of supersymmetric large extra dimensions with a size slightly less than a millimetre has prompted an experimental effort to test gravity on very small scales. This requires extremely sensitive experiments which search for a deviation from the inverse-square law of gravity over a range of distances. Essentially, they are looking for signs that the Yukawa interaction is setting in at a certain length scale.
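As a rough illustration of what such experiments look for (a toy calculation, not a description of any actual apparatus; the parameter values below are hypothetical and merely echo the strength and range quoted in the History section), the fractional deviation of the force from the inverse-square law under the Yukawa parametrization given earlier can be computed directly. The deviation is largest at separations comparable to the range λ and falls off rapidly beyond it.

import math

def yukawa_force_ratio(r, alpha, lam):
    """Ratio of the Yukawa-corrected force to the pure Newtonian force at separation r.

    Differentiating V(r) = -(G m1 m2 / r) * (1 + alpha * exp(-r/lam)) gives
    F(r) / F_Newton(r) = 1 + alpha * (1 + r/lam) * exp(-r/lam).
    """
    return 1.0 + alpha * (1.0 + r / lam) * math.exp(-r / lam)

# Hypothetical parameters: a 1% strength interaction with a 200 m range.
alpha, lam = 0.01, 200.0
for r in (1.0, 10.0, 100.0, 1000.0, 10000.0):
    deviation = yukawa_force_ratio(r, alpha, lam) - 1.0
    print(f"r = {r:8.1f} m   fractional force deviation = {deviation:.2e}")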
Australian researchers, attempting to measure the gravitational constant deep in a mine shaft, found a discrepancy between the predicted and measured value, with the measured value being two percent too small. They concluded that the results may be explained by a repulsive fifth force with a range from a few centimetres to a kilometre. Similar experiments have been carried out on board a submarine, USS Dolphin (AGSS-555), while deeply submerged. A further experiment measuring the gravitational constant in a deep borehole in the Greenland ice sheet found discrepancies of a few percent, but it was not possible to eliminate a geological source for the observed signal.
Earth's mantle
Another experiment uses the Earth's mantle as a giant particle detector, focusing on geoelectrons.
Cepheid variables
Jain et al. (2012) examined existing data on the rate of pulsation of over a thousand cepheid variable stars in 25 galaxies. Theory suggests that the rate of cepheid pulsation in galaxies screened from a hypothetical fifth force by neighbouring clusters would follow a different pattern from that of cepheids that are not screened. They were unable to find any variation from Einstein's theory of gravity.
Other approaches
Some experiments used a lake plus a tall tower. A comprehensive review by Ephraim Fischbach and Carrick Talmadge suggested there is no compelling evidence for the fifth force, though scientists still search for it. The Fischbach–Talmadge review was written in 1992, and since then other evidence has come to light that may indicate a fifth force.
The above experiments search for a fifth force that is, like gravity, independent of the composition of an object, so all objects experience the force in proportion to their masses. Forces that depend on the composition of an object can be very sensitively tested by torsion balance experiments of a type invented by Loránd Eötvös. Such forces may depend, for example, on the ratio of protons to neutrons in an atomic nucleus, nuclear spin, or the relative amount of different kinds of binding energy in a nucleus (see the semi-empirical mass formula). Searches have been done from very short ranges, to municipal scales, to the scale of the Earth, the Sun, and dark matter at the center of the galaxy.
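The standard figure of merit in such composition-dependence searches (given here in its usual textbook form, as an illustration) is the Eötvös parameter comparing the accelerations a_A and a_B of two test bodies of different composition toward the same attractor:

\eta(A, B) = 2\,\frac{\lvert a_A - a_B \rvert}{\lvert a_A + a_B \rvert},

which vanishes if the two bodies fall identically; modern torsion-balance experiments constrain \eta for various material pairs to be extremely small, so any composition-dependent fifth force must be very weak or very short-ranged.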
Claims of new particles
In 2015, Attila Krasznahorkay at ATOMKI, the Hungarian Academy of Sciences's Institute for Nuclear Research in Debrecen, Hungary, and his colleagues posited the existence of a new, light boson only 34 times heavier than the electron (17 MeV). In an effort to find a dark photon, the Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and ejected pairs of electrons and positrons. Excess decays were observed at an opening angle of 140° between the emitted electrons and positrons, and at a combined energy of 17 MeV, which indicated that a small fraction of beryllium-8 nuclei shed excess energy in the form of a new particle.
In November 2019, Krasznahorkay announced that he and his team at ATOMKI had successfully observed the same anomalies in the decay of stable helium atoms as had been observed in beryllium-8, strengthening the case for the X17 particle's existence.
Feng et al. (2016) proposed that a protophobic (i.e. "proton-ignoring") X boson, with a mass of 16.7 MeV, suppressed couplings to protons relative to neutrons and electrons, and a femtometre range, could explain the data. The force may explain the muon anomaly and provide a dark matter candidate. Several research experiments are underway to attempt to validate or refute these results.
See also
References
Force | Fifth force | ["Physics", "Mathematics"] | 1,533 | ["Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Matter"] |
169,553 | https://en.wikipedia.org/wiki/Online%20dating | Online dating, also known as internet dating, virtual dating, or mobile app dating, is a method used by people with a goal of searching for and interacting with potential romantic or sexual partners, via the internet. An online dating service is a company that promotes and provides specific mechanisms for the practice of online dating, generally in the form of dedicated websites or software applications accessible on personal computers or mobile devices connected to the internet. A wide variety of unmoderated matchmaking services, most of which are profile-based with various communication functionalities, is offered by such companies.
Online dating services allow users to become "members" by creating a profile and uploading personal information including (but not limited to) age, gender, sexual orientation, location, and appearance. Most services also encourage members to add photos or videos to their profile. Once a profile has been created, members can view the profiles of other members of the service, using the visible profile information to decide whether or not to initiate contact. Most services offer digital messaging, while others provide additional services such as webcasts, online chat, telephone chat (VOIP), and message boards. Members can constrain their interactions to the online space, or they can arrange a date to meet in person.
A great diversity of online dating services currently exist. Some have a broad membership base of diverse users looking for many different types of relationships. Other sites target highly specific demographics based on features like shared interests, location, religion, sexual orientation or relationship type. Online dating services also differ widely in their revenue streams. Some sites are completely free and depend on advertising for revenue. Others utilize the freemium revenue model, offering free registration and use, with optional, paid, premium services. Still others rely solely on paid membership subscriptions.
Trends
Social trends and public opinions
A 2005 study found that online daters may have more liberal social attitudes compared to the general population in the United States.
Race and online dating
A 2009 study found that African Americans were the least desired demographic in online dating, and were the least interested in forming interracial relationships with non-Black Americans.
In 2008, a study investigated racial preferences using a sample of 6,070 profiles on Yahoo! Personals. Just 29% of white men excluded women of color, compared to the 64% of white women who excluded men of color. Follow-up studies conducted by these authors in 2009 and 2011 found similar patterns: white women were less open to interracial relationships than white men.
In 2018, a study analyzed the activity of approximately 200,000 users of an online dating app in the United States. The authors found that White men and Asian women were the most desired.
In 2021, a comprehensive analysis of online dating trends in the United States suggested that the rise of online dating has exacerbated underlying racial biases in dating. The authors found that White men were preferred by women of color, while men of color generally preferred women of color. White men were accepting of Asian and Hispanic women, yet White women tended to exclude non-White men.
However, these authors also disputed some common notions about racial bias in online dating. For example, White women did not reject Asian men more so than Black or Hispanic men. Black and Hispanic women were just as accepting of Asian men as they were of men of the same race. This is inconsistent with the idea that Asian men are particularly disadvantaged in online dating, relative to other men of color. The authors also dispute the notion that Asian women's high outmarriage rate is due to "self hatred", as their interviews found that these marriages form out of perceived compatibility, rather than self hatred. Gay Hispanic men did not have a preference for white partners.
Gender
According to a 2018 study, among American daters, male desirability increased until the age of 50; while women's desirability declined steeply after the age of 20.
In terms of educational attainment, men's desirability only increased the more educated they were. For women, however, educational attainment beyond the level of a bachelor's degree actually decreased their desirability. The authors suggested that besides individual preferences and partner availability, this pattern may be due to the fact that by the late 2010s, women were more likely to attend and graduate from university. In order to estimate the desirability of a given individual, the researchers looked at the number of messages they received and the desirability of the senders.
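One way to make that idea concrete (a minimal PageRank-style sketch on the message graph, offered purely as an illustration and not as the study's actual algorithm; the function name and damping value are arbitrary choices) is to iterate a score in which being messaged by high-scoring users counts for more than being messaged by low-scoring ones:

def desirability_scores(messages, iterations=50, damping=0.85):
    """Iteratively score users so that messages from high-scoring senders count for more.

    `messages` is a list of (sender, receiver) pairs; returns a dict of relative scores.
    This is a PageRank-like recursion given only as an illustration.
    """
    users = {u for pair in messages for u in pair}
    out_count = {u: 0 for u in users}
    for sender, _ in messages:
        out_count[sender] += 1
    n = len(users)
    score = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        new_score = {u: (1.0 - damping) / n for u in users}
        for sender, receiver in messages:
            new_score[receiver] += damping * score[sender] / out_count[sender]
        score = new_score
    return score

# Hypothetical toy message graph: most users message D, while D messages only B.
msgs = [("A", "D"), ("B", "D"), ("C", "D"), ("D", "B"), ("A", "B"), ("C", "B")]
print(sorted(desirability_scores(msgs).items(), key=lambda kv: -kv[1]))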
Developmental psychologist Michelle Drouin, who was not involved in the study, told The New York Times this finding is in accordance with theories in psychology and sociology based on biological evolution in that youth is a sign of fertility. She added that women with advanced degrees are often viewed as more focused on their careers than family. Licensed psychotherapist Stacy Kaiser told MarketWatch men typically prefer younger women because "they are more easy to impress; they are more (moldable) in terms of everything from emotional behavior to what type of restaurant to eat at," and because they tend to be "more fit, have less expectations and less baggage." On the other hand, women look for (financial) stability and education, attributes that come with age, said Kaiser. These findings regarding age and attractiveness are consistent with earlier research by the online dating services OKCupid and Zoosk.
In 2016, Gareth Tyson of the Queen Mary University of London and his colleagues published a paper analyzing the behavior of Tinder users in New York City and London. In order to minimize the number of variables, they created profiles of white heterosexual people only. For each sex, there were three accounts using stock photographs, two with actual photographs of volunteers, one with no photos whatsoever, and one that was apparently deactivated. The researchers pointedly only used pictures of people of average physical attractiveness. Tyson and his team wrote an algorithm that collected the biographical information of all the matches, liked them all, then counted the number of returning likes.
They found that men and women employed drastically different mating strategies. Men liked a large proportion of the profiles they viewed, but received returning likes only 0.6% of the time; women were much more selective but received matches 10% of the time. Men received matches at a much slower rate than women. Once they received a match, women were far more likely than men to send a message, 21% compared to 7%, but they took more time before doing so. Tyson and his team found that for the first two-thirds of messages from each sex, women sent them within 18 minutes of receiving a match compared to five minutes for men. Men's first messages had an average of a dozen characters, and were typically simple greetings; by contrast, initial messages by women averaged 122 characters.
Tyson and his collaborators found that the male profiles that had three profile pictures received far more matches than those without one. By sending out questionnaires to frequent Tinder users, the researchers discovered that the reason why men tended to like a large proportion of the women they saw was to increase their chances of getting a match. This led to a feedback loop in which men liked more and more of the profiles they saw while women could afford to be even more selective in liking profiles because of a greater probability of a match.
Aided by the text-analysis program Linguistic Inquiry and Word Count, Bruch and Newman discovered that men generally had lower chances of receiving a response after sending more "positively worded" messages. When a man tried to woo a woman more desirable than he was, he received a response 21% of the time; by contrast, when a woman attempted to court a man, she received a reply about half the time. In fact, over 80% of the first messages in the data set obtained for the purposes of the study were from men, and women were highly selective in choosing whom to respond to, replying at a rate of less than 20%. Therefore, studying women's replies yielded much insight into their preferences. Bruch and Newman were also able to establish the existence of dating 'leagues'. Generally speaking, people were able to accurately estimate where they ranked on the dating hierarchy. Very few responded to the messages of people less desirable than they were. Nevertheless, although the probability of a response is low, it is well above zero, and if the other person does respond, it can be a self-esteem booster, said Kaiser. Co-author of the study Mark Newman told BBC News, "There is a trade-off between how far up the ladder you want to reach and how low a reply rate you are willing to put up with." Bruch and Newman found that while people spent a lot of time crafting lengthy messages to those they considered to be a highly desirable partner, this hardly made a difference, judging by the response rate. Keeping messages concise is well-advised. Previous studies also suggest that about 70% of the dating profile should be about oneself and the rest about the desired partner.
Data from the Chinese online dating giant Zhenai.com reveals that while men are most interested in how a woman looks, women care more about a man's income. Profession is also quite important. Chinese men favor women working as primary school teachers and nurses while Chinese women prefer men in the IT or finance industry. Women in IT or finance are the least desired. Zhenai enables users to send each other digital "winks". For a man, the more money he earns the more "winks" he receives. For a woman, her income does not matter until the 50,000-yuan mark (US$7,135), after which the number of "winks" falls slightly. Men typically prefer women three years younger than they are whereas women look for men who are three years older on average. However, this changes if the man becomes exceptionally wealthy; the more money he makes the more likely he is to look for younger women.
In general, people in their 20s employ the "self-service dating service" while women in their late 20s and up tend to use the matchmaking service. This is because of the social pressure in China on "leftover women" (Sheng nu), meaning those in their late 20s but still not married. Women who prefer not to ask potentially embarrassing questions – such as whether both spouses will handle household finances, whether or not they will live with his parents, or how many children he wants to have, if any – will get a matchmaker to do it for them. Both sexes prefer matchmakers who are women.
Desirability and physical appearance
At least three quarters of the sample surveyed attempted to date aspirationally, meaning they tried to initiate a relationship with someone who was more desirable, 25% more desirable, to be exact. Bruch recommended sending out more greeting messages, noting that people sometimes managed to upgrade their 'league'. Michael Rosenfeld, a sociologist not involved with the study, told The Atlantic, "The idea that persistence pays off makes sense to me, as the online-dating world has a wider choice set of potential mates to choose from. The greater choice set pays dividends to people who are willing to be persistent in trying to find a mate." Using optimal stopping theory, one can show that the best way to select the best potential partner is to reject the first 37% of candidates, then pick the first one who is better than everyone in that initial set. The probability of picking the best potential mate this way is 37%. (This cutoff is approximately the reciprocal of Euler's number, 1/e ≈ 0.368; see the derivation of the optimal policy.) However, making online contact is only the first step, and indeed, most conversations failed to lead to a relationship. As two potential partners interact more and more, the superficial information available from a dating website or smartphone application becomes less important than their characters.
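As a rough illustration of the optimal stopping rule just described, the sketch below simulates the "skip the first ~37% of candidates, then take the first one better than all of them" strategy on randomly ordered candidates. It is a textbook secretary-problem simulation with hypothetical parameters, not an analysis drawn from any of the dating studies cited here.

```python
import math
import random

# Monte Carlo check of the 37% rule: reject the first ~n/e candidates, then
# accept the first candidate better than everyone seen so far. The success
# probability (choosing the single best candidate) should come out near 1/e.

def secretary_success_rate(n=100, trials=20_000):
    cutoff = int(n / math.e)                      # roughly the first 37%
    hits = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)        # 0 is the best candidate
        best_seen = min(ranks[:cutoff]) if cutoff else float("inf")
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        hits += (chosen == 0)
    return hits / trials

print(secretary_success_rate())                   # typically prints ~0.37
```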
Despite being a platform designed to be less centered on physical appearance, OkCupid co-founder Christian Rudder stated in 2009 that the male OkCupid users who were rated most physically attractive by female OkCupid users received 11 times as many messages as the lowest-rated male users did, the medium-rated male users received about four times as many messages, and the one-third of female users who were rated most physically attractive by the male users received about two-thirds of all messages sent by male users. According to a former company product manager, the majority of female Bumble users typically set a floor height of six feet for male users which limits their matching opportunities to only 15% of the male population.
Niche dating sites
Sites with specific demographics have become popular as a way to narrow the pool of potential matches. Successful niche sites pair people by race, sexual orientation or religion. In March 2008, the top 5 overall sites held 7% less market share than they had a year earlier, while the top sites from the top five major niche dating categories made considerable gains. Niche sites cater to people with special interests, such as sports fans, racing and automotive fans, medical or other professionals, people with political or religious preferences, people with medical conditions, or those living in rural farm communities.
Some dating services have been created specifically for those living with HIV and other venereal diseases in an effort to eliminate the need to lie about one's health in order to find a partner. Public health officials in Rhode Island and Utah claimed in 2015 that Tinder and similar apps were responsible for an uptick in such conditions.
Some sites, referred to as adult dating sites, match individuals seeking short-term sexual encounters.
Economic trends
Although some sites offer free trials and/or profiles, most memberships can cost upwards of $60 per month. In 2008, online dating services in the United States generated $957 million in revenue.
Most free dating websites depend on advertising revenue, using tools such as Google AdSense and affiliate marketing. Since advertising revenues are modest compared to membership fees, this model requires numerous page views to achieve profitability. However, Sam Yagan describes dating sites as ideal advertising platforms because of the wealth of demographic data made available by users.
In November 2023, the stock prices of Match Group and Bumble were down 31% and 35% on the year respectively, continuing a more than two-year decline since the latter's initial public offering in February 2021 and after posting declines more than double that of the S&P 500 during the 2022 stock market decline. In addition to price increases, slowing paid user growth, and flattening app download rates following the end of the COVID-19 lockdowns, assessments among financial analysts of an oversaturated market, concerns about low consumer satisfaction with the services, and growing skepticism about dating app features and algorithms contributed to the declines. Match Group and Bumble account for nearly the entire market share of the online dating industry, and the companies lost a combined $40 billion in market value from 2021 through 2024. Match Group and Bumble shares continued to fall during the first quarter of 2024 while the S&P 500 rose, and the number of paid users for Match Group fell by 6% during the first quarter of 2024 while Bumble's paid users grew by 18% in comparison to a 3% decline and a 31% increase for each company respectively during the first quarter of 2023.
Matching and divorce rates
In 2012, social psychologists Benjamin Karney, Harry Reis, and others published an analysis of online dating in Psychological Science in the Public Interest that concluded that the matching algorithms of online dating services are only negligibly better at matching people than if they were matched at random. In 2014, Kang Zhao at the University of Iowa constructed a new approach modeled on the recommendation algorithms used by Amazon and Netflix, which draws on users' behavior rather than the autobiographical notes of match seekers. Users' activities reflect their tastes and attractiveness, or the lack thereof, the researchers reasoned. They found that this algorithm increased the chances of a response by 40%. E-commerce firms also employ this "collaborative filtering" technique. Nevertheless, it is still not known what the algorithm for finding the perfect match would be.
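The "collaborative filtering" technique mentioned above—recommending profiles based on what users with similar behaviour have liked, rather than on self-descriptions—can be sketched in a few lines. The data, user names, and similarity measure below are hypothetical illustrations of the general idea, not the recommender built by Zhao and colleagues.

```python
from collections import Counter

# Toy user-based collaborative filtering: recommend profiles liked by users
# whose past likes overlap with yours. All identifiers here are hypothetical.

likes = {
    "u1": {"p1", "p2", "p3"},
    "u2": {"p2", "p3", "p4"},
    "u3": {"p1", "p5"},
}

def recommend(user, likes, top_k=3):
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    scores = Counter()
    for other, liked in likes.items():
        if other == user:
            continue
        similarity = jaccard(likes[user], liked)
        for profile in liked - likes[user]:       # only unseen profiles
            scores[profile] += similarity
    return [profile for profile, _ in scores.most_common(top_k)]

print(recommend("u1", likes))                     # e.g. ['p4', 'p5']
```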
However, while collaborative filtering and recommender systems have been demonstrated to be more effective than matching systems based on similarity and complementarity, they have also been demonstrated to be highly skewed to the preferences of early users and against racial minorities such as African Americans and Hispanic Americans which led to the rise of niche dating sites for those groups. In 2014, the Better Business Bureau's National Advertising Division criticized eHarmony's claims of creating a greater number of marriages and more durable and satisfying marriages than alternative dating websites, and in 2018, the Advertising Standards Authority banned eHarmony advertisements in the United Kingdom after the company was unable to provide any evidence to verify its advertisements' claims that its website's matching algorithm was scientifically proven to give its users a greater chance of finding long-term intimate relationships.
Data released by Tinder in 2018 showed that of the 1.6 billion swipes it recorded per day, only 26 million resulted in matches (a match rate of approximately 1.6%), despite users logging into the app on average 11 times per day, with male user sessions averaging 7.2 minutes and female user sessions averaging 8.5 minutes (or 79.2 minutes and 93.5 minutes per day respectively). Also, a Tinder user interviewed anonymously in an article published in the December 2018 issue of The Atlantic estimated that only one in 10 of their matches actually resulted in an exchange of messages with the other user they were matched with, with another anonymous Tinder user saying, "Getting right-swiped is a good ego boost even if I have no intention of meeting someone."
In 2012, Karney, Reis, and their co-authors suggested that the availability of a large pool of potential partners "may lead online daters to objectify potential partners and might even undermine their willingness to commit to one of them." In October 2019, a Pew Research Center survey of 4,860 U.S. adults showed that 54 percent of U.S. adults believed that relationships formed through dating sites or apps were just as successful as those that began in person, 38 percent believed these relationships were less successful, while only 5 percent believed them to be more successful.
Noting the research of Karney, Reis, and their co-authors comparing online to offline dating and the research of communications studies scholar Nicole Ellison and her co-authors comparing online dating to comparative shopping, political scientist Robert D. Putnam cited the October 2019 Pew Research Center survey in the afterword to the second edition of Bowling Alone (2020) in expressing skepticism about whether online dating was leading to a greater number of long-term intimate relationships. Social psychologist David Buss has estimated that approximately 30 percent of the men on Tinder are married.
Buss has argued further "Apps like Tinder and OkCupid give people the impression that there are thousands or millions of potential mates out there. One dimension of this is the impact it has on men's psychology. When there is ... a perceived surplus of women, the whole mating system tends to shift towards short-term dating," and there is a feeling of disconnect when choosing future partners. In addition, the cognitive process identified by psychologist Barry Schwartz as the "paradox of choice" (also referred to as "choice overload" or "fear of a better option") was cited in an article published in The Atlantic that suggested that the appearance of an abundance of potential partners causes online daters to be less likely to choose a partner and be less satisfied with their choices of partners.
Research on associations between online dating and divorce rates has found conflicting results. While research published in the Journal of Family and Economic Issues in September 2011 found no relationship between increased internet access and higher divorce rates in the United States, subsequent research published in the Review of Economics of the Household in June 2020 did find a correlation between increased access to broadband internet or mobile phones and higher divorce rates in rural counties and lower divorce rates in metropolitan areas in the United States. In June 2013, PNAS USA published a representative survey of 19,131 U.S. adults married between 2005 and 2012 that found that marriages that began online were slightly less likely to result in separation or divorce in comparison to marriages formed offline and were associated with slightly higher marital satisfaction. In July 2014, Computers in Human Behavior published a study that found that, after controlling for various economic, demographic, and psychological variables, state-by-state differences in the United States in Facebook and other social networking service (SNS) user account rates were correlated with higher divorce rates and diminished marriage quality. In October 2015, Cyberpsychology, Behavior, and Social Networking published a study of 371 undergraduate students at a university in the Midwestern United States that found that Facebook friend lists increased physical and emotional infidelities among couples, lowered relationship commitment, and diminished relationship quality due to psychological priming effects.
In November 2016, the Journal of International Social Issues published a study that found that U.S. states with a higher Google Trends search volume index for Match.com in 2013 had fewer marriages in 2014, while U.S. states with higher search volume indices for Hinge, Bumble, Plenty of Fish, and Facebook in 2013 had a greater number of divorces in 2014. In February 2019, Technological Forecasting and Social Change published a study examining associations between broadband internet access and divorce in China using provincial data from 2002 to 2014 that found that for every 1% increase in the number of broadband subscribers the number of divorces grew by 0.008%. In December 2020, PLOS One published a study on online dating in Switzerland that found that couples formed through online dating had stronger cohabiting intentions than those formed offline and no differences in relationship satisfaction. In January 2024, Computers in Human Behavior published a survey of 923 married U.S. adults where roughly half of the subjects met their spouses online that found evidence for an "online dating effect" where online daters reported less satisfying and durable marriages, but the researchers suggested that the differences could be explained by societal marginalization and geographic distance.
Online matchmaking services
In 2008, a variation of the online dating model emerged in the form of introduction sites, where members have to search and contact other members, who introduce them to other members whom they deem compatible. Introduction sites differ from the traditional online dating model, and attracted many users and significant investor interest.
In China, the number of separations per thousand couples doubled, from 1.46 in 2006 to about three in 2016, while the number of actual divorces continues to rise, according to the Ministry of Civil Affairs. Demand for online dating services among divorcees keeps growing, especially in large cities such as Beijing, Shanghai, Shenzhen and Guangzhou. In addition, more and more people are expected to use online dating and matchmaking services as China continues to urbanize in the late 2010s and 2020s.
Reception
Opinions and usage of online dating services also differ widely. A 2005 study of data collected by the Pew Internet & American Life Project found that individuals are more likely to use an online dating service if they use the Internet for a greater number of tasks, and less likely to use such a service if they are trusting of others.
Attitudes towards online dating improved visibly between 2005 and 2015, the Pew Research Center found. In particular, the number of people who thought that online dating was a good way to meet people rose from 44% in 2005 to 59%. Although only a negligible number of people dated online in 2005, that rose to 11% in 2013 and then 15% in 2015. In particular, the number of American adults who had used an online dating site went from 9% in 2013 to 12% in 2015 while those who used an online dating software application on their mobile phones jumped from 3% to 9% during the same period. This increase was driven mainly by people aged 18 to 24, for whom usage almost tripled. At the same time, usage among those between the ages of 55 and 64 doubled.
According to a 2015 study by the Pew Research Center, people who had used online dating services had a higher opinion of such services than those who had not. 80% of the users said that online dating sites are a good way to meet potential partners.
In 2016, Consumer Reports surveyed approximately 115,000 online dating service subscribers across multiple platforms. While 44 percent of survey respondents stated that usage of online dating services had led to a serious long-term intimate relationship or marriage, a subset of approximately 9,600 subscribers who had used at least one online dating service within the previous two years rated their satisfaction with those services lower than respondents in Consumer Reports surveys of consumer satisfaction with technical support services, and rated free online dating services as only slightly more satisfactory than services with paid subscriptions. In the October 2019 Pew Research Center survey, 57% of survey respondents who had used online dating said their experiences on the platforms were very or somewhat positive while 42% said their experiences were very or somewhat negative, and 76% of survey respondents felt that online dating has had either a neutral effect or a mostly negative effect on dating and relationships while 22% felt that online dating has had a mostly positive effect.
In a July 2022 survey of 6,034 U.S. adults conducted by the Pew Research Center, 53% of survey respondents who had used online dating said their experiences on the platforms were either very or somewhat positive while 46% said their experiences were either very or somewhat negative, 54% of all survey respondents said they believed that dating apps either made no difference in finding a partner or spouse or made doing so harder while 42% said they believed that dating apps made finding a partner or spouse easier, and 80% of survey respondents felt that online dating has had either a neutral effect or a mostly negative effect on dating and relationships while 18% felt that online dating has had a mostly positive effect.
Trust and safety issues
As online dating services are not required to routinely conduct background checks on members, it is possible for profile information to be misrepresented or falsified. Also, there may be users on dating services who have illicit intentions (e.g., date rape or procurement).
OkCupid once introduced a real-name policy, but it was later removed because it was unpopular with users.
Only some online dating services provide important safety information, such as the STD or other infectious disease status of their users; many do not.
Some online dating services popular among members of queer communities are sometimes used by people seeking out those audiences for the purpose of gay bashing or trans bashing.
A form of misrepresentation is that members may lie about their height, weight, age, or marital status in an attempt to market or brand themselves in a particular way. Users may also carefully manipulate profiles as a form of impression management. Online daters have raised concerns about ghosting, the practice of ceasing all communication with a person without explaining why. Ghosting appears to be becoming more common. Various explanations have been suggested, but social media is often blamed, as are dating apps and the relative anonymity and isolation in modern-day dating and hookup culture, which make it easier to behave poorly with few social repercussions.
Online dating site members may try to balance an accurate representation with maintaining their image in a desirable way. One study found that nine out of ten participants had lied on at least one attribute, though lies were often slight; weight was the most lied about attribute, and age was the least lied about. Furthermore, knowing a large amount of superficial information about a potential partner's interests may lead to a false sense of security when meeting up with a new person. Gross misrepresentation may be less likely on matrimonials sites than on casual dating sites.
Some profiles may not even represent real humans but rather they may be fake "bait profiles" placed online by site owners to attract new paying members, or "spam profiles" created by advertisers to market services and products.
Opinions regarding the safety of online dating are mixed. Over 50% of research participants in a 2011 study did not view online dating as a dangerous activity, whereas 43% thought that online dating involved risk. Date rape is a form of acquaintance rape and dating violence. The two phrases are often used interchangeably, but date rape specifically refers to a rape in which there has been some sort of romantic or potentially sexual relationship between the two parties. Acquaintance rape also includes rapes in which the victim and perpetrator have been in a non-romantic, non-sexual relationship, for example as co-workers or neighbors. According to the United States Bureau of Justice Statistics (BJS), date rapes are among the most common forms of rape cases. Date rape most commonly takes place among college students when alcohol is involved or date rape drugs are taken. One of the most targeted groups is women between the ages of 16 and 24.
In the October 2019 Pew Research Center survey, 53% of survey respondents said they believed that dating apps were a very or somewhat safe way to meet potential partners while 46% believed they were a not too safe or not at all safe way to do so, and 50% of online dating respondents said that they believed that scam accounts were common. In the July 2022 Pew Research Center survey, 49% of survey respondents said they believed that dating apps were a not too safe or not at all safe way to meet potential partners while 48% believed they were a very or somewhat safe way to do so, and 52% of online dating respondents said that they believed that scam accounts were common.
In response to these issues, over 120 Facebook groups named Are We Dating The Same Guy? were created, where women share red flags about men and check that a man they are seeing is not also dating someone else. This is done by taking screenshots of a man's dating profile and posting them in the city's designated Facebook group, asking "any tea?". Other users in the group then share information about the man and post warnings. The groups are moderated by volunteers and have been described as feminist groups.
Billing complaints
Online subscription-based services can suffer from complaints about billing practices. Some online dating service providers may have fraudulent membership fees or credit card charges. Some sites do not allow members to preview available profiles before paying a subscription fee. Furthermore, different functionalities may be offered to members who have paid or not paid for subscriptions, resulting in some confusion around who can view or contact whom.
Consolidation within the online dating industry has led to different newspapers and magazines now advertising the same website database under different names. In the UK, for example, Time Out ("London Dating"), The Times ("Encounters"), and The Daily Telegraph ("Kindred Spirits"), all offer differently named portals to the same service—meaning that a person who subscribes through more than one publication has unwittingly paid more than once for access to the same service.
Imbalanced gender ratios
Little is known about the sex ratio controlled for age. eHarmony's membership is about 57% female and 43% male, whereas the ratio at Match.com is about the reverse of that. On specialty niche websites where the primary demographic is male, there is typically a very unbalanced ratio of male to female or female to male. As of June 2015, 62% of Tinder users were male and 38% were female.
Studies have suggested that men are far more likely to send messages on dating sites than women. In addition, men tend to message the most attractive women regardless of their own attractiveness. This leads to the most attractive women on these sites receiving an overwhelming number of messages, which can in some cases result in them leaving the site.
There is some evidence that there may be differences in how women online rate male attractiveness as opposed to how men rate female attractiveness. The distribution of ratings that men give to women's attractiveness appears to follow a normal distribution, while the distribution of ratings that women give to men is highly skewed, with 80% of men rated as below average.
Allegations of discrimination
Gay rights groups have complained that certain websites that restrict their dating services to heterosexual couples are discriminating against homosexuals. Homosexual customers of the popular eHarmony dating website have made many attempts to litigate discriminatory practices. eHarmony was sued in 2007 by a lesbian claiming that "[s]uch outright discrimination is hurtful and disappointing for a business open to the public in this day and age." In light of discrimination by sexual orientation by dating websites, some services such as GayDar.net and Chemistry.com cater more to homosexual dating.
Lawsuits filed against online dating services
A 2011 class action lawsuit alleged that Match.com failed to remove inactive profiles, did not accurately disclose the number of active members, and did not police its site for fake profiles; the inclusion of expired and spam profiles as valid served both to artificially inflate the total number of profiles and to camouflage a skewed gender ratio in which active users were disproportionately single males. The suit claimed that up to 60 percent of profiles belonged to inactive, fake, or fraudulent users. Some of the spam profiles were alleged to be using images of porn actresses, models, or people from other dating sites. Former employees alleged that Match routinely and intentionally over-represented the number of active members on the website and that a huge percentage were not real members but 'filler profiles'.
A 2012 class action against Successful Match ended with a November 2014 California jury award of $1.4 million in compensatory damages and $15 million in punitive damages. SuccessfulMatch operated a dating site for people with STDs, PositiveSingles, which it advertised as offering a "fully anonymous profile" which is "100% confidential". The company failed to disclose that it was placing those same profiles on a long list of affiliate site domains such as GayPozDating.com, AIDSDate.com, HerpesInMouth.com, ChristianSafeHaven.com, MeetBlackPOZ.com, HIVGayMen.com, STDHookup.com, BlackPoz.com, and PositivelyKinky.com. This falsely implied that those users were black, Christian, gay, HIV-positive or members of other groups with which the registered members did not identify. The jury found PositiveSingles guilty of fraud, malice, and oppression as the plaintiffs' race, sexual orientation, HIV status, and religion were misrepresented by exporting each dating profile to niche sites associated with each trait.
In 2013, a former employee sued adultery website Ashley Madison claiming repetitive strain injuries as creating 1000 fake profiles in one three week span "required an enormous amount of keyboarding" which caused the worker to develop severe pain in her wrists and forearms. AshleyMadison's parent company, Avid Life Media, countersued in 2014, alleging the worker kept confidential documents, including copies of her "work product and training materials". The firm claimed the fake profiles were for "quality assurance testing" to test a new Brazilian version of the site for "consistency and reliability".
In January 2014, an already-married Facebook user attempting to close a pop-up advertisement for Zoosk.com found that one click instead copied personal info from her Facebook profile to create an unwanted online profile seeking a mate, leading to a flood of unexpected responses from amorous single males.
In 2014, It's Just Lunch International was the target of a New York class action alleging unjust enrichment as IJL staff relied on a uniform, misleading script which informed prospective customers during initial interviews that IJL already had at least two matches in mind for those customers' first dates regardless of whether or not that was true.
In 2014, the US Federal Trade Commission fined UK-based JDI Dating (a group of 18 websites, including Cupidswand.com and FlirtCrowd.com) over US$600,000, finding that "the defendants offered a free plan that allowed users to set up a profile with personal information and photos. As soon as a new user set up a free profile, he or she began to receive messages that appeared to be from other members living nearby, expressing romantic interest or a desire to meet. However, users were unable to respond to these messages without upgrading to a paid membership ... [t]he messages were almost always from fake, computer-generated profiles — 'Virtual Cupids' — created by the defendants, with photos and information designed to closely mimic the profiles of real people." The FTC also found that paid memberships were being renewed without client authorisation.
On June 30, 2014, co-founder and former marketing vice president of Tinder, Whitney Wolfe, filed a sexual harassment and sex discrimination suit in Los Angeles County Superior Court against IAC-owned Match Group, the parent company of Tinder. The lawsuit alleged that her fellow executives and co-founders Rad and Mateen had engaged in discrimination, sexual harassment, and retaliation against her, while Tinder's corporate supervisor, IAC's Sam Yagan, did nothing. IAC suspended CMO Mateen from his position pending an ongoing investigation, and stated that it "acknowledges that Mateen sent private messages containing 'inappropriate content,' but it believes Mateen, Rad and the company are innocent of the allegations". In December 2018, The Verge reported that Tinder had dismissed Rosette Pambakian, the company's vice president of marketing and communication who had accused Tinder's former CEO Greg Blatt of sexual assault, along with several other employees who were part of the group of Tinder employees who had previously sued the Match Group for $2 billion.
Government regulation
U.S. government regulation of dating services began with the International Marriage Broker Regulation Act (IMBRA) which took effect in March 2007 after a federal judge in Georgia upheld a challenge from the dating site European Connections. The law requires dating services meeting specific criteria—including having as their primary business to connect U.S. citizens/residents with foreign nationals—to conduct, among other procedures, sex offender checks on U.S. customers before contact details can be provided to the non-U.S. citizen. In 2008, the state of New Jersey passed a law which requires the sites to disclose whether they perform background checks.
In the People's Republic of China, using a transnational matchmaking agency involving a monetary transaction is illegal. The Philippines prohibits the business of organizing or facilitating marriages between Filipinas and foreign men under the Republic Act 6955 (the Anti-Mail-Order Bride Law) of June 13, 1990; this law is routinely circumvented by basing mail-order bride websites outside the country.
Singapore's Social Development Network is the governmental organization facilitating dating activities in the country. Singapore's government has actively acted as a matchmaker for singles for the past few decades, and thus only 4% of Singaporeans have ever used an online dating service, despite the country's high rate of internet penetration.
In December 2010, a New York State Law called the "Internet Dating Safety Act" (S5180-A) went into effect that requires online dating sites with customers in New York State to warn users not to disclose personal information to people they do not know.
See also
Comparison of online dating websites
FOSTA-SESTA
List of social networking services
Matrimonial website
Mobile dating
Online dating applications
Online identity
Timeline of online dating services
Sexual selection in humans
Dating agency
References
Further reading
Pew Internet & American Life Project study of online dating in the United States
CBS News, November 7, 2019
Dating
Intimate relationships
Matchmaking
Social media
Dating service | Online dating | [
"Technology"
] | 8,087 | [
"Mobile content",
"Computing and society",
"Social media",
"Social software"
] |
169,570 | https://en.wikipedia.org/wiki/Sierpi%C5%84ski%20number | In number theory, a Sierpiński number is an odd natural number k such that k·2^n + 1 is composite for all natural numbers n. In 1960, Wacław Sierpiński proved that there are infinitely many odd integers k which have this property.
In other words, when k is a Sierpiński number, all members of the following set are composite:
{k·2^n + 1 : n = 1, 2, 3, ...}.
If the form is instead k·2^n − 1, then k is a Riesel number.
Known Sierpiński numbers
The sequence of currently known Sierpiński numbers begins with:
78557, 271129, 271577, 322523, 327739, 482719, 575041, 603713, 903983, 934909, 965431, 1259779, 1290677, 1518781, 1624097, 1639459, 1777613, 2131043, 2131099, 2191531, 2510177, 2541601, 2576089, 2931767, 2931991, ... .
The number 78557 was proved to be a Sierpiński number by John Selfridge in 1962, who showed that all numbers of the form 78557·2^n + 1 have a factor in the covering set {3, 5, 7, 13, 19, 37, 73}. For another known Sierpiński number, 271129, the covering set is {3, 5, 7, 13, 17, 241}. Most currently known Sierpiński numbers possess similar covering sets.
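The covering-set argument for 78557 can be checked directly: whether a given prime divides 78557·2^n + 1 depends only on n modulo the multiplicative order of 2 modulo that prime, and those orders all divide 36 here, so verifying one full period of exponents suffices. A minimal sketch, assuming the covering set quoted above:

```python
# Verify that every 78557 * 2**n + 1 has a factor in the covering set.
# Divisibility by each prime is periodic in n with period dividing 36,
# so checking n = 1..36 covers all exponents.

COVERING_SET = [3, 5, 7, 13, 19, 37, 73]
k = 78557

assert all(
    any((k * pow(2, n, p) + 1) % p == 0 for p in COVERING_SET)
    for n in range(1, 37)
)
print("78557 * 2**n + 1 always has a factor in", COVERING_SET)
```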
However, in 1995 A. S. Izotov showed that some fourth powers could be proved to be Sierpiński numbers without establishing a covering set for all values of n. His proof depends on the aurifeuillean factorization 4x^4 + 1 = (2x^2 + 2x + 1)(2x^2 − 2x + 1). For k = m^4, this establishes that every exponent n ≡ 2 (mod 4) gives rise to a composite, and so it remains to eliminate only n ≡ 0, 1, 3 (mod 4) using a covering set.
Sierpiński problem
The Sierpiński problem asks for the value of the smallest Sierpiński number. In private correspondence with Paul Erdős, Selfridge conjectured that 78,557 was the smallest Sierpiński number. No smaller Sierpiński numbers have been discovered, and it is now believed that 78,557 is the smallest number.
To show that 78,557 really is the smallest Sierpiński number, one must show that all the odd numbers smaller than 78,557 are not Sierpiński numbers. That is, for every odd k below 78,557, there needs to exist a positive integer n such that is prime. The distributed volunteer computing project PrimeGrid is attempting to eliminate all the remaining values of k:
k = 21181, 22699, 24737, 55459, and 67607.
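Eliminating one of these candidates amounts to exhibiting a single exponent n for which k·2^n + 1 is prime. The sketch below illustrates that idea with a general-purpose primality test from SymPy (an assumed third-party dependency); actual searches such as PrimeGrid use far faster specialised tests (e.g. Proth's theorem) and have already checked the remaining k to very large exponents, so the small bound here is purely illustrative.

```python
from sympy import isprime   # assumed dependency; any primality test will do

# A candidate k is eliminated from the Sierpinski problem as soon as some
# exponent n makes k * 2**n + 1 prime.

def eliminate(k, max_n=1000):
    for n in range(1, max_n + 1):
        if isprime(k * 2**n + 1):
            return n            # smallest prime-producing exponent found
    return None                 # no prime found up to max_n

print(eliminate(5))             # 5 * 2**1 + 1 = 11 is prime, so prints 1
print(eliminate(78557, 200))    # prints None: 78557 is a Sierpinski number
```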
Prime Sierpiński problem
In 1976, Nathan Mendelsohn determined that the second provable Sierpiński number is the prime k = 271129. The prime Sierpiński problem asks for the value of the smallest prime Sierpiński number, and there is an ongoing "Prime Sierpiński search" which tries to prove that 271129 is the first Sierpiński number which is also a prime.
Extended Sierpiński problem
Suppose that both preceding Sierpiński problems had finally been solved, showing that 78557 is the smallest Sierpiński number and that 271129 is the smallest prime Sierpiński number. This still leaves unsolved the question of the second Sierpiński number; there could exist a composite Sierpiński number k such that 78557 < k < 271129. An ongoing search is trying to prove that 271129 is the second Sierpiński number, by testing all k values between 78557 and 271129, prime or not.
Simultaneously Sierpiński and Riesel
A number that is both Sierpiński and Riesel is a Brier number (after Éric Brier). The smallest five known examples are 3316923598096294713661, 10439679896374780276373, 11615103277955704975673, 12607110588854501953787, and 17855036657007596110949.
See also
Cullen number
Proth number
Seventeen or Bust
Woodall number
References
Further reading
External links
The Sierpinski problem: definition and status
Eponymous numbers in mathematics
Prime numbers
Sierpinski-Selfridge conjecture
Unsolved problems in number theory
Science and technology in Poland | Sierpiński number | [
"Mathematics"
] | 944 | [
"Unsolved problems in mathematics",
"Prime numbers",
"Mathematical objects",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Numbers",
"Number theory"
] |
169,589 | https://en.wikipedia.org/wiki/List%20of%20continuity-related%20mathematical%20topics | In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways.
Continuity of functions and measures
Continuous function
Absolutely continuous function
Absolute continuity of a measure with respect to another measure
Continuous probability distribution: Sometimes this term is used to mean a probability distribution whose cumulative distribution function (c.d.f.) is (simply) continuous. Sometimes it has a less inclusive meaning: a distribution whose c.d.f. is absolutely continuous with respect to Lebesgue measure. This less inclusive sense is equivalent to the condition that every set whose Lebesgue measure is 0 has probability 0.
Geometric continuity
Parametric continuity
Continuum
Continuum (set theory), the real line or the corresponding cardinal number
Linear continuum, any ordered set that shares certain properties of the real line
Continuum (topology), a nonempty compact connected metric space (sometimes a Hausdorff space)
Continuum hypothesis, a conjecture of Georg Cantor that there is no cardinal number between that of countably infinite sets and the cardinality of the set of all real numbers. The latter cardinality is equal to the cardinality of the set of all subsets of a countably infinite set.
Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers
See also
Continuous variable
Mathematical analysis
Mathematics-related lists | List of continuity-related mathematical topics | [
"Mathematics"
] | 266 | [
"Mathematical analysis"
] |
169,633 | https://en.wikipedia.org/wiki/Outline%20of%20computer%20science | Computer science (also called computing science) is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.
Computer science can be described as all of the following:
Academic discipline
Science
Applied science
Subfields
Mathematical foundations
Coding theory – Useful in networking, programming, system development, and other areas where computers communicate with each other.
Game theory – Useful in artificial intelligence and cybernetics.
Discrete mathematics - Study of discrete structures. Used in digital computer systems.
Graph theory – Foundations for data structures and searching algorithms.
Mathematical logic – Boolean logic and other ways of modeling logical queries; the uses and limitations of formal proof methods.
Number theory – Theory of the integers. Used in cryptography as well as a test domain in artificial intelligence.
Algorithms and data structures
Algorithms – Sequential and parallel computational procedures for solving a wide range of problems.
Data structures – The organization and manipulation of data.
Artificial intelligence
Outline of artificial intelligence
Artificial intelligence – The implementation and study of systems that exhibit an autonomous intelligence or behavior of their own.
Automated reasoning – Solving engines, such as used in Prolog, which produce steps to a result given a query on a fact and rule database, and automated theorem provers that aim to prove mathematical theorems with some assistance from a programmer.
Computer vision – Algorithms for identifying three-dimensional objects from a two-dimensional picture.
Soft computing, the use of inexact solutions for otherwise extremely difficult problems:
Machine learning - Development of models that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.
Evolutionary computing - Biologically inspired algorithms.
Natural language processing - Building systems and algorithms that analyze, understand, and generate natural (human) languages.
Robotics – Algorithms for controlling the behaviour of robots.
Communication and security
Networking – Algorithms and protocols for reliably communicating data across different shared or dedicated media, often including error correction.
Computer security – Practical aspects of securing computer systems and computer networks.
Cryptography – Applies results from complexity, probability, algebra and number theory to invent and break codes, and analyze the security of cryptographic protocols.
Computer architecture
Computer architecture – The design, organization, optimization and verification of a computer system, mostly about CPUs and Memory subsystems (and the bus connecting them).
Operating systems – Systems for managing computer programs and providing the basis of a usable system.
Computer graphics
Computer graphics – Algorithms both for generating visual images synthetically, and for integrating or altering visual and spatial information sampled from the real world.
Image processing – Determining information from an image through computation.
Information visualization – Methods for representing and displaying abstract data to facilitate human interaction for exploration and understanding.
Concurrent, parallel, and distributed systems
Parallel computing - The theory and practice of simultaneous computation; data safety in any multitasking or multithreaded environment.
Concurrency (computer science) – Computing using multiple concurrent threads of execution, devising algorithms for solving problems on various processors to achieve maximal speed-up compared to sequential execution.
Distributed computing – Computing using multiple computing devices over a network to accomplish a common objective or task and thereby reducing the latency involved in single processor contributions for any task.
Databases
Outline of databases
Relational databases – the set theoretic and algorithmic foundation of databases.
Structured Storage - non-relational databases such as NoSQL databases.
Data mining – Study of algorithms for searching and processing information in documents and databases; closely related to information retrieval.
Programming languages and compilers
Compiler theory – Theory of compiler design, based on Automata theory.
Programming language pragmatics – Taxonomy of programming languages, their strength and weaknesses. Various programming paradigms, such as object-oriented programming.
Programming language theory - Theory of programming language design
Formal semantics – rigorous mathematical study of the meaning of programs.
Type theory – Formal analysis of the types of data, and the use of these types to understand properties of programs — especially program safety.
Scientific computing
Computational science – constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems.
Numerical analysis – Approximate numerical solution of mathematical problems such as root-finding, integration, the solution of ordinary differential equations; the approximation of special functions.
Symbolic computation – Manipulation and solution of expressions in symbolic form, also known as Computer algebra.
Computational physics – Numerical simulations of large non-analytic systems
Computational chemistry – Computational modelling of theoretical chemistry in order to determine chemical structures and properties
Bioinformatics and Computational biology – The use of computer science to maintain, analyse, store biological data and to assist in solving biological problems such as Protein folding, function prediction and Phylogeny.
Computational neuroscience – Computational modelling of neurophysiology.
Computational linguistics
Computational logic
Computational engineering
Software engineering
Outline of software engineering
Formal methods – Mathematical approaches for describing and reasoning about software design.
Software engineering – The principles and practice of designing, developing, and testing programs, as well as proper engineering practices.
Algorithm design – Using ideas from algorithm theory to creatively design solutions to real tasks.
Computer programming – The practice of using a programming language to implement algorithms.
Human–computer interaction – The study and design of computer interfaces that people use.
Reverse engineering – The application of the scientific method to the understanding of arbitrary existing software.
Theory of computation
Automata theory – Different logical structures for solving problems.
Computability theory – What is calculable with the current models of computers. Proofs developed by Alan Turing and others provide insight into the possibilities of what may be computed and what may not.
List of unsolved problems in computer science
Computational complexity theory – Fundamental bounds (especially time and storage space) on classes of computations.
Quantum computing theory – Explores computational models involving quantum superposition of bits.
History
History of computer science
List of pioneers in computer science
History of Artificial Intelligence
History of Operating Systems
Professions
Computer Scientist
Programmer (Software developer)
Teacher/Professor
Software engineer
Software architect
Software tester
Hardware engineer
Data analyst
Interaction designer
Network administrator
Data scientist
Data and data structures
Data structure
Data type
Associative array and Hash table
Array
List
Tree
String
Matrix (computer science)
Database
Programming paradigms
Imperative programming/Procedural programming
Functional programming
Logic programming
Declarative Programming
Event-Driven Programming
Object oriented programming
Class
Inheritance
Object
See also
Abstraction
Big O notation
Closure
Compiler
Cognitive science
External links
List of Computer Scientists
Glossary of Computer Science
Computer science
Computer science
Outline
Computer science topics | Outline of computer science | [
"Technology"
] | 1,307 | [
"Computing-related lists",
"Computer science"
] |
169,650 | https://en.wikipedia.org/wiki/Rod%20of%20Asclepius | In Greek mythology, the Rod of Asclepius (⚕; sometimes also spelled Asklepios), also known as the Staff of Aesculapius and as the asklepian, is a serpent-entwined rod wielded by the Greek god Asclepius, a deity associated with healing and medicine. In modern times, it is the predominant symbol for medicine and health care, although it is sometimes confused with the similar caduceus, which has two snakes and a pair of wings.
Greek mythology and Greek society
The Rod of Asclepius takes its name from the Greek god Asclepius, a deity associated with healing and medicinal arts in ancient Greek religion and mythology. Asclepius' attributes, the snake and the staff, sometimes depicted separately in antiquity, are combined in this symbol.
The most famous temple of Asclepius was at Epidaurus in north-eastern Peloponnese. Another famous healing temple (or asclepeion) was located on the island of Kos, where Hippocrates, the legendary "father of medicine", may have begun his career. Other asclepieia were situated in Trikala, Gortys (Arcadia), and Pergamum in Asia.
In honour of Asclepius, a particular type of non-venomous rat snake was often used in healing rituals, and these snakes – the Aesculapian snakes – crawled around freely on the floor in dormitories where the sick and injured slept. These snakes were introduced at the founding of each new temple of Asclepius throughout the classical world. From about 300 BCE onwards, the cult of Asclepius grew very popular and pilgrims flocked to his healing temples (Asclepieia) to be cured of their ills. Ritual purification would be followed by offerings or sacrifices to the god (according to means), and the supplicant would then spend the night in the holiest part of the sanctuary – the abaton (or adyton). Any dreams or visions would be reported to a priest who would prescribe the appropriate therapy by a process of interpretation. Some healing temples also used sacred dogs to lick the wounds of sick petitioners.
The original Hippocratic Oath began with the invocation "I swear by Apollo the Healer and by Asclepius and by Hygieia and Panacea and by all the gods ..."
The serpent and the staff appear to have been separate symbols that were combined at some point in the development of the Asclepian cult. The significance of the serpent has been interpreted in many ways; sometimes the shedding of skin and renewal is emphasized as symbolizing rejuvenation, while other assessments center on the serpent as a symbol that unites and expresses the dual nature of the work of the Apothecary Physician, who deals with life and death, sickness and health. The ambiguity of the serpent as a symbol, and the contradictions it is thought to represent, reflect the ambiguity of the use of drugs, which can help or harm, as reflected in the meaning of the term pharmakon, which meant "drug", "medicine", and "poison" in ancient Greek. However, the word may become less ambiguous when "medicine" is understood as something that heals the one taking it because it poisons that which afflicts it, meaning medicine is designed to kill or drive away something and any healing happens as a result of that thing being gone, not as a direct effect of medicine. Products deriving from the bodies of snakes were known to have medicinal properties in ancient times, and in ancient Greece, at least some were aware that snake venom that might be fatal if it entered the bloodstream could often be imbibed. Snake venom appears to have been prescribed in some cases as a form of therapy.
The staff has also been variously interpreted. One view is that it, like the serpent, "conveyed notions of resurrection and healing", while another (not necessarily incompatible) is that the staff was a walking stick associated with itinerant physicians. Cornutus, a Greek philosopher probably active in the first century CE, in the Theologiae Graecae Compendium (Ch. 33) offers a view of the significance of both snake and staff:
In any case, the two symbols certainly merged in antiquity as representations of the snake coiled about the staff are common.
Confusion with the caduceus
Similar Biblical story
The rod of Asclepius has been likened to the Old Testament story of Moses's brazen serpent (also known as Nehushtan), a brass sculpture of a snake on a rod which had the power of protecting the Israelites from the bites of venomous snakes.
Unicode
A symbol for the rod of Asclepius has a code point (U+2695 ⚕ STAFF OF AESCULAPIUS) in the Miscellaneous Symbols table of the Unicode Standard; the spelling is theirs.
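For reference, the character can be produced directly from its code point in most programming environments; a minimal Python example using the code point and the Unicode character name given above:

```python
# The rod of Asclepius symbol, U+2695, by code point and by Unicode name.
print("\u2695")                       # ⚕
print("\N{STAFF OF AESCULAPIUS}")     # same character, addressed by its name
```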
Modern use
A number of organizations and services use the rod of Asclepius as their logo, or part of their logo. These include:
Asia
Africa
Kenya Medical Research Institute
Kenya Medical Training College
Nigerian Medical Association
South African Medical Research Council former coat of arms
South African Military Health Service
South Pacific
Australian Medical Association
Australian Medical Students' Association
Medical Council of New Zealand
Royal New Zealand Army Medical Corps
Royal Australian Army Medical Corps
Canada
Europe
United States
Worldwide
Medical Protection Society
Star of Life, symbol of emergency medical services
World Health Organization
Variation
In Russia, the emblem of Main Directorate for Drugs Control features a variation with a sword and a snake on the shield.
See also
Iron crutch (symbol of Traditional Chinese medicine)
Notes
References
External links
Asclepius
Heraldic charges
Medical symbols
Royal Army Medical Corps
Snakes in art
Symbols
Walking sticks
Objects in Greek mythology
no:Asklepios#Asklepiosstaven | Rod of Asclepius | [
"Mathematics"
] | 1,186 | [
"Symbols"
] |
169,674 | https://en.wikipedia.org/wiki/Eucalypt | Eucalypt is any woody plant with capsule fruiting bodies belonging to one of seven closely related genera (of the tribe Eucalypteae) found across Australia:
Eucalyptus, Corymbia, Angophora, Stockwellia, Allosyncarpia, Eucalyptopsis and Arillastrum. In Australia, they are commonly known as gum trees or stringybarks.
Taxonomy
For an example of changing historical perspectives, in 1991, largely genetic evidence indicated that some prominent Eucalyptus species were actually more closely related to Angophora than to other eucalypts; they were accordingly split off into the new genus Corymbia.
Although separate, all of these genera and their species are allied and it remains the standard to refer to the members of all seven genera Angophora, Corymbia, Eucalyptus, Stockwellia, Allosyncarpia, Eucalyptopsis and Arillastrum as "eucalypts" or as the eucalypt group.
The extant genera Stockwellia, Allosyncarpia, Eucalyptopsis and Arillastrum comprise six known species, restricted to monsoon forests and rainforests in north-eastern Australia, the Arnhem Land plateau, New Guinea, the Moluccas and New Caledonia. These genera are recognised as having evolved from ancient lineages of the family Myrtaceae. According to genetic, fossil and morphological evidence, it is hypothesised that they evolved into separate taxa before the evolution of the more widespread and well-known genera Eucalyptus, Corymbia and Angophora, and all of their many species.
Eucalyptus deglupta has naturally spread the furthest from the Australian geographic origin of the genus Eucalyptus, being the only species known growing naturally in the nearby northern hemisphere, from New Guinea to New Britain, Sulawesi, Seram Island to Mindanao, Philippines. Eucalyptus urophylla also grows naturally as far west as the Flores and Timor islands.
Adaptations
Eucalypts from fire-prone habitats are attuned to withstand fire in several ways:
Their seeds are often held in an insulated capsule, which opens only after a bushfire. Once cooled down, the land becomes a freshly fertilised seed bed.
Oils in the leaves tend to make the fire more severe and therefore more damaging to less attuned species, giving an evolutionary advantage to the eucalypts.
Epicormic buds under the often thick bark of the trunk and branches are ready to sprout new stems and leaves after a fire.
These advantages work well in areas affected by long dry spells.
Over 700 eucalypt species dominate landscapes all over Australia, but diversity is reduced in rainforests and arid environments.
A fungal plant pathogen (from the family Sporocadaceae), Allelochaeta brevilata is found on species of eucalypts in Australia.
See also
Orthorhinus cylindrirostris
References
External links
Plant Guide: Eucalyptus, Corymbia and Angophora at Australian Native Plants Society
Eucalypt Research at Currency Creek Arboretum
Eucalypt
Myrtaceae
Plant common names | Eucalypt | [
"Biology"
] | 638 | [
"Plants",
"Plant common names",
"Common names of organisms"
] |
169,785 | https://en.wikipedia.org/wiki/Dewberry | The dewberries are a group of species in the genus Rubus, section Rubus, closely related to the blackberries. They are small trailing (rather than upright or high-arching) brambles with aggregate fruits, reminiscent of the raspberry, but are usually purple to black instead of red.
Description
The plants do not have upright canes like some other Rubus species, but have stems that trail along the ground, putting forth new roots along the length of the stem. The stems are covered with fine spines or stickers. Around March and April, the plants start to grow white flowers that develop into small green berries. The tiny green berries grow red and then a deep purple-blue as they ripen. When the berries are ripe, they are tender and difficult to pick in any quantity without squashing them. The berries are sweet and often less seedy than blackberries.
In the winter the leaves often remain on the stems, but may turn dark red.
The European dewberry, Rubus caesius, grows more upright like other brambles. Its fruits are a deep, almost black, purple and are coated with a thin layer or 'dew' of waxy droplets. Thus, they appear sky-blue (caesius being Latin for pale blue). Its fruits are small and retain a markedly tart taste even when fully ripe.
Species
Rubus Section Caesii, European dewberry
European dewberry, Rubus caesius L.
Rubus Section Flagellares, American dewberries
Rubus aboriginum Rydb., synonyms:
Rubus almus (L.H. Bailey) L.H.Bailey
Rubus austrinus L.H.Bailey
Rubus bollianus L.H.Bailey
Rubus clair-brownii L.H.Bailey
Rubus decor L.H. Bailey
Rubus flagellaris Willd. var. almus L.H.Bailey
Rubus foliaceus L.H. Bailey
Rubus ignarus L.H. Bailey
Rubus ricei L.H. Bailey
Aberdeen dewberry, Rubus depavitus L.H.Bailey
Northern dewberry, Rubus flagellaris Willd.
Swamp dewberry, Rubus hispidus L.
Upland dewberry, Rubus invisus (L.H.Bailey) Britton
Pacific dewberry, Rubus ursinus Cham. & Schltdl.
Southern dewberry Rubus trivialis L.H.Bailey
Distribution and habitat
Dewberries are common throughout most of the Northern Hemisphere and are thought of as a beneficial weed. Rubus caesius is frequently restricted to coastal communities, especially sand dune systems.
Ecology
The leaves are sometimes eaten by the larvae of some Lepidoptera species including peach blossom moths.
Uses
The leaves can be used to make a herbal tea, and the berries are edible and taste sweet. They can be eaten raw, or used to make cobbler, jam, or pie.
In the late 19th and early 20th centuries, the town of Cameron, North Carolina, was known as the "dewberry capital of the world" for large scale cultivation of this berry which was shipped out for widespread consumption. Local growers made extensive use of the railroads in the area to ship them nationally and internationally.
See also
Black raspberry
Boysenberry, a cross between a dewberry and a loganberry
Cloudberry, a dioecious Rubus species
Youngberry
References
External links
Berries
Plant common names
Rubus | Dewberry | [
"Biology"
] | 716 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
169,823 | https://en.wikipedia.org/wiki/List%20of%20two-dimensional%20geometric%20shapes | This is a list of two-dimensional geometric shapes in Euclidean and other geometries. For mathematical objects in more dimensions, see list of mathematical shapes. For a broader scope, see list of shapes.
Generally composed of straight line segments
Angle
Balbis
Concave polygon
Constructible polygon
Convex polygon
Cyclic polygon
Equiangular polygon
Equilateral polygon
Penrose tile
Polyform
Regular polygon
Simple polygon
Tangential polygon
Polygons with specific numbers of sides
Henagon – 1 side
Digon – 2 sides
Triangle – 3 sides
Acute triangle
Equilateral triangle
Heptagonal triangle
Isosceles triangle
Golden Triangle
Obtuse triangle
Rational triangle
Heronian triangle
Pythagorean triangle
Isosceles heronian triangle
Primitive Heronian triangle
Right triangle
30-60-90 triangle
Isosceles right triangle
Kepler triangle
Scalene triangle
Quadrilateral – 4 sides
Cyclic quadrilateral
Kite
Rectangle
Rhomboid
Rhombus
Square (regular quadrilateral)
Tangential quadrilateral
Trapezoid
Isosceles trapezoid
Trapezium
Pentagon – 5 sides
Hexagon – 6 sides
Lemoine hexagon
Heptagon – 7 sides
Octagon – 8 sides
Nonagon – 9 sides
Decagon – 10 sides
Hendecagon – 11 sides
Dodecagon – 12 sides
Tridecagon – 13 sides
Tetradecagon – 14 sides
Pentadecagon – 15 sides
Hexadecagon – 16 sides
Heptadecagon – 17 sides
Octadecagon – 18 sides
Enneadecagon – 19 sides
Icosagon – 20 sides
Icosikaihenagon - 21 sides
Icosikaidigon - 22 sides
Icositrigon - 23 sides
Icositetragon - 24 sides
Icosikaipentagon - 25 sides
Icosikaihexagon - 26 sides
Icosikaiheptagon - 27 sides
Icosikaioctagon - 28 sides
Icosikaienneagon - 29 sides
Triacontagon - 30 sides
Tetracontagon - 40 sides
Pentacontagon - 50 sides
Hexacontagon - 60 sides
Heptacontagon - 70 sides
Octacontagon - 80 sides
Enneacontagon - 90 sides
Hectogon - 100 sides
Dihectogon - 200 sides
Trihectogon - 300 sides
Tetrahectogon - 400 sides
Pentahectogon - 500 sides
Hexahectogon - 600 sides
Heptahectogon - 700 sides
Octahectogon - 800 sides
Enneahectogon - 900 sides
Chiliagon - 1,000 sides
Myriagon - 10,000 sides
Megagon - 1,000,000 sides
Star polygon – there are multiple types of stars
Pentagram - star polygon with 5 sides
Hexagram – star polygon with 6 sides
Star of David (example)
Heptagram – star polygon with 7 sides
Octagram – star polygon with 8 sides
Star of Lakshmi (example)
Enneagram - star polygon with 9 sides
Decagram - star polygon with 10 sides
Hendecagram - star polygon with 11 sides
Dodecagram - star polygon with 12 sides
Apeirogon - generalized polygon with countably infinite set of sides
Curved
Composed of circular arcs
Annulus
Arbelos
Circle
Archimedes' twin circles
Bankoff circle
Circular triangle
Reuleaux triangle
Circumcircle
Disc
Incircle and excircles of a triangle
Nine-point circle
Circular sector
Circular segment
Crescent
Lens, vesica piscis (fish bladder)
Lune
Quatrefoil
Reuleaux polygon
Reuleaux triangle
Salinon
Semicircle
Tomahawk
Trefoil
Triquetra
Not composed of circular arcs
Archimedean spiral
Astroid
Cardioid
Deltoid
Ellipse
Various lemniscates
Oval
Cartesian oval
Cassini oval
Oval of Booth
Superellipse
Taijitu
Tomoe
Magatama
See also
List of triangle topics
List of circle topics
Glossary of shapes with metaphorical names
References
2
two-dimensional
| List of two-dimensional geometric shapes | [
"Mathematics"
] | 892 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
169,917 | https://en.wikipedia.org/wiki/John%20Stewart%20Bell | John Stewart Bell FRS (28 July 1928 – 1 October 1990) was a physicist from Northern Ireland and the originator of Bell's theorem, an important theorem in quantum physics regarding hidden-variable theories.
In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John Clauser, and Anton Zeilinger for work on Bell inequalities and the experimental validation of Bell's theorem.
Biography
Early life and work
Bell was born in Belfast, Northern Ireland. When he was 11 years old, he decided to be a scientist, and at 16 graduated from Belfast Technical High School. Bell then attended the Queen's University of Belfast, where, in 1948, he obtained a bachelor's degree in experimental physics and, a year later, a bachelor's degree in mathematical physics. He went on to complete a PhD in physics at the University of Birmingham in 1956, specialising in nuclear physics and quantum field theory. In 1954, he married Mary Ross, also a physicist, whom he had met while working on accelerator physics at Malvern, UK. Bell became a vegetarian in his teen years. According to his wife, Bell was an atheist.
Bell's career began with the UK Atomic Energy Research Establishment, near Harwell, Oxfordshire, known as AERE or Harwell Laboratory. In 1960, he moved to work for the European Organization for Nuclear Research (CERN, Conseil Européen pour la Recherche Nucléaire), in Geneva, Switzerland. There he worked almost exclusively on theoretical particle physics and on accelerator design, but found time to pursue a major avocation, investigating the foundations of quantum theory. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1987. Also of significance during his career, Bell, together with John Bradbury Sykes, M. J. Kearsley, and W. H. Reid, translated several volumes of the ten-volume Course of Theoretical Physics of Lev Landau and Evgeny Lifshitz, making these works available to an English-speaking audience in translation, all of which remain in print.
Bell was a proponent of pilot wave theory. In 1987, inspired by Ghirardi–Rimini–Weber theory, he also advocated collapse theories. He said about the interpretation of quantum mechanics: "Well, you see, I don't really know. For me it's not something where I have a solution to sell!"
Critique of von Neumann's proof
Bell was impressed that the formulation of David Bohm's nonlocal hidden-variable theory did not require a "movable boundary" between the quantum system and the classical apparatus:
A possibility is that we find exactly where the boundary lies. More plausible to me is that we will find that there is no boundary. ... The wave functions would prove to be a provisional or incomplete description of the quantum-mechanical part, of which an objective account would become possible. It is this possibility, of a homogeneous account of the world, which is for me the chief motivation of the study of the so-called "hidden variable" possibility.
Bell also criticized the standard formalism of quantum mechanics on the grounds of lack of physical precision:
For the good books known to me are not much concerned with physical precision. This is clear already from their vocabulary. Here are some words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision: system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement. ... On this list of bad words from good books, the worst of all is "measurement".
To thoroughly explore the viability of Bohm's theory, Bell needed to answer the challenge of the so-called impossibility proofs against hidden variables. Bell addressed these in a paper entitled "On the Problem of Hidden Variables in Quantum Mechanics". (Due to publishing delays, this paper did not appear until 1966, two years after his more famous work on the EPR paradox.) He showed that John von Neumann's no hidden variables proof does not prove the impossibility of hidden variables, as was widely claimed, due to its reliance on a physical assumption that is not valid for quantum mechanics—namely, that the probability-weighted average of the sum of observable quantities equals the sum of the average values of each of the separate observable quantities. This flaw in von Neumann's proof had been previously discovered by Grete Hermann in 1935, but did not become common knowledge until after it was rediscovered by Bell. Bell reportedly said, "The proof of von Neumann is not merely false but foolish!" In this same work, Bell showed that a stronger effort at such a proof (based upon Gleason's theorem) also fails to eliminate the hidden-variables program.
However, in 2010, Jeffrey Bub published an argument that Bell (and, implicitly, Hermann) had misconstrued von Neumann's proof, saying that it does not attempt to prove the absolute impossibility of hidden variables, and is actually not flawed, after all. (Thus, it was the physics community as a whole that had misinterpreted von Neumann's proof as applying universally.) Bub provides evidence that von Neumann understood the limits of his proof, but there is no record of von Neumann attempting to correct the near universal misinterpretation which lingered for over 30 years and exists to some extent to this day. Von Neumann's proof does not in fact apply to contextual hidden variables, as in Bohm's theory. Bub's conclusion has, in turn, been questioned.
Bell's theorem
In 1964, after a year's leave from CERN that he spent at Stanford University, the University of Wisconsin–Madison and Brandeis University, Bell wrote a paper entitled "On the Einstein–Podolsky–Rosen paradox". In this work, he showed that carrying forward EPR's analysis permits one to derive the famous Bell's theorem. The resultant inequality, derived from basic assumptions that apply to all classical situations, is violated by quantum theory.
There is some disagreement regarding what Bell's inequality—in conjunction with the EPR analysis—can be said to imply. Bell held that not only local hidden variables, but any and all local theoretical explanations must conflict with the predictions of quantum theory: "It is known that with Bohm's example of EPR correlations, involving particles with spin, there is an irreducible nonlocality." According to an alternative interpretation, not all local theories in general, but only local hidden-variables theories (or "local realist" theories) have been shown to be incompatible with the predictions of quantum theory.
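In the CHSH form commonly used in experiments (listed under See also below), the constraint can be stated concretely. Writing E(a,b) for the correlation between measurement outcomes at detector settings a and b, a local hidden-variable theory (under the usual assumptions) must satisfy
|E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)| ≤ 2,
whereas quantum mechanics predicts values up to 2√2 ≈ 2.83 for suitable measurements on an entangled pair; it is this excess that the experiments described in the next section test.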
Conclusions from experimental tests
In 1972 an experiment was conducted that, when extrapolated to ideal detector efficiencies, showed a violation of Bell's inequality. It was the first of many such experiments. Bell himself concluded from these experiments that "It now seems that the non-locality is deeply rooted in quantum mechanics itself and will persist in any completion." This, according to Bell, also implied that quantum theory is not locally causal and cannot be embedded into any locally causal theory. Bell regretted that results of the tests did not agree with the concept of local hidden variables:
"For me, it is so reasonable to assume that the photons in those experiments carry with them programs, which have been correlated in advance, telling them how to behave. This is so rational that I think that when Einstein saw that, and the others refused to see it, he was the rational man. The other people, although history has justified them, were burying their heads in the sand. ... So for me, it is a pity that Einstein's idea doesn't work. The reasonable thing just doesn't work."
Bell seemed to have become resigned to the notion that future experiments would continue to agree with quantum mechanics and violate his inequality. Referring to the Bell test experiments, he remarked:
"It is difficult for me to believe that quantum mechanics, working very well for currently practical set-ups, will nevertheless fail badly with improvements in counter efficiency ..."
Some people continue to believe that agreement with Bell's inequalities might yet be saved. They argue that in the future much more precise experiments could reveal that one of the known loopholes, for example the so-called "fair sampling loophole", had been biasing the interpretations. Most mainstream physicists are highly skeptical about all these "loopholes", admitting their existence but continuing to believe that Bell's inequalities must fail.
Bell remained interested in objective 'observer-free' quantum mechanics. He felt that at the most fundamental level, physical theories ought not to be concerned with observables, but with 'be-ables': "The beables of the theory are those elements which might correspond to elements of reality, to things which exist. Their existence does not depend on 'observation'." He remained impressed with Bohm's hidden variables as an example of such a scheme and he attacked the more subjective alternatives such as the Copenhagen interpretation.
Teaching special theory of relativity
Bell and his wife, Mary Ross Bell, also a physicist, contributed substantially to the physics of particle accelerators, and with numerous young theorists at CERN, Bell developed particle physics itself. An overview of this work is available in the volume of collected works edited by Mary Bell, Kurt Gottfried, and Martinus Veltman.
Apart from his particle physics research, Bell often raised the issue of how special relativity is understood, and although only one written report on this topic is available ("How to teach special relativity"), it was a critical subject to him. Bell admired Einstein's contribution to special relativity, but warned in 1985 that "Einstein's approach is ... pedagogically dangerous, in my opinion". In 1989, on the occasion of the centenary of the Lorentz-FitzGerald body contraction, Bell wrote that "A great deal of nonsense has been written about the FitzGerald contraction". Bell preferred to think of the Lorentz-FitzGerald contraction as a phenomenon that is real and observable as a property of a material body, which was also Einstein's opinion, but in Bell's view Einstein's approach leaves a lot of room for misinterpretation. This situation and the background of Bell's position are described in detail by his collaborator Johann Rafelski in the textbook "Relativity Matters" (2017). To combat misconceptions surrounding the Lorentz-FitzGerald body contraction, Bell adopted and promoted a relativistic thought experiment which became widely known as Bell's spaceship paradox.
Death
Bell died unexpectedly of a cerebral hemorrhage in Geneva in 1990. Unknown to Bell, he had reportedly been nominated for a Nobel Prize that year. His contribution to the issues raised by EPR was significant. Some regard him as having demonstrated the failure of local realism (local hidden variables). Bell's own interpretation is that locality itself had met its demise.
Legacy
In 2008, the John Stewart Bell Prize was created by the Centre for Quantum Information and Quantum Control at the University of Toronto. The prize is awarded every other year for significant contributions first published during the six preceding years. The award recognizes major advances relating to the foundations of quantum mechanics and to the applications of these principles. In 2009, the first award was presented by Alain Aspect to Nicolas Gisin for his theoretical and experimental work on foundations and applications of quantum physics — notably quantum nonlocality, quantum cryptography, and quantum teleportation.
At the CERN site in Meyrin, close to Geneva, there is a street called Route Bell in honour of John Stewart Bell.
In 2016, his colleague from CERN, Reinhold Bertlmann, wrote a lengthy piece, "Bell's Universe: A Personal Recollection", explaining in some detail his amazement at finding out about Bell's paper on Bertlmann's socks, in which Bell compared the EPR paradox with socks.
A day named in his honour, 4 November, marks the date on which he published Bell's theorem.
Northern Ireland
Since 2015, a street has been named Bell's Theorem Crescent in his city of birth, Belfast.
The John Bell House, named in his honour, finished construction in 2016 and houses over 400 students in Belfast city centre.
The pedestrian entrance to the Olympia leisure centre in Belfast located 200 meters from Bell's childhood home is named the "John Stewart Bell Entrance" in honour of the local man.
In the Queen's University of Belfast one of the Physics lecture theatres is named in honour of John Stewart Bell.
There is a blue plaque commemorating John Stewart Bell on the main campus of Queen's University Belfast.
There is a blue plaque commemorating John Stewart Bell at his childhood home in Tates Avenue in Belfast.
In 2017 the Institute of Physics commissioned classical composer Matthew Whiteside's Quartet No 4 (Entangled), inspired by Bell's work, to be performed at the 2018 NI Science Festival; the piece went on to become the title track on Whiteside's second album and was the inspiration for a short film by Marisa Zanotti.
Books
Speakable and Unspeakable in Quantum Mechanics (1987); the 2004 edition includes an introduction by Alain Aspect and two additional papers.
See also
Epistemological Letters
EPR paradox, a thought experiment by Einstein, Podolsky, and Rosen published in 1935 as an attack on quantum theory
Local hidden-variable theory
Quantum entanglement
Bell's theorem, published in 1964
Bell state
Bell test experiments
CHSH inequality, an experiment-practical formulation of Bell's theorem
GHZ experiment
Superdeterminism
Other work by Bell:
Adler–Bell–Jackiw anomaly
Bell's spaceship paradox
Footnotes
References
External links
1928 births
1990 deaths
Alumni of Queen's University Belfast
Alumni of the University of Birmingham
People associated with CERN
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
Atheists from Northern Ireland
Particle physicists
Scientists from Belfast
20th-century physicists from Northern Ireland
British quantum physicists
Members of the American Academy of Arts and Letters
Translators from Russian
Translators to English | John Stewart Bell | [
"Physics"
] | 2,890 | [
"Particle physicists",
"Particle physics"
] |
169,934 | https://en.wikipedia.org/wiki/Modal%20particle | In linguistics, modal particles are always uninflected words, and are a type of grammatical particle. They are used to indicate how the speaker thinks that the content of the sentence relates to the participants' common knowledge or to add emotion to the meaning of the sentence. Languages that use many modal particles in their spoken form include Dutch, Danish, German, Hungarian, Russian, Telugu, Nepali, Norwegian, Indonesian, Sinitic languages, and Japanese. The translation is often not straightforward and depends on the context.
Examples
German
The German particle ja is used to indicate that a sentence contains information that is obvious or already known to both the speaker and the hearer. The sentence Der neue Teppich ist rot means "The new carpet is red". Der neue Teppich ist ja rot may thus mean "As we are both aware, the new carpet is red", which would typically be followed by some conclusion from this fact. However, if the speaker says the same thing upon first seeing the new carpet, the meaning is "I'm seeing that the carpet is obviously red", which would typically express surprise. In speech the latter meaning can be inferred from a strong emphasis on rot and higher-pitched voice.
Dutch
In Dutch, modal particles are frequently used to add mood to a sentence, especially in spoken language. For instance:
Politeness
Kan je even het licht aandoen? (literally: "Can you briefly turn on the light?" with the added "even" indicating that it will not take you long to do so.)
Weet u misschien waar het station is? ("Do you perhaps know where the train station is?") Misschien here denotes a very polite and friendly request: "Could you tell me the way to the train station, please?"
Wil je soms wat drinken? ("Do you occasionally want a drink?") Soms here conveys a sincere interest in the answer to a question: "I'm curious if you would like to drink something?"
Frustration
Doe het toch maar. ("Do it nevertheless, however.") Toch here indicates anger and maar lack of consideration: "I don't really care what you think, just do it!"
Ben je nou nog niet klaar? ("Are you still not ready yet?") Nou here denotes loss of patience: "Don't tell me you still haven't finished!"
Modal particles may be combined to indicate mood in a very precise way. In this combination of six modal particles, the first two emphasise the command, the second two tone down the command, and the final two transform the command into a request:
Luister dan nu toch maar eens even. ("Listen + at this moment + now + just + will you? + only once + only for a while", meaning: "Just listen, will you?")
Because of this progressive alteration these modal particles cannot move around freely when stacked in this kind of combination. However, some other modal particles can be added to the equation on any given place, such as gewoon, juist, trouwens. Also, replacing the "imperative weakener" maar by gewoon (indicating normalcy or acceptable behavior), changes the mood of the sentence completely, now indicating utter frustration with someone who is failing to do something very simple:
Luister dan nou toch gewoon eens even! ("For once, can you just simply listen for a minute?")
References
Parts of speech | Modal particle | [
"Technology"
] | 746 | [
"Parts of speech",
"Components"
] |
169,938 | https://en.wikipedia.org/wiki/Liberia%20Drug%20Enforcement%20Agency | The Liberia Drug Enforcement Agency (LDEA) is an agency established within the Liberian government on December 23, 1998, charged with fighting drug-related crimes. The LDEA is supervised by the Ministry of Justice and is charged with fighting drug trafficking at the country's borders, arresting traffickers and dealers, and destroying illegal drugs. The LDEA is not responsible for overseeing commerce in legal drugs and other pharmaceuticals; such substances are within the purview of the Pharmaceutical Board of Liberia.
History
Before its creation, fighting drug crimes was a responsibility of the Ministry of Defense. The agency began as the National Drug Committee of the Interim Government of National Unity; it was created in 1993 during the presidency of Amos Sawyer. Five years later, the committee was converted into its present form: President Charles Taylor signed a bill passed by the National Legislature that created the LDEA and patterned it after the Drug Enforcement Administration in the United States.
In 2011, the LDEA was headed by Director Henry Shaw; by 2012 he had been replaced by Anthony Souh. According to Souh, the agency suffers from substantial internal corruption. Directors are appointed by the President subject to confirmation by the Senate of Liberia.
References
External links
Government agencies established in 1998
Drug control law
Law enforcement in Liberia
Drugs in Liberia
Government agencies of Liberia | Liberia Drug Enforcement Agency | [
"Chemistry"
] | 265 | [
"Drug control law",
"Regulation of chemicals"
] |
169,942 | https://en.wikipedia.org/wiki/Troy%20weight | Troy weight is a system of units of mass that originated in the Kingdom of England in the 15th century and is primarily used in the precious metals industry. The troy weight units are the grain, the pennyweight (24 grains), the troy ounce (20 pennyweights), and the troy pound (12 troy ounces). The troy grain is equal to the grain unit of the avoirdupois system, but the troy ounce is heavier than the avoirdupois ounce, and the troy pound is lighter than the avoirdupois pound. Legally, one troy ounce (oz t) equals exactly 31.1034768 grams.
Etymology
Troy weight is generally supposed to take its name from the French market town of Troyes where English merchants traded at least as early as the early 9th century. The name troy is first attested in 1390, describing the weight of a platter, in an account of the travels in Europe of the Earl of Derby.
Charles Moore Watson (1844–1916) proposes an alternative etymology: the Assize of Weights and Measures, one of the statutes of uncertain date from the reign of either Henry III or Edward I, thus before 1307, specifies a term which the Public Record Commissioners translate as "troy weight", and which refers to markets. Wright's The English Dialect Dictionary lists the word troi as meaning a balance, related to the alternate form 'tron' which also means market or the place of weighing. From this, Watson suggests that 'troy' derives from the manner of weighing by balance precious goods such as bullion or drugs; in contrast to the word 'avoirdupois' used to describe bulk goods such as corn or coal, sometimes weighed in ancient times by a kind of steelyard called the auncel.
Troy weight referred to the Tower system; the earliest reference to the modern troy weights is in 1414.
History
The origin of the troy weight system is unknown, although the name probably comes from the Champagne fairs at Troyes, in northeastern France. English troy weights were nearly identical to the troy weight system of Bremen. (The Bremen troy ounce had a mass of 480.8 British Imperial grains.)
Many aspects of the troy weight system were indirectly derived from the Roman monetary system. Before they used coins, early Romans used bronze bars of varying weights as currency. An aes grave ("heavy bronze") weighed one pound. One twelfth of an aes grave was called an uncia, or in English, an "ounce". Before the adoption of the metric system, many systems of troy weights were in use in various parts of Europe, among them Holland troy, Paris troy, etc. Their values varied from one another by up to several percentage points. Troy weights were first used in England in the 15th century and were made official for gold and silver in 1527. The British Imperial system of weights and measures (also known as Imperial units) was established in 1824, prior to which the troy weight system was a subset of pre-Imperial English units.
The troy ounce in modern use is essentially the same as the British Imperial troy ounce (1824–1971), adopted as an official weight standard for United States coinage by act of Congress on May 19, 1828. The British Imperial troy ounce (known more commonly simply as the imperial troy ounce) was based on, and virtually identical with, the pre-1824 British troy ounce and the pre-1707 English troy ounce. (1824 was the year the British Imperial system of weights and measures was adopted; 1707 was the year of the Act of Union which created the Kingdom of Great Britain.) Troy ounces have been used in England since the early 15th century, and the English troy ounce was officially adopted for coinage in 1527. Before that time, various sorts of troy ounces were in use on the continent.
The troy ounce and grain were also part of the apothecaries' system. This was long used in medicine, but has been largely replaced by the metric system (milligrams). The only troy weight in widespread use is the British Imperial troy ounce and its American counterpart. Both are based on a grain of 0.06479891 gram (exact, by definition), with 480 grains to a troy ounce (compared with 437.5 grains for an ounce avoirdupois). The British Empire abolished the 12-ounce troy pound in the 19th century. It has been retained, though rarely used, in the American system. Larger amounts of precious metals are conventionally counted in hundreds or thousands of troy ounces, or in kilograms.
Troy ounces have been and are still often used in precious metal markets in countries that otherwise use the International System of Units (SI). However, the People's Bank of China, which had been using troy measurements in minting Gold Pandas since 1982, has since 2016 specified Chinese bullion coins in integer numbers of grams.
Units of measurement
Troy pound (lb t)
The troy pound (lb t) consists of twelve troy ounces and thus is 5,760 grains (about 373.24 grams). (An avoirdupois pound is approximately 21.53% heavier at 7,000 grains (about 453.59 grams), and consists of sixteen avoirdupois ounces.)
Troy ounce (oz t)
A troy ounce weighs 480 grains. Since the implementation of the international yard and pound agreement of 1 July 1959, the grain measure is defined as precisely 0.06479891 gram. Thus one troy ounce = 480 grains × 0.06479891 g/grain = 31.1034768 g. Since the ounce avoirdupois is defined as 437.5 grains, a troy ounce is exactly 480/437.5 = 192/175 ≈ 1.09714 ounces avoirdupois, or about 9.7% more. The troy ounce for trading precious metals is considered to be sufficiently approximated by 31.10 g in EU directive 80/181/EEC.
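The arithmetic above can be restated in a few lines. The following minimal Python sketch uses only the definitions given in this article (grain, troy ounce, avoirdupois ounce); the constant names are illustrative, not part of any standard library.

GRAIN_G = 0.06479891              # grams per grain, exact by definition
TROY_OUNCE_G = 480 * GRAIN_G      # 31.1034768 g
AVDP_OUNCE_G = 437.5 * GRAIN_G    # 28.349523125 g (avoirdupois ounce)
PENNYWEIGHT_G = 24 * GRAIN_G      # 1.55517384 g; 20 pennyweights make a troy ounce
TROY_POUND_G = 12 * TROY_OUNCE_G  # 373.2417216 g
print(TROY_OUNCE_G / AVDP_OUNCE_G)  # 1.0971428..., i.e. the troy ounce is about 9.7% heavier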
The Dutch troy system is based on a mark of 8 ounces, the ounce of 20 engels (pennyweights), the engel of 32 as. The mark was rated as 3,798 troy grains or 246.084 grams. The divisions are identical to the tower system.
Pennyweight (dwt)
The pennyweight symbol is dwt. One pennyweight weighs 24 grains, and 20 pennyweights make one troy ounce. Because there were 12 troy ounces in the old troy pound, there would have been 240 pennyweights to the pound (mass) just as there were 240 pennies in the original pound-sterling. However, prior to 1526, the English pound sterling was based on the tower pound, which is 15/16 of a troy pound. The d in dwt stands for denarius, the ancient Roman coin that equates loosely to a penny. The symbol d for penny can be recognized in the form of British pre-decimal pennies, in which pounds, shillings, and pence were indicated using the symbols £, s, and d, respectively.
Troy grain
There is no specific 'troy grain'. All Imperial systems use the same measure of mass called a grain (historically the weight of a grain of barley), each weighing 1/7000 of an avoirdupois pound (and thus a little under 65 milligrams).
Mint masses
Mint masses, also known as moneyers' masses, were legalized by Act of Parliament dated 17 July 1649 entitled An Act touching the monies and coins of England. A grain is 20 mites, a mite is 24 droits, a droit is 20 perits, a perit is 24 blanks.
Conversions
The troy system was used in the apothecaries' system but with different further subdivisions.
See also
Bullion coin
Carat (mass)
Conversion of units
Fluid ounce
Mark (unit)
Tola (unit), a traditional unit of mass equal to exactly 3/8 of a troy ounce
United States customary units
Explanatory footnotes
References
Precious metals
Systems of units
Units of mass
Customary units of measurement in the United States | Troy weight | [
"Physics",
"Mathematics"
] | 1,600 | [
"Matter",
"Systems of units",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
169,945 | https://en.wikipedia.org/wiki/Rounding | Rounding or rounding off means replacing a number with an approximate value that has a shorter, simpler, or more explicit representation. For example, replacing $ with $, the fraction 312/937 with 1/3, or the expression √2 with .
Rounding is often done to obtain a value that is easier to report and communicate than the original. Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement, or estimate; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is usually better stated as "about 123,500".
On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits. In a sequence of calculations, these rounding errors generally accumulate, and in certain ill-conditioned cases they may make the result meaningless.
Accurate rounding of transcendental mathematical functions is difficult because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance. This problem is known as "the table-maker's dilemma".
Rounding has many similarities to the quantization that occurs when physical quantities must be encoded by numbers or digital signals.
A wavy equals sign (≈, approximately equal to) is sometimes used to indicate rounding of exact numbers, e.g. 9.98 ≈ 10. This sign was introduced by Alfred George Greenhill in 1892.
Ideal characteristics of rounding methods include:
Rounding should be done by a function. This way, when the same input is rounded in different instances, the output is unchanged.
Calculations done with rounding should be close to those done without rounding.
As a result of (1) and (2), the output from rounding should be close to its input, often as close as possible by some metric.
To be considered rounding, the range will be a subset of the domain, in general discrete. A classical range is the integers, Z.
Rounding should preserve symmetries that already exist between the domain and range. With finite precision (or a discrete domain), this translates to removing bias.
A rounding method should have utility in computer science or human arithmetic where finite precision is used, and speed is a consideration.
Because it is not usually possible for a method to satisfy all ideal characteristics, many different rounding methods exist.
As a general rule, rounding is idempotent; i.e., once a number has been rounded, rounding it again to the same precision will not change its value. Rounding functions are also monotonic; i.e., rounding two numbers to the same absolute precision will not exchange their order (but may give the same value). In the general case of a discrete range, they are piecewise constant functions.
Types of rounding
Typical rounding problems include:
Rounding to integer
The most basic form of rounding is to replace an arbitrary number by an integer. All the following rounding modes are concrete implementations of an abstract single-argument "round()" procedure. These are true functions (with the exception of those that use randomness).
Directed rounding to an integer
These four methods are called directed rounding to an integer, as the displacements from the original number to the rounded value are all directed toward or away from the same limiting value (0, +∞, or −∞). Directed rounding is used in interval arithmetic and is often required in financial calculations.
If x is positive, round-down is the same as round-toward-zero, and round-up is the same as round-away-from-zero. If x is negative, round-down is the same as round-away-from-zero, and round-up is the same as round-toward-zero. In any case, if x is an integer, the rounded value is just x.
Where many calculations are done in sequence, the choice of rounding method can have a very significant effect on the result. A famous instance involved a new index set up by the Vancouver Stock Exchange in 1982. It was initially set at 1000.000 (three decimal places of accuracy), and after 22 months had fallen to about 520, although the market appeared to be rising. The problem was caused by the index being recalculated thousands of times daily, and always being truncated (rounded down) to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating the index for the same period using rounding to the nearest thousandth rather than truncation corrected the index value from 524.811 up to 1098.892.
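The effect can be reproduced with a small, purely illustrative Python simulation; the fluctuations below are hypothetical stand-ins, not the actual index data, and the decimal module is used so that the only rounding involved is the intended decimal rounding.

from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN
import random

random.seed(1)
truncated = nearest = Decimal("1000.000")
for _ in range(10_000):                                 # stand-in for repeated recalculation
    change = Decimal(random.randint(-5, 5)) / 10_000    # small, symmetric fluctuation
    truncated = (truncated + change).quantize(Decimal("0.001"), rounding=ROUND_DOWN)
    nearest = (nearest + change).quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN)
# `truncated` ends up several units lower than `nearest`: truncation throws away up to
# a thousandth at almost every step, and the losses all point downward instead of cancelling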
For the examples below, sgn(x) refers to the sign function applied to the original number, x.
Rounding down
One may round down (or take the floor, or round toward negative infinity): the result is the largest integer that does not exceed x.
For example, 23.7 gets rounded to 23, and −23.2 gets rounded to −24.
Rounding up
One may also round up (or take the ceiling, or round toward positive infinity): the result is the smallest integer that is not less than x.
For example, 23.2 gets rounded to 24, and −23.7 gets rounded to −23.
Rounding toward zero
One may also round toward zero (or truncate, or round away from infinity): the result is the integer closest to x that lies between 0 and x (inclusive); i.e. it is the integer part of x, without its fraction digits.
For example, 23.7 gets rounded to 23, and −23.7 gets rounded to −23.
Rounding away from zero
One may also round away from zero (or round toward infinity): the result is the integer closest to 0 (or equivalently, to x) such that x lies between 0 and that integer (inclusive).
For example, 23.2 gets rounded to 24, and −23.2 gets rounded to −24.
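The four directed modes map directly onto standard library functions; a minimal Python sketch follows (round-away-from-zero has no built-in, so it is composed from ceil and copysign, and the helper name is ad hoc).

import math

def round_away_from_zero(x):
    # directed rounding away from zero: ceiling of the magnitude, sign restored
    return int(math.copysign(math.ceil(abs(x)), x))

math.floor(23.7), math.floor(-23.2)                       # 23, -24  (round down)
math.ceil(23.2), math.ceil(-23.7)                         # 24, -23  (round up)
math.trunc(23.7), math.trunc(-23.7)                       # 23, -23  (round toward zero)
round_away_from_zero(23.2), round_away_from_zero(-23.2)   # 24, -24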
Rounding to the nearest integer
These six methods are called rounding to the nearest integer. Rounding a number to the nearest integer requires some tie-breaking rule for those cases when x is exactly half-way between two integers – that is, when the fraction part of x is exactly 0.5.
If it were not for the 0.5 fractional parts, the round-off errors introduced by the round to nearest method would be symmetric: for every fraction that gets rounded down (such as 0.268), there is a complementary fraction (namely, 0.732) that gets rounded up by the same amount.
When rounding a large set of fixed-point numbers with uniformly distributed fractional parts, the rounding errors by all values, with the omission of those having 0.5 fractional part, would statistically compensate each other. This means that the expected (average) value of the rounded numbers is equal to the expected value of the original numbers when numbers with fractional part 0.5 from the set are removed.
In practice, floating-point numbers are typically used, which have even more computational nuances because they are not equally spaced.
Rounding half up
One may round half up (or round half toward positive infinity), a tie-breaking rule that is widely used in many disciplines. That is, half-way values of x are always rounded up. If the fractional part of x is exactly 0.5, then the result is x + 0.5.
For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −23.
Some programming languages (such as Java and Python) use "half up" to refer to round half away from zero rather than round half toward positive infinity.
This method only requires checking one digit to determine rounding direction in two's complement and similar representations.
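A minimal Python sketch of this rule (ties toward positive infinity). It assumes exact arithmetic on the tie; with binary floating point a value such as 0.49999999999999994 plus 0.5 rounds up to 1.0, so exact types (decimal, fractions) are safer when ties matter.

import math

def round_half_up(x):
    # ties go toward positive infinity: 23.5 -> 24, -23.5 -> -23
    return math.floor(x + 0.5)

round_half_up(23.5), round_half_up(-23.5)   # 24, -23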
Rounding half down
One may also round half down (or round half toward negative infinity) as opposed to the more common round half up. If the fractional part of x is exactly 0.5, then the result is x − 0.5.
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −24.
Some programming languages (such as Java and Python) use "half down" to refer to round half toward zero rather than round half toward negative infinity.
Rounding half toward zero
One may also round half toward zero (or round half away from infinity) as opposed to the conventional round half away from zero. If the fractional part of x is exactly 0.5, then the result is x − 0.5 if x is positive, and x + 0.5 if x is negative.
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −23.
This method treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias toward zero.
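A Python sketch of this tie-breaking rule; note the name clash with the decimal module, whose ROUND_HALF_DOWN mode means exactly this (ties toward zero), not the "half down" of the previous subsection.

import math

def round_half_toward_zero(x):
    # ties move toward zero: 23.5 -> 23, -23.5 -> -23
    return int(math.copysign(math.ceil(abs(x) - 0.5), x))

round_half_toward_zero(23.5), round_half_toward_zero(-23.5)   # 23, -23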
Rounding half away from zero
One may also round half away from zero (or round half toward infinity), a tie-breaking rule that is commonly taught and used, namely: If the fractional part of x is exactly 0.5, then the result is x + 0.5 if x is positive, and x − 0.5 if x is negative.
For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −24.
This can be more efficient on computers that use sign-magnitude representation for the values to be rounded, because only the first omitted digit needs to be considered to determine if it rounds up or down. This is one method used when rounding to significant figures due to its simplicity.
This method, also known as commercial rounding, treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias away from zero.
It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro), as it is easy to explain by just considering the first fractional digit, independently of supplementary precision digits or the sign of the amount (for strict equivalence between the payer and the recipient of the amount).
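A Python sketch of commercial rounding; as noted above, this is the behaviour that Python's decimal module and Java's RoundingMode label HALF_UP, and the decimal form avoids binary-representation surprises for currency work.

import math
from decimal import Decimal, ROUND_HALF_UP

def round_half_away_from_zero(x):
    # ties move away from zero: 23.5 -> 24, -23.5 -> -24
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

round_half_away_from_zero(23.5), round_half_away_from_zero(-23.5)   # 24, -24
Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)  # Decimal('2.68')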
Rounding half to even
One may also round half to even, a tie-breaking rule without positive/negative bias and without bias toward/away from zero. By this convention, if the fractional part of x is 0.5, then the result is the even integer nearest to x. Thus, for example, 23.5 becomes 24, as does 24.5; however, −23.5 becomes −24, as does −24.5. This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative, provided they are neither mostly even nor mostly odd.
This variant of the round-to-nearest method is also called convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd–even rounding, or bankers' rounding.
This is the default rounding mode used in IEEE 754 operations for results in binary floating-point formats.
By eliminating bias, repeated addition or subtraction of independent numbers, as in a one-dimensional random walk, will give a rounded result with an error that tends to grow in proportion to the square root of the number of operations rather than linearly.
However, this rule distorts the distribution by increasing the probability of evens relative to odds. Typically this is less important than the biases that are eliminated by this method.
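This is the behaviour of Python's built-in round() and of the decimal module's ROUND_HALF_EVEN mode. A short illustration; the float examples rely on the halves being exactly representable in binary (a value like 2.675 is not, so decimal is the safer choice when exact ties matter).

from decimal import Decimal, ROUND_HALF_EVEN

round(23.5), round(24.5)      # 24, 24  - both ties go to the even neighbour
round(-23.5), round(-24.5)    # -24, -24
Decimal("24.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)   # Decimal('24')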
Rounding half to odd
One may also round half to odd, a similar tie-breaking rule to round half to even. In this approach, if the fractional part of x is 0.5, then the result is the odd integer nearest to x. Thus, for example, 23.5 becomes 23, as does 22.5; while −23.5 becomes −23, as does −22.5.
This method is also free from positive/negative bias and bias toward/away from zero, provided the numbers to be rounded are neither mostly even nor mostly odd. It also shares the round half to even property of distorting the original distribution, as it increases the probability of odds relative to evens. It was the method used for bank balances in the United Kingdom when it decimalized its currency.
This variant is almost never used in computations, except in situations where one wants to avoid increasing the scale of floating-point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for numeral systems of even radix (such as binary and decimal).
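There is no standard library mode for this rule, so a small hand-rolled Python sketch (the function name is ad hoc, and it assumes the ties are exactly representable, as .5 fractions of modest size are).

import math

def round_half_to_odd(x):
    r = math.floor(x + 0.5)                    # start from round-half-up
    if x - math.floor(x) == 0.5 and r % 2 == 0:
        r -= 1                                 # a tie landed on an even value: step back to the odd neighbour
    return r

round_half_to_odd(23.5), round_half_to_odd(22.5)     # 23, 23
round_half_to_odd(-23.5), round_half_to_odd(-22.5)   # -23, -23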
Rounding to prepare for shorter precision
This rounding mode is used to avoid getting a potentially wrong result after multiple roundings. This can be achieved if all roundings except the final one are done using rounding to prepare for shorter precision ("RPSP"), and only the final rounding uses the externally requested mode.
With decimal arithmetic, final digits of 0 and 5 are avoided; if there is a choice between numbers with the least significant digit 0 or 1, 4 or 5, 5 or 6, 9 or 0, then the digit different from 0 or 5 shall be selected; otherwise, the choice is arbitrary. IBM defines that, in the latter case, a digit with the smaller magnitude shall be selected. RPSP can be applied with the step between two consecutive roundings as small as a single digit (for example, rounding to 1/10 can be applied after rounding to 1/100).
For example, when rounding to integer,
20.0 is rounded to 20;
20.01, 20.1, 20.9, 20.99, 21, 21.01, 21.9, 21.99 are rounded to 21;
22.0, 22.1, 22.9, 22.99 are rounded to 22;
24.0, 24.1, 24.9, 24.99 are rounded to 24;
25.0 is rounded to 25;
25.01, 25.1 are rounded to 26.
In the example from "Double rounding" section, rounding 9.46 to one decimal gives 9.4, which rounding to integer in turn gives 9.
With binary arithmetic, this rounding is also called "round to odd" (not to be confused with "round half to odd"). For example, when rounding to 1/4 (0.01 in binary),
x = 2.0 ⇒ result is 2 (10.00 in binary)
2.0 < x < 2.5 ⇒ result is 2.25 (10.01 in binary)
x = 2.5 ⇒ result is 2.5 (10.10 in binary)
2.5 < x < 3.0 ⇒ result is 2.75 (10.11 in binary)
x = 3.0 ⇒ result is 3 (11.00 in binary)
For correct results, each rounding step must remove at least 2 binary digits, otherwise, wrong results may appear. For example,
3.125 RPSP to 1/4 ⇒ result is 3.25
3.25 RPSP to 1/2 ⇒ result is 3.5
3.5 round-half-to-even to 1 ⇒ result is 4 (wrong)
If the erroneous middle step is removed, the final rounding to integer rounds 3.25 to the correct value of 3.
RPSP is implemented in hardware in IBM zSeries and pSeries.
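For decimal arithmetic, Python's decimal module offers this behaviour directly as the ROUND_05UP mode (round toward zero unless the truncated result would end in 0 or 5, in which case round away from zero). A short check against the integer-rounding examples above:

from decimal import Decimal, ROUND_05UP

for s in ("20.0", "20.01", "21.99", "24.9", "25.0", "25.1"):
    print(s, "->", Decimal(s).quantize(Decimal("1"), rounding=ROUND_05UP))
# 20.0 -> 20, 20.01 -> 21, 21.99 -> 21, 24.9 -> 24, 25.0 -> 25, 25.1 -> 26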
Randomized rounding to an integer
Alternating tie-breaking
One method, more obscure than most, is to alternate direction when rounding a number with 0.5 fractional part. All others are rounded to the closest integer. Whenever the fractional part is 0.5, alternate rounding up or down: for the first occurrence of a 0.5 fractional part, round up, for the second occurrence, round down, and so on. Alternatively, the first 0.5 fractional part rounding can be determined by a random seed. "Up" and "down" can be any two rounding methods that oppose each other - toward and away from positive infinity or toward and away from zero.
If 0.5 fractional parts occur significantly more often than restarts of the occurrence "counting", then the method is effectively bias free. With guaranteed zero bias, it is useful if the numbers are to be summed or averaged.
Random tie-breaking
If the fractional part of is 0.5, choose randomly between and , with equal probability. All others are rounded to the closest integer.
Like round-half-to-even and round-half-to-odd, this rule is essentially free of overall bias, but it is also fair among even and odd values. An advantage over alternate tie-breaking is that the last direction of rounding on the 0.5 fractional part does not have to be "remembered".
Stochastic rounding
Rounding to either the closest integer toward negative infinity or the closest integer toward positive infinity, with a probability dependent on the proximity, is called stochastic rounding, and will give an unbiased result on average.
For example, 1.6 would be rounded to 1 with probability 0.4 and to 2 with probability 0.6.
Stochastic rounding can be accurate in a way that a rounding function can never be. For example, suppose one started with 0 and added 0.3 to that one hundred times while rounding the running total between every addition. The result would be 0 with regular rounding, but with stochastic rounding, the expected result would be 30, which is the same value obtained without rounding. This can be useful in machine learning where the training may use low precision arithmetic iteratively. Stochastic rounding is also a way to achieve 1-dimensional dithering.
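A minimal Python sketch of stochastic rounding together with the running-total experiment described above (results vary from run to run by design; the helper name is ad hoc).

import math, random

def stochastic_round(x):
    lo = math.floor(x)
    # round up with probability equal to the fractional part, down otherwise
    return lo + (1 if random.random() < x - lo else 0)

total = 0
for _ in range(100):
    total = stochastic_round(total + 0.3)
print(total)   # close to 30 on average; plain round-to-nearest would leave the total stuck at 0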
Comparison of approaches for rounding to an integer
Rounding to other values
Rounding to a specified multiple
The most common type of rounding is to round to an integer; or, more generally, to an integer multiple of some increment – such as rounding to whole tenths of seconds, hundredths of a dollar, to whole multiples of 1/2 or 1/8 inch, to whole dozens or thousands, etc.
In general, rounding a number x to a multiple of some specified positive value m entails the following steps: divide x by m, round the quotient to an integer with one of the rules above, and multiply the result by m.
For example, rounding x = 2.1784 dollars to whole cents (i.e., to a multiple of 0.01) entails computing 2.1784 / 0.01 = 217.84, then rounding that to 218, and finally computing 218 × 0.01 = 2.18.
When rounding to a predetermined number of significant digits, the increment depends on the magnitude of the number to be rounded (or of the rounded result).
The increment is normally a finite fraction in whatever numeral system is used to represent the numbers. For display to humans, that usually means the decimal numeral system (that is, m is an integer times a power of 10, like 1/1000 or 25/100). For intermediate values stored in digital computers, it often means the binary numeral system (m is an integer times a power of 2).
The abstract single-argument "round()" function that returns an integer from an arbitrary real value has at least a dozen distinct concrete definitions presented in the rounding to integer section. The abstract two-argument "roundToMultiple()" function is formally defined here, but in many cases it is used with the implicit value m = 1 for the increment and then reduces to the equivalent abstract single-argument function, with also the same dozen distinct concrete definitions.
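A minimal Python sketch of the two-argument idea, with the built-in round supplying the middle step; round_to_multiple is an ad-hoc name, and binary floating point can leave a tiny representation error on the final product, so exact decimal types are preferable for money.

def round_to_multiple(x, increment, round_to_int=round):
    # divide, round to an integer with any of the rules above, multiply back
    return round_to_int(x / increment) * increment

round_to_multiple(2.1784, 0.01)   # 2.18 (up to binary floating-point representation error)
round_to_multiple(17, 5)          # 15
round_to_multiple(0.3, 0.125)     # 0.25, i.e. the nearest eighth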
Logarithmic rounding
Rounding to a specified power
Rounding to a specified power is very different from rounding to a specified multiple; for example, it is common in computing to need to round a number to a whole power of 2. The steps, in general, to round a positive number x to a power of some positive number b other than 1, are: take the logarithm of x to base b, round that logarithm to an integer using the desired rounding rule, and raise b to the rounded integer.
Many of the caveats applicable to rounding to a multiple are applicable to rounding to a power.
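For the common computing case of powers of two, the logarithm-based steps reduce to a couple of lines; a Python sketch with ad-hoc names (for very large exact integers, integer methods such as int.bit_length avoid the floating-point logarithm).

import math

def next_power_of_two(x):
    # smallest power of 2 that is >= x, for x > 0 (round the exponent up)
    return 2 ** math.ceil(math.log2(x))

def nearest_power_of_two(x):
    # power of 2 nearest to x on a logarithmic scale, for x > 0
    return 2 ** round(math.log2(x))

next_power_of_two(33), nearest_power_of_two(33)   # 64, 32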
Scaled rounding
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
For example, resistors are supplied with preferred numbers on a logarithmic scale. In particular, for resistors with a 10% accuracy, they are supplied with nominal values 100, 120, 150, 180, 220, etc. rounded to multiples of 10 (E12 series). If a calculation indicates a resistor of 165 ohms is required then log(150) ≈ 2.176, log(165) ≈ 2.217, and log(180) ≈ 2.255. The logarithm of 165 is closer to the logarithm of 180, therefore a 180 ohm resistor would be the first choice if there are no other considerations.
Whether a value x rounds to the lower preferred value y1 or the higher preferred value y2 depends upon whether the squared value x² is greater than or less than the product y1·y2. The value 165 rounds to 180 in the resistors example because 165² = 27,225 is greater than 150 × 180 = 27,000.
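The resistor example can be reproduced by comparing logarithms, which is equivalent to the geometric-mean test just described. In this Python sketch the E12 list holds the standard 10% preferred values for one decade and round_log_scale is an ad-hoc name; the function simply picks the candidate nearest in ratio.

import math

E12 = [100, 120, 150, 180, 220, 270, 330, 390, 470, 560, 680, 820]

def round_log_scale(value, choices=E12):
    # nearest preferred value on a logarithmic scale, i.e. nearest in ratio
    return min(choices, key=lambda c: abs(math.log(value / c)))

round_log_scale(165)   # 180, because 165**2 = 27225 exceeds 150 * 180 = 27000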
Floating-point rounding
In floating-point arithmetic, rounding aims to turn a given value x into a value y with a specified number of digits. In other words, y should be a multiple of a number m that depends on the magnitude of x. The number m is a power of the base (usually 2 or 10) of the floating-point representation.
Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the Scaled rounding section above, but with a constant scaling factor and an integer base.
Where the rounded result would overflow, the result for a directed rounding is either the appropriate signed infinity when "rounding away from zero", or the highest representable positive finite number (or the lowest representable negative finite number if x is negative) when "rounding toward zero". The result of an overflow for the usual case of round to nearest is always the appropriate infinity.
Rounding to a simple fraction
In some contexts it is desirable to round a given number to a "neat" fraction – that is, the nearest fraction whose numerator and denominator do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit. This problem is related to Farey sequences, the Stern–Brocot tree, and continued fractions.
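Python's fractions module exposes exactly this operation as Fraction.limit_denominator, which finds the closest fraction whose denominator does not exceed a given bound; the example below reuses the 312/937 → 1/3 rounding from the lead of this article.

from fractions import Fraction

Fraction(312, 937).limit_denominator(10)   # Fraction(1, 3)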
Rounding to an available value
Finished lumber, writing paper, electronic components, and many other products are usually sold in only a few standard values.
Many design procedures describe how to calculate an approximate value, and then "round" to some standard size using phrases such as "round down to nearest standard value", "round up to nearest standard value", or "round to nearest standard value".
When a set of preferred values is equally spaced on a logarithmic scale, choosing the closest preferred value to any given value can be seen as a form of scaled rounding. Such rounded values can be directly calculated.
Arbitrary bins
More general rounding rules can separate values at arbitrary break points, used for example in data binning. A related mathematically formalized tool is signpost sequences, which use notions of distance other than the simple difference – for example, a sequence may round to the integer with the smallest relative (percent) error.
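For arbitrary break points the rounding step is just a sorted-list lookup; a small Python sketch using the standard bisect module, with purely hypothetical bin edges and an ad-hoc helper name.

import bisect

edges = [0, 10, 25, 50, 100]        # hypothetical, unevenly spaced break points

def bin_index(x, edges=edges):
    # index of the bin whose half-open interval [edges[i], edges[i+1]) contains x
    return bisect.bisect_right(edges, x) - 1

bin_index(7), bin_index(25), bin_index(99)   # 0, 2, 3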
Rounding in other contexts
Dithering and error diffusion
When digitizing continuous signals, such as sound waves, the overall effect of a number of measurements is more important than the accuracy of each individual measurement. In these circumstances, dithering, and a related technique, error diffusion, are normally used. A related technique called pulse-width modulation is used to achieve analog type output from an inertial device by rapidly pulsing the power with a variable duty cycle.
Error diffusion tries to ensure the error, on average, is minimized. When dealing with a gentle slope from one to zero, the output would be zero for the first few terms until the sum of the error and the current value becomes greater than 0.5, in which case a 1 is output and the difference subtracted from the error so far. Floyd–Steinberg dithering is a popular error diffusion procedure when digitizing images.
As a one-dimensional example, suppose a sequence of numbers occurs in order and each is to be rounded to a multiple of some fixed step. In this approach the cumulative sums of the numbers are rounded to multiples of the step, and then the first rounded sum together with the differences of adjacent rounded sums give the desired rounded values, so that the rounded total stays within one step of the true total.
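A minimal sketch of this cumulative-sum approach is shown below; the input numbers and the function name are illustrative only.

```python
# Illustrative sketch of one-dimensional error diffusion by rounding cumulative
# sums (the data and step size are made-up values, not the article's example).
def diffuse_round(values, step):
    rounded, cumulative, prev_rounded_sum = [], 0.0, 0.0
    for v in values:
        cumulative += v
        rounded_sum = round(cumulative / step) * step
        rounded.append(rounded_sum - prev_rounded_sum)   # difference of adjacent rounded sums
        prev_rounded_sum = rounded_sum
    return rounded

data = [0.4, 0.4, 0.4, 0.4]
print(diffuse_round(data, 1))    # [0.0, 1.0, 0.0, 1.0]: total 2, close to the true 1.6
print([round(v) for v in data])  # [0, 0, 0, 0]: rounding each value alone loses the total
```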
Monte Carlo arithmetic
Monte Carlo arithmetic is a technique in Monte Carlo methods where the rounding is randomly up or down. Stochastic rounding can be used for Monte Carlo arithmetic, but in general, just rounding up or down with equal probability is more often used. Repeated runs will give a random distribution of results which can indicate the stability of the computation.
Exact computation with rounded arithmetic
It is possible to use rounded arithmetic to evaluate the exact value of a function with integer domain and range. For example, if an integer n is known to be a perfect square, its square root can be computed by converting n to a floating-point value x, computing the approximate square root y of x with floating point, and then rounding y to the nearest integer. If n is not too big, the floating-point round-off error in y will be less than 0.5, so the rounded value will be the exact square root of n. This is essentially why slide rules could be used for exact arithmetic.
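As a rough illustration of this idea, the following sketch assumes the input is a perfect square small enough that the double-precision round-off error stays below 0.5.

```python
import math

# Minimal sketch, assuming n is a perfect square small enough that the
# floating-point error in the square root is below 0.5.
def exact_sqrt_of_square(n):
    y = math.sqrt(float(n))   # approximate square root in floating point
    return round(y)           # rounding to the nearest integer recovers the exact root

print(exact_sqrt_of_square(144))          # 12
print(exact_sqrt_of_square(12345 ** 2))   # 12345
```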
Double rounding
Rounding a number twice in succession to different levels of precision, with the latter precision being coarser, is not guaranteed to give the same result as rounding once to the final precision except in the case of directed rounding. For instance rounding 9.46 to one decimal gives 9.5, and then 10 when rounding to integer using rounding half to even, but would give 9 when rounded to integer directly. Borman and Chatfield discuss the implications of double rounding when comparing data rounded to one decimal place to specification limits expressed using integers.
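The 9.46 example can be reproduced directly in Python, whose built-in round uses round half to even:

```python
x = 9.46
print(round(round(x, 1)))  # 10: 9.46 -> 9.5 -> 10 (double rounding)
print(round(x))            # 9: rounding once to an integer gives a different result
```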
In Martinez v. Allstate and Sendejo v. Farmers, litigated between 1995 and 1997, the insurance companies argued that double rounding premiums was permissible and in fact required. The US courts ruled against the insurance companies and ordered them to adopt rules to ensure single rounding.
Some computer languages and the IEEE 754-2008 standard dictate that in straightforward calculations the result should not be rounded twice. This has been a particular problem with Java, as it is designed to be run identically on different machines; special programming tricks have had to be used to achieve this with x87 floating point. The Java language was changed to allow different results where the difference does not matter and to require a strictfp qualifier when the results have to conform accurately; strict floating point has been restored in Java 17.
In some algorithms, an intermediate result is computed in a larger precision and then must be rounded to the final precision. Double rounding can be avoided by choosing an adequate rounding for the intermediate computation. This consists of avoiding rounding to midpoints in the final rounding (except when the midpoint is exact). In binary arithmetic, the idea is to round the result toward zero, and set the least significant bit to 1 if the rounded result is inexact; this rounding is called sticky rounding. Equivalently, it consists of returning the intermediate result when it is exactly representable, and the nearest floating-point number with an odd significand otherwise; this is why it is also known as rounding to odd. A concrete implementation of this approach, for binary and decimal arithmetic, is implemented as Rounding to prepare for shorter precision.
Table-maker's dilemma
William M. Kahan coined the term "The Table-Maker's Dilemma" for the unknown cost of correctly rounding transcendental functions.
The IEEE 754 floating-point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating-point remainder will give the correctly rounded result of the infinite-precision operation. No such guarantee was given in the 1985 standard for more complex functions and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional.
Using the Gelfond–Schneider theorem and Lindemann–Weierstrass theorem, many of the standard elementary functions can be proved to return transcendental results, except on some well-known arguments; therefore, from a theoretical point of view, it is always possible to correctly round such functions. However, for an implementation of such a function, determining a limit for a given precision on how accurate results need to be computed, before a correctly rounded result can be guaranteed, may demand a lot of computation time or may be out of reach. In practice, when this limit is not known (or only a very large bound is known), some decision has to be made in the implementation (see below); but according to a probabilistic model, correct rounding can be satisfied with a very high probability when using an intermediate accuracy of up to twice the number of digits of the target format plus some small constant (after taking special cases into account).
Some programming packages offer correct rounding. The GNU MPFR package gives correctly rounded arbitrary precision results. Some other libraries implement elementary functions with correct rounding in IEEE 754 double precision (binary64):
IBM's ml4j, which stands for Mathematical Library for Java, written by Abraham Ziv and Moshe Olshansky in 1999, correctly rounded to nearest only. This library was claimed to be portable, but only binaries for PowerPC/AIX, SPARC/Solaris and x86/Windows NT were provided. According to its documentation, this library uses a first step with an accuracy a bit larger than double precision, a second step based on double-double arithmetic, and a third step with a 768-bit precision based on arrays of IEEE 754 double-precision floating-point numbers.
IBM's Accurate portable mathematical library (abbreviated as APMathLib or just MathLib), also called libultim, in rounding to nearest only. This library uses up to 768 bits of working precision. It was included in the GNU C Library in 2001, but the "slow paths" (providing correct rounding) were removed from 2018 to 2021.
CRlibm, written by the former Arénaire team (LIP, ENS Lyon), first distributed in 2003. It supports the 4 rounding modes and is proved correct, using knowledge of the hardest-to-round cases. More efficient than IBM's MathLib. Succeeded by Metalibm (2014), which automates the formal proofs.
Sun Microsystems's libmcr of 2004, in the 4 rounding modes. For the difficult cases, this library also uses multiple precision, and the number of words is increased by 2 each time the Table-maker's dilemma occurs (with undefined behavior in the very unlikely event that some limit of the machine is reached).
The CORE-MATH project (2022) provides some correctly rounded functions in the 4 rounding modes for x86-64 processors. Proved using the knowledge of the hardest-to-round cases.
LLVM libc provides some correctly rounded functions in the 4 rounding modes.
There exist computable numbers for which a rounded value can never be determined no matter how many digits are calculated. Specific instances cannot be given, but this follows from the undecidability of the halting problem. For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value y up to the next integer cannot be determined: either y = 1 + 10^(−n), where n is the first even number greater than 4 which is not the sum of two primes, or y = 1 if there is no such number. The rounded result is 2 if such a number n exists and 1 otherwise. The value before rounding can however be approximated to any given precision even if the conjecture is unprovable.
Interaction with string searches
Rounding can adversely affect a string search for a number. For example, π rounded to four digits is "3.1416" but a simple search for this string will not discover "3.14159" or any other value of π rounded to more than four digits. In contrast, truncation does not suffer from this problem; for example, a simple string search for "3.1415", which is π truncated to four digits, will discover values of π truncated to more than four digits.
History
The concept of rounding is very old, perhaps older than the concept of division itself. Some ancient clay tablets found in Mesopotamia contain tables with rounded values of reciprocals and square roots in base 60.
Rounded approximations to π, the length of the year, and the length of the month are also ancient – see base 60 examples.
The round-half-to-even method has served as American Standard Z25.1 and ASTM standard E-29 since 1940. The origins of the terms unbiased rounding and statistician's rounding are fairly self-explanatory. In the 1906 fourth edition of Probability and Theory of Errors, Robert Simpson Woodward called this "the computer's rule", indicating that it was then in common use by human computers who calculated mathematical tables. For example, it was recommended in Simon Newcomb's c. 1882 book Logarithmic and Other Mathematical Tables. Lucius Tuttle's 1916 Theory of Measurements called it a "universally adopted rule" for recording physical measurements. Churchill Eisenhart indicated the practice was already "well established" in data analysis by the 1940s.
The origin of the term bankers' rounding remains more obscure. If this rounding method was ever a standard in banking, the evidence has proved extremely difficult to find. To the contrary, section 2 of the European Commission report The Introduction of the Euro and the Rounding of Currency Amounts suggests that there had previously been no standard approach to rounding in banking; and it specifies that "half-way" amounts should be rounded up.
Until the 1980s, the rounding method used in floating-point computer arithmetic was usually fixed by the hardware, poorly documented, inconsistent, and different for each brand and model of computer. This situation changed after the IEEE 754 floating-point standard was adopted by most computer manufacturers. The standard allows the user to choose among several rounding modes, and in each case specifies precisely how the results should be rounded. These features made numerical computations more predictable and machine-independent, and made possible the efficient and consistent implementation of interval arithmetic.
Much research exploits the tendency of reported figures to be rounded to multiples of 5 or 2. For example, Jörg Baten used age heaping in many studies to evaluate the numeracy level of ancient populations. He came up with the ABCC Index, which makes it possible to compare numeracy among regions even without historical sources in which population literacy was measured.
Rounding functions in programming languages
Most programming languages provide functions or special syntax to round fractional numbers in various ways. The earliest numeric languages, such as Fortran and C, would provide only one method, usually truncation (toward zero). This default method could be implied in certain contexts, such as when assigning a fractional number to an integer variable, or using a fractional number as an index of an array. Other kinds of rounding had to be programmed explicitly; for example, rounding a positive number to the nearest integer could be implemented by adding 0.5 and truncating.
In the last decades, however, the syntax and the standard libraries of most languages have commonly provided at least the four basic rounding functions (up, down, to nearest, and toward zero). The tie-breaking method can vary depending on the language and version or might be selectable by the programmer. Several languages follow the lead of the IEEE 754 floating-point standard, and define these functions as taking a double-precision float argument and returning the result of the same type, which then may be converted to an integer if necessary. This approach may avoid spurious overflows because floating-point types have a larger range than integer types. Some languages, such as PHP, provide functions that round a value to a specified number of decimal digits (e.g., from 4321.5678 to 4321.57 or 4300). In addition, many languages provide a printf or similar string formatting function, which allows one to convert a fractional number to a string, rounded to a user-specified number of decimal places (the precision). On the other hand, truncation (round to zero) is still the default rounding method used by many languages, especially for the division of two integer values.
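A brief Python illustration of the basic rounding functions and behaviours mentioned above (the specific values are illustrative):

```python
import math

x = 2.5
print(math.floor(x))         # 2  (round down)
print(math.ceil(x))          # 3  (round up)
print(math.trunc(x))         # 2  (round toward zero)
print(round(x), round(3.5))  # 2 4 (round to nearest, ties to even)
print(round(4321.5678, 2))   # 4321.57 (round to a given number of decimal digits)
print(f"{4321.5678:.2f}")    # '4321.57' (string formatting with a chosen precision)
print(-7 // 2, int(-7 / 2))  # -4 -3 (floor division vs. truncation toward zero)
```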
In contrast, CSS and SVG do not define any specific maximum precision for numbers and measurements, which they treat and expose in their DOM and in their IDL interface as strings as if they had infinite precision, and do not discriminate between integers and floating-point values; however, the implementations of these languages will typically convert these numbers into IEEE 754 double-precision floating-point values before exposing the computed digits with a limited precision (notably within standard JavaScript or ECMAScript interface bindings).
Other rounding standards
Some disciplines or institutions have issued standards or directives for rounding.
US weather observations
In a guideline issued in mid-1966, the U.S. Office of the Federal Coordinator for Meteorology determined that weather data should be rounded to the nearest round number, with the "round half up" tie-breaking rule. For example, 1.5 rounded to integer should become 2, and −1.5 should become −1. Prior to that date, the tie-breaking rule was "round half away from zero".
Negative zero in meteorology
Some meteorologists may write "−0" to indicate a temperature between 0.0 and −0.5 degrees (exclusive) that was rounded to an integer. This notation is used when the negative sign is considered important, no matter how small the magnitude is; for example, when rounding temperatures in the Celsius scale, where below zero indicates freezing.
See also
Cash rounding, dealing with the absence of extremely low-value coins
Data binning, a similar operation
Gal's accurate tables
Guard digit
Interval arithmetic
ISO/IEC 80000
Kahan summation algorithm
Party-list proportional representation
Signed-digit representation
Truncation
Notes
References
External links
An introduction to different rounding algorithms that is accessible to a general audience but especially useful to those studying computer science and electronics.
How To Implement Custom Rounding Procedures by Microsoft (broken)
Arithmetic
Computer arithmetic
Statistical data transformation
Theory of computation | Rounding | [
"Mathematics"
] | 7,900 | [
"Computer arithmetic",
"Arithmetic",
"Number theory"
] |
169,946 | https://en.wikipedia.org/wiki/Grain%20%28unit%29 | A grain is a unit of measurement of mass, and in the troy weight, avoirdupois, and apothecaries' systems, equal to exactly 64.79891 milligrams. It is nominally based upon the mass of a single ideal seed of a cereal. From the Bronze Age into the Renaissance, the average masses of wheat and barley grains were part of the legal definitions of units of mass. Expressions such as "thirty-two grains of wheat, taken from the middle of the ear" appear to have been ritualistic formulas. Another source states that it was defined such that 252.458 units would balance 1 cubic inch of distilled water at an ambient air-water pressure and temperature of 30 inches of mercury and 62 °F respectively. Another book states that Captain Henry Kater, of the British Standards Commission, arrived at this value experimentally.
The grain was the legal foundation of traditional English weight systems, and is the only unit that is equal throughout the troy, avoirdupois, and apothecaries' systems of mass. The unit was based on the weight of a single grain of barley, which was equal to about 4/3 the weight of a single grain of wheat. The fundamental unit of the pre-1527 English weight system, known as Tower weights, was based on the wheat grain. The tower "wheat" grain was defined as exactly 45/64 (≈ 0.703) of the troy "barley" grain.
Since the implementation of the international yard and pound agreement of 1 July 1959, the grain or troy grain (symbol: gr) measure has been defined in terms of units of mass in the International System of Units as precisely 64.79891 milligrams. One gram is thus approximately equivalent to 15.43236 grains. The unit formerly used by jewellers to measure pearls, diamonds, and other precious stones, called the jeweller's grain or pearl grain, is equal to 1/4 of a carat, or 50 milligrams. The grain was also the name of a traditional French unit equal to about 53.11 milligrams.
In both British Imperial units and United States customary units, there are precisely 7,000 grains per avoirdupois pound, and 5,760 grains per troy pound or apothecaries' pound. It is obsolete in the United Kingdom and, like most other non-SI units, it has no basis in law and cannot be used in commerce.
Current usage
Grains are commonly used to measure the mass of bullets and propellants. In archery, the grain is the standard unit used to weigh arrows.
In North America, the hardness of water is often measured in grains per U.S. gallon (gpg) of calcium carbonate equivalents. Otherwise, water hardness is measured in the dimensionless unit of parts per million (ppm), numerically equivalent to concentration measured in milligrams per litre. One grain per U.S. gallon is approximately 17.1 mg/L (17.1 ppm). Soft water contains relatively few grains per gallon of calcium carbonate equivalents, while hard water contains considerably more.
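The conversions above can be checked with a small sketch; the constants are the exact definitions quoted earlier, and the function names are illustrative.

```python
GRAIN_MG = 64.79891        # one grain in milligrams (exact by definition)
US_GALLON_L = 3.785411784  # one US gallon in litres (exact by definition)

def grains_to_grams(gr):
    return gr * GRAIN_MG / 1000.0

def gpg_to_ppm(grains_per_gallon):
    # water hardness: grains per US gallon -> mg/L (numerically equal to ppm)
    return grains_per_gallon * GRAIN_MG / US_GALLON_L

print(round(grains_to_grams(7000), 5))  # 453.59237 g, i.e. one avoirdupois pound
print(round(gpg_to_ppm(1), 1))          # 17.1 ppm per grain per gallon
```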
Though no longer recommended, in the U.S., grains are still used occasionally in medicine as part of the apothecaries' system, especially in prescriptions for older medicines such as aspirin or phenobarbital. For example, the dosage of a standard tablet of aspirin is sometimes given as 5 grains. In that example the grain is approximated to 65 mg, though the grain can also be approximated to 60 mg, depending on the medication and manufacturer. The apothecaries' system has its own system of notation, in which the units symbol or abbreviation is followed by the quantity in lower case Roman numerals. For amounts less than one, the quantity is written as a fraction, or for one half, ss (or variations such as ss., ṡṡ, or s̅s̅). Therefore, a prescription for tablets containing 325 mg of aspirin and 30 mg of codeine can be written "ASA gr. v c̄ cod. gr. ss tablets" (using the medical abbreviations ASA for acetylsalicylic acid [aspirin], c̄ for "with", and cod. for codeine). The apothecaries' system has gradually been replaced by the metric system, and the use of the grain in prescriptions is now rare.
In the U.S., particulate emission levels, used to monitor and regulate pollution, are sometimes measured in grains per cubic foot instead of the more usual units of mass per unit volume. This is the same unit commonly used to measure the amount of moisture in the air, also known as the absolute humidity. The SI unit used to measure particulate emissions and absolute humidity is mg/m³. One grain per cubic foot is approximately 2.29 g/m³.
History
At least since antiquity, grains of wheat or barley were used by Mediterranean traders to define units of mass; along with other seeds, especially those of the carob tree. According to a longstanding tradition, one carat (the mass of a carob seed) was equivalent to the weight of four wheat grains or three barleycorns. Since the weights of these seeds are highly variable, especially that of the cereals as a function of moisture, this is a convention more than an absolute law.
The history of the modern British grain can be traced back to a royal decree in thirteenth century England, re-iterating decrees that go back as far as King Offa (eighth century). The Tower pound was one of many monetary pounds of 240 silver pennies.
The pound in question is the Tower pound. The Tower pound, abolished in 1527, consisted of 12 ounces like the troy pound, but was (≈6%) lighter. The weight of the original sterling pennies was 22½ troy grains, or 32 "Tower grains".
Physical grain weights were made and sold commercially at least as late as the early 1900s, and took various forms, from squares of sheet metal to manufactured wire shapes and coin-like weights.
The troy pound was only "the pound of Pence, Spices, Confections, as of Electuaries", as such goods might be measured by a troi or small balance. The old troy standard was set by King Offa's currency reform, and was in full use in 1284 (Assize of Weights and Measures, King Edward I), but was restricted to currency (the pound of pennies) until it was abolished in 1527. This pound was progressively replaced by a new pound, based on the weight of 120 silver dirhems of 48 grains. The new pound used a barley-corn grain, rather than a wheat grain.
Avoirdupois (goods of weight) refers to those things measured by the lesser but quicker balances: the bismar or auncel, the Roman balance, and the steelyard. The original mercantile pound of 25 shillings or 15 (Tower) ounces was displaced by, variously, the pound of the Hanseatic League (16 tower ounces) and by the pound of the then-important wool trade (16 ounces of 437 grains). A new pound of 7,680 grains was inadvertently created as 16 troy ounces, referring to the new troy rather than the old troy. Eventually, the wool pound won out.
The avoirdupois pound was defined by prototype, rated at close to 7,000 grains. In the Imperial Weights and Measures Act 1824 (5 Geo. 4. c. 74), the avoirdupois pound was defined as 7,000 grains exactly. The Weights and Measures Act 1855 authorised Miller's new standards to replace those lost in the fire that destroyed the Houses of Parliament. The standard was an avoirdupois pound, the grain being defined as 1/7,000 of it.
The division of the carat into four grains survives in both senses well into the early twentieth century. For pearls and diamonds, weight is quoted in carats, divided into four grains. The carat was eventually set to 205 milligrams (1877), and later 200 milligrams. For touch or fineness of gold, the fraction of gold was given as a weight, the total being a solidus of 24 carats or 96 grains.
See also
English unit
Notes
References
Units of mass
Imperial units
Customary units of measurement in the United States
Ammunition | Grain (unit) | [
"Physics",
"Mathematics"
] | 1,627 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
170,089 | https://en.wikipedia.org/wiki/Numerical%20integration | In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral.
The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature; others take "quadrature" to include higher-dimensional integration.
The basic problem in numerical integration is to compute an approximate solution to a definite integral
∫_a^b f(x) dx
to a given degree of accuracy. If f(x) is a smooth function integrated over a small number of dimensions, and the domain of integration is bounded, there are many methods for approximating the integral to the desired precision.
Numerical integration has roots in the geometrical problem of finding a square with the same area as a given plane figure (quadrature or squaring), as in the quadrature of the circle.
The term is also sometimes used to describe the numerical solution of differential equations.
Motivation and need
There are several reasons for carrying out numerical integration, as opposed to analytical integration by finding the antiderivative:
The integrand may be known only at certain points, such as obtained by sampling. Some embedded systems and other computer applications may need numerical integration for this reason.
A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative that is an elementary function. An example of such an integrand is f(x) = exp(−x²), the antiderivative of which (the error function, times a constant) cannot be written in elementary form.
It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to compute the antiderivative. That may be the case if the antiderivative is given as an infinite series or product, or if its evaluation requires a special function that is not available.
History
The term "numerical integration" first appears in 1915 in the publication A Course in Interpolation and Numeric Integration for the Mathematical Laboratory by David Gibb.
"Quadrature" is a historical mathematical term that means calculating area. Quadrature problems have served as one of the main sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of constructing geometrically a square having the same area (squaring). That is why the process was named "quadrature". For example, a quadrature of the circle, Lune of Hippocrates, The Quadrature of the Parabola. This construction must be performed only by means of compass and straightedge.
The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic.
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side √(ab) (the geometric mean of a and b). For this purpose it is possible to use the following fact: if we draw the circle with the sum of a and b as the diameter, then the height BH (drawn from the point where the two segments meet, perpendicular to the diameter, up to the circle) equals their geometric mean. A similar geometrical construction solves the problems of quadrature for a parallelogram and a triangle.
Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures (for example the Lune of Hippocrates) a quadrature can be performed. The quadratures of the surface of a sphere and of a parabola segment, accomplished by Archimedes, became the highest achievements of ancient analysis.
The area of the surface of a sphere is equal to quadruple the area of a great circle of this sphere.
The area of a segment of the parabola cut from it by a straight line is 4/3 the area of the triangle inscribed in this segment.
For the proof of the results Archimedes used the Method of exhaustion of Eudoxus.
In medieval Europe the quadrature meant calculation of area by any method. More often the Method of indivisibles was used; it was less rigorous, but more simple and powerful. With its help Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647), and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms.
John Wallis algebrised this method: he wrote in his Arithmetica Infinitorum (1656) series that we now call the definite integral, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of some Solids of revolution.
The quadrature of the hyperbola by Saint-Vincent and de Sarasa provided a new function, the natural logarithm, of critical importance.
With the invention of integral calculus came a universal method for area calculation. In response, the term "quadrature" has become traditional, and instead the modern phrase "computation of a univariate definite integral" is more common.
Methods for one-dimensional integrals
A quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration.
Numerical integration methods can generally be described as combining evaluations of the integrand to get an approximation to the integral. The integrand is evaluated at a finite set of points called integration points and a weighted sum of these values is used to approximate the integral. The integration points and weights depend on the specific method used and the accuracy required from the approximation.
An important part of the analysis of any numerical integration method is to study the behavior of the approximation error as a function of the number of integrand evaluations. A method that yields a small error for a small number of evaluations is usually considered superior. Reducing the number of evaluations of the integrand reduces the number of arithmetic operations involved, and therefore reduces the total error. Also, each evaluation takes time, and the integrand may be arbitrarily complicated.
Quadrature rules based on step functions
A "brute force" kind of numerical integration can be done, if the integrand is reasonably well-behaved (i.e. piecewise continuous and of bounded variation), by evaluating the integrand with very small increments.
This simplest method approximates the function by a step function (a piecewise constant function, or a segmented polynomial of degree zero) that passes through the point ((a + b)/2, f((a + b)/2)). This is called the midpoint rule or rectangle rule:
∫_a^b f(x) dx ≈ (b − a) f((a + b)/2).
Quadrature rules based on interpolating functions
A large class of quadrature rules can be derived by constructing interpolating functions that are easy to integrate. Typically these interpolating functions are polynomials. In practice, since polynomials of very high degree tend to oscillate wildly, only polynomials of low degree are used, typically linear and quadratic.
The interpolating function may be a straight line (an affine function, i.e. a polynomial of degree 1) passing through the points (a, f(a)) and (b, f(b)). This is called the trapezoidal rule:
∫_a^b f(x) dx ≈ (b − a) (f(a) + f(b))/2.
For either one of these rules, we can make a more accurate approximation by breaking up the interval [a, b] into some number n of subintervals, computing an approximation for each subinterval, then adding up all the results. This is called a composite rule, extended rule, or iterated rule. For example, the composite trapezoidal rule can be stated as
∫_a^b f(x) dx ≈ (h/2) [ f(a) + 2 f(a + h) + 2 f(a + 2h) + … + 2 f(b − h) + f(b) ],
where the subintervals have the form [a + kh, a + (k + 1)h], with h = (b − a)/n and k = 0, 1, 2, …, n − 1. Here we used subintervals of the same length h, but one could also use intervals of varying length.
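A minimal implementation of the composite trapezoidal rule might look like the following sketch (the integrand and interval are illustrative):

```python
import math

# Minimal sketch of the composite trapezoidal rule (integrand and interval
# are illustrative; the exact value of the integral below is 2).
def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return h * total

print(composite_trapezoid(math.sin, 0.0, math.pi, 1000))  # ~1.9999984
```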
Interpolation with polynomials evaluated at equally spaced points in yields the Newton–Cotes formulas, of which the rectangle rule and the trapezoidal rule are examples. Simpson's rule, which is based on a polynomial of order 2, is also a Newton–Cotes formula.
Quadrature rules with equally spaced points have the very convenient property of nesting. The corresponding rule with each interval subdivided includes all the current points, so those integrand values can be re-used.
If we allow the intervals between interpolation points to vary, we find another group of quadrature formulas, such as the Gaussian quadrature formulas. A Gaussian quadrature rule is typically more accurate than a Newton–Cotes rule that uses the same number of function evaluations, if the integrand is smooth (i.e., if it is sufficiently differentiable). Other quadrature methods with varying intervals include Clenshaw–Curtis quadrature (also called Fejér quadrature) methods, which do nest.
Gaussian quadrature rules do not nest, but the related Gauss–Kronrod quadrature formulas do.
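As a brief illustration, the sketch below uses NumPy's Gauss–Legendre node and weight helper and maps the nodes from [−1, 1] to a general interval; the integrand is illustrative.

```python
import numpy as np

# Sketch of Gauss-Legendre quadrature using NumPy's node/weight helper;
# the integrand and interval are illustrative.
def gauss_legendre(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Only five evaluation points already give the integral of sin on [0, pi]
# (exactly 2) to many decimal places.
print(gauss_legendre(np.sin, 0.0, np.pi, 5))
```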
Adaptive algorithms
Extrapolation methods
The accuracy of a quadrature rule of the Newton–Cotes type is generally a function of the number of evaluation points. The result is usually more accurate as the number of evaluation points increases, or, equivalently, as the width of the step size between the points decreases. It is natural to ask what the result would be if the step size were allowed to approach zero. This can be answered by extrapolating the result from two or more nonzero step sizes, using series acceleration methods such as Richardson extrapolation. The extrapolation function may be a polynomial or rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4) and are implemented in many of the routines in the QUADPACK library.
Conservative (a priori) error estimation
Let f have a bounded first derivative over [a, b], i.e. f ∈ C¹([a, b]). The mean value theorem for f, where x ∈ [a, b), gives
(x − a) f′(ξ_x) = f(x) − f(a)
for some ξ_x in (a, x] depending on x.
If we integrate in x from a to b on both sides and take the absolute values, we obtain
| ∫_a^b f(x) dx − (b − a) f(a) | = | ∫_a^b (x − a) f′(ξ_x) dx |.
We can further approximate the integral on the right-hand side by bringing the absolute value into the integrand, and replacing the term in f′ by an upper bound:
| ∫_a^b f(x) dx − (b − a) f(a) | ≤ ((b − a)²/2) sup_{a≤x≤b} |f′(x)|,
where the supremum was used to approximate.
Hence, if we approximate the integral ∫_a^b f(x) dx by the quadrature rule (b − a) f(a), our error is no greater than the right-hand side above. We can convert this into an error analysis for the Riemann sum, giving an upper bound of
((b − a)²/(2n)) sup_{a≤x≤b} |f′(x)|
for the error term of that particular approximation. (Note that this is precisely the error we calculated for the example above.) Using more derivatives, and by tweaking the quadrature, we can do a similar error analysis using a Taylor series (using a partial sum with remainder term) for f. This error analysis gives a strict upper bound on the error, if the derivatives of f are available.
This integration method can be combined with interval arithmetic to produce computer proofs and verified calculations.
Integrals over infinite intervals
Several methods exist for approximate integration over unbounded intervals. The standard technique involves specially derived quadrature rules, such as Gauss-Hermite quadrature for integrals on the whole real line and Gauss-Laguerre quadrature for integrals on the positive reals. Monte Carlo methods can also be used, or a change of variables to a finite interval; e.g., for the whole line one could use
∫_{−∞}^{+∞} f(x) dx = ∫_{−1}^{+1} f( t/(1 − t²) ) · (1 + t²)/(1 − t²)² dt,
and for semi-infinite intervals one could use
∫_a^{+∞} f(x) dx = ∫_0^1 f( a + t/(1 − t) ) · dt/(1 − t)²
as possible transformations.
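The first change of variables can be combined with any rule on (−1, 1); the following sketch applies the midpoint rule to it, with an illustrative integrand whose exact integral is √π.

```python
import math

# Sketch: map the whole real line to (-1, 1) with x = t/(1 - t^2) and apply the
# midpoint rule; the integrand exp(-x^2) is illustrative (exact answer sqrt(pi)).
def integrate_whole_line(f, n=10_000):
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        t = -1.0 + (k + 0.5) * h                       # midpoint of each subinterval
        x = t / (1.0 - t * t)                          # mapped node
        jacobian = (1.0 + t * t) / (1.0 - t * t) ** 2  # dx/dt
        total += f(x) * jacobian
    return total * h

print(integrate_whole_line(lambda x: math.exp(-x * x)))  # ~1.7724539
print(math.sqrt(math.pi))                                # 1.7724538509055159
```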
Multidimensional integrals
The quadrature rules discussed so far are all designed to compute one-dimensional integrals. To compute integrals in multiple dimensions, one approach is to phrase the multiple integral as repeated one-dimensional integrals by applying Fubini's theorem (the tensor product rule). This approach requires the function evaluations to grow exponentially as the number of dimensions increases. Three methods are known to overcome this so-called curse of dimensionality.
A great many additional techniques for forming multidimensional cubature integration rules for a variety of weighting functions are given in the monograph by Stroud.
Integration on the sphere has been reviewed by Hesse et al. (2015).
Monte Carlo
Monte Carlo methods and quasi-Monte Carlo methods are easy to apply to multi-dimensional integrals. They may yield greater accuracy for the same number of function evaluations than repeated integrations using one-dimensional methods.
A large class of useful Monte Carlo methods are the so-called Markov chain Monte Carlo algorithms, which include the Metropolis–Hastings algorithm and Gibbs sampling.
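A minimal Monte Carlo sketch (the quarter-disc example, sample count, and seed are illustrative):

```python
import random

# Minimal sketch: estimate pi/4 as the fraction of random points in the
# unit square that fall inside the quarter disc.
def quarter_disc_fraction(samples=1_000_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits / samples

print(4 * quarter_disc_fraction())  # ~3.14, approaching pi as the sample count grows
```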
Sparse grids
Sparse grids were originally developed by Smolyak for the quadrature of high-dimensional functions. The method is always based on a one-dimensional quadrature rule, but performs a more sophisticated combination of univariate results. However, whereas the tensor product rule guarantees that the weights of all of the cubature points will be positive if the weights of the quadrature points were positive, Smolyak's rule does not guarantee that the weights will all be positive.
Bayesian quadrature
Bayesian quadrature is a statistical approach to the numerical problem of computing integrals and falls under the field of probabilistic numerics. It can provide a full handling of the uncertainty over the solution of the integral expressed as a Gaussian process posterior variance.
Connection with differential equations
The problem of evaluating the definite integral
F(x) = ∫_a^x f(u) du
can be reduced to an initial value problem for an ordinary differential equation by applying the first part of the fundamental theorem of calculus. By differentiating both sides of the above with respect to the argument x, it is seen that the function F satisfies
dF/dx = f(x),  F(a) = 0.
Numerical methods for ordinary differential equations, such as Runge–Kutta methods, can be applied to the restated problem and thus be used to evaluate the integral. For instance, the standard fourth-order Runge–Kutta method applied to the differential equation yields Simpson's rule from above.
The differential equation F′(x) = f(x) has a special form: the right-hand side contains only the independent variable (here x) and not the dependent variable (here F). This simplifies the theory and algorithms considerably. The problem of evaluating integrals is thus best studied in its own right.
Conversely, the term "quadrature" may also be used for the solution of differential equations: "solving by quadrature" or "reduction to quadrature" means expressing its solution in terms of integrals.
See also
Truncation error (numerical integration)
Clenshaw–Curtis quadrature
Gauss-Kronrod quadrature
Riemann Sum or Riemann Integral
Trapezoidal rule
Romberg's method
Tanh-sinh quadrature
Nonelementary Integral
References
Philip J. Davis and Philip Rabinowitz, Methods of Numerical Integration.
George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler, Computer Methods for Mathematical Computations. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 5.)
Josef Stoer and Roland Bulirsch, Introduction to Numerical Analysis. New York: Springer-Verlag, 1980. (See Chapter 3.)
Boyer, C. B., A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach, New York: Wiley, 1989 (1991 pbk ed. ).
Eves, Howard, An Introduction to the History of Mathematics, Saunders, 1990, ,
S.L.Sobolev and V.L.Vaskevich: The Theory of Cubature Formulas, Kluwer Academic, ISBN 0-7923-4631-9 (1997).
External links
Integration: Background, Simulations, etc. at Holistic Numerical Methods Institute
Lobatto Quadrature from Wolfram Mathworld
Lobatto quadrature formula from Encyclopedia of Mathematics
Implementations of many quadrature and cubature formulae within the free Tracker Component Library.
SageMath Online Integrator
Numerical analysis | Numerical integration | [
"Mathematics"
] | 3,314 | [
"Mathematical relations",
"Computational mathematics",
"Approximations",
"Numerical analysis"
] |
170,097 | https://en.wikipedia.org/wiki/Mean%20free%20path | In physics, mean free path is the average distance over which a moving particle (such as an atom, a molecule, or a photon) travels before substantially changing its direction or energy (or, in a specific context, other properties), typically as a result of one or more successive collisions with other particles.
Scattering theory
Imagine a beam of particles being shot through a target, and consider an infinitesimally thin slab of the target (see the figure). The atoms (or particles) that might stop a beam particle are shown in red. The magnitude of the mean free path depends on the characteristics of the system. Assuming that all the target particles are at rest but only the beam particle is moving, that gives an expression for the mean free path:
ℓ = 1/(n σ),
where ℓ is the mean free path, n is the number of target particles per unit volume, and σ is the effective cross-sectional area for collision.
The area of the slab is L², and its volume is L² dx. The typical number of stopping atoms in the slab is the concentration n times the volume, i.e., n L² dx. The probability that a beam particle will be stopped in that slab is the net area of the stopping atoms divided by the total area of the slab:
P(stop within dx) = σ n L² dx / L² = n σ dx,
where σ is the area (or, more formally, the "scattering cross-section") of one atom.
The drop in beam intensity equals the incoming beam intensity multiplied by the probability of the particle being stopped within the slab:
dI = −I n σ dx.
This is an ordinary differential equation:
dI/dx = −I n σ = −I/ℓ,
whose solution is known as Beer–Lambert law and has the form I = I₀ e^(−x/ℓ), where x is the distance traveled by the beam through the target, and I₀ is the beam intensity before it entered the target; ℓ is called the mean free path because it equals the mean distance traveled by a beam particle before being stopped. To see this, note that the probability that a particle is absorbed between x and x + dx is given by
dP(x) = (I(x) − I(x + dx))/I₀ = (1/ℓ) e^(−x/ℓ) dx.
Thus the expectation value (or average, or simply mean) of x is
⟨x⟩ = ∫_0^∞ x dP(x) = ∫_0^∞ (x/ℓ) e^(−x/ℓ) dx = ℓ.
The fraction of particles that are not stopped (attenuated) by the slab is called transmission T = I/I₀ = e^(−x/ℓ), where x is equal to the thickness of the slab.
Kinetic theory of gases
In the kinetic theory of gases, the mean free path of a particle, such as a molecule, is the average distance the particle travels between collisions with other moving particles. The derivation above assumed the target particles to be at rest; therefore, in reality, the formula holds for a beam particle with a high speed relative to the velocities of an ensemble of identical particles with random locations. In that case, the motions of target particles are comparatively negligible, hence the relative velocity vrel ≈ v.
If, on the other hand, the beam particle is part of an established equilibrium with identical particles, then the square of the relative velocity is
⟨vrel²⟩ = ⟨(v1 − v2)²⟩ = ⟨v1²⟩ + ⟨v2²⟩ − 2⟨v1 · v2⟩.
In equilibrium, v1 and v2 are random and uncorrelated, therefore ⟨v1 · v2⟩ = 0, and the relative speed is
vrel = √⟨vrel²⟩ = √2 v.
This means that the number of collisions is √2 times the number with stationary targets. Therefore, the following relationship applies:
ℓ = 1/(√2 n σ),
and using p = n k T (ideal gas law) and σ = π d² (effective cross-sectional area for spherical particles with diameter d), it may be shown that the mean free path is
ℓ = k T / (√2 π d² p),
where k is the Boltzmann constant, p is the pressure of the gas and T is the absolute temperature.
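A quick numerical check of this formula, assuming an illustrative effective molecular diameter of about 3.7×10⁻¹⁰ m for air:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0           # temperature, K
p = 101_325.0       # pressure, Pa
d = 3.7e-10         # assumed effective molecular diameter for air, m

mfp = k_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)
print(f"mean free path ~ {mfp:.2e} m")  # roughly 7e-8 m, i.e. tens of nanometres
```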
In practice, the diameter of gas molecules is not well defined. In fact, the kinetic diameter of a molecule is defined in terms of the mean free path. Typically, gas molecules do not behave like hard spheres, but rather attract each other at larger distances and repel each other at shorter distances, as can be described with a Lennard-Jones potential. One way to deal with such "soft" molecules is to use the Lennard-Jones σ parameter as the diameter.
Another way is to assume a hard-sphere gas that has the same viscosity as the actual gas being considered. This leads to a mean free path
ℓ = (μ/ρ) √(π m / (2 k T)) = (μ/p) √(π k T / (2 m)),
where m is the molecular mass, ρ = p m/(k T) is the density of the ideal gas, and μ is the dynamic viscosity. This expression can be put into the following convenient form
ℓ = (μ/p) √(π R_specific T / 2),
with R_specific = k/m being the specific gas constant, equal to 287 J/(kg·K) for air.
The following table lists some typical values for air at different pressures at room temperature. Note that different definitions of the molecular diameter, as well as different assumptions about the value of atmospheric pressure (100 vs 101.3 kPa) and room temperature (293.17 K vs 296.15 K or even 300 K) can lead to slightly different values of the mean free path.
In other fields
Radiography
In gamma-ray radiography the mean free path of a pencil beam of mono-energetic photons is the average distance a photon travels between collisions with atoms of the target material. It depends on the material and the energy of the photons:
ℓ = 1/μ = 1/((μ/ρ) ρ),
where μ is the linear attenuation coefficient, μ/ρ is the mass attenuation coefficient and ρ is the density of the material. The mass attenuation coefficient can be looked up or calculated for any material and energy combination using the National Institute of Standards and Technology (NIST) databases.
In X-ray radiography the calculation of the mean free path is more complicated, because photons are not mono-energetic, but have some distribution of energies called a spectrum. As photons move through the target material, they are attenuated with probabilities depending on their energy, as a result their distribution changes in process called spectrum hardening. Because of spectrum hardening, the mean free path of the X-ray spectrum changes with distance.
Sometimes one measures the thickness of a material in the number of mean free paths. Material with the thickness of one mean free path will attenuate to 37% (1/e) of photons. This concept is closely related to half-value layer (HVL): a material with a thickness of one HVL will attenuate 50% of photons. A standard x-ray image is a transmission image, an image with negative logarithm of its intensities is sometimes called a number of mean free paths image.
Electronics
In macroscopic charge transport, the mean free path of a charge carrier in a metal is proportional to the electrical mobility μ, a value directly related to electrical conductivity, that is:
ℓ = vF τ = (m* vF / q) μ,
where q is the charge, τ is the mean free time, m* is the effective mass, and vF is the Fermi velocity of the charge carrier. The Fermi velocity can easily be derived from the Fermi energy via the non-relativistic kinetic energy equation. In thin films, however, the film thickness can be smaller than the predicted mean free path, making surface scattering much more noticeable, effectively increasing the resistivity.
Electron mobility through a medium with dimensions smaller than the mean free path of electrons occurs through ballistic conduction or ballistic transport. In such scenarios electrons alter their motion only in collisions with conductor walls.
Optics
If one takes a suspension of non-light-absorbing particles of diameter d with a volume fraction Φ, the mean free path of the photons is:
ℓ = 2d / (3 Φ Qs),
where Qs is the scattering efficiency factor. Qs can be evaluated numerically for spherical particles using Mie theory.
Acoustics
In an otherwise empty cavity, the mean free path of a single particle bouncing off the walls is:
ℓ = F V / S,
where V is the volume of the cavity, S is the total inside surface area of the cavity, and F is a constant related to the shape of the cavity. For most simple cavity shapes, F is approximately 4.
This relation is used in the derivation of the Sabine equation in acoustics, using a geometrical approximation of sound propagation.
Nuclear and particle physics
In particle physics the concept of the mean free path is not commonly used, being replaced by the similar concept of attenuation length. In particular, for high-energy photons, which mostly interact by electron–positron pair production, the radiation length is used much like the mean free path in radiography.
Independent-particle models in nuclear physics require the undisturbed orbiting of nucleons within the nucleus before they interact with other nucleons.
See also
Scattering theory
Ballistic conduction
Vacuum
Knudsen number
Optics
References
External links
Mean free path calculator
Gas Dynamics Toolbox: Calculate mean free path for mixtures of gases using VHS model
Statistical mechanics
Scattering, absorption and radiative transfer (optics) | Mean free path | [
"Physics",
"Chemistry"
] | 1,673 | [
"Statistical mechanics",
" absorption and radiative transfer (optics)",
"Scattering"
] |
170,141 | https://en.wikipedia.org/wiki/Natural%20experiment | A natural experiment is a study in which individuals (or clusters of individuals) are exposed to the experimental and control conditions that are determined by nature or by other factors outside the control of the investigators. The process governing the exposures arguably resembles random assignment. Thus, natural experiments are observational studies and are not controlled in the traditional sense of a randomized experiment (an intervention study). Natural experiments are most useful when there has been a clearly defined exposure involving a well defined subpopulation (and the absence of exposure in a similar subpopulation) such that changes in outcomes may be plausibly attributed to the exposure. In this sense, the difference between a natural experiment and a non-experimental observational study is that the former includes a comparison of conditions that pave the way for causal inference, but the latter does not.
Natural experiments are employed as study designs when controlled experimentation is extremely difficult to implement or unethical, such as in several research areas addressed by epidemiology (like evaluating the health impact of varying degrees of exposure to ionizing radiation in people living near Hiroshima at the time of the atomic blast) and economics (like estimating the economic return on amount of schooling in US adults).
History
One of the best-known early natural experiments was the 1854 Broad Street cholera outbreak in London, England. On 31 August 1854, a major outbreak of cholera struck Soho. Over the next three days, 127 people near Broad Street died. By the end of the outbreak 616 people died. The physician John Snow identified the source of the outbreak as the nearest public water pump, using a map of deaths and illness that revealed a cluster of cases around the pump.
In this example, Snow discovered a strong association between the use of the water from the pump, and deaths and illnesses due to cholera. Snow found that the Southwark and Vauxhall Waterworks Company, which supplied water to districts with high attack rates, obtained the water from the Thames downstream from where raw sewage was discharged into the river. By contrast, districts that were supplied water by the Lambeth Waterworks Company, which obtained water upstream from the points of sewage discharge, had low attack rates. Given the near-haphazard patchwork development of the water supply in mid-nineteenth century London, Snow viewed the developments as "an experiment...on the grandest scale." Of course, the exposure to the polluted water was not under the control of any scientist. Therefore, this exposure has been recognized as being a natural experiment.
Recent examples
Family size
An aim of a study Angrist and Evans (1998) was to estimate the effect of family size on the labor market outcomes of the mother. For at least two reasons, the correlations between family size and various outcomes (e.g., earnings) do not inform us about how family size causally affects labor market outcomes. First, both labor market outcomes and family size may be affected by unobserved "third" variables (e.g., personal preferences). Second, labor market outcomes themselves may affect family size (called "reverse causality"). For example, a woman may defer having a child if she gets a raise at work. The authors observed that two-child families with either two boys or two girls are substantially more likely to have a third child than two-child families with one boy and one girl. The sex of the first two children, then, constitutes a kind of natural experiment: it is as if an experimenter had randomly assigned some families to have two children and others to have three. The authors were then able to credibly estimate the causal effect of having a third child on labor market outcomes. Angrist and Evans found that childbearing had a greater impact on poor and less educated women than on highly educated women although the earnings impact of having a third child tended to disappear by that child's 13th birthday. They also found that having a third child had little impact on husbands' earnings.
Game shows
Within economics, game shows are a frequently studied form of natural experiment. While game shows might seem to be artificial contexts, they can be considered natural experiments due to the fact that the context arises without interference of the scientist. Game shows have been used to study a wide range of different types of economic behavior, such as decision making under risk and cooperative behavior.
Smoking ban
In Helena, Montana, a smoking ban was in effect in all public spaces, including bars and restaurants, during the six-month period from June 2002 to December 2002. Helena is geographically isolated and served by only one hospital. The investigators observed that the rate of heart attacks dropped by 40% while the smoking ban was in effect. Opponents of the law prevailed in getting the enforcement of the law suspended after six months, after which the rate of heart attacks went back up. This study was an example of a natural experiment, called a case-crossover experiment, where the exposure is removed for a time and then returned. The study also noted its own weaknesses which potentially suggest that the inability to control variables in natural experiments can impede investigators from drawing firm conclusions.
Nuclear weapons testing
Nuclear weapons testing released large quantities of radioactive isotopes into the atmosphere, some of which could be incorporated into biological tissues. The release stopped after the Partial Nuclear Test Ban Treaty in 1963, which prohibited atmospheric nuclear tests. This resembled a large-scale pulse-chase experiment, but could not have been performed as a regular experiment in humans due to scientific ethics. Several types of observations were made possible (in people born before 1963), such as determination of the rate of replacement for cells in different human tissues.
Vietnam War draft
An important question in economics research is what determines earnings. Angrist (1990) evaluated the effects of military service on lifetime earnings. Using statistical methods developed in econometrics, Angrist capitalized on the approximate random assignment of the Vietnam War draft lottery, and used it as an instrumental variable associated with eligibility (or non-eligibility) for military service. Because many factors might predict whether someone serves in the military, the draft lottery frames a natural experiment whereby those drafted into the military can be compared against those not drafted because the two groups should not differ substantially prior to military service. Angrist found that the earnings of veterans were, on average, about 15 percent less than the earnings of non-veterans.
Industrial melanism
With the Industrial Revolution in the nineteenth century, many species of moth, including the well-studied peppered moth, responded to the atmospheric pollution of sulphur dioxide and soot around cities with industrial melanism, a dramatic increase in the frequency of dark forms over the formerly abundant pale, speckled forms. In the twentieth century, as regulation improved and pollution fell, providing the conditions for a large-scale natural experiment, the trend towards industrial melanism was reversed, and melanic forms quickly became scarce. The effect led the evolutionary biologists L. M. Cook and J. R. G. Turner to conclude that "natural selection is the only credible explanation for the overall decline".
See also
Common garden experiment
References
Epidemiology
Experiments
Observational study | Natural experiment | [
"Environmental_science"
] | 1,443 | [
"Epidemiology",
"Environmental social science"
] |
170,165 | https://en.wikipedia.org/wiki/Fermi%20energy | The Fermi energy is a concept in quantum mechanics usually referring to the energy difference between the highest and lowest occupied single-particle states in a quantum system of non-interacting fermions at absolute zero temperature.
In a Fermi gas, the lowest occupied state is taken to have zero kinetic energy, whereas in a metal, the lowest occupied state is typically taken to mean the bottom of the conduction band.
The term "Fermi energy" is often used to refer to a different yet closely related concept, the Fermi level (also called electrochemical potential).
There are a few key differences between the Fermi level and Fermi energy, at least as they are used in this article:
The Fermi energy is only defined at absolute zero, while the Fermi level is defined for any temperature.
The Fermi energy is an energy difference (usually corresponding to a kinetic energy), whereas the Fermi level is a total energy level including kinetic energy and potential energy.
The Fermi energy can only be defined for non-interacting fermions (where the potential energy or band edge is a static, well defined quantity), whereas the Fermi level remains well defined even in complex interacting systems, at thermodynamic equilibrium.
Since the Fermi level in a metal at absolute zero is the energy of the highest occupied single particle state,
then the Fermi energy in a metal is the energy difference between the Fermi level and lowest occupied single-particle state, at zero-temperature.
Context
In quantum mechanics, a group of particles known as fermions (for example, electrons, protons and neutrons) obey the Pauli exclusion principle. This states that two fermions cannot occupy the same quantum state. Since an idealized non-interacting Fermi gas can be analyzed in terms of single-particle stationary states, we can thus say that two fermions cannot occupy the same stationary state. These stationary states will typically be distinct in energy. To find the ground state of the whole system, we start with an empty system, and add particles one at a time, consecutively filling up the unoccupied stationary states with the lowest energy. When all the particles have been put in, the Fermi energy is the kinetic energy of the highest occupied state.
As a consequence, even if we have extracted all possible energy from a Fermi gas by cooling it to near absolute zero temperature, the fermions are still moving around at a high speed. The fastest ones are moving at a velocity corresponding to a kinetic energy equal to the Fermi energy. This speed is known as the Fermi velocity. Only when the temperature exceeds the related Fermi temperature, do the particles begin to move significantly faster than at absolute zero.
The Fermi energy is an important concept in the solid state physics of metals and superconductors. It is also a very important quantity in the physics of quantum liquids like low temperature helium (both normal and superfluid 3He), and it is quite important to nuclear physics and to understanding the stability of white dwarf stars against gravitational collapse.
Formula and typical values
The Fermi energy for a three-dimensional, non-relativistic, non-interacting ensemble of identical spin-1/2 fermions is given by
E_F = \frac{\hbar^2}{2 m_0} \left( \frac{3 \pi^2 N}{V} \right)^{2/3}
where N is the number of particles, m_0 the rest mass of each fermion, V the volume of the system, and ħ the reduced Planck constant.
Metals
Under the free electron model, the electrons in a metal can be considered to form a Fermi gas. The number density of conduction electrons in metals ranges between approximately 1028 and 1029 electrons/m3, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order of 2 to 10 electronvolts.
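As a quick numerical check of the quoted 2 to 10 eV range, the following sketch evaluates the free-electron formula above for an assumed conduction-electron density of about 8.5×10^28 m^-3 (roughly that of copper). The density is an illustrative assumption, not a figure taken from this article.

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron rest mass, kg
eV   = 1.602176634e-19   # J per electronvolt

def fermi_energy(n):
    """Fermi energy (J) of a free-electron gas with number density n (m^-3)."""
    return hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2 / 3)

E_F = fermi_energy(8.5e28)            # assumed copper-like electron density
print(f"E_F = {E_F / eV:.1f} eV")     # about 7 eV, within the 2-10 eV range quoted above
```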
White dwarfs
Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. Their Fermi energy is about 0.3 MeV.
Nucleus
Another typical example is that of the nucleons in the nucleus of an atom. The radius of the nucleus admits deviations, so a typical value for the Fermi energy is usually given as 38 MeV.
Related quantities
Using the definition of the Fermi energy E_F above, various related quantities can be useful.
The Fermi temperature is defined as
T_F = \frac{E_F}{k_B}
where k_B is the Boltzmann constant and E_F the Fermi energy. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature.
Other quantities defined in this context are the Fermi momentum
p_F = \sqrt{2 m_0 E_F}
and the Fermi velocity
v_F = \frac{p_F}{m_0}
These quantities are respectively the momentum and group velocity of a fermion at the Fermi surface.
The Fermi momentum can also be described as
p_F = \hbar k_F
where k_F = (3 \pi^2 n)^{1/3}, called the Fermi wavevector, is the radius of the Fermi sphere, and n = N/V is the electron density.
These quantities may not be well-defined in cases where the Fermi surface is non-spherical.
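Continuing the same assumed free-electron example, this sketch derives the Fermi wavevector, momentum, velocity and temperature from the electron density; all numbers are illustrative only.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
k_B  = 1.380649e-23      # J/K

n = 8.5e28                                   # assumed electron density, m^-3
k_F = (3 * np.pi**2 * n) ** (1 / 3)          # Fermi wavevector, m^-1
p_F = hbar * k_F                             # Fermi momentum, kg*m/s
v_F = p_F / m_e                              # Fermi velocity, m/s
E_F = p_F**2 / (2 * m_e)                     # consistent with the formula given earlier
T_F = E_F / k_B                              # Fermi temperature, K

print(f"v_F = {v_F:.2e} m/s,  T_F = {T_F:.0f} K")   # roughly 1.6e6 m/s and 8e4 K
```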
See also
Fermi–Dirac statistics: the distribution of electrons over stationary states for non-interacting fermions at non-zero temperature.
Fermi level
Quasi Fermi level
Notes
References
Further reading
Condensed matter physics
Fermi–Dirac statistics | Fermi energy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,094 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
170,167 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20statistics | In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
The expected number of particles with energy ε_i for Maxwell–Boltzmann statistics is
\langle N_i \rangle = \frac{g_i}{e^{(\varepsilon_i - \mu)/(kT)}} = \frac{N}{Z}\, g_i\, e^{-\varepsilon_i/(kT)}
where:
ε_i is the energy of the i-th energy level,
⟨N_i⟩ is the average number of particles in the set of states with energy ε_i,
g_i is the degeneracy of energy level i, that is, the number of states with energy ε_i which may nevertheless be distinguished from each other by some other means,
μ is the chemical potential,
k is the Boltzmann constant,
T is absolute temperature,
N is the total number of particles: N = \sum_i N_i
Z is the partition function: Z = \sum_i g_i\, e^{-\varepsilon_i/(kT)}
e is Euler's number
Equivalently, the number of particles is sometimes expressed as
\langle N_i \rangle = \frac{1}{e^{(\varepsilon_i - \mu)/(kT)}} = \frac{N}{Z}\, e^{-\varepsilon_i/(kT)}
where the index i now specifies a particular state rather than the set of all states with energy ε_i, and Z = \sum_i e^{-\varepsilon_i/(kT)}.
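A minimal numerical sketch of these occupation formulas for a hypothetical three-level system (energies and degeneracies are made up): it evaluates ⟨N_i⟩ = (N/Z) g_i e^{-ε_i/kT} and checks that the occupations sum back to N.

```python
import numpy as np

def mb_occupations(energies, degeneracies, N, kT):
    """Average occupations <N_i> = (N/Z) * g_i * exp(-e_i/kT) under Maxwell-Boltzmann statistics."""
    boltz = degeneracies * np.exp(-energies / kT)
    Z = boltz.sum()                      # partition function
    return N / Z * boltz

# Hypothetical three-level system; energies expressed in units of kT at the chosen temperature.
energies = np.array([0.0, 1.0, 2.0])
g        = np.array([1, 3, 5])
occ = mb_occupations(energies, g, N=1000, kT=1.0)
print(occ, occ.sum())                    # the occupations sum back to N = 1000
```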
History
Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system.
Applicability
Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in Maxwell–Jüttner distribution), and to other than three-dimensional spaces.
Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in which particle B is in state 1 and particle A is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox.
At the same time, there are no real particles that have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, photons, etc.) as principally indistinguishable. Once this assumption is made, the particle statistics change. The change in entropy in the entropy of mixing example may be viewed as an example of a non-extensive entropy resulting from the distinguishability of the two types of particles being mixed.
Quantum particles are either bosons (following Bose–Einstein statistics) or fermions (subject to the Pauli exclusion principle, following instead Fermi–Dirac statistics). Both of these quantum statistics approach the Maxwell–Boltzmann statistics in the limit of high temperature and low particle density.
Derivations
Maxwell–Boltzmann statistics can be derived in various statistical mechanical thermodynamic ensembles:
The grand canonical ensemble, exactly.
The canonical ensemble, exactly.
The microcanonical ensemble, but only in the thermodynamic limit.
In each case it is necessary to assume that the particles are non-interacting, and that multiple particles can occupy the same state and do so independently.
Derivation from microcanonical ensemble
Suppose we have a container with a huge number of very small particles all with identical physical characteristics (such as mass, charge, etc.). Let's refer to this as the system. Assume that though the particles have identical properties, they are distinguishable. For example, we might identify each particle by continually observing their trajectories, or by placing a marking on each one, e.g., drawing a different number on each one as is done with lottery balls.
The particles are moving inside that container in all directions with great speed. Because the particles are speeding around, they possess some energy. The Maxwell–Boltzmann distribution is a mathematical function that describes about how many particles in the container have a certain energy. More precisely, the Maxwell–Boltzmann distribution gives the non-normalized probability (this means that the probabilities do not add up to 1) that the state corresponding to a particular energy is occupied.
In general, there may be many particles with the same amount of energy . Let the number of particles with the same energy be , the number of particles possessing another energy be , and so forth for all the possible energies To describe this situation, we say that is the occupation number of the energy level If we know all the occupation numbers then we know the total energy of the system. However, because we can distinguish between which particles are occupying each energy level, the set of occupation numbers does not completely describe the state of the system. To completely describe the state of the system, or the microstate, we must specify exactly which particles are in each energy level. Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.
To begin with, assume that there is only one state at each energy level (there is no degeneracy). What follows next is a bit of combinatorial thinking which has little to do in accurately describing the reservoir of particles. For instance, let's say there is a total of k boxes labelled a_1, …, a_k. With the concept of combination, we could calculate how many ways there are to arrange N balls into the set of boxes, where the order of balls within each box isn't tracked. First, we select N_{a_1} balls from a total of N balls to place into box a_1, and continue to select for each box from the remaining balls, ensuring that every ball is placed in one of the boxes. The total number of ways that the balls can be arranged is
W = \binom{N}{N_{a_1}} \binom{N - N_{a_1}}{N_{a_2}} \cdots \binom{N - \sum_{l=1}^{k-1} N_{a_l}}{N_{a_k}}
As every ball has been placed into a box, \sum_l N_{a_l} = N, and we simplify the expression as
W = N! \prod_{l=1}^{k} \frac{1}{N_{a_l}!}
This is just the multinomial coefficient, the number of ways of arranging N items into k boxes, the l-th box holding N_l items, ignoring the permutation of items in each box.
Now, consider the case where there is more than one way to put N_i particles in the box (i.e. taking the degeneracy problem into consideration). If the i-th box has a "degeneracy" of g_i, that is, it has g_i "sub-boxes" (g_i boxes with the same energy ε_i; these states/boxes with the same energy are called degenerate states), such that any way of filling the i-th box where the number in the sub-boxes is changed is a distinct way of filling the box, then the number of ways of filling the i-th box must be increased by the number of ways of distributing the N_i objects in the g_i "sub-boxes". The number of ways of placing N_i distinguishable objects in g_i "sub-boxes" is g_i^{N_i} (the first object can go into any of the g_i boxes, the second object can also go into any of the g_i boxes, and so on). Thus the number of ways W that a total of N particles can be classified into energy levels according to their energies, while each level i has g_i distinct states such that the i-th level accommodates N_i particles, is:
W = N! \prod_i \frac{g_i^{N_i}}{N_i!}
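The multiplicity formula above can be evaluated directly for small systems. The following sketch uses toy occupation numbers and degeneracies only; it computes W = N! ∏_i g_i^{N_i}/N_i! exactly with integer arithmetic.

```python
from math import factorial

def multiplicity(occupations, degeneracies):
    """W = N! * prod_i g_i**N_i / N_i! for distinguishable particles over degenerate levels."""
    N = sum(occupations)
    W = factorial(N)
    for n_i, g_i in zip(occupations, degeneracies):
        W = W * g_i**n_i // factorial(n_i)   # each intermediate quotient is an exact integer
    return W

# Toy example: 4 particles over two levels with degeneracies 1 and 2.
print(multiplicity([3, 1], [1, 2]))          # 4!/(3! 1!) * 1**3 * 2**1 = 8
```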
This is the form for W first derived by Boltzmann. Boltzmann's fundamental equation relates the thermodynamic entropy S to the number of microstates W, where k is the Boltzmann constant. It was pointed out by Gibbs however, that the above expression for W does not yield an extensive entropy, and is therefore faulty. This problem is known as the Gibbs paradox. The problem is that the particles considered by the above equation are not indistinguishable. In other words, for two particles (A and B) in two energy sublevels the population represented by [A,B] is considered distinct from the population [B,A] while for indistinguishable particles, they are not. If we carry out the argument for indistinguishable particles, we are led to the Bose–Einstein expression for W:
The Maxwell–Boltzmann distribution follows from this Bose–Einstein distribution for temperatures well above absolute zero, implying that g_i ≫ 1. The Maxwell–Boltzmann distribution also requires low density, implying that g_i ≫ N_i. Under these conditions, we may use Stirling's approximation for the factorial:
N! \approx N^N e^{-N}
to write:
Using the fact that for we can again use Stirling's approximation to write:
This is essentially a division by N! of Boltzmann's original expression for W, and this correction is referred to as correct Boltzmann counting.
We wish to find the for which the function is maximized, while considering the constraint that there is a fixed number of particles and a fixed energy in the container. The maxima of and are achieved by the same values of and, since it is easier to accomplish mathematically, we will maximize the latter function instead. We constrain our solution using Lagrange multipliers forming the function:
Finally
In order to maximize the expression above we apply Fermat's theorem (stationary points), according to which local extrema, if exist, must be at critical points (partial derivatives vanish):
By solving the equations above () we arrive to an expression for :
Substituting this expression for into the equation for and assuming that yields:
or, rearranging:
Boltzmann realized that this is just an expression of the Euler-integrated fundamental equation of thermodynamics. Identifying E as the internal energy, the Euler-integrated fundamental equation states that :
where T is the temperature, P is pressure, V is volume, and μ is the chemical potential. Boltzmann's equation is the realization that the entropy is proportional to with the constant of proportionality being the Boltzmann constant. Using the ideal gas equation of state (PV = NkT), It follows immediately that and so that the populations may now be written:
Note that the above formula is sometimes written:
where is the absolute activity.
Alternatively, we may use the fact that
to obtain the population numbers as
where Z is the partition function defined by:
In an approximation where εi is considered to be a continuous variable, the Thomas–Fermi approximation yields a continuous degeneracy g proportional to so that:
which is just the Maxwell–Boltzmann distribution for the energy.
Derivation from canonical ensemble
In the above discussion, the Boltzmann distribution function was obtained via directly analysing the multiplicities of a system. Alternatively, one can make use of the canonical ensemble. In a canonical ensemble, a system is in thermal contact with a reservoir. While energy is free to flow between the system and the reservoir, the reservoir is thought to have infinitely large heat capacity as to maintain constant temperature, T, for the combined system.
In the present context, our system is assumed to have the energy levels with degeneracies . As before, we would like to calculate the probability that our system has energy .
If our system is in state , then there would be a corresponding number of microstates available to the reservoir. Call this number . By assumption, the combined system (of the system we are interested in and the reservoir) is isolated, so all microstates are equally probable. Therefore, for instance, if , we can conclude that our system is twice as likely to be in state than . In general, if is the probability that our system is in state ,
Since the entropy of the reservoir , the above becomes
Next we recall the thermodynamic identity (from the first law of thermodynamics):
In a canonical ensemble, there is no exchange of particles, so the term is zero. Similarly, This gives
where and denote the energies of the reservoir and the system at , respectively. For the second equality we have used the conservation of energy. Substituting into the first equation relating :
which implies, for any state s of the system
where Z is an appropriately chosen "constant" to make total probability 1. (Z is constant provided that the temperature T is invariant.)
where the index s runs through all microstates of the system. Z is sometimes called the Boltzmann sum over states (or "Zustandssumme" in the original German). If we index the summation via the energy eigenvalues instead of all possible states, degeneracy must be taken into account. The probability of our system having energy is simply the sum of the probabilities of all corresponding microstates:
where, with obvious modification,
this is the same result as before.
Comments on this derivation:
Notice that in this formulation, the initial assumption "... suppose the system has total N particles..." is dispensed with. Indeed, the number of particles possessed by the system plays no role in arriving at the distribution. Rather, how many particles would occupy states with energy follows as an easy consequence.
What has been presented above is essentially a derivation of the canonical partition function. As one can see by comparing the definitions, the Boltzmann sum over states is equal to the canonical partition function.
Exactly the same approach can be used to derive Fermi–Dirac and Bose–Einstein statistics. However, there one would replace the canonical ensemble with the grand canonical ensemble, since there is exchange of particles between the system and the reservoir. Also, the system one considers in those cases is a single particle state, not a particle. (In the above discussion, we could have assumed our system to be a single atom.)
See also
Bose–Einstein statistics
Fermi–Dirac statistics
Boltzmann factor
Notes
References
Bibliography
Carter, Ashley H., "Classical and Statistical Thermodynamics", Prentice–Hall, Inc., 2001, New Jersey.
Raj Pathria, "Statistical Mechanics", Butterworth–Heinemann, 1996.
Concepts in physics
James Clerk Maxwell
Ludwig Boltzmann | Maxwell–Boltzmann statistics | [
"Physics"
] | 2,877 | [
"nan"
] |
170,215 | https://en.wikipedia.org/wiki/Stepper%20motor | A stepper motor, also known as step motor or stepping motor, is a brushless DC electric motor that rotates in a series of small and discrete angular steps. Stepper motors can be set to any given step position without needing a position sensor for feedback. The step position can be rapidly increased or decreased to create continuous rotation, or the motor can be ordered to actively hold its position at one given step. Motors vary in size, speed, step resolution, and torque.
Switched reluctance motors are very large stepping motors with a reduced pole count, and generally are closed-loop commutated.
Mechanism
Brushed DC motors rotate continuously when DC voltage is applied to their terminals. The stepper motor is known for its property of converting a train of input pulses (typically square waves) into a precisely defined increment in the shaft’s rotational position. Each pulse rotates the shaft through a fixed angle.
Stepper motors effectively have multiple "toothed" electromagnets arranged as a stator around a central rotor, a gear-shaped piece of iron. The electromagnets are energized by an external driver circuit or a micro controller. To make the motor shaft turn, one electromagnet is first given power, which magnetically attracts the gear's teeth. When the gear's teeth are aligned to the first electromagnet, they are slightly offset from the next electromagnet. This means that when the next electromagnet is turned on and the first is turned off, the gear rotates slightly to align with the next one. From there the process is repeated. Each of the partial rotations is called a "step", with an integer number of steps making a full rotation. In that way, the motor can be turned by a precise angle.
The circular arrangement of electromagnets is divided into groups, each group called a phase, and there is an equal number of electromagnets per group. The number of groups is chosen by the designer of the stepper motor. The electromagnets of each group are interleaved with the electromagnets of other groups to form a uniform pattern of arrangement. For example, if the stepper motor has two groups identified as A or B, and ten electromagnets in total, then the grouping pattern would be ABABABABAB.
Electromagnets within the same group are all energized together. Because of this, stepper motors with more phases typically have more wires (or leads) to control the motor.
Types
There are three main types of stepper motors: permanent magnet, variable reluctance, and hybrid synchronous.
Permanent magnet motors use a permanent magnet (PM) in the rotor and operate on the attraction or repulsion between the rotor magnet and the stator electromagnets. Pulses move the rotor clockwise or anticlockwise in discrete steps. If left powered at a final step, a strong detent remains at that shaft location. This detent has a predictable spring rate and specified torque limit; slippage occurs if the limit is exceeded. If current is removed, a lesser detent still remains, holding shaft position against spring or other torque influences. Stepping can then be resumed while reliably being synchronized with control electronics.
Permanent magnet stepper motors have simple DC switching electronics, a power-off detent, and no position readout. These qualities are ideal for applications such as paper printers, 3D printers, and robotics. Such applications track position simply by counting the number of steps that each motor has been instructed to take.
Variable reluctance (VR) motors have a soft iron rotor and operate based on the principle that minimum reluctance occurs with minimum gap, so the rotor points are attracted toward the stator's magnetic poles. Variable reluctance motors have detents when powered on, but not when powered off.
Hybrid synchronous motors are a combination of the permanent magnet and variable reluctance types, to maximize power in a small size.
Phases
Two phase
There are two basic winding arrangements for the electromagnetic coils in a two phase stepper motor: bipolar and unipolar.
Unipolar motors
A unipolar stepper motor has one winding with center tap per phase. Each section of windings is switched on for each direction of magnetic field. Since in this arrangement a magnetic pole can be reversed without switching the polarity of the common wire, the commutation circuit can be simply a single switching transistor for each half winding. Typically, given a phase, the center tap of each winding is made common: three leads per phase and six leads for a typical two phase motor. Often, these two phase commons are internally joined, so the motor has only five leads.
A microcontroller or stepper motor controller can be used to activate the drive transistors in the right order, and this ease of operation makes unipolar motors popular with hobbyists; they are probably the cheapest way to get precise angular movements.
For the experimenter, the windings can be identified by touching the terminal wires together in PM motors. If the terminals of a coil are connected, the shaft becomes harder to turn. One way to distinguish the center tap (common wire) from a coil-end wire is by measuring the resistance. Resistance between common wire and coil-end wire is always half of the resistance between coil-end wires. This is because there is twice the length of coil between the ends and only half from center (common wire) to the end. A quick way to determine if the stepper motor is working is to short circuit every two pairs and try turning the shaft. Whenever a higher-than-normal resistance to turning is felt, it indicates that the circuit to the particular winding is closed and that the phase is working.
Bipolar motors
Bipolar motors have a pair of single winding connections per phase. The current in a winding needs to be reversed in order to reverse a magnetic pole, so the driving circuit must be more complicated, typically with an H-bridge arrangement (however there are several off-the-shelf driver chips available to make this a simple affair). There are two leads per phase, none is common.
A typical driving pattern for a two coil bipolar stepper motor would be: A+ B+ A− B−. I.e. drive coil A with positive current, then remove current from coil A; then drive coil B with positive current, then remove current from coil B; then drive coil A with negative current (flipping polarity by switching the wires e.g. with an H bridge), then remove current from coil A; then drive coil B with negative current (again flipping polarity same as coil A); the cycle is complete and begins anew.
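A sketch of how the drive pattern just described might be generated in software; coil states are listed as signed current levels in units of the full drive current, and the driver loop is hypothetical rather than tied to any particular controller or pin assignment.

```python
import itertools

# One electrical cycle of the pattern above, as (coil A, coil B) current levels.
SEQUENCE = [(+1, 0), (0, +1), (-1, 0), (0, -1)]   # A+  B+  A-  B-

def drive(steps, reverse=False):
    """Yield coil states for the requested number of steps (hypothetical driver loop)."""
    seq = SEQUENCE[::-1] if reverse else SEQUENCE
    yield from itertools.islice(itertools.cycle(seq), steps)

for a, b in drive(6):
    print(f"A={a:+d}  B={b:+d}")   # a real driver would translate these into H-bridge outputs
```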
Static friction effects using an H-bridge have been observed with certain drive topologies.
Dithering the stepper signal at a higher frequency than the motor can respond to will reduce this "static friction" effect.
Because windings are better utilized, they are more powerful than a unipolar motor of the same weight. This is due to the physical space occupied by the windings. A unipolar motor has twice the amount of wire in the same space, but only half used at any point in time, hence is 50% efficient (or approximately 70% of the torque output available). Though a bipolar stepper motor is more complicated to drive, the abundance of driver chips means this is much less difficult to achieve.
An 8-lead stepper is like a unipolar stepper, but the leads are not joined to common internally to the motor. This kind of motor can be wired in several configurations:
Unipolar.
Bipolar with series windings. This gives higher inductance but lower current per winding.
Bipolar with parallel windings. This requires higher current but can perform better as the winding inductance is reduced.
Bipolar with a single winding per phase. This method will run the motor on only half the available windings, which will reduce the available low speed torque but require less current
Higher-phase count
Multi-phase stepper motors with many phases tend to have much lower levels of vibration. While they are more expensive, they do have a higher power density and with the appropriate drive electronics are often better suited to the application.
Driver circuits
Stepper motor performance is strongly dependent on the driver circuit. Torque curves may be extended to greater speeds if the stator poles can be reversed more quickly, the limiting factor being a combination of the winding inductance. To overcome the inductance and switch the windings quickly, one must increase the drive voltage. This leads further to the necessity of limiting the current that these high voltages may otherwise induce.
An additional limitation, often comparable to the effects of inductance, is the back-EMF of the motor. As the motor's rotor turns, a sinusoidal voltage is generated proportional to the speed (step rate). This AC voltage is subtracted from the voltage waveform available to induce a change in the current.
L/R driver circuits
L/R driver circuits are also referred to as constant voltage drives because a constant positive or negative voltage is applied to each winding to set the step positions. However, it is winding current, not voltage, that applies torque to the stepper motor shaft. The current I in each winding is related to the applied voltage V by the winding inductance L and the winding resistance R. The resistance R determines the maximum current according to Ohm's law I=V/R. The inductance L determines the maximum rate of change of the current in the winding according to the formula for an inductor dI/dt = V/L. The resulting current for a voltage pulse is a quickly increasing current as a function of inductance. This reaches the V/R value and holds for the remainder of the pulse. Thus when controlled by a constant voltage drive, the maximum speed of a stepper motor is limited by its inductance since at some speed, the voltage V will be changing faster than the current I can keep up. In simple terms, the time constant of the current rise is L/R (e.g. a 10 mH inductance with 2 ohms resistance will take 5 ms to reach approx 2/3 of maximum torque or around 24 ms to reach 99% of max torque). To obtain high torque at high speeds requires a large drive voltage with a low resistance and low inductance.
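The current-rise behaviour described above can be checked numerically. This sketch uses the exponential solution i(t) = (V/R)(1 − e^(−t·R/L)) for the 10 mH, 2-ohm example; the drive voltage is an assumed value and cancels out of the printed fractions.

```python
import math

V, R, L = 12.0, 2.0, 10e-3         # assumed drive voltage (V), winding resistance (ohm), inductance (H)
tau = L / R                        # time constant: 5 ms for the example in the text

def winding_current(t):
    """Current in the winding t seconds after a constant voltage V is applied."""
    return (V / R) * (1.0 - math.exp(-t / tau))

for t in (5e-3, 24e-3):            # the two times quoted in the text
    frac = winding_current(t) / (V / R)
    print(f"t = {t*1e3:4.0f} ms -> {frac:.0%} of the V/R limiting current")
```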
With an L/R drive it is possible to control a low voltage resistive motor with a higher voltage drive simply by adding an external resistor in series with each winding. This will waste power in the resistors, and generate heat. It is therefore considered a low performing option, albeit simple and cheap.
Modern voltage-mode drivers overcome some of these limitations by approximating a sinusoidal voltage waveform to the motor phases. The amplitude of the voltage waveform is set up to increase with step rate. If properly tuned, this compensates the effects of inductance and back-EMF, allowing decent performance relative to current-mode drivers, but at the expense of design effort (tuning procedures) that are simpler for current-mode drivers.
Chopper drive circuits
Chopper drive circuits are referred to as controlled current drives because they generate a controlled current in each winding rather than applying a constant voltage. Chopper drive circuits are most often used with two-winding bipolar motors, the two windings being driven independently to provide a specific motor torque CW or CCW. On each winding, a "supply" voltage is applied to the winding as a square wave voltage; example 8 kHz. The winding inductance smooths the current which reaches a level according to the square wave duty cycle. Most often bipolar supply (+ and - ) voltages are supplied to the controller relative to the winding return. So 50% duty cycle results in zero current. 0% results in full V/R current in one direction. 100% results in full current in the opposite direction. This current level is monitored by the controller by measuring the voltage across a small sense resistor in series with the winding. This requires additional electronics to sense winding currents, and control the switching, but it allows stepper motors to be driven with higher torque at higher speeds than L/R drives. It also allows the controller to output predetermined current levels rather than fixed. Integrated electronics for this purpose are widely available.
Phase current waveforms
A stepper motor is a polyphase AC synchronous motor (see Theory below), and it is ideally driven by sinusoidal current. A full-step waveform is a gross approximation of a sinusoid, and is the reason why the motor exhibits so much vibration. Various drive techniques have been developed to better approximate a sinusoidal drive waveform: these are half stepping and microstepping.
Wave drive (one phase on)
In this drive method only a single phase is activated at a time. It has the same number of steps as the full-step drive, but the motor will have significantly less torque than rated. It is rarely used. The animated figure shown above is a wave drive motor. In the animation, the rotor has 25 teeth and it takes 4 steps to rotate by one tooth position. So there will be 25×4 = 100 steps per full rotation and each step will be 360°/100 = 3.6°.
Full-step drive (two phases on)
This is the usual method for full-step driving the motor. Two phases are always on so the motor will provide its maximum rated torque. As soon as one phase is turned off, another one is turned on. Wave drive and single phase full step are both one and the same, with same number of steps but difference in torque.
Half-stepping
When half-stepping, the drive alternates between two phases on and a single phase on. This increases the angular resolution. The motor also has less torque (approx 70%) at the full-step position (where only a single phase is on). This may be mitigated by increasing the current in the active winding to compensate. The advantage of half stepping is that the drive electronics need not change to support it. In animated figure shown above, if we change it to half-stepping, then it will take 8 steps to rotate by 1 tooth position. So there will be 25×8 = 200 steps per full rotation and each step will be 360/200 = 1.8°. Its angle per step is half of the full step.
Microstepping
What is commonly referred to as microstepping is often sine–cosine microstepping in which the winding current approximates a sinusoidal AC waveform. The common way to achieve sine-cosine current is with chopper-drive circuits. Sine–cosine microstepping is the most common form, but other waveforms can be used. Regardless of the waveform used, as the microsteps become smaller, motor operation becomes smoother, thereby greatly reducing resonance in any parts the motor may be connected to, as well as the motor itself. Resolution will be limited by the mechanical stiction, backlash, and other sources of error between the motor and the end device. Gear reducers may be used to increase resolution of positioning.
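A sketch of a sine–cosine microstepping current table of the kind described above; the microstep count and peak current are arbitrary illustrative values, not parameters of any particular driver chip.

```python
import math

def microstep_table(microsteps_per_full_step=8, i_peak=1.0):
    """Sine-cosine current pairs (I_A, I_B) covering one full electrical cycle."""
    n = 4 * microsteps_per_full_step           # four full steps per electrical cycle
    table = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        table.append((i_peak * math.sin(theta), i_peak * math.cos(theta)))
    return table

for i_a, i_b in microstep_table(8)[:9]:        # the first full step, split into 8 microsteps
    print(f"I_A={i_a:+.3f}  I_B={i_b:+.3f}")
```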
Step size reduction is an important step motor feature and a fundamental reason for their use in positioning.
Example: many modern hybrid step motors are rated such that the travel of every full step (example 1.8 degrees per full step or 200 full steps per revolution) will be within 3% or 5% of the travel of every other full step, as long as the motor is operated within its specified operating ranges. Several manufacturers show that their motors can easily maintain the 3% or 5% equality of step travel size as step size is reduced from full stepping down to 1/10 stepping. Then, as the microstepping divisor number grows, step size repeatability degrades. At large step size reductions it is possible to issue many microstep commands before any motion occurs at all and then the motion can be a "jump" to a new position. Some stepper controller ICs use increased current to minimise such missed steps, especially when the peak current pulses in one phase would otherwise be very brief.
Theory
A step motor can be viewed as a synchronous AC motor with the number of poles (on both rotor and stator) increased, taking care that they have no common denominator. Additionally, soft magnetic material with many teeth on the rotor and stator cheaply multiplies the number of poles (reluctance motor). Modern steppers are of hybrid design, having both permanent magnets and soft iron cores.
To achieve full rated torque, the coils in a stepper motor must reach their full rated current during each step. Winding inductance and counter-EMF generated by a moving rotor tend to resist changes in drive current, so that as the motor speeds up, less and less time is spent at full current—thus reducing motor torque. As speeds further increase, the current will not reach the rated value, and eventually the motor will cease to produce torque.
Pull-in torque
This is the measure of the torque produced by a stepper motor when it is operated without an acceleration state. At low speeds the stepper motor can synchronize itself with an applied step frequency, and this pull-in torque must overcome friction and inertia. It is important to make sure that the load on the motor is frictional rather than inertial as the friction reduces any unwanted oscillations.
The pull-in curve defines an area called the start/stop region. Into this region, the motor can be started/stopped instantaneously with a load applied and without loss of synchronism.
Pull-out torque
The stepper motor pull-out torque is measured by accelerating the motor to the desired speed and then increasing the torque loading until the motor stalls or misses steps. This measurement is taken across a wide range of speeds and the results are used to generate the stepper motor's dynamic performance curve. As noted below this curve is affected by drive voltage, drive current and current switching techniques. A designer may include a safety factor between the rated torque and the estimated full load torque required for the application.
Detent torque
Synchronous electric motors using permanent magnets have a resonant position holding torque (called detent torque or cogging, and sometimes included in the specifications) when not driven electrically. Soft iron reluctance cores do not exhibit this behavior.
Ringing and resonance
When the motor moves a single step it overshoots the final resting point and oscillates round this point as it comes to rest. This undesirable ringing is experienced as motor rotor vibration and is more pronounced in unloaded motors. An unloaded or under loaded motor may, and often will, stall if the vibration experienced is enough to cause loss of synchronisation.
Stepper motors have a natural frequency of operation. When the excitation frequency matches this resonance the ringing is more pronounced, steps may be missed, and stalling is more likely. Motor resonance frequency can be calculated from the formula:
where M_h is the holding torque in N·m, p is the number of pole pairs, and J_r is the rotor inertia in kg·m². The magnitude of the undesirable ringing is dependent on the back EMF resulting from rotor velocity. The resultant current promotes damping, so the drive circuit characteristics are important. The rotor ringing can be described in terms of its damping factor.
Ratings and specifications
Stepper motors' nameplates typically give only the winding current and occasionally the voltage and winding resistance. The rated voltage will produce the rated winding current at DC: but this is mostly a meaningless rating, as all modern drivers are current limiting and the drive voltages greatly exceed the motor rated voltage.
Datasheets from the manufacturer often indicate Inductance. Back-EMF is equally relevant, but seldom listed (it is straightforward to measure with an oscilloscope). These figures can be helpful for more in-depth electronics design, when deviating from standard supply voltages, adapting third party driver electronics, or gaining insight when choosing between motor models with otherwise similar size, voltage, and torque specifications.
A stepper's low-speed torque will vary directly with current. How quickly the torque falls off at faster speeds depends on the winding inductance and the drive circuitry it is attached to, especially the driving voltage.
Steppers should be sized according to published torque curve, which is specified by the manufacturer at particular drive voltages or using their own drive circuitry. Dips in the torque curve suggest possible resonances, whose impact on the application should be understood by designers.
Step motors adapted to harsh environments are often referred to as IP65 rated.
NEMA stepper motors
The US National Electrical Manufacturers Association (NEMA) standardises various dimensions, marking and other aspects of stepper motors, in NEMA standard (NEMA ICS 16-2001). NEMA stepper motors are labeled by faceplate size, NEMA 17 being a stepper motor with a 1.7 × 1.7 inch faceplate. The standard also lists motors with faceplate dimensions given in metric units. These motors are typically referred to as NEMA DD, where DD is the diameter of the faceplate in inches multiplied by 10 (e.g., NEMA 17 has a diameter of 1.7 inches). There are further specifiers to describe stepper motors, and such details may be found in the ICS 16-2001 standard.
Applications
Computer controlled stepper motors are a type of motion-control positioning system. They are typically digitally controlled as part of an open loop system for use in holding or positioning applications.
In the field of lasers and optics they are frequently used in precision positioning equipment such as linear actuators, linear stages, rotation stages, goniometers, and mirror mounts. Other uses are in packaging machinery, and positioning of valve pilot stages for fluid control systems.
Commercially, stepper motors are used in floppy disk drives, flatbed scanners, computer printers, plotters, slot machines, image scanners, compact disc drives, intelligent lighting, camera lenses, CNC machines, and 3D printers. Some programming hobbyists have used arrays of stepper motors as electronic musical instruments by programming the motors to rotate at the frequencies of different musical tones, in a sequence that imitates that found in a MIDI file.
Stepper motor system
A stepper motor system consists of three basic elements, often combined with some type of user interface (host computer, PLC or dumb terminal):
Indexers The indexer (or controller) is a microprocessor capable of generating step pulses and direction signals for the driver. In addition, the indexer is typically required to perform many other sophisticated command functions.
Drivers The driver (or amplifier) converts the indexer command signals into the power necessary to energize the motor windings. There are numerous types of drivers, with different voltage and current ratings and construction technology. Not all drivers are suitable to run all motors, so when designing a motion control system, the driver selection process is critical.
Stepper motors The stepper motor is an electromagnetic device that converts digital pulses into mechanical shaft rotation.
Advantages
Low cost for control achieved
High torque at startup and low speeds
Ruggedness
Simplicity of construction
Can operate in an open loop control system
Low maintenance (high reliability)
Less likely to stall or slip
Will work in any environment
Can be used in robotics in a wide scale.
High reliability
The rotation angle of the motor is proportional to the input pulse.
The motor has full torque at standstill (if the windings are energized)
Precise positioning and repeatability of movement, since good stepper motors have an accuracy of 3–5% of a step and this error is non-cumulative from one step to the next.
Excellent response to starting/stopping/reversing.
Very reliable since there are no contact brushes in the motor. Therefore, the life of the motor is simply dependent on the life of the bearing.
The motor's response to digital input pulses provides open-loop control, making the motor simpler and less costly to control.
It is possible to achieve very low-speed synchronous rotation with a load that is directly coupled to the shaft.
A wide range of rotational speeds can be realized, as the speed is proportional to the frequency of the input pulses.
Disadvantages
Resonance effect often exhibited at low speeds and decreasing torque with increasing speed.
See also
Brushed DC electric motor
Brushless DC electric motor
Flange
Fractional horsepower motors
Lavet-type stepping motor
Servo motor
Solenoid
Three-phase AC synchronous motors
ULN2003A (stepper motor) driver IC
References
External links
Controlling a stepper motor without microcontroller
Zaber Microstepping Tutorial. Retrieved on 2007-11-15.
Stepper System Overview. Retrieved on 2023-7-20.
Control of Stepping Motors - A Tutorial – Douglas W. Jones, The University of Iowa
NEMA motor, RepRapWiki
Stepping Motor Drive Guide from Dover Motion
Electric motors
Actuators | Stepper motor | [
"Technology",
"Engineering"
] | 5,140 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
170,350 | https://en.wikipedia.org/wiki/Desert%20climate | The desert climate or arid climate (in the Köppen climate classification BWh and BWk) is a dry climate sub-type in which there is a severe excess of evaporation over precipitation. The typically bald, rocky, or sandy surfaces in desert climates are dry and hold little moisture, quickly evaporating the already little rainfall they receive. Covering 14.2% of Earth's land area, hot deserts are the second-most common type of climate on Earth after the Polar climate.
There are two variations of a desert climate according to the Köppen climate classification: a hot desert climate (BWh), and a cold desert climate (BWk). To delineate "hot desert climates" from "cold desert climates", a mean annual temperature of 18 °C (64.4 °F) is used as an isotherm so that a location with a BW type climate with the appropriate temperature above this isotherm is classified as "hot arid subtype" (BWh), and a location with the appropriate temperature below the isotherm is classified as "cold arid subtype" (BWk).
Most desert/arid climates receive between of rainfall annually, although some of the most consistently hot areas of Central Australia, the Sahel and Guajira Peninsula can be, due to extreme potential evapotranspiration, classed as arid with the annual rainfall as high as .
Precipitation
Although no part of Earth is known for certain to be rainless, in the Atacama Desert of northern Chile, the average annual rainfall over 17 years was only . Some locations in the Sahara Desert such as Kufra, Libya, record an even drier of rainfall annually. The official weather station in Death Valley, United States reports annually, but in 40 months between 1931 and 1934 a total of just of rainfall was measured.
To determine whether a location has an arid climate, the precipitation threshold is determined. The precipitation threshold (in millimetres) involves first multiplying the average annual temperature in °C by 20, then adding 280 if 70% or more of the total precipitation is in the high-sun summer half of the year (April through September in the Northern Hemisphere, or October through March in the Southern), or 140 if 30–70% of the total precipitation is received during the applicable period, or 0 if less than 30% of the total precipitation is so received there. If the area's annual precipitation is less than half the threshold (50%), it is classified as a BW (desert climate), while 50–100% of the threshold results in a semi-arid climate.
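The threshold rule just described can be written as a small classifier. The station values used in the example below are hypothetical, and the hot/cold subtype split uses the 18 °C mean annual temperature isotherm mentioned earlier.

```python
def koppen_arid_class(mean_temp_c, annual_precip_mm, summer_precip_fraction):
    """Return 'BWh'/'BWk' (desert), 'BSh'/'BSk' (semi-arid) or None per the rule described above."""
    if summer_precip_fraction >= 0.70:
        threshold = mean_temp_c * 20 + 280
    elif summer_precip_fraction >= 0.30:
        threshold = mean_temp_c * 20 + 140
    else:
        threshold = mean_temp_c * 20
    suffix = "h" if mean_temp_c >= 18.0 else "k"        # hot vs. cold subtype (18 degC isotherm)
    if annual_precip_mm < 0.5 * threshold:
        return "BW" + suffix                            # desert climate
    if annual_precip_mm < threshold:
        return "BS" + suffix                            # semi-arid (steppe) climate
    return None                                         # not an arid (B) climate

# Hypothetical station: 22 degC annual mean, 150 mm/yr, 40% of rain in the summer half-year.
print(koppen_arid_class(22.0, 150.0, 0.40))             # -> 'BWh'
```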
Hot desert climates
Hot desert climates (BWh) are typically found under the subtropical ridge in the lower middle latitudes or the subtropics, often between 20° and 33° north and south latitudes. In these locations, stable descending air and high pressure aloft clear clouds and create hot, arid conditions with intense sunshine. Hot desert climates are found across vast areas of North Africa, West Asia, northwestern parts of the Indian Subcontinent, southwestern Africa, interior Australia, the Southwestern United States, northern Mexico, sections of southeastern Spain, the coast of Peru, and Chile. This makes hot deserts present in every continent except Antarctica.
At the time of high sun (summer), scorching, desiccating heat prevails. Hot-month average temperatures are normally between , and midday readings of are common. The world's absolute heat records, over , are generally in the hot deserts, where the heat potential can be the highest on the planet. This includes the record of in Death Valley, which is currently considered the highest temperature recorded on Earth. Some deserts in the tropics consistently experience very high temperatures all year long, even during wintertime. These locations feature some of the highest annual average temperatures recorded on Earth, exceeding , up to nearly in Dallol, Ethiopia. This last feature is seen in sections of Africa and Arabia. During colder periods of the year, night-time temperatures can drop to freezing or below due to the exceptional radiation loss under the clear skies. However, temperatures rarely drop far below freezing under the hot subtype.
Hot desert climates can be found in the deserts of North Africa such as the wide Sahara Desert, the Libyan Desert or the Nubian Desert; deserts of the Horn of Africa such as the Danakil Desert or the Grand Bara Desert; deserts of Southern Africa such as the Namib Desert or the Kalahari Desert; deserts of West Asia such as the Arabian Desert, or the Syrian Desert; deserts of South Asia such as Dasht-e Lut and Dasht-e Kavir of Iran or the Thar Desert of India and Pakistan; deserts of the United States and Mexico such as the Mojave Desert, the Sonoran Desert or the Chihuahuan Desert; deserts of Australia such as the Simpson Desert or the Great Victoria Desert and many other regions. In Europe, the hot desert climate can only be found on southeastern coast of Spain as well as small inland parts of southeastern, especially parts of the Tabernas Desert.
Hot deserts are lands of extremes: most of them are among the hottest, the driest, and the sunniest places on Earth because of nearly constant high pressure; the almost permanent removal of low-pressure systems, dynamic fronts, and atmospheric disturbances; sinking air motion; dry atmosphere near the surface and aloft; the exacerbated exposure to the sun where solar angles are always high makes this desert inhospitable to most species.
Cold desert climates
Cold desert climates (BWk) usually feature hot (or warm in a few instances), dry summers, though summers are not typically as hot as hot desert climates. Unlike hot desert climates, cold desert climates tend to feature cold, dry winters. Snow tends to be rare in regions with this climate. The Gobi Desert in northern China and Mongolia is one example of a cold desert. Though hot in the summer, it shares the freezing winters of the rest of Inner Asia. Summers in South America's Atacama Desert are mild, with only slight temperature variations between seasons. Cold desert climates are typically found at higher altitudes than hot desert climates and are usually drier than hot desert climates.
Cold desert climates are typically located in temperate zones in the 30s and 40s latitudes, usually in the leeward rain shadow of high mountains, restricting precipitation from the westerly winds. An example of this is the Patagonian Desert in Argentina, bounded by the Andes ranges to its west. In the case of Central Asia, mountains restrict precipitation from the eastern monsoon. The Kyzyl Kum, Taklamakan and Katpana Desert deserts of Central Asia are other significant examples of BWk climates. The Ladakh region and the city of Leh in the Great Himalayas in India also have a cold desert climate. In North America, the cold desert climate occurs in the drier parts of the Great Basin Desert and the Bighorn Basin in Big Horn and Washakie County in Wyoming. The Hautes Plaines, located in the northeastern section of Morocco and in Algeria, is another prominent example of a cold desert climate. In Europe, this climate only occurs in some inland parts of southeastern Spain, such as in Lorca.
Polar climate desert areas in the Arctic and Antarctic regions receive very little precipitation during the year owing to the cold, dry air freezing most precipitation. Polar desert climates have desert-like features that occur in cold desert climates, including intermittent streams, hypersaline lakes, and extremely barren terrain in unglaciated areas such as the McMurdo Dry Valleys of Antarctica. These areas are generally classified as having polar climates because they have average summer temperatures below even if they have some characteristics of extreme non-polar deserts.
Climate charts
Hot deserts
Cold deserts
See also
List of deserts
Dry climate
Semi-arid
Desert
References
External links
Desert climate summary
Desert climate explanation
Desert report/essay
Climate of Africa
Climate of Asia
Climate of Australia
Climate of North America
Climate of South America
Köppen climate types | Desert climate | [
"Biology"
] | 1,626 | [
"Deserts",
"Ecosystems"
] |
170,353 | https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov%20theorem | In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors) states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator.
The theorem was named after Carl Friedrich Gauss and Andrey Markov, although Gauss' work significantly predates Markov's. But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above. A further generalization to non-spherical errors was given by Alexander Aitken.
Scalar Case Statement
Suppose we are given two random variable vectors, X and Y, and that we want to find the best linear estimator of Y given X, using the best linear estimator
\hat{Y} = \alpha X + \mu
where the parameters α and μ are both real numbers.
Such an estimator \hat{Y} would have the same mean and standard deviation as Y, that is, \mu_{\hat{Y}} = \mu_Y and \sigma_{\hat{Y}} = \sigma_Y.
Therefore, if the vector X has mean \mu_X and standard deviation \sigma_X, the best linear estimator would be
\hat{Y} = \sigma_Y \frac{X - \mu_X}{\sigma_X} + \mu_Y
since \hat{Y} has the same mean and standard deviation as Y.
Statement
Suppose we have, in matrix notation, the linear relationship
y = X\beta + \varepsilon, \qquad (y, \varepsilon \in \mathbb{R}^n,\ \beta \in \mathbb{R}^K,\ X \in \mathbb{R}^{n \times K})
expanding to,
y_i = \sum_{j=1}^{K} \beta_j X_{ij} + \varepsilon_i \qquad \text{for all } i = 1, 2, \ldots, n
where \beta_j are non-random but unobservable parameters, X_{ij} are non-random and observable (called the "explanatory variables"), \varepsilon_i are random, and so y_i are random. The random variables \varepsilon_i are called the "disturbance", "noise" or simply "error" (will be contrasted with "residual" later in the article; see errors and residuals in statistics). Note that to include a constant in the model above, one can choose to introduce the constant as a variable \beta_{K+1} with a newly introduced last column of X being unity, i.e., X_{i(K+1)} = 1 for all i. Note that though the sample responses y_i are observable, the following statements and arguments including assumptions, proofs and the others assume knowledge of X_{ij} but not of y_i.
The Gauss–Markov assumptions concern the set of error random variables, :
They have mean zero: \operatorname{E}[\varepsilon_i] = 0.
They are homoscedastic, that is, all have the same finite variance: \operatorname{Var}(\varepsilon_i) = \sigma^2 < \infty for all i, and
Distinct error terms are uncorrelated: \operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0 for all i \neq j.
A linear estimator of is a linear combination
in which the coefficients are not allowed to depend on the underlying coefficients , since those are not observable, but are allowed to depend on the values , since these data are observable. (The dependence of the coefficients on each is typically nonlinear; the estimator is linear in each and hence in each random which is why this is "linear" regression.) The estimator is said to be unbiased if and only if
regardless of the values of . Now, let be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is
in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) The best linear unbiased estimator (BLUE) of the vector of parameters is one with the smallest mean squared error for every vector of linear combination parameters. This is equivalent to the condition that
is a positive semi-definite matrix for every other linear unbiased estimator .
The ordinary least squares estimator (OLS) is the function
\widehat{\beta} = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y
of y and X (where X^{\mathsf T} denotes the transpose of X) that minimizes the sum of squares of residuals (misprediction amounts):
\sum_{i=1}^{n} \left( y_i - \widehat{y}_i \right)^2 = \sum_{i=1}^{n} \left( y_i - \sum_{j=1}^{K} \widehat{\beta}_j X_{ij} \right)^2
The theorem now states that the OLS estimator is a best linear unbiased estimator (BLUE).
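A small Monte Carlo sketch (all data simulated, all parameter values assumed) that illustrates the statement: both OLS and an alternative linear unbiased estimator, here a weighted least-squares estimator with arbitrary weights, are centred on the true coefficients, but OLS shows the smaller sampling variance under homoscedastic, uncorrelated errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 5_000
beta = np.array([2.0, -1.0])                               # assumed true coefficients
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])    # fixed design matrix

# An alternative linear unbiased estimator: weighted least squares with arbitrary
# (wrong) weights -- still linear in y and unbiased under the Gauss-Markov assumptions.
W = np.diag(np.linspace(1.0, 3.0, n))

ols_est, alt_est = [], []
for _ in range(trials):
    y = X @ beta + rng.normal(scale=1.0, size=n)           # homoscedastic, uncorrelated errors
    ols_est.append(np.linalg.solve(X.T @ X, X.T @ y))
    alt_est.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y))

ols_est, alt_est = np.array(ols_est), np.array(alt_est)
print("mean estimates:", ols_est.mean(0), alt_est.mean(0))  # both close to beta
print("variances     :", ols_est.var(0), alt_est.var(0))    # OLS variances are never larger
```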
The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination whose coefficients do not depend upon the unobservable but whose expected value is always zero.
Remark
Proof that the OLS indeed minimizes the sum of squares of residuals may proceed as follows with a calculation of the Hessian matrix and showing that it is positive definite.
The MSE function we want to minimize is
for a multiple regression model with p variables. The first derivative is
where is the design matrix
The Hessian matrix of second derivatives is
Assuming the columns of are linearly independent so that is invertible, let , then
Now let be an eigenvector of .
In terms of vector multiplication, this means
where is the eigenvalue corresponding to . Moreover,
Finally, as eigenvector was arbitrary, it means all eigenvalues of are positive, therefore is positive definite. Thus,
is indeed a global minimum.
Or, just see that for all vectors . So the Hessian is positive definite if full rank.
Proof
Let \tilde{\beta} = Cy be another linear estimator of \beta with C = (X^{\mathsf T} X)^{-1} X^{\mathsf T} + D, where D is a K \times n non-zero matrix. As we're restricting to unbiased estimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of \widehat{\beta}, the OLS estimator. We calculate:
\operatorname{E}[\tilde{\beta}] = \operatorname{E}[Cy] = \operatorname{E}\left[\left((X^{\mathsf T} X)^{-1} X^{\mathsf T} + D\right)(X\beta + \varepsilon)\right] = \beta + DX\beta
Therefore, since \beta is unobservable, \tilde{\beta} is unbiased if and only if DX = 0. Then:
\operatorname{Var}(\tilde{\beta}) = \sigma^2 (X^{\mathsf T} X)^{-1} + \sigma^2 D D^{\mathsf T} = \operatorname{Var}(\widehat{\beta}) + \sigma^2 D D^{\mathsf T}
Since D D^{\mathsf T} is a positive semidefinite matrix, \operatorname{Var}(\tilde{\beta}) exceeds \operatorname{Var}(\widehat{\beta}) by a positive semidefinite matrix.
Remarks on the proof
As it has been stated before, the condition of is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator of is (best in the sense that it has minimum variance). To see this, let another linear unbiased estimator of .
Moreover, equality holds if and only if . We calculate
This proves that the equality holds if and only if which gives the uniqueness of the OLS estimator as a BLUE.
Generalized least squares estimator
The generalized least squares (GLS), developed by Aitken, extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix. The Aitken estimator is also a BLUE.
Gauss–Markov theorem as stated in econometrics
In most treatments of OLS, the regressors (parameters of interest) in the design matrix are assumed to be fixed in repeated samples. This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics. Instead, the assumptions of the Gauss–Markov theorem are stated conditional on .
Linearity
The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation $y = \beta_{0} + \beta_{1}x^{2}$ qualifies as linear, while $y = \beta_{0} + \beta_{1}^{2}x$ can be transformed to be linear by replacing $\beta_{1}^{2}$ with another parameter, say $\gamma$. An equation with a parameter dependent on an independent variable does not qualify as linear, for example $y = \beta_{0} + \beta_{1}(x)\,x$, where $\beta_{1}(x)$ is a function of $x$.
Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function—often used in economics—is nonlinear:
$Y = A L^{\alpha} K^{1-\alpha} e^{\varepsilon}.$
But it can be expressed in linear form by taking the natural logarithm of both sides:
$\ln Y = \ln A + \alpha \ln L + (1 - \alpha)\ln K + \varepsilon.$
This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables.
One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation.
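The following sketch (synthetic data and hypothetical parameter values, supplied purely for illustration) estimates the log-linearized Cobb–Douglas relation above by OLS on the transformed variables:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Cobb-Douglas data: Y = A * L^alpha * K^(1-alpha) * exp(eps).
n, A, alpha = 500, 2.0, 0.6
L = rng.uniform(1.0, 10.0, size=n)
K = rng.uniform(1.0, 10.0, size=n)
eps = rng.normal(scale=0.1, size=n)
Y = A * L**alpha * K**(1 - alpha) * np.exp(eps)

# Take logs to obtain a model that is linear in its parameters:
# ln Y = ln A + alpha * ln L + (1 - alpha) * ln K + eps.
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
b = np.linalg.lstsq(X, np.log(Y), rcond=None)[0]
print("ln A:", b[0], "alpha:", b[1], "1 - alpha:", b[2])
```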
Strict exogeneity
For all $n$ observations, the expectation—conditional on the regressors—of the error term is zero:
$\operatorname{E}[\,\varepsilon_{i} \mid X\,] = \operatorname{E}[\,\varepsilon_{i} \mid x_{1}, \dots, x_{n}\,] = 0,$
where $x_{i} = (x_{i1}, x_{i2}, \dots, x_{ik})^{\mathsf T}$ is the data vector of regressors for the ith observation, and consequently $X = (x_{1}^{\mathsf T}, x_{2}^{\mathsf T}, \dots, x_{n}^{\mathsf T})^{\mathsf T}$ is the $n \times k$ data matrix or design matrix.
Geometrically, this assumption implies that $x_{i}$ and $\varepsilon_{i}$ are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero.
This assumption is violated if the explanatory variables are measured with error, or are endogenous. Endogeneity can be the result of simultaneity, where causality flows back and forth between the dependent and independent variables. Instrumental variable techniques are commonly used to address this problem.
Full rank
The sample data matrix $X$ must have full column rank.
Otherwise $X^{\mathsf T}X$ is not invertible and the OLS estimator cannot be computed.
A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is called the "dummy variable trap", when a base dummy variable is not omitted, resulting in perfect correlation between the dummy variables and the constant term.
Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient, but still unbiased, estimate. The estimates will be less precise and highly sensitive to particular sets of data. Multicollinearity can be detected from the condition number or the variance inflation factor, among other tests.
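As a rough illustrative check (an editorial sketch with made-up regressors; the diagnostics shown are standard but the data are hypothetical), both the condition number and the variance inflation factor can be computed with basic linear algebra:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical regressors where x3 is almost a linear combination of x1 and x2.
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 2 * x1 - x2 + rng.normal(scale=1e-3, size=n)   # near-collinear column
X = np.column_stack([np.ones(n), x1, x2, x3])

# Condition number of the design matrix: very large values signal multicollinearity.
print("condition number:", np.linalg.cond(X))

# Variance inflation factor for each non-constant regressor:
# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j on the others.
for j in range(1, X.shape[1]):
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ coef
    r2 = 1 - resid.var() / X[:, j].var()
    print(f"VIF for column {j}: {1 / (1 - r2):.1f}")
```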
Spherical errors
The outer product of the error vector must be spherical:
$\operatorname{E}[\,\varepsilon\varepsilon^{\mathsf T} \mid X\,] = \operatorname{Var}[\,\varepsilon \mid X\,] = \sigma^{2}I \qquad \text{with } \sigma^{2} > 0.$
This implies the error term has uniform variance (homoscedasticity) and no serial correlation. If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" describes the multivariate normal distribution: if $\operatorname{Var}[\,\varepsilon \mid X\,] = \sigma^{2}I$ in the multivariate normal density, then the equation $f(\varepsilon) = c$ is the formula for a ball centered at μ with radius σ in n-dimensional space.
Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression of food expenditure on income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.
This assumption is violated when there is autocorrelation. Autocorrelation can be visualized on a data plot when a given observation is more likely to lie above the fitted regression line if adjacent observations also lie above it. Autocorrelation is common in time series data, where a data series may experience "inertia": if a dependent variable takes a while to fully absorb a shock, successive errors will be correlated. Spatial autocorrelation can also occur, since neighboring geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form. In these cases, correcting the specification is one possible way to deal with autocorrelation.
When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.
See also
Independent and identically distributed random variables
Linear regression
Measurement uncertainty
Other unbiased statistics
Best linear unbiased prediction (BLUP)
Minimum-variance unbiased estimator (MVUE)
References
Further reading
External links
Earliest Known Uses of Some of the Words of Mathematics: G (brief history and explanation of the name)
Proof of the Gauss Markov theorem for multiple linear regression (makes use of matrix algebra)
A Proof of the Gauss Markov theorem using geometry
Theorems in statistics | Gauss–Markov theorem | [
"Mathematics"
] | 2,447 | [
"Mathematical problems",
"Mathematical theorems",
"Theorems in statistics"
] |
170,366 | https://en.wikipedia.org/wiki/Normal%20matrix | In mathematics, a complex square matrix $A$ is normal if it commutes with its conjugate transpose $A^{*}$: $A^{*}A = AA^{*}$.
The concept of normal matrices can be extended to normal operators on infinite-dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.
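As a minimal numerical sketch of the defining condition (the matrices below are arbitrary editorial examples, not from the article), normality can be checked by comparing $AA^{*}$ with $A^{*}A$:

```python
import numpy as np

def is_normal(A: np.ndarray, tol: float = 1e-12) -> bool:
    """Check whether a complex square matrix commutes with its conjugate transpose."""
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

# A Hermitian matrix is normal ...
H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
print(is_normal(H))          # True

# ... while a generic upper-triangular, non-diagonal matrix is not.
T = np.array([[1.0, 1.0], [0.0, 2.0]])
print(is_normal(T))          # False
```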
The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix $A$ satisfying the equation $A^{*}A = AA^{*}$ is diagonalizable. (The converse does not hold because diagonalizable matrices may have non-orthogonal eigenspaces.) Thus $A = UDU^{*}$ and $A^{*} = UD^{*}U^{*}$, where $D$ is a diagonal matrix whose diagonal values are in general complex.
The left and right singular vectors in the singular value decomposition of a normal matrix differ only in complex phase from each other and from the corresponding eigenvectors, since the phase must be factored out of the eigenvalues to form singular values.
Special cases
Among complex matrices, all unitary, Hermitian, and skew-Hermitian matrices are normal, with all eigenvalues being unit modulus, real, and imaginary, respectively. Likewise, among real matrices, all orthogonal, symmetric, and skew-symmetric matrices are normal, with all eigenvalues being complex conjugate pairs on the unit circle, real, and imaginary, respectively. However, it is not the case that all normal matrices are either unitary or (skew-)Hermitian, as their eigenvalues can be any complex number, in general. For example,
is neither unitary, Hermitian, nor skew-Hermitian, because its eigenvalues are neither all of unit modulus, all real, nor all purely imaginary; yet it is normal because it commutes with its conjugate transpose.
Consequences
The concept of normality is important because normal matrices are precisely those to which the spectral theorem applies: a matrix $A$ is normal if and only if it can be written as $A = UDU^{*}$ with $U$ unitary and $D$ diagonal.
The diagonal entries of $D$ are the eigenvalues of $A$, and the columns of $U$ are the eigenvectors of $A$. The matching eigenvalues in $D$ come in the same order as the eigenvectors are ordered as columns of $U$.
Another way of stating the spectral theorem is to say that normal matrices are precisely those matrices that can be represented by a diagonal matrix with respect to a properly chosen orthonormal basis of $\mathbb{C}^{n}$. Phrased differently: a matrix is normal if and only if its eigenspaces span $\mathbb{C}^{n}$ and are pairwise orthogonal with respect to the standard inner product of $\mathbb{C}^{n}$.
The spectral theorem for normal matrices is a special case of the more general Schur decomposition which holds for all square matrices. Let $A$ be a square matrix. Then by Schur decomposition it is unitarily similar to an upper-triangular matrix, say, $B$. If $A$ is normal, so is $B$. But then $B$ must be diagonal, for, as noted above, a normal upper-triangular matrix is diagonal.
The spectral theorem permits the classification of normal matrices in terms of their spectra; for example, a normal matrix is unitary if and only if its spectrum lies on the unit circle, and Hermitian if and only if its spectrum is real.
In general, the sum or product of two normal matrices need not be normal. However, the following holds: if $A$ and $B$ are normal with $AB = BA$, then both $AB$ and $A + B$ are also normal, and there exists a unitary matrix $U$ such that $UAU^{*}$ and $UBU^{*}$ are both diagonal.
In this special case, the columns of $U^{*}$ are eigenvectors of both $A$ and $B$ and form an orthonormal basis in $\mathbb{C}^{n}$. This follows by combining the theorems that, over an algebraically closed field, commuting matrices are simultaneously triangularizable and a normal matrix is diagonalizable – the added result is that these can both be done simultaneously.
Equivalent definitions
It is possible to give a fairly long list of equivalent definitions of a normal matrix. Let $A$ be an $n \times n$ complex matrix. Then the following are equivalent:
1. $A$ is normal.
2. $A$ is diagonalizable by a unitary matrix.
3. There exists a set of eigenvectors of $A$ which forms an orthonormal basis for $\mathbb{C}^{n}$.
4. $\lVert Ax \rVert = \lVert A^{*}x \rVert$ for every $x$.
5. The Frobenius norm of $A$ can be computed by the eigenvalues of $A$: $\operatorname{tr}(A^{*}A) = \sum_{j} \lvert \lambda_{j} \rvert^{2}$.
6. The Hermitian part $\tfrac{1}{2}(A + A^{*})$ and skew-Hermitian part $\tfrac{1}{2}(A - A^{*})$ of $A$ commute.
7. $A^{*}$ is a polynomial (of degree $\leq n - 1$) in $A$.
8. $A^{*} = AU$ for some unitary matrix $U$.
9. $U$ and $P$ commute, where we have the polar decomposition $A = UP$ with a unitary matrix $U$ and some positive semidefinite matrix $P$.
10. $A$ commutes with some normal matrix $N$ with distinct eigenvalues.
11. $\sigma_{i} = \lvert \lambda_{i} \rvert$ for all $1 \leq i \leq n$, where $A$ has singular values $\sigma_{1} \geq \cdots \geq \sigma_{n}$ and has eigenvalues that are indexed with ordering $\lvert \lambda_{1} \rvert \geq \cdots \geq \lvert \lambda_{n} \rvert$.
Some but not all of the above generalize to normal operators on infinite-dimensional Hilbert spaces. For example, a bounded operator satisfying (9) is only quasinormal.
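The following sketch (an editorial illustration; the random construction and the numbering in the comments refer to the list above) checks a few of these equivalences numerically for a randomly generated normal matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a random normal matrix A = U D U* from a random unitary U (QR of a
# complex Gaussian matrix) and a random complex diagonal D.
n = 5
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)
D = np.diag(rng.normal(size=n) + 1j * rng.normal(size=n))
A = U @ D @ U.conj().T

# (1) A commutes with its conjugate transpose.
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# (4) ||Ax|| = ||A*x|| for an arbitrary vector x.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(A.conj().T @ x))

# (5) Squared Frobenius norm equals the sum of squared eigenvalue moduli.
eigvals = np.linalg.eigvals(A)
assert np.isclose(np.linalg.norm(A, "fro") ** 2, np.sum(np.abs(eigvals) ** 2))
```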
Normal matrix analogy
It is occasionally useful (but sometimes misleading) to think of the relationships of special kinds of normal matrices as analogous to the relationships of the corresponding type of complex numbers of which their eigenvalues are composed. This is because any function of a non-defective matrix acts directly on each of its eigenvalues, and for a normal matrix the conjugate transpose of its spectral decomposition $UDU^{*}$ is $UD^{*}U^{*}$, where $D$ is the diagonal matrix of eigenvalues. Likewise, if two normal matrices commute and are therefore simultaneously diagonalizable, any operation between these matrices also acts on each corresponding pair of eigenvalues.
The conjugate transpose is analogous to the complex conjugate.
Unitary matrices are analogous to complex numbers on the unit circle.
Hermitian matrices are analogous to real numbers.
Hermitian positive definite matrices are analogous to positive real numbers.
Skew Hermitian matrices are analogous to purely imaginary numbers.
Invertible matrices are analogous to non-zero complex numbers.
The inverse of a matrix has each eigenvalue inverted.
A uniform scaling matrix is analogous to a constant number.
In particular, the zero is analogous to 0, and
the identity matrix is analogous to 1.
An idempotent matrix is an orthogonal projection with each eigenvalue either 0 or 1.
A normal involution has eigenvalues $\pm 1$.
As a special case, the complex numbers may be embedded in the normal 2×2 real matrices by the mapping
$a + bi \mapsto \begin{pmatrix} a & b \\ -b & a \end{pmatrix},$
which preserves addition and multiplication. It is easy to check that this embedding respects all of the above analogies.
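A short sketch (an editorial addition; the sample numbers are arbitrary) verifies that this embedding turns complex arithmetic into matrix arithmetic and that the resulting matrices are normal:

```python
import numpy as np

def embed(z: complex) -> np.ndarray:
    """Embed a complex number a + bi as the 2x2 real matrix [[a, b], [-b, a]]."""
    return np.array([[z.real, z.imag], [-z.imag, z.real]])

z, w = 1.5 - 2.0j, -0.3 + 4.0j

# The embedding turns complex addition and multiplication into matrix
# addition and multiplication.
assert np.allclose(embed(z) + embed(w), embed(z + w))
assert np.allclose(embed(z) @ embed(w), embed(z * w))

# Each embedded matrix is normal: it commutes with its transpose.
M = embed(z)
assert np.allclose(M @ M.T, M.T @ M)
```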
See also
Hermitian matrix
Least-squares normal matrix
Notes
Citations
Sources
Matrices
ja:正規作用素 | Normal matrix | [
"Mathematics"
] | 1,255 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
170,375 | https://en.wikipedia.org/wiki/Chengdu | Chengdu is the capital city of the Chinese province of Sichuan. With a population of 20,937,757 at the 2020 census, it is the fourth most populous city in China, and it is the only city with a population of over 20 million apart from direct-administered municipalities. It is traditionally the hub of Western China.
Chengdu is in central Sichuan. The surrounding Chengdu Plain is known as the "Country of Heaven" and the "Land of Abundance". Its prehistoric settlers included the Sanxingdui culture. The site of Dujiangyan, an ancient irrigation system, is designated as a World Heritage Site. The Jin River flows through the city. Chengdu's culture reflects that of its province, Sichuan; in 2011, it was recognized by UNESCO as a city of gastronomy. It is associated with the giant panda, a Chinese national symbol that inhabits the area of Sichuan; the city is home to the Chengdu Research Base of Giant Panda Breeding.
Founded by the Kingdom of Shu in the 4th century BC, Chengdu is unique as the only major Chinese settlement that has maintained its name unchanged throughout the imperial, republican, and communist eras for more than two thousand years. It was the capital of Liu Bei's Shu Han Empire during the Three Kingdoms Era, as well as several other local kingdoms during the Middle Ages. During World War II, refugees from eastern China fleeing from the Japanese settled in Chengdu. After the war, Chengdu was briefly the capital of the Nationalist republican government until it withdrew to Taipei on the island of Taiwan. Under the PRC, Chengdu's importance as a link between Eastern China and Western China expanded, with railways built to Chongqing in 1952, and Kunming and Tibet afterward. In the 1960s, Chengdu became an important defense industry hub.
Chengdu is now one of the most important economic, financial, commercial, cultural, transportation, research, and communication centers in China. Its economy is diverse, characterized by the machinery, automobile, medicine, food, and information technology industries. Chengdu is a leading financial hub, ranking 35th globally on the 2021 Global Financial Centres Index. Chengdu also hosts many international companies; more than 300 Fortune 500 companies have established branches in Chengdu. Chengdu is the third Chinese city with two international airports, after Beijing and Shanghai: Chengdu Shuangliu International Airport, a hub of Air China and Sichuan Airlines and one of the 30 busiest airports in the world, and the newly built Tianfu International Airport. The Chengdu railway station is one of the six biggest in China. Chengdu is classified as a "Beta +" (global second-tier) city (along with Barcelona and Washington, D.C.) by the GaWC. As of 2023, the city also hosts 23 foreign consulates, the fourth most in China behind Beijing, Shanghai, and Guangzhou. Chengdu is the seat of the Western Theater Command region of the People's Liberation Army. In 2023, Chengdu became the third Chinese city to host the FISU Summer World University Games, staging the 31st edition after Beijing 2001 and Shenzhen 2011. The city will also host the 2025 World Games. It is considered one of the best cities in China to live in, and also a national central city of China.
Chengdu is one of the world's top 25 cities by scientific research output. The city is home to the greatest number of universities and research institutes in Western China. Notably, these include: Sichuan University, University of Electronic Science and Technology of China, Southwestern University of Finance and Economics, Southwest Jiaotong University, Chengdu University of Technology, Sichuan Normal University, and Xihua University.
Name
The name Chengdu is attested in sources dating back to the Warring States period. It has been called the only major city in China to have remained at an unchanged location with an unchanged name throughout the imperial, republican, and communist eras. However, it also had other names; for example, it was briefly known as "Xijing" (Western Capital) in the 17th century. Etymology of the name is unclear. The earliest and most widely known explanation, although not generally accepted by modern scholars, is provided in the 10th-century geographical work Universal Geography of the Taiping Era, which states that the ninth king of Shu's Kaiming dynasty named his new capital Chengdu after a statement by King Tai of Zhou that a settlement needed "one year to become a town, two to become a city, and three to become a metropolis." (The character for cheng may mean "turned into" while du can mean either a metropolis or a capital).
The present spelling is based on pinyin romanization; its Postal Map romanization was "Chengtu". Its former status as the seat of the Chengdu Prefecture prompted Marco Polo's spellings "Sindafu", "Sin-din-fu", &c. and the Protestant missionaries' romanization "Ching-too Foo".
Although the official name of the city has remained (almost) constant, the surrounding area has sometimes taken other names, including "Yizhou". Chinese nicknames for the city include the Turtle City, variously derived from the old city walls' shape on a map or a legend that Zhang Yi had planned their course by following a turtle's tracks; the Brocade City (see Sichuan brocade), a contraction of the earlier "City of the Brocade Official", after an imperial office established under the Western Han; and the Hibiscus City (Rongcheng, 蓉城), from the hibiscus which King Meng Chang of the Later Shu ordered planted upon the city wall during the 10th century.
According to Étienne de la Vaissière, "Baghshūr" () may be the Sogdian name for the region of Chengdu. This toponym is attested near Merv, but not far from Chengdu are found the large salt water wells of the Yangtze basin.
Logo
The city logo adopted in 2011 is inspired by the Golden Sun Bird, an ancient relic unearthed in 2001 from the Jinsha Site.
History
Early history
Archaeological discoveries at the Sanxingdui and Jinsha Site have established that the area surrounding Chengdu was inhabited over four thousand years ago, in the 18th–10th centuryBC. At the time of China's Xia, Shang, and Zhou dynasties, it represented a separate ancient bronze-wielding culture that, following its partial sinification, became known to the Chinese as Shu. Shu was conquered by Qin in 316BC, and the settlement was re-founded by Qin general Zhang Yi.
Pre-Qin to Qin and Han dynasties
In the early stage of the Xia dynasty or even earlier, the ancient Shu Kingdom located on the Chengdu Plain has formed a relatively developed bronze civilization, becoming an important source of Chinese civilization and one of the birthplaces of the Chinese nation. According to records, there were five dynasties in the ancient Shu Kingdom, and their capitals were Qushang (now Wenjiang District, Chengdu), Piyi (now Pidu District), Xindu, and Guangdu. At the end of the Spring and Autumn period (around the 4th century BC), the fifth King Kaiming moved the capital to Chengdu. According to "Taiping Huanyu Ji", the name of the city is borrowed from the history of the establishment of the capital in the Western Zhou dynasty. The allusions of Zhou Wang Qianqi's "one year, he lived in a cluster, two years became a city, and three years Chengdu," because of the name Chengdu, it has been used to this day. Therefore, Chengdu has become a rare city in China and the world that has not changed its name since its establishment. Some people think that Chengdu is a transliteration of ancient Shu place names. There is a saying that "Guangdu, Xindu and Chengdu" are collectively referred to as the "Three Capitals of Ancient Shu". Nowadays, there are many cultural relics of ancient Shu Kingdom in Chengdu Plain, such as Sanxingdui Ruins, Jinsha Ruins, Yufu Ancient City Ruins, Wangcong Temple, etc. Jinsha Ruins located in the urban area of Chengdu is a peak of the development of ancient Shu culture.
The Golden Mask of the Shang and Zhou dynasties at the Jinsha Site.
The ancient state of Shu was the first target to be conquered by the Qin state in the process of unifying the world. King Huiwen of Qin had prepared for this for many years, and opened up the Shiniu Road (that is, the Jinniu Road) from Qin to Shu. In 316 BC, King Huiwen of Qin took advantage of the mutual attack between Ba and Shu and sent Sima Cuo to lead his army into Shu along the Shiniu Road, capturing the land in a few months. After that, the king of Qin established three abolitions of Shu Hou, and finally established Shu County, and the county seat of Chengdu County was established in Chengdu, the former capital of Shu. In 311 BC, Zhang Yi of the Qin dynasty built the Chengdu city wall according to the system of the capital Xianyang, building a large city and a small city. In 256 BC, King Zhao of Qin appointed Li Bing as the governor of Shu County. During his tenure, he presided over the construction of the world-famous Dujiangyan Water Conservancy Project. The Chengdu Plain has been fertile and wild for thousands of miles since then. After decades of operation, Chengdu replaced Guanzhong Plain in the late Qin dynasty and was called the "Land of Abundance", and this reputation has continued to this day.
During the Han dynasty, the Chengdu economy, especially its brocade industry, prospered, becoming an important source of tribute to the court. The imperial court invested in Chengdu, set up a dedicated Brocade Office (Jinguan), and built "Jinguan City" in the southwest of Chengdu; "Jinguan City" and "Jincheng" became nicknames for Chengdu. In the second year of the Yuanshi era of Emperor Ping of Han (2 AD), the population of Chengdu reached 76,000 households, or about 354,000 people, making it one of the most populous cities of the time and ranking it among the six major cities of the empire. In the third year of the reign of Emperor Jing of the Han dynasty (141 BC), Wen Weng, the prefect of Shu County, established the world's earliest local government-run school, "Wenweng Shishi", in Chengdu. In the Han dynasty, Chengdu's literature and art also reached a high level. All the most famous literary masters of the Han dynasty were from Chengdu, including Sima Xiangru, Yang Xiong, and Wang Bao.
In the former Han dynasty, the whole country was divided into 14 prefectural governors' departments, among which the Yizhou governor was established in Luoxian (now Guanghan City, Sichuan), and the governor later moved to Chengdu. In the first year of Emperor Guangwu's reign (25 years) in the Eastern Han dynasty, Gongsun Shu established himself as the emperor in Chengdu, and the country's name was "married family". In the twelfth year of Jianwu in the Later Han dynasty (36 years), the Great Sima Wuhan of the Eastern Han dynasty finally captured Chengdu after five years of war, and his family perished. In the fifth year of Zhongping (188), Emperor Ling of Han, the court accepted Liu Yan's suggestion and changed the provincial governors to state shepherds with actual recruitment and command power. In the fifth year of Chuping (194), it moved to Chengdu. At that time, the Yizhou Provincial Governor's Department was the place where the Hu people in the Western Regions were operating.
Imperial era
Under the Han, the brocade produced in Chengdu became fashionable and was exported throughout China. A "Brocade Official" () was established to oversee its production and transaction. After the fall of the Eastern Han, Liu Bei ruled Shu Han, the southwestern of the Three Kingdoms, from Chengdu. His minister Zhuge Liang called the area the "Land of Abundance". Under the Tang, Chengdu was considered the second most prosperous city in China after Yangzhou. Both Li Bai and Du Fu lived in the city. Li Bai praised it as "lying above the empyrean." The city's present Caotang ("Grass Hall") was constructed in 1078 in honor of an earlier, more humble structure of that name erected by Du Fu in 760, the second year of his 4-year stay. The Taoist Qingyang Gong ("Green Goat Temple") was built in the 9th century.
Chengdu was the capital of Wang Jian's Former Shu from 907 to 925, when it was conquered by the Later Tang. The Later Shu was founded by Meng Zhixiang in 934, with its capital at Chengdu. Its second and last king, Meng Chang beautified the city by ordering hibiscus to be planted upon the city walls.
The Song conquered the city in 965, introducing the first widely used paper money in the world. Su Shi praised it as "the southwestern metropolis". At the fall of the Song, a rebel leader set up a short-lived kingdom known as Great Shu (, Dàshǔ). Allegedly the Mongols called for the death of a million people in the city but the city's population had less than 30,000 residents (not Chengdu prefecture). The aged males who had not fled were killed while in typical fashion, the women, children and artisans were enslaved and deported. During the Yuan dynasty, most of Sichuan's residents were deported to Hunan during the insurgency of the western ethnic tribes of western Sichuan. Marco Polo visited Chengdu and wrote about the Anshun Bridge or an earlier version of it.
At the fall of the Ming, the rebel Zhang Xianzhong established his Great Western Kingdom () with its capital at Chengdu; it lasted only from 1643 to 1646. Zhang was said to have massacred a large number of people in Chengdu and throughout Sichuan. In any case, Chengdu was said to have become a virtual ghost town frequented by tigers and the depopulation of Sichuan necessitated the resettlement of millions of people from other provinces during the Qing dynasty. Following the Columbian Exchange, the Chengdu Plain became one of China's principal sources of tobacco. Pi County was considered to have the highest quality in Sichuan, which was the center of the country's cigar and cigarette production, the rest of the country long continuing to consume snuff instead.
Modern era
In 1911, Chengdu's branch of the Railway Protection Movement helped trigger the Wuchang Uprising, which led to the Xinhai Revolution that overthrew the Qing dynasty.
During World War II, the capital city of China was forced to move inland from Nanjing to Wuhan in 1937 and from Wuhan to Chengdu, then from Chengdu to Chongqing in 1938, as the Kuomintang (KMT) government under Generalissimo Chiang Kai-shek ultimately retreated to Sichuan to escape from the invading Japanese forces. They brought with them into Sichuan business people, workers, and academics who founded many of the industries and cultural institutions which continue to make Chengdu an important cultural and commercial production center.
Chengdu became a military center for the KMT to regroup in the War of Resistance. Chengdu was beyond the reach of the Imperial Japanese ground forces and escort fighter planes. However, the Japanese frequently flew in the then-highly advanced twin-engine long-ranged G3M "Nell" medium bombers to conduct massive aerial bombardments of both civilian and military targets in Chongqing and Chengdu. The massed formation of the G3M bombers provided heavy firepower against Chinese fighter planes assigned to the defense of Chongqing and Chengdu, which continued to cause problems for the Japanese attacks.
Slow and vulnerable obsolescent Chinese fighter aircraft burning low-grade fuel were still sufficiently dangerous in the hands of capable pilots against the Japanese schnellbomber-terror bombing raiders; on 4 November 1939 for instance, Capt. Cen Zeliu (Wade-Giles: Shen Tse-Liu) led his 17th Fighter Squadron, 5th Fighter Group of seven cannon-equipped Dewoitine D.510 fighters in a level head-on attack against an incoming coming raid of 72 IJANF G3M bombers (Capt. Cen chose this tactic knowing that the operation of the Hispano-Suiza HS.404 20mm autocannon in his D.510 is likely to fail under the g-loads of a high-deflection diving attack), with Capt. Cen pummeling the lead G3M of the IJN's 13th Kōkūtai's CO Captain Kikushi Okuda with cannon fire, sending the G3M crashing down in flames over Chengdu, along with three other G3M bombers destroyed in the Chengdu raid that day. With the death of Captain Okuda in the air battle over Chengdu, the IJN became the highest-ranking IJN Air officer to be killed-in-action in the War of Resistance/World War II thus far.
In mid-late 1940, unknown to the Americans and European allies, the Imperial Japanese appeared in the skies over Chongqing and Chengdu with the world's most advanced fighter plane at the time: the A6M "Zero" fighter that dominated the skies over China against the increasingly obsolete Russian-made Polikarpov I-15/I-153s and I-16s that were the principal fighter planes of the Chinese Nationalist Air Force. This would later prove to be a rude awakening for the Allied forces in the Pacific War following the attack on Pearl Harbor. One of the first American ace fighter pilots of the war and original volunteer fighter pilot for the Chinese Nationalist Air Force, Major Huang Xinrui (nicknamed "Buffalo" by his comrades) died as a result of battling the Zero fighters along with his squadronmates Cen Zeliu and Lin Heng (younger brother of renowned architect Lin Huiyin) defending Chengdu on 14 March 1941.
Following the attack on Pearl Harbor at the end of 1941, the United States began setting up stations at airbases in China. In 1944, the American XX Bomber Command launched Operation Matterhorn, an ambitious plan to base B-29 Superfortresses in Chengdu and strategically bomb the Japanese Home Islands. The operating base was located in Xinjin Airport in the southwestern part of the Chengdu metropolitan area. Because the operation required a massive airlift of fuel and supplies over the Himalayas, it was not a significant military success, but it did earn Chengdu the distinction of launching the first serious retaliation against the Japanese homeland.
During the Chinese Civil War, Chengdu was the last city on the Chinese mainland to be held by the Kuomintang. President Chiang Kai-shek and his son Chiang Ching-kuo directed the defense of the city from the Chengdu Central Military Academy () until 1949, when Communist forces took the city on 27 December. The People's Liberation Army took the city without any resistance after a deal was negotiated between the People's Liberation Army and the commander of the KMT Army guarding the city. On 10 December the remnants of the Nationalist Chinese government evacuated to Taiwan.
The Chengdu Tianfu New Area is a sustainable planned city that will be outside of Central Chengdu. The city is also planned to be self-sustaining, with every residence being a two-minute walk from a park.
The Great City
In 2019, Chengdu overtook Shenzhen, China's technology hub, as the best-performing Chinese economy. The city has surged in population in the last two decades. Investments into a Europe-Chengdu Express Railway have been made, providing even more opportunity for the city to grow. As a way to preserve farmland and accommodate the growing population of Chengdu, China is building a hyper-dense satellite city centered around a central mass-transit hub called the Great City, where any destination within the city is within a 15-minute walk. This prototype city is intended to provide an affordable, high-quality lifestyle, with people-oriented spaces that do not require a car to navigate.
Their current urban-planning focus in the city of Chengdu is to make the city 'a city within a park' rather than creating parks within a city. The Great City falls in line with the Chengdu 'park city' initiative, prioritizing the environment, public space and quality of life. It will consist of 15% park and green space and be situated on a area. Although 25% of the space will be dedicated to roads, one half of the roads will be pedestrian-oriented. This transit system provides direct transport to Chengdu itself. It is expected that the city will consume 48% less energy than cities of similar size.
The goal of the 'park city' project is to allow a city like Chengdu to compete with Beijing and Shanghai without stripping the city of its character. The city of Chengdu is already known for its focus on quality of life, which includes affordable housing, good public schools, trees and bike lanes.
Geography
The vast plain on which Chengdu is located has an elevation ranging from .
Northwest Chengdu is bordered by the high and steep Longmen Mountains in the north-west and in the west by the Qionglai Mountains, the elevation of which exceeds and includes Miao Jiling () and Xiling Snow Mountain (). The western mountainous area is also home to a large primitive forest with abundant biological resources and a giant panda habitat. East of Chengdu stands the low Longquan Mountains and the west bordering area of the hilly land of middle reaches of Min River, an area noted by several converging rivers. Since ancient times, Chengdu has been known as "the Abundant Land" owing to its fertile soil, favorable climate, and novel Dujiangyan Irrigation System.
Chengdu is located at the western edge of the Sichuan Basin and sits on the Chengdu Plain; the dominating terrain is plains. The prefecture ranges in latitude from 30° 05' to 31° 26' N, while its longitude ranges from 102° 54' to 104° 53' E, stretching for from east to west and south to north, administering of land. Neighboring prefectures are Deyang (NE), Ziyang (SE), Meishan (S), Ya'an (SW), and the Ngawa Tibetan and Qiang Autonomous Prefecture (N). The urban area, with an elevation of , features a few rivers, three of them being the Jin, Fu, and Sha Rivers. Outside of the immediate urban area, the topography becomes more complex: to the east lies the Longquan Mountains () and the Penzhong Hills (); to the west lie the Qionglai Mountains, which rise to in Dayi County. The highest point in Chengdu is Daxuetang (also known as Miaojiling) in Xiling Snow Mountain in Dayi County, with an altitude of 5,364 meters. The lowest point is the river bank at the exit of Tuojiang River in Jianyang City, with an altitude of 359 meters.
Climate
Chengdu has a monsoon-influenced humid subtropical climate (Köppen Cwa) and is largely warm with high relative humidity all year. It has four distinct seasons, with moderate rainfall concentrated mainly in the warmer months, and relieved from both sweltering summers and freezing winters. The Qin Mountains (Qinling) to the far north help shield the city from cold Siberian winds in the winter; because of this, the short winter is milder than in the Lower Yangtze. The 24-hour daily mean temperature in January is , and snow is rare but there are a few periods of frost each winter. The summer is hot and humid, but not to the extent of the "Three Furnaces" cities of Chongqing, Wuhan, and Nanjing, all of which lie in the Yangtze basin. The 24-hour daily mean temperature in July and August is around , with afternoon highs sometimes reaching ; sustained heat as found in much of eastern China is rare. Rainfall occurs most frequently and is concentrated in July and August, with very little of it in the cooler months. Chengdu also has one of the lowest annual sunshine totals nationally, with less sunshine annually than much of Northern Europe. With monthly percent possible sunshine ranging from 15 percent in December to 32 percent in August, the city receives 1006 hours of bright sunshine annually. Spring (March–April) tends to be sunnier and warmer in the day than autumn (October–November). The annual mean is , and extremes have ranged from to .
Administrative divisions
Chengdu is a sub-provincial city, serves as the capital of Sichuan. It has direct jurisdiction over 12 districts, 5 county-level cities and 3 counties:
Tianfu New Area
Chengdu Economic and Technological Development Zone
Chengdu Hi-tech Industrial Development Zone
Chengdu Tianfu Software Park
Chengdu Export Processing Zone
Cityscape
As of July 2013, the world's largest building in terms of floor area, the New Century Global Center, is located in the city. The structure is in size with of floor area, housing retail outlets, movie theaters, offices, hotels, a water park with an artificial beach and waves, and a Mediterranean-style village comprising a large 5-star hotel, a skating rink and a 15,000-spot parking area.
Demographics
According to the 2020 Chinese census, the municipality had 20,937,757 inhabitants; the metropolitan area itself was home to 16,045,577 inhabitants including those of the 12 urban districts plus Guanghan City (in Deyang). Chengdu is the largest city in Sichuan and the fourth largest in China, with an estimated population of 21,192,000 for 2021, adding more residents than any other city in the country.
As of 2015, the OECD (Organization for Economic Cooperation and Development) estimated the Chengdu metropolitan area's population to be 18.1 million.
Culture
In 2006, China Daily named Chengdu China's fourth-most-livable city.
Literature
Some of China's most important literature comes from Chengdu. The city has been home to literary giants, such as Sima Xiangru and Yang Xiong, two masters of Fu, a mixture of descriptive prose and verse during the Tang dynasty; Li Bai and Du Fu, the most eminent poets of the Tang and Song dynasties respectively; Yang Shen'an, a famous scholar of the Ming dynasty; and Guo Moruo and Ba Jin, two well-known modern writers. Chang Qu, a historian of Chengdu during the Jin dynasty, compiled the earliest local historical records, the Record of Hua Yang State. Zhao Chongzuo, a poet in Chengdu during the Later Shu Kingdom, edited Among the Flowers, the first anthology of Ci in China's history. Meng Chang, the king of Later Shu, wrote the first couplet for the Spring Festival, which says, "A harvest year accepts celebrations, good festivals foreshadow long springs."
In 2023, Chengdu hosted the 81st World Science Fiction Convention, having beat out Winnipeg, Canada, in site-selection voting in 2021.
Fine art
During the period of the Five Dynasties, Huang Quan, a painter in Chengdu, initiated the Fine-Brush Flower-and-Bird Painting School with other painters. At that time, "Hanlin Painting Academy" was the earliest royal academy in China.
Religion
Chengdu contains official, Roman Catholic and Protestant congregations, some of which are underground churches.
The Apostolic Vicariate of Szechwan (now known as Roman Catholic Diocese of Chengdu) was established on 15 October 1696. Artus de Lionne, a French missionary of Paris Foreign Missions Society, was appointed as the first Apostolic Vicar.
In 1890, the Canadian Methodist Mission was searching for more stations in Asia. In February 1891, Dr. Virgil Hart, who had been Superintendent of the New York Methodist Mission Society of Central China, recommended that Chengtu be its first Mission site. During the meeting, it was proposed that he lead this contingent, having built western hospitals and Boys' and Girls' schools at Missions he established on the Yangtze and Gan Rivers from 1866 to 1888. On 9 May 1891 Dr. Virgil Hart arrived in Chengtu and two weeks later bought a home and had it subdivided into living quarters and a dispensary, for the later-arriving Missionary staff to move into.
On 24 June 1892, the doors of Chengtu's first Protestant Mission Headquarters were opened with over one thousand people of the community attending. The first Methodist religious service was held the following Sunday with only several attendants. The first western dispensary in Sichuan was opened 3 November 1892 with sixteen patients seeking care. The mission site became so popular that a larger space was secured near Chengtu's East Gate in the spring of 1893. This site is where the city's first Methodist church (Sï-Shen-Tsï Methodist Church) and hospital were built. These were later razed by rioting Chinese in 1895 and the Mission staff retreated to Chongqing and later Shanghai to escape the marauders. Dr. Virgil Hart traveled to Peking to demand redress and full payment of retribution was collected from Sichuan Viceroy Liu Ping Chang. The mission compound was quickly rebuilt only to be destroyed once more in the riots of 1901. These were rebuilt a third time and later missionaries would relocate and expand the Boys' and Girls' Schools just south of the city, dedicating the Divinity College as Hart College in 1914; a part of the West China Union University, that is now Sichuan University and the West China School of Medicine (Huaxiyida). During the Cultural Revolution, the Sï-Shen-Tsï Methodist Church building was no longer in use and the building was entrusted to the nearby Chengdu City Second People's Hospital for management. The hospital used the chapel as a kindergarten and the office of the hospital equipment department. In 1984, the hospital returned the chapel building to the church.
In December 2018 the authorities attempted to close a 500-member underground church, the Early Rain Covenant Church, led by Pastor Wang Yi. Over 100 members of the church were arrested including the pastor and his wife. The church's kindergarten and theological college were raided and the church's media outlets were closed down. Before his arrest, church member Li Yingqiang declared: "Even if we are down to our last five, worship and gatherings will still go on because our faith is real. […] Persecution is a price worth paying for the Lord." Police are said to have told one member that the church had been declared an illegal organisation. Chinese media were banned from reporting the events. Video footage which found its way onto western social media showed arrests and photographs alleged to be of injuries inflicted by the police. From a photo of . Jiang's detention warrant it appears that the authorities have charged the church's leaders with "inciting subversion of state power," which carries a maximum sentence of 15 years.
In 2012, a Chabad Jewish Center was established in Chengdu, after moving five times, a permanent location was secured at Wuhou District.
Theater
The saying "Shu opera towers above all other performances in the world" reflects the achievement of Sichuan opera and Zaju (an ancient form of comedic drama involving dancing, singing, poetry, and miming). In the city, the first named opera "Bullfighting" was written in the Warring States period. The first detailed recorded opera was staged in the royal court of Shu Kingdom during the Three Kingdom period. China's first clearly recorded Zaju was also performed in Chengdu. Tombs of witty Han dynasty poets were excavated in Chengdu. And face-changing masks and fire breathing remain hallmarks of the Sichuan opera.
Language
The native language in Chengdu is Sichuanese, otherwise referred to as the Sichuan dialect. More precisely, "Chengdu Dialect" () is widely used in lieu of "Sichuanese" due to the largely different accents of Sichuanese speakers residing elsewhere.
Culinary art and tea culture
The distinct characteristic of Sichuan cuisine is the use of spicy chilies and peppercorns. Famous local dishes include Mapo doufu, Chengdu Hot pot, and Dan Dan Mien. Both Mapo Doufu and Dan Dan Mien contain Sichuan peppers. An article by the Los Angeles Times (2006) called Chengdu "China's party city" for its carefree lifestyle. Chengdu has more tea houses and bars than Shanghai despite having less than half the population. In 2023, there were more than 30,000 teahouses in Chengdu, and there were 3,566 legally registered bars, nightclubs, and dance halls in the city. A statistical report in 2019 showed that Chengdu had more bars than Shanghai, becoming the city with the most bars in China. Chengdu's tea culture dates back over a thousand years, including its time as the starting point of the Southern Silk Road.
Chengdu is officially recognized and named by UNESCO as the "City of Gastronomy".
Teahouse
Tea houses are ubiquitous in the city and range from ornate traditional establishments with bamboo furniture to simple modern tea houses. Teas on offer include jasmine, longjing and biluochun tea. Tea houses are popular venues for playing mahjong, getting a massage, or having one's ears cleaned. Some larger tea houses offer live entertainment such as Sichuan opera performances.
Hot pot
Chengdu is known for its hot pot. Hot pot is a traditional Sichuanese dish, made by cooking vegetables, fish, and/or meat in boiling spicy broth. A type of food suitable for friends' gathering, hot pot attracts both local people and tourists. Hot pot restaurants can be found at many places in Chengdu.
Mahjong
Mahjong has been an essential part of most local peoples' lives. After daytime work, people gather at home or in the tea houses on the street to play Mahjong. On sunny days, local people like to play Mahjong on the sidewalks to enjoy the sunshine and also the time with friends.
Mahjong is the most popular entertainment choice among locals for several reasons. Chengdu locals have simplified the rules and made it easier to play as compared to Cantonese Mahjong. Also, Mahjong in Chengdu is a way to meet old friends and to strengthen family relationships. In fact, many business people negotiate deals while playing Mahjong.
Rural tourism: Nong Jia Le
Chengdu claims to have first practiced the modern business model of 'Nong Jia Le' (Happy Rural Homes). It refers to the practice of suburban and rural residents converting their houses into restaurants, hotels and entertainment spaces in order to attract city dwellers.
Nong Jia Le features different styles and price levels and have been thriving around Chengdu. They provide gateways for city dwellers to escape the city, offer delicious and affordable home-made dishes, and provide mahjong facilities.
Main sights
World natural and cultural heritage sites
Mount Qingcheng
Mount Qingcheng is amongst the most important Taoism sites in China. It is situated in the suburbs of Dujiangyan City and connected to downtown Chengdu away by the Cheng-Guan Expressway.
With its peak above sea level, Mount Qingcheng enjoys a cool climate, but remains a lush green all year round and surrounded by hills and waterways. Mount Qingcheng's Fujian Temple, Tianshi Cave, and Shizu Hall are some of the existing more well-known Taoist holy sites. Shangqing Temple is noted for an evening phosphorescent glow locally referred to as "holy lights".
Dujiangyan Irrigation System
The Dujiangyan Irrigation System ( away from downtown Chengdu) is the oldest existing irrigation project in the world, with a history of over 2,000 years of diverting water without a dam to distribute water and filter sand under inflow-quantity control. The system was built by Li Bing and his son. The irrigation system prevents floods and droughts throughout the Plain of Chengdu.
Sichuan Giant Panda Sanctuaries
Covering a total of over 12 distinct counties and 4 cities, Sichuan Giant Panda Sanctuaries, lie on the transitional alp-canyon belt between the Sichuan Basin and the Qinghai-Tibetan Plateau. It is the largest remaining continuous habitat for giant pandas and home to more than 80 percent of the world's wild giant pandas. Globally speaking, it is also the most abundant temperate zone of greenery. The reserves of the habitat are away from Chengdu.
The Sichuan Giant Panda Sanctuaries are the most well-known of their kind in the world, with Wolong Nature Reserve, generally considered as the "homeland of pandas". It is a core habitat with unique natural conditions, complicated landforms, and a temperate climate with diverse wildlife. Siguniang Mountain, sometimes called the "Oriental Alpine" is approximately away from downtown Chengdu, and is composed of four adjacent peaks of the Traversal Mountain Range. Among the four peaks, the fourth and highest stands above sea level, and is perpetually covered by snow.
Culture of poetry and the Three Kingdoms
Wuhou Shrine
Wuhou Shrine (Temple of Marquis Wu; 武侯祠) is perhaps the most influential museum of Three Kingdoms relics in China. It was built in the Western Jin period (265–316) in the honor of Zhuge Liang, the famous military and political strategist who was Prime Minister of the Shu Han State during the Three Kingdoms period (220–280). The Shrine highlights the Zhuge Liang Memorial Temple and the Hall of Liu Bei (founder of the Shu Han state), along with statues of other historical figures of Shu Han, as well as cultural relics like stone inscriptions and tablets. The Huiling Mausoleum of Liu Bei represents a unique pattern of enshrining both the emperor and his subjects in the same temple, a rarity in China.
Du Fu thatched cottage
Du Fu was one of the most noted Tang dynasty poets; during the An Lushan–Shi Siming Rebellion, he left Xi'an (then Chang'an) to take refuge in Chengdu. With help from his friends, the thatched cottage was built along the Huanhua Stream in the west suburbs of Chengdu, where Du Fu spent four years of his life and produced more than 240 now-famous poems. During the Song dynasty, people started to construct gardens and halls on the site of his thatched cottage to honor his life and memory. Currently, a series of memorial buildings representing Du Fu's humble life stand on the river bank, along with a large collection of relics and various editions of his poems.
Ancient Shu civilization
Jinsha Site
The Jinsha Site is the first significant archaeological discovery in China of the 21st century and was selected in 2006 as a "key conservation unit" of the nation. The Jinsha Relics Museum is located in the northwest of Chengdu, about from downtown. As a theme-park-style museum, it is dedicated to the protection, research, and display of Jinsha archaeological relics and findings. The museum covers , and houses relics, exhibitions, and a conservation center.
Golden Sun Bird
The Golden Sun Bird was excavated by archaeologists from the Jinsha Ruins on 25 February 2001. In 2005, it was designated as the official logo of Chinese cultural heritage by the China National Relic Bureau.
The round, foil plaque dates back to the ancient Shu area in 210 BC and is 94.2 percent pure gold and extremely thin. It contains four birds flying around the perimeter, representing the four seasons and directions. The sun-shaped cutout in the center contains 12 sunlight beams, representing the 12 months of a year. The exquisite design is remarkable for a 2,200-year-old piece.
Sanxingdui Museum
Situated in the northeast of the state-protected Sanxingdui Site, The original complex of Sanxingdui Museum was founded in August 1992 and opened in 1997. It is the representative work of the master architect Zheng Guoying. The original museum covers an area of 1,000 acres and was rated as the first batch of national first-class museums.
The new complex of Sanxingdui Museum was founded in March 2022. It covers an area of 54,400 square meters, which is about 5 times the size of the old museum. It was built for new cultural relics after major archaeological excavations. It displays more than 2,000 precious cultural relics such as bronze, jade, gold, pottery, and bone, and comprehensively and systematically displays the archaeological excavations and latest research results of Sanxingdui.
The main collection highlights the Ancient City of Chengdu, Shu State & its culture, while displaying thousands of valuable relics including earthenware, jade wares, bone objects, gold wares, and bronzes that have been unearthed from Shang dynasty sacrificial sites.
Buddhist and Taoist culture
Daci Temple
The Daci Temple (大慈寺), a temple in downtown Chengdu was first built during the Wei and Jin dynasties, with its cultural height during the Tang and Song dynasties. Xuanzang, a Tang dynasty monk, was initiated into monkhood and studied for several years here; during this time, he gave frequent sermons in Daci Monastery.
Wenshu Monastery
Also named Xinxiang Monastery, Wenshu Monastery (文殊院) is the best preserved Buddhist temple in Chengdu. Initially built during the Tang dynasty, it has a history dating back 1,300 years. Parts of Xuanzang's skull are held in consecration here (as a relic). The traditional home of scholar Li Wenjing is on the outskirts of the complex.
Baoguang Buddhist Temple
Located in Xindu District, Baoguang Buddhist Temple (宝光寺) enjoys a long history and a rich collection of relics. It is believed that it was constructed during the East Han period and has appeared in written records since the Tang dynasty. It was destroyed during the Ming dynasty in the early 16th century. In 1670, the ninth year of the reign of the Kangxi Emperor of the Qing dynasty, it was rebuilt.
Qingyang Palace
Located in the western part of Chengdu, Qingyang Palace (青羊宫) is not only the largest and oldest Taoist temple in the city, but also the largest Taoist temple in Southwestern China. The only existing copy of the Daozang Jiyao (a collection of classic Taoist scriptures) is preserved in the temple.
According to tradition, Qingyang Temple was the place where Lao Tzu preached his famous Dao De Jing to his disciple, Yin Xi.
Featured streets and historic towns
Kuanzhaixiangzi Alleys
Kuanzhaixiangzi Alleys (宽窄巷子) were first built during the Qing dynasty for Manchu soldiers. The lanes remained residential until 2003 when the local government turned the area into a mixed-use strip of restaurants, teahouses, bars, avant-garde galleries, and residential houses. Historic architecture has been well preserved in the Wide and Narrow lanes.
Jinli
Nearby Wuhou Shrine, Jinli is a popular commercial and dining area resembling the style of traditional architecture of western Sichuan. "Jinli" () is the name of an old street in Chengdu dating from the Han dynasty and means "making perfection more perfect."
The ancient Jinli Street was one of the oldest and the most commercialized streets in the history of the Shu state and was well known throughout the country during the Qin, Han and Three Kingdoms periods. Many aspects of the urban life of Chengdu are present in the current-day Jinli area: teahouses, restaurants, bars, theaters, handicraft stores, local snack vendors, and specialty shops.
Huanglongxi Historic Town
Facing the Jinjiang River to the east and leaning against Muma Mountain to the north, the ancient town of Huanglongxi is approximately southeast of Chengdu. It was a large military stronghold for the ancient Shu Kingdom. The head of the Shu Han State in the Three Kingdoms period was seated in Huanglongxi, and for some time, the general government offices for Renshou, Pengshan, and Huayang counties were also located here. The ancient town has preserved the Qing dynasty architectural style, as seen in the design of its streets, shops, and buildings.
Chunxi Road
Located in the center of downtown Chengdu, Chunxi Road () is a trendy and bustling commercial strip with a long history. It was built in 1924 and was named after a part of the Tao Te Ching. Today, it is one of the most well-known and popular fashion and shopping centers of Chengdu, lined with shopping malls, luxury brand stores, and boutique shops.
Anren Historic Town
Anren Historic Town is located west of Chengdu. It was the hometown of Liu Wencai, a Qing dynasty warlord, landowner and millionaire. His 27 historic mansions have been well preserved and turned into museums. Three old streets built during the Republic of China period are still being used today by residents. Museums in Anren have a rich collection of more than 8 million relics and artifacts. A museum dedicated to the memorial of the 2008 Sichuan earthquake was built in 2010.
Luodai Historic Town
Luodai was built, like many historic structures in the area, during the period of the Three Kingdoms. According to legend, the Shu Han emperor Liu Shan dropped his jade belt into a well when he passed through this small town. Thus, the town was named 'lost belt' (). It later evolved into its current name with the same pronunciation, but a different first character.
Luodai Historic Town is one of the five major Hakka settlements in China. Three or four hundred years ago, a group of Hakka people moved to Luodai from coastal cities. It has since grown into the largest community for Hakka people.
Economy
China's state council has designated Chengdu as the country's western center of logistics, commerce, finance, science and technology, as well as a hub of transportation and communication. It is also an important base for manufacturing and agriculture.
According to the World Bank's 2007 survey report on global investment environments, Chengdu was declared "a benchmark city for investment environment in inland China."
Also based on a research report undertaken by the Nobel economics laureate, Dr. Robert Mundell and the celebrated Chinese economist, Li Yining, published by the State Information Center in 2010, Chengdu has become an "engine" of the Western Development Program, a benchmark city for investment environment in inland China, and a major leader in new urbanization.
In 2010, 12 of the Fortune 500 companies, including ANZ Bank, Nippon Steel Corporation, and Electricité de France, have opened offices, branches, or operation centers in Chengdu, the largest number in recent years. Meanwhile, the Fortune 500 companies that have opened offices in Chengdu, including JP Morgan Chase, Henkel, and GE, increased their investment and upgraded the involvement of their branches in Chengdu. By the end of 2010, over 200 Fortune 500 companies had set up branches in Chengdu, ranking it first in terms of the number of Fortune 500 companies in Central and Western China. Of these, 149 are foreign enterprises and 40 are domestic companies.
According to the 2010 AmCham China White Paper on the State of American Business in China, Chengdu has become a top investment destination in China.
The main industries in Chengdu—including machinery, automobile, medicine, food, and information technology—are supported by numerous large-scale enterprises. In addition, an increasing number of high-tech enterprises from outside Chengdu have also settled down there.
Chengdu is becoming one of the favorite cities for investment in Central and Western China. Among the world's 500 largest companies, 133 multinational enterprises had subsidiaries or branch offices in Chengdu by October 2009. These include Intel, Cisco, Sony and Toyota, which have assembly and manufacturing bases, as well as Motorola, Ericsson, and Microsoft, which have R&D centers in Chengdu. The National Development and Reform Commission has formally approved Chengdu's proposal to establish a national bio-industry base there. The government of Chengdu unveiled a plan to create a 90-billion-CNY biopharmaceutical sector by 2012. China's aviation industries have begun construction of a high-tech industrial park in the city that will feature space and aviation technology. The local government plans to attract overseas and domestic companies for service outsourcing and to become a well-known service outsourcing base in China and worldwide.
In the middle of the 2000s, the city expanded urban infrastructure and services to nearby rural communities in an effort to improve rural living conditions.
Electronics and IT industries
Chengdu has long been an established national electronics and IT industry hub. Chengdu's growth accelerated alongside the growth of China's domestic telecom services sector, which together with India's accounts for over 70 percent of the world telecommunications market. Several key national electronics R&D institutes are located in Chengdu. The Chengdu Hi-tech Industrial Development Zone has attracted a variety of multinationals, at least 30 Fortune 500 companies and 12,000 domestic companies, including Intel, IBM, Cisco, Nokia, Motorola, SAP, Siemens, Canon, HP, Xerox, Microsoft, Tieto, NIIT, MediaTek, and Wipro, as well as domestic powerhouses such as Lenovo. Dell opened its second major China operations center in Chengdu in 2011, complementing its center in Xiamen, which was expanded in 2010.
Intel Capital acquired a strategic stake in Primetel, Chengdu's first foreign technology company, in 2001. Intel's Chengdu factory, set up in 2005, is its second in China after its Shanghai factory, and was the first large-scale foreign investment in the electronics industry in interior mainland China. Intel, the world's largest chipmaker, has invested US$600 million in two assembly and testing facilities in Chengdu. Following in Intel's footsteps, Semiconductor Manufacturing International Corporation (SMIC), the world's third-largest foundry, set up an assembly and testing plant in Chengdu in 2006, and AMD, Intel's rival, set up an R&D center in the city in 2008.
In November 2006, IBM signed an agreement with the Chengdu High-Tech Zone to establish a Global Delivery Center, its fourth in China after Dalian, Shanghai and Shenzhen, within the Chengdu Tianfu Software Park. Scheduled to be operational by February 2007, the new center was to provide multilingual application development and maintenance services to clients globally in English, Japanese and Chinese, and to the IBM Global Procurement Center, which had recently relocated to the southern Chinese city of Shenzhen. On 23 March 2008, IBM announced at the "West China Excellent Enterprises CEO Forum" that the southwest working team of IBM Global Business Services was formally stationed in Chengdu. On 28 May 2008, Zhou Weikun, president of IBM China, disclosed that IBM Chengdu would increase its staff from about 600 at the time to nearly 1,000 by the end of the year.
In July 2019, Amazon Web Services, the cloud computing company, signed a deal with the Chengdu High-Tech Zone to establish an innovation center. This project was intended to attract international business and enterprise into the area, promote cloud computing in China, and develop artificial intelligence technologies.
Chengdu is a major base for communication infrastructure, hosting one of China's nine top-level postal centers and one of its six national telecom exchange hubs.
In 2009, Chengdu hosted the World Cyber Games Grand Finals (11–15 November). It was the first time China hosted the world's largest computer and video game tournament.
Financial industry
Chengdu is a leading financial hub in the Asia-Pacific region, ranking 35th globally and 6th in China (after Shanghai, Hong Kong, Beijing, Shenzhen and Guangzhou) in the 2021 Global Financial Centres Index. Chengdu has attracted a large number of foreign financial institutions, including Citigroup, HSBC, Standard Chartered Bank, JPMorgan Chase, ANZ and MUFG Bank.
ANZ's data services center, established in 2011 in Chengdu, employs over 800 people, and in March 2019 the bank recruited further staff to support its data analytics and big data efforts. In 2020, ANZ temporarily repurposed its Chengdu data center to an IT helpdesk, as part of the bank's pandemic response.
Chengdu also has a long history of financial innovation: the world's first paper currency, the jiaozi, was issued in Chengdu in 1023, during the Song dynasty.
Today, Chengdu is not only western China's gateway for foreign financial institutions but also a booming center for Chinese domestic financial firms. The People's Bank of China, the country's central bank, set up its southwestern China headquarters in Chengdu, and almost all domestic banks and securities brokerage firms have located regional headquarters or branches in the city. At the same time, local financial firms such as Huaxi Securities, Sinolink Securities, and Bank of Chengdu are strengthening their presence nationally. Beyond banks and brokerages, the flourishing local economy has drawn more and more financial service firms to the city; Grant Thornton, KPMG, PwC and Ernst & Young all maintain Western China head offices in Chengdu.
It was expected that by 2012, value-added financial services would make up 14 percent of the added-value service industry and 7 percent of regional GDP, with those figures projected to grow to 18 percent and 9 percent respectively by 2015.
Modern logistic industry
Because of its logistics infrastructure, professional networks, and resources in science, technology, and communication, Chengdu has become home to 43 foreign-funded logistics enterprises, including UPS, TNT, DHL, and Maersk, as well as a number of well-known domestic logistics enterprises including COSCO, CSCL, SINOTRANS, CRE, Transfar Group, South Logistic Group, YCH, and STO. By 2012, the logistics industry in Chengdu was projected to realize added value of RMB 50 billion, with average annual growth exceeding 18 percent; ten new international direct flights were to enter service, five railways for scheduled block container trains were to be put into operation, and 50 large logistics enterprises were expected to have annual operating revenue exceeding RMB 100 million.
Modern business and trade
Chengdu is the largest trade center in western China with a market covering all of Sichuan province, exerting influence on six provinces, cities, and districts in western China. Chengdu ranks first among cities in western China in terms of the scale of foreign investment in commerce and trade. By 2012, total retail sales of consumer goods in Chengdu reached RMB 331.77 billion, up 16 percent annually on average.
Convention and exhibition industry
Promoted as "China's Famous Exhibition City" and "China's Most Competitive Convention and Exhibition City", Chengdu leads central and western China in the scale of its convention economy. It has been recognized as one of the three largest convention and exhibition cities in China. In 2010, direct revenue from the convention and exhibition industry was RMB 3.21 billion, a year-on-year growth of 27.8 percent and a historical high.
Software and service outsourcing industry
In 2006, Chengdu was listed as one of the first service outsourcing base cities in China by the Ministry of Science and Technology. Of the world's top 10 service outsourcing enterprises, Accenture, IBM, and Wipro have operations in Chengdu. In addition, 20 international enterprises, including Motorola, Ubi Soft Entertainment, and Agilent, have set up internal shared service centers or R&D centers in Chengdu. The Maersk Global Document Processing Center and Logistic Processing Sub-center, the DHL Chengdu Service Center, the Financial Accounting Center for DHL China, and the Siemens Global IT Operation Center were also to be put into operation. In 2010, offshore service outsourcing in Chengdu recorded a registered contract value of US$336 million, 99 percent higher than the previous year.
New energy industry
Chengdu was granted the title "National High-Tech Industry Base for New Energy Industry" (新能源产业国家高技术产业基地) by the National Development and Reform Commission. Chengdu again ranked first in the list of China's 15 "Cities with Highest Investment Value for New Energies" released at the beginning of 2011, and Shuangliu County under its jurisdiction entered the "2010 China's Top 100 Counties of New Energies" list. In 2012, Chengdu's new energy industry attracted investment of over 20 billion RMB and recorded sales revenue of 50 billion RMB.
Electronics and information industry
Chengdu is home to the most competitive IT industry cluster in western China, an important integrated circuit industry base in China, and one of the five major national software industry bases.
Manufacturing chains have already formed in integrated circuits, optoelectronic displays, digital video and audio, optical communication products, and original-equipment production of electronic terminals, involving companies such as IBM, Intel, Texas Instruments, Microsoft, Motorola, Nokia, Ericsson, Dell, Lenovo, Foxconn, Compal, and Wistron.
Automobile industry
Chengdu has built a comprehensive automobile industry system and has preliminarily formed a system integrating trade, exhibitions, entertainment, R&D, and the manufacturing of spare parts and whole vehicles (e.g., sedans, coaches, sport utility vehicles, trucks, and special vehicles). Whole-vehicle makers include Dongfeng-PSA (Peugeot-Citroën), Volvo, FAW-Volkswagen, FAW-Toyota, Yema, and Sinotruk Wangpai, alongside nearly 200 core parts makers covering German, Japanese, and other lines of vehicles.
In 2011, Volvo announced that its first manufacturing base in China, with an investment of RMB 5.4 billion, was to be built in Chengdu. The automobile production capacity of Chengdu's Comprehensive Function Zone of Automobile Industry was expected to reach 700,000 vehicles by 2015 and 1.25 million by 2020.
Modern agriculture
Chengdu enjoys favorable agricultural conditions and rich natural resources, and is an important base for high-quality agricultural products. The city hosts a national commercial grain and edible oil production base, a vegetable and food supply base, and western China's key agricultural products processing and logistics distribution centers.
Defense industry
Chengdu is home to many defense companies, such as the Chengdu Aircraft Company, which produces the recently declassified J-10 Vigorous Dragon combat aircraft as well as the JF-17 Thunder, the latter developed in collaboration with the Pakistan Air Force. The Chengdu Aircraft Company has also developed the J-20 Mighty Dragon stealth fighter, and is one of the major manufacturers of Chinese military aviation technology.
Industrial zones
Chengdu Hi-tech Comprehensive Free Trade Zone
Chengdu Hi-tech Comprehensive Free Trade Zone was established with the approval of the State Council on 18 October 2010 and passed national acceptance on 25 February 2011. It officially began operation in May 2011. The zone was integrated and expanded from the former Chengdu Export Processing Zone and Chengdu Bonded Logistics Center. It is located in the Chengdu West High-tech Industrial Development Zone, covers an area of 4.68 square kilometers, and is divided into three areas, A, B and C. Its industries focus on notebook computer manufacturing, tablet computer manufacturing, wafer manufacturing and chip packaging testing, electronic components, precision machining, and the biopharmaceutical industry. The zone has attracted top-500 and multinational enterprises including Intel, Foxconn, Texas Instruments, Dell, and Morse.
In 2020, the Chengdu Hi-Tech Comprehensive Free Trade Zone achieved a total import and export volume of 549.1 billion yuan (including Shuangliu Sub-zone), accounting for 68% of the province's total foreign trade import and export volume, ranking No.1 in the national comprehensive free trade zones for three consecutive years.
Chengdu Economic and Technological Development Zone
Chengdu Export Processing Zone
Chengdu Hi-Tech Industrial Development Zone
Chengdu National Cross-Strait Technology Industry Development Park
This was established in 1992 as the Chengdu Taiwanese Investment Zone.
Built environments
In 1988, the Implementation Plan for a Gradual Housing System Reform in Cities and Towns marked the beginning of overall housing reform in urban areas of China. More than 20 real estate companies were set up in Chengdu, the first step in the city's real estate development.
The comprehensive Funan River renovation project in the 1990s was another step in promoting Chengdu's environmental development. The Funan River Comprehensive Improvement Project won the UN-Habitat Scroll of Honour Award in 1998, as well as the "Local Initiative Award" from the International Council for Local Environmental Initiatives in 2000.
Chengdu started the Five Main Roads & One Bridge project in 1997. Three of the roads served the east part of the city, while the other two led to the south. The project established the foundation of the eastern and southern sub-centers of Chengdu, and the two major sub-centers shaped eastward and southward living trends. Large numbers of buildings appeared around the east and south of the 2nd Ring Road. The Shahe River renovation project, together with the Jin River project, also set off a fashion for living by the two rivers. It was said that the map of Chengdu had to be updated every three months.
A speculative housing boom occurred in the late 1990s and early 2000s. In 2000, dozens of commercial real estate projects appeared. While promoting the real estate market, the Chinese government encouraged citizens to buy their own homes by providing considerable subsidies for a certain period, and housing came to be treated as a commodity.
Transport
Air
Chengdu is the third Chinese city with two international airports (Shuangliu International Airport and Chengdu Tianfu International Airport), after Beijing and Shanghai. Chengdu Shuangliu International Airport is located in Shuangliu County, southwest of downtown. It is the busiest airport in Central and Western China and was the nation's fourth-busiest in 2018, with total passenger traffic of 53 million that year.
Chengdu's airports (Shuangliu International Airport and Tianfu International Airport) are also 144-hour visa-free transit ports for foreigners from 53 countries. In addition, Chengdu's airports offer 24-hour visa-free transit for most nationals making a stopover in Chengdu.
Chengdu Shuangliu International Airport has two runways and is capable of handling the Airbus A380, currently the largest passenger aircraft in operation. Chengdu is the fourth city in China with two commercial-use runways, after Beijing, Shanghai and Guangzhou. On 26 May 2009, Air China, the Chengdu City Government and Sichuan Airport Group signed an agreement to improve the airport's infrastructure and increase the number of direct international flights to and from Chengdu. The objective was to increase passenger traffic to more than 40 million by 2015, making Chengdu Shuangliu International Airport the fourth-largest international hub in China, after Beijing, Shanghai and Guangzhou, and one of the 30 largest airports in the world. Chengdu Shuangliu Airport ranked as the busiest and second-busiest airport in China in 2020 and 2021, respectively.
A second international airport, Chengdu Tianfu International Airport, with two main terminals and three runways, opened in June 2021. The new airport is southeast of the city and will have the capacity to handle between 80 and 90 million passengers per year.
Railway
Chengdu is the primary railway hub city and rail administrative center in southwestern China. The China Railway Chengdu Group manages the railway system of Sichuan Province, Chongqing City, and Guizhou Province. Chengdu has four main freight railway stations. Among them, the Chengdu North Marshalling Station is one of the largest marshalling stations in Asia. Since April 2013, companies are able to ship goods three times a week (initially only once a week) to Europe on trains originating from Chengdu Qingbaijiang Station bound for Łódź, Poland. It is the first express cargo train linking China and Europe, taking 12 days to complete the full journey.
There are four major passenger stations servicing Chengdu: Chengdu railway station (commonly referred to as the "North Station"), Chengdu South railway station (Chengdu Nan station), Chengdu East railway station (Chengdu Dong station), and Chengdu West railway station (Chengdu Xi station). Additionally, Chengdu Tianfu Station is under construction.
Chengdu is the terminus of Baoji–Chengdu railway, Chengdu–Chongqing railway, Chengdu–Kunming railway, Chengdu–Dazhou railway, Shanghai–Wuhan–Chengdu high-speed railway, Chengdu-Lanzhou railway, Xi'an-Chengdu high-speed railway, Chengdu-Guiyang high-speed railway, Chengdu-Kunming high-speed railway and Chengdu–Dujiangyan high-speed railway.
The Chengdu–Dujiangyan high-speed railway is a high-speed rail line connecting Chengdu with the satellite city of Dujiangyan and the Mount Qingcheng World Heritage Site. The line has 15 stations; CRH1 train sets on the line complete the full trip in 30 minutes. The line was built in 18 months and entered operation on 12 May 2010.
Metropolitan expressways
Chengdu's transport network is well developed, and Chengdu serves as the starting point for many national highways, with major routes going from Sichuan–Shanxi, Sichuan–Tibet, and Sichuan–Yunnan.
Several major road projects have been constructed: a tunnel from Shuangliu Taiping to Jianyang Sancha Lake, and an alteration of National Expressway 321 from Jiangyang to Longquanyi. A road connecting Longquan Town to Longquan Lake, linked to the Chengdu–Jianyang Expressway, will also shorten the journey. By the end of 2008, there were ten expressways connecting downtown Chengdu to its suburbs, including the Chenglin Expressway, extensions of Guanghua Avenue, the Shawan Line, and an expressway from Chengdu to Heilongtan.
The toll-free Chengjin Expressway runs east of Chengdu; it takes about half an hour to drive from central Chengdu to Jintang.
The expressway between Chengdu and Heilongtan (Chengdu section) runs to the south of the city. It is also toll-free, and a journey from downtown Chengdu to Heilongtan takes only half an hour.
The extension of Guanghua Avenue runs towards the west of the city, cutting the journey time from Chongzhou City to Sanhuan Road to less than half an hour.
The extension of Shawan Road runs north. Once it is connected to the Pixian–Dujiangyan and Pixian–Pengzhou expressways, it will take only 30 minutes to travel from Chengdu to Pengzhou.
Coach
There are many major intercity bus stations in Chengdu, and they serve different destinations.
Chadianzi: Hongyuan, Jiuzhaigou, Rilong Town, Ruo Ergai, Songpan County, Wolong and Langzhong
Xinnanmen: Daocheng, Emei Shan, Jiǔzhàigōu, Kangding, Garzê Tibetan Autonomous Prefecture, Ya'an and Leshan
Wuguiqiao: Chongqing
Jinsha: Qionglai, Pi County, Huayang, and Chengdu East railway station
Highways
National Highway G5 Beijing–Kunming
National Highway G42 Shanghai–Chengdu
National Highway G76 Xiamen–Chengdu
National Highway G93 Chengdu–Chongqing Ring
National Highway G4202 Chengdu Ring
Chengdu Metro
The Chengdu Metro officially opened on 1 October 2010. Line 1 runs from Shengxian Lake to Guangdu (north–south). Line 2 opened in September 2012, Line 4 in December 2015, and Line 3 in July 2016. Line 10 connects the city center to Shuangliu International Airport. Future plans call for more than thirty lines. As of the end of June 2024, Chengdu has 558 km of metro lines in operation.
Bus
Bus transit is an important method of public transit in Chengdu. There are more than 400 bus lines in Chengdu with nearly 12,000 buses in total. In addition, the Chengdu BRT offers services on the Second Ring Road Elevated Road. Bus cards are available that permit free bus transfers for three hours.
River transport
Historically, the Jinjiang River (also known as the Nanhe River) was used for boat traffic in and out of Chengdu. To ensure that Chengdu's goods can reach the Yangtze River efficiently, the inland port cities of Yibin and Luzhou on the Yangtze, both reachable from Chengdu within hours by expressway, have commenced large-scale port infrastructure development. As materials and equipment for the rebuilding of northern Sichuan are sent in from the East Coast, these ports will see significant increases in throughput.
Education and research
Wen Weng, an administrator of Chengdu in the Han dynasty, established the world's first local public school, now named Shishi (literally 'stone house'). The school site has not changed in more than 2,000 years and remains the site of today's Shishi High School. No. 7 High School and Shude High School are two other famous local public schools in Chengdu.
Chengdu is a leading scientific research city, one of only two cities in western China (alongside Xi'an) to rank among the top 25 cities worldwide by scientific research output. It is consistently ranked first as the center of higher education and scientific research in Southwest China. The city is home to more than 58 universities, the two most reputable being Sichuan University and the University of Electronic Science and Technology of China, ranked 98th and 101st–150th worldwide, respectively.
Higher education
Sichuan University (SCU) (Founded in 1896), including the West China Medical Center of Sichuan University (Founded in 1910)
Southwest Jiaotong University (Founded in 1896)
Southwestern University of Finance and Economics (Founded in 1925)
University of Electronic Science and Technology of China (Founded in 1956)
Chengdu University of Technology (Founded in 1956)
Sichuan Normal University (Founded in 1946)
Chengdu University of Traditional Chinese Medicine (Founded in 1956)
Chengdu Kinesiology University (Founded in 1942)
Southwest University for Nationalities (Founded in 1951)
Sichuan Conservatory of Music (Founded in 1939)
Xihua University (Founded in 1960)
Southwest Petroleum University (Founded in 1958)
Chengdu University of Information Technology (Founded in 1951)
Chengdu University (Founded in 1978)
Chengdu Medical College (Founded in 1947)
Note: Private institutions or institutions without full-time bachelor programs are not listed.
Consulates
The United States Consulate General in Chengdu opened on 16 October 1985. It was the first foreign consulate in west-central China since 1949. The consulate was closed on 27 July 2020, in response to the closure of the Chinese Consulate-General in Houston. The Sri Lankan consulate in Chengdu opened in 2009 and was temporarily closed in 2016. Currently, 17 countries have consulates in Chengdu. The Philippines, India, Greece, Brazil and Argentina have been approved to open consulates in Chengdu.
Sports
Soccer
Soccer is a popular sport in Chengdu. Chengdu Tiancheng, Chengdu's soccer team, played in the 42,000-seat Chengdu Sports Stadium in the Chinese League One. The club was founded on 26 February 1996 and was formerly known as Chengdu Five Bulls, named after its first sponsor, the Five Bulls Cigarette Company. The English professional soccer club Sheffield United F.C. took over the club on 11 December 2005. The club was later promoted to the Chinese Super League, where it played until it was embroiled in a match-fixing scandal in 2009. Punished with relegation, the owners eventually sold their majority stake on 9 December 2010 to Hung Fu Enterprise Co., Ltd and Scarborough Development (China) Co., Ltd. On 23 May 2013, the Tiancheng Investment Group announced its acquisition of the club.
Currently, Chengdu Rongcheng F.C. plays in the Chinese Super League.
Longquanyi Stadium was one of the four venues which hosted the 2004 AFC Asian Cup. Chengdu, along with Shanghai, Hangzhou, Tianjin and Wuhan, hosted the 2007 FIFA Women's World Cup.
Tennis
Chengdu is the hometown of Grand Slam champions Zheng Jie and Yan Zi, who won the women's doubles championships at both the Australian Open and Wimbledon in 2006. Their success, together with that of Li Na, who won the 2011 French Open and 2014 Australian Open, has led to increased interest in tennis in Chengdu. Over 700 standard tennis courts were built in the city in the decade from 2006 to 2016, and the registered membership of the Chengdu Tennis Association has grown to over 10,000 from the original 2,000 in the 1980s.
Chengdu is now part of an elite group of cities to host an ATP (Association of Tennis Professionals) Champions Tour tournament, along with London, Zürich, São Paulo and Delray Beach. The Chengdu Open, an ATP Champions Tour event held since 2009, has attracted star players including Pete Sampras, Marat Safin, Carlos Moyá, Thomas Enqvist, and Mark Philippoussis.
Overwatch
Chengdu was represented in the Overwatch League by the Chengdu Hunters, the first major esports team to represent Chengdu. They played as part of the League's Pacific Division from 2019 until 2022.
League of Legends
Chengdu hosted the 2024 Mid-Season Invitational from 1 May to 19 May at the Chengdu Financial City Performing Arts Center. South Korean team Gen.G defeated home favorites Bilibili Gaming 3–1 in a rematch of their upper bracket final. Prior to the 2024 League of Legends World Championship grand finals, it was announced that Chengdu would also host the finals of the 2025 tournament.
Multi-sport events
Chengdu hosted the 2021 Summer World University Games, originally scheduled to take place from 8 to 19 August 2021; the postponement of the Tokyo Summer Olympics from 2020 to 2021 amid the COVID-19 pandemic forced the proposed dates to be moved. The games were eventually delayed to 28 July – 8 August 2023 due to COVID-19 concerns. The city will also host the 2025 World Games.
Major sports venues
The Chengdu Sports Center is located in downtown Chengdu and has 42,000 seats. As one of the landmarks of Chengdu, it is the first large multipurpose venue in the city able to accommodate sports competitions, training, social activities, and performances. It is the home stadium of the Chengdu Blades, Chengdu's soccer team, and hosted matches of the 2007 FIFA Women's World Cup.
The Sichuan International Tennis Center, located near Chengdu's Shuangliu International Airport, is the largest tennis center in southwest China and the fourth tennis center in China meeting ATP competition standards, after Beijing, Shanghai and Nanjing. The center is equipped with 36 standard tennis courts and 11,000 seats. Since 2016, the Chengdu Open, an ATP Championship Tour tournament, has been held here annually.
The Chengdu Goldenport Circuit is a motorsport racetrack that has hosted the A1 Grand Prix, Formula V6 Asia, China Formula 4 Championship and China GT Championship.
Twin towns and sister cities
Chengdu is twinned with:
Agra, Uttar Pradesh, India
Bengaluru, Karnataka, India
Bonn, North Rhine-Westphalia, Germany (10 September 2009)
Cebu City, Central Visayas, Philippines
Chiang Mai, Chiang Mai Province, Thailand
Daegu, South Korea (10 November 2015)
Fingal, Ireland
Flemish Brabant, Flanders, Belgium (27 May 2011)
Gimcheon, North Gyeongsang Province, South Korea
Haifa, Israel
Hamilton, New Zealand (6 May 2015)
Honolulu, Hawaii, United States (14 September 2011)
Horsens, East Jutland, Denmark
Maputo, Mozambique
Kandy, Central Province, Sri Lanka
Kathmandu, Nepal
Knoxville, Tennessee, United States
Kofu, Yamanashi, Japan (27 September 1984)
Lahore, Punjab, Pakistan
Linz, Upper Austria, Austria (1983)
Ljubljana, Slovenia (1981)
Łódź, Łódź Voivodeship, Poland (29 June 2015)
Lviv, Lviv Oblast, Ukraine (2014)
Maastricht, Limburg, Netherlands (13 September 2012)
Mechelen, Belgium (1993)
Medan, North Sumatra, Indonesia (2002)
Melbourne, Victoria, Australia
Montpellier, Languedoc-Roussillon, France (22 June 1981)
Nashville, Tennessee, United States
Palermo, Sicily, Italy
Perth, Western Australia, Australia (September 2012)
Phoenix, Arizona, United States
Sheffield, South Yorkshire, United Kingdom (23 March 2010)
Volgograd, Volgograd Oblast, Russia (27 May 2011)
Winnipeg, Manitoba, Canada (1988)
Zapopan, Jalisco, Mexico
Chengdu also has friendly relationships or partnerships with:
Adelaide, South Australia, Australia
Atlanta, Georgia, United States
Baku, Azerbaijan
Beyoğlu, Istanbul, Turkey
City of Gold Coast, Queensland, Australia
Dalarna, Sweden
Fez, Morocco
Milan, Lombardy, Italy
Saint Petersburg, Russia
Tallinn, Estonia
Valencia, Spain
Notable people
Tang Danhong, filmmaker and poet
Yang Hongying (born 1962), best-selling author of children's fiction books
Tao Jiali (born 1987), fighter pilot in the People's Liberation Army Air Force
Muni He/Lily He (born 1999), golfer
Shen Xiaoting (born 1999), singer (Kep1er)
Li Yifeng (born 1987), male actor
Jason Zhang (born 1982), pop singer
Li Yuchun (born 1984), singer and actress
Jane Zhang (born 1984), singer and songwriter
Gong Jun (born 1992), actor
Zhao Lusi (born 1998), actress and singer
Guo Feng (born 1962), songwriter and singer
Xu Deqing (born 1963), general in the People's Liberation Army (PLA) serving as political commissar of the Central Theater Command since January 2022.
Zhi-Ming Ma (born 1948), mathematics professor of Chinese Academy of Sciences, former Vice Chairman of the Executive Committee for International Mathematical Union
Huajian Gao (born 1963), Chinese-American mechanician widely known for his contributions to the field of solid mechanics
See also
List of cities in China by population
List of current and former capitals of subdivisions of China
List of twin towns and sister cities in China
Notes
References
Bibliography
Cheung, Raymond. Osprey Aircraft of the Aces 126: Aces of the Republic of China Air Force. Oxford: Bloomsbury Publishing Plc, 2015.
Mayhew, Bradley; Miller, Korina; English, Alex, South-West China, Lonely Planet Publications, 1998 (2nd edition 2002). Cf. p. 444 for its article on Chengdu.
Quian, Jack, Chengdu: A City of Paradise, 2006
Further reading
Ling Zhu, "Chengdu, the city of spice and tea" , China Daily, Government of China, Friday, 22 December 2006
Anna Zhang, "City Profile: Chengdu – Land of Abundance," Shanghai Business Review, July 2012.
Stapleton, Kristin. Civilizing Chengdu.
Stapleton, Kristin. Fact in Fiction
External links
Official website of the Chengdu Government
310s BC establishments
316 BC
National forest cities in China
Populated places established in the 4th century BC
Provincial capitals in China
Eutrophication
Prefecture-level divisions of Sichuan
Cities in Sichuan
Sub-provincial cities in the People's Republic of China | Chengdu | [
"Chemistry",
"Environmental_science"
] | 16,416 | [
"Eutrophication",
"Environmental chemistry",
"Water pollution"
] |
170,381 | https://en.wikipedia.org/wiki/ReplayTV | ReplayTV was a former DVR company that from 1999 until 2005, produced a brand of digital video recorders (DVR), a term synonymous with personal video recorder (PVR). It is a consumer video device which allows users to capture television programming to internal hard disk storage for later viewing (and time shifting). ReplayTV was founded in September 1997 by future Roku founder Anthony Wood, who was president and CEO of ReplayTV until August 2001.
The first ReplayTV model was introduced in January 1999 during the Consumer Electronics Show in Las Vegas, at the same time as a competing DVR model from rival company TiVo. After the sale of assets to DirecTV, ReplayTV's only ongoing activity was maintenance of the electronic program guide service by D&M Holdings, which was to be discontinued on July 31, 2011. However, on July 29, 2011, a notice was placed on the ReplayTV website stating that service would be continued without interruption for lifetime subscribers and monthly subscribers may have a short interruption in service. On September 2, 2011, programming contact through the ReplayTV dialup system was terminated without any update message being sent to subscribers or posted on replaytv.com. DNNA filed for bankruptcy on July 20, 2015. EPG data from their servers ran out on July 15, 2015. Even with the end of support from DNNA, third-party solutions are available to provide Electronic Program Guide data to ReplayTV units.
History
ReplayTV was founded in September 1997 by businessman Anthony Wood, who later founded Roku in October 2002. Initial sales to consumers were launched in April 1999, while volume production and sales did not begin until later in 2000. ReplayTV was purchased by SONICblue in 2001.
On March 23, 2003, SONICblue filed for Chapter 11 bankruptcy, and on April 16 sold most of its assets, including ReplayTV, to the Japanese electronics giant D&M Holdings. SONICblue was fighting a copyright infringement suit over the ReplayTV units' ability to skip commercials when it filed for bankruptcy.
On December 19, 2005, Digital Networks North America announced that it was exiting the hardware business as soon as current inventory was sold out. ReplayTV would then concentrate on PC software sales of its DVR technology in a partnership with Hauppauge Computer Works, a manufacturer of television cards for PCs.
On December 13, 2007, D&M Holdings sold most of the assets of ReplayTV to DirecTV. It still provided electronic program guide service to existing customers, using content from Tribune Media Services. The domains replay.com, replaytv.com, and replaytv.net used by the ReplayTV units to access the electronic program guide are or were owned by DirecTV.
On June 15, 2011, D&M Holdings announced it was permanently discontinuing the ReplayTV electronic program guide service: "The ReplayTV Electronic Programming Guide (EPG) Service will be permanently discontinued on July 31, 2011. After this date, owners of ReplayTV DVR units will still be able to manually record analog TV programs, but will not have the benefit of access to the interactive program guide. Effective immediately, monthly billing for the ReplayTV service to remaining customers has been suspended."
On July 29, 2011, D&M announced they would continue providing guide service. The following appeared on the ReplayTV website: "Important Announcement. After the announced shutdown of the ReplayTV programming guide service, we have had many positive, enthusiastic comments about the ReplayTV DVR products and services. In light of this response, ReplayTV and its parent company Digital Networks North America, Inc. have decided to continue the electronic programming guide service pursuant to the terms of your service activation agreement. We thank you very much for all of your support and enthusiasm over the many years these products have been sold."
Around July 4, 2015, new guide data stopped being sent to ReplayTV units; the last day of guide data was July 15. ReplayTV said it was working on the problem, but it was never fixed, presumably as a result of the bankruptcy.
Bankruptcy
On July 20, 2015, ReplayTV.com posted the following information:
Digital Networks North America, Inc. filed for chapter 7 bankruptcy relief with the United States Bankruptcy Court for the District of Delaware on July 20, 2015 and has ceased all business operations. A chapter 7 trustee will be appointed by the Bankruptcy Court to oversee the administration and liquidation of the bankruptcy estate for Digital Networks North America, Inc. Creditors will receive a notice of the bankruptcy filing by mail.
ReplayTV's parent company, Digital Networks North America, filed for Chapter 7 bankruptcy. Following the announcement, ReplayTV ended services.
Legal battle
On October 31, 2001, numerous TV companies, including the three major networks, filed a lawsuit against SONICblue, which at the time marketed the ReplayTV device. They alleged that the ReplayTV 4000 series was part of an “unlawful scheme” that “attacks the fundamental economic underpinnings of free television and basic nonbroadcast services” according to the lawsuit.
The TV industry attacked ReplayTV for two reasons:
The machines enabled people to record television programs and then watch them without commercials via the optional "Commercial Advance" feature. This had the potential to undercut advertising revenues which the lawsuit called "the lifeblood of most television channels".
The machines allowed users to share programs they have recorded with others via the "Send Show" feature, which transmits digital copies of shows not only on a local network, but also over the Internet to other ReplayTV owners, thereby enabling people who had not paid for premium channels to watch premium content for free.
Both the “Commercial Advance” and the “Send Show” features were alleged to violate U.S. copyright and other federal and state laws, according to the TV industry plaintiffs, who wanted sales of the ReplayTV 4000 devices—slated for shipment on Nov. 15, 2001—stopped.
The lawsuit against SONICblue was stayed when the company filed for bankruptcy protection in March 2003. In August 2003, the ReplayTV 5500 series went on sale without the Autoskip and Send Show features though the features continued to be enabled on the earlier models.
Operation
ReplayTV service was available only in the United States via its subscription program. The hardware is no longer sold, and the subscription service ended in June 2011. The subscription service's searchable program guide feature was scheduled to end on July 31, 2011; after that date, owners of ReplayTV DVR units would still be able to manually record analog TV programs, but without access to the interactive program guide. Older units could only download guide data via a dial-up connection, while later models could also download program guides over the user's existing internet connection (broadband or DSL) as well as via dialup.
Like other DVRs, ReplayTV allowed users to record television programs. The subscription service was offered for a monthly fee or a one-time lifetime payment, and each individual unit required a separate subscription. Older units, like the 2000 and 3000 series, did not require monthly subscription fees; those still in operation continued to receive programming data without a subscription until July 31, 2011. The price of the original ReplayTV units was higher than comparable TiVo units by approximately the amount of TiVo's lifetime subscription, so a lifetime subscription was essentially priced into the units.
Some ReplayTV models allowed automatic commercial skipping with no user intervention. The feature scanned recordings for the black frames that local television stations used to mark commercial insertion points.
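ReplayTV's detection logic was proprietary and, as noted below, combined several heuristics; the following is only a minimal sketch of the black-frame idea under stated assumptions: decoded frames are available as 8-bit grayscale NumPy arrays, and the threshold, pixel-fraction, and run-length values are illustrative rather than ReplayTV's actual parameters.

import numpy as np

def is_black_frame(frame, luma_threshold=16, dark_fraction=0.98):
    # Treat a frame as "black" when nearly all pixels fall below a low luma value.
    return np.mean(frame < luma_threshold) >= dark_fraction

def find_break_candidates(frames, fps=29.97, min_run=3):
    # Return timestamps (in seconds) where a run of consecutive black frames ends,
    # which a skipping heuristic could treat as possible commercial boundaries.
    candidates, run = [], 0
    for i, frame in enumerate(frames):
        if is_black_frame(frame):
            run += 1
        else:
            if run >= min_run:
                candidates.append(i / fps)
            run = 0
    return candidates

A practical detector would combine such boundaries with other cues, such as segment length or audio silence, before deciding which stretches to skip.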
Hardware and features
The "4000 Series" and "5000 Series" ReplayTV units have Ethernet connections that allow the user to stream shows to another similar ReplayTV unit within the same local network, transfer shows to another similar ReplayTV unit (either on the local network or across the Internet) or to a personal computer. This capability enables users to move recorded programs to PCs using third-party programs. These units also have the capability to automatically skip commercial advertisements during playback (known as "Commercial Advance", which was trademarked by the original company). This commercial advance feature used several heuristics to detect commercials and had an accuracy of about 90 to 95 percent. Recording of television programs could be accomplished either manually, or through use of the program guide using various criteria. Shows to be recorded could also be described via thematic categories. Shows could be recorded or lightly managed through MyReplayTV at my.replaytv.com
The most recent "5500 Series" ReplayTV units had the ability to stream shows to another similar ReplayTV unit within the same local network, but when loaded with up-to-date system software they can no longer transfer shows to other ReplayTV units across the Internet. The "5500 Series" units have also had the automatic commercial advance feature removed in favor of a manual "Show|Nav" feature. The units are otherwise identical to the "5000 Series" units. System software releases prior to version 5.1 build 144 retain the original features even in 5500-series hardware devices, but the software is automatically updated to 5.1 build 144 via through-the-web software updates.
The "4000 Series", "5000 Series" and "5500 Series" ReplayTV units stored the content using MPEG2.
Video: mpeg2video, yuv420p, 720x480, 29.97 frame/s, 7413 kbit/s
Audio: mp2, 48000 Hz, stereo, 192 kbit/s
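For recordings that have been transferred to a PC, the stream parameters above can be checked with standard tools. This is a small sketch rather than anything from ReplayTV's own software; it assumes ffprobe (from the FFmpeg project) is installed and on the PATH, and the filename recorded_show.mpg is hypothetical.

import json
import subprocess

def probe_streams(path):
    # Ask ffprobe for stream metadata in JSON form.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"]

for s in probe_streams("recorded_show.mpg"):  # hypothetical filename
    if s["codec_type"] == "video":
        print(s["codec_name"], s["width"], s["height"], s.get("avg_frame_rate"))
    elif s["codec_type"] == "audio":
        print(s["codec_name"], s.get("sample_rate"), s.get("channels"))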
The operating system used on the "4000 Series", "5000 Series" and "5500 Series" was VxWorks. Several versions of the software were released; some older versions supported commercial advance even on newer units.
Other features
Beginning with the earliest models, ReplayTV units had an undocumented "random access skip" feature. While playing a recorded program, the user could enter a number from the remote control, then press the QuickSkip button to advance that number of minutes forward in the program, or press the "Skip Back" button to go back that number of minutes. If no number was entered, the skip buttons would advance 30 seconds or retreat ten seconds, respectively. This feature spanned several models of ReplayTV. Users could also type a number followed by pressing Jump to go to that exact minute in the recording.
Models
The first models released were the 2000 series from ReplayTV (the company name at that time). In 1999, the price of the ReplayTV 2001 was $995 and it provided 6 hours of video storage. The ReplayTV model 2004 was $1,995 and provided 26 hours. Each of these models included a lifetime offer of program guide service. The ReplayTV 2020 model, offering 20 hours of recording storage, was also released in 1999 at a cost of $699. It replaced the previously released models.
In the year 2000 the 3000-series models were released, as well as an equivalent "ShowStopper" model branded by Panasonic.
This was followed by the 4000 line in the fall of 2001. The 4000-series models were the first to include an Ethernet port, and to support the sharing of shows between different units either locally (using video streaming) or over the Internet (by duplicating the content).
In 2002 the 4500-series was released. These had hardware that was much like the 4000-series, but they could be purchased without lifetime program guide service for a substantially reduced price. All subsequent models also unbundled the guide service from the hardware. Units without lifetime activation become almost completely unusable if the monthly activation is terminated.
2005 products with list prices (no longer available):
ReplayTV RTV5504 40-Hour Digital Video Recorder ($149.99)
ReplayTV RTV5508 80-Hour Digital Video Recorder ($299.99)
ReplayTV RTV5516 160-Hour Digital Video Recorder ($449.99)
ReplayTV RTV5532 320-Hour Digital Video Recorder ($799.99)
The ReplayTV 5000 series included a JP1 remote which could be reprogrammed or upgraded using free software.
Subscription
The ReplayTV subscription had two options: a monthly recurring charge of $12.95, or a one-time lifetime activation fee of $299. Subscriptions for additional units were $6.95 a month. PC edition customers were charged $20 per year, with one year of service included.
On June 15, 2011, ReplayTV announced that it would be permanently discontinuing its Electronic Programming Guide Service (the channel guide) on July 31, 2011. "After this day, your ReplayTV DVR will still be able to manually record analog TV programs, but will not have the benefit of access to the interactive program guide. All billing for your service has been suspended."
The website gave the reasoning that the industry conversion to HDTV was complete and that customers should contact their local providers for options.
However, there may be options to keep service alive after official support ends using the WiRNS application as all units should be permanently activated if they contact ReplayTV servers before July 31, 2011.
On July 29, 2011, D&M Holdings reversed their previous decision and will continue the ReplayTV electronic program guide service. "For monthly subscribers of the ReplayTV service, we are exploring options by which you may continue paying for and receiving such service going forward. We apologize in advance should there be any minor disruptions in the ReplayTV service while we implement the continuation of the programming guide. Thank you. ReplayTV"
On or around July 4, 2015, DirecTV apparently turned off its guide servers at production.replaytv.net. On July 15, 2015, the last guide data disappeared from ReplayTV units; it is doubtful that the guide data will return. However, alternatives exist which can provide ReplayTVs with guide data for as long as the units themselves are functional. Three such options are LaHO, WiRNS and Perc Data.
See also
Commercial skipping
References
External links
Digital video recorders
Interactive television
Companies that filed for Chapter 11 bankruptcy in 2003
Companies that filed for Chapter 7 bankruptcy in 2015
Companies that have filed for Chapter 7 bankruptcy
Defunct electronics companies of the United States | ReplayTV | [
"Technology"
] | 2,916 | [
"Digital video recorders",
"Recording devices"
] |
170,384 | https://en.wikipedia.org/wiki/Copal | Copal is a tree resin, particularly the aromatic resins from the copal tree Protium copal (Burseraceae) used by the cultures of pre-Columbian Mesoamerica as ceremonially burned incense and for other purposes. More generally, copal includes resinous substances in an intermediate stage of polymerization and hardening between "gummier" resins and amber. Copal that is partly mineralized is known as copaline.
It is available in different forms; the hard, amber-like yellow copal is a less expensive version, while the milky-white copal is more expensive.
Etymology
The word "copal" is derived from the Nahuatl language word , meaning "incense".
History and uses
Subfossil copal is well known from New Zealand (kauri gum from Agathis australis (Araucariaceae)), Japan, the Dominican Republic, Colombia, and Madagascar. It often has inclusions and is sometimes sold as "young amber". When treated or enhanced in an autoclave (as is sometimes done to industrialized Baltic amber), it is used for jewelry. In its natural condition, copal can easily be distinguished from old amber by its lighter citrine colour and by its surface becoming tacky when touched with a drop of acetone or chloroform. Copal resin from Hymenaea verrucosa (Fabaceae) is found in East Africa and is used in incense. East Africa apparently had a greater amount of subfossil copal, which is found one or two meters below living copal trees, from the roots of trees that may have lived thousands of years earlier. This subfossil copal produces a harder varnish.
By the 18th century, Europeans found it to be a valuable ingredient in making a good wood varnish. It became widely used in the manufacture of furniture and carriages. It was also sometimes used as a picture varnish. By the late 19th and early 20th century, varnish manufacturers in England and America were using it on train carriages, greatly swelling its demand. In 1859, Americans consumed 68% of the East African trade, which was controlled through the Sultan of Zanzibar, with Germany receiving 24%. The American Civil War and the creation of the Suez Canal led to Germany, India, and Hong Kong taking the majority by the end of that century.
Copal is still used by a number of indigenous peoples of Mexico and Central America as an incense, during sweat lodge ceremonies and sacred mushroom ceremonies.
References
Sources
Further reading
Visual arts materials
Fossil resins
Incense material
Mesoamerican society
Natural history of Mesoamerica
Resins
Organic gemstones
Kauri gum | Copal | [
"Physics"
] | 551 | [
"Resins",
"Kauri gum",
"Unsolved problems in physics",
"Incense material",
"Materials",
"Amorphous solids",
"Matter"
] |
170,396 | https://en.wikipedia.org/wiki/Bark%20%28botany%29 | Bark is the outermost layer of stems and roots of woody plants. Plants with bark include trees, woody vines, and shrubs. Bark refers to all the tissues outside the vascular cambium and is a nontechnical term. It overlays the wood and consists of the inner bark and the outer bark. The inner bark, which in older stems is living tissue, includes the innermost layer of the periderm. The outer bark on older stems includes the dead tissue on the surface of the stems, along with parts of the outermost periderm and all the tissues on the outer side of the periderm. The outer bark on trees which lies external to the living periderm is also called the rhytidome.
Products derived from bark include bark shingle siding and wall coverings, spices, and other flavorings, tanbark for tannin, resin, latex, medicines, poisons, various hallucinogenic chemicals, and cork. Bark has been used to make cloth, canoes, and ropes and used as a surface for paintings and map making. A number of plants are also grown for their attractive or interesting bark colorations and surface textures or their bark is used as landscape mulch.
The process of removing bark is decortication and a log or trunk from which bark has been removed is said to be decorticated.
Botanical description
Bark is present only on woody plants; herbaceous plants and the stems of young plants lack bark.
From the outside to the inside of a mature woody stem, the layers include the following:
Bark
Periderm
Cork (phellem or suber), includes the rhytidome
Cork cambium (phellogen)
Phelloderm
Cortex
Phloem
Vascular cambium
Wood (xylem)
Sapwood (alburnum)
Heartwood (duramen)
Pith (medulla)
In young stems, which lack what is commonly called bark, the tissues are, from the outside to the inside:
Epidermis, which may be replaced by periderm
Cortex
Primary and secondary phloem
Vascular cambium
Secondary and primary xylem.
Cork cell walls contain suberin, a waxy substance which protects the stem against water loss, the invasion of insects into the stem, and prevents infections by bacteria and fungal spores. The cambium tissues, i.e., the cork cambium and the vascular cambium, are the only parts of a woody stem where cell division occurs; undifferentiated cells in the vascular cambium divide rapidly to produce secondary xylem to the inside and secondary phloem to the outside. Phloem is a nutrient-conducting tissue composed of sieve tubes or sieve cells mixed with parenchyma and fibers. The cortex is the primary tissue of stems and roots. In stems the cortex is between the epidermis layer and the phloem, in roots the inner layer is not phloem but the pericycle.
As the stem ages and grows, changes occur that transform the surface of the stem into the bark. The epidermis is a layer of cells that cover the plant body, including the stems, leaves, flowers and fruits, that protects the plant from the outside world. In old stems the epidermal layer, cortex, and primary phloem become separated from the inner tissues by thicker formations of cork. Due to the thickening cork layer these cells die because they do not receive water and nutrients. This dead layer is the rough corky bark that forms around tree trunks and other stems.
Cork, sometimes confused with bark in colloquial speech, is the outermost layer of a woody stem, derived from the cork cambium. It serves as protection against damage from parasites, herbivorous animals and diseases, as well as dehydration and fire.
Periderm
Often a secondary covering called the periderm forms on small woody stems and many non-woody plants, which is composed of cork (phellem), the cork cambium (phellogen), and the phelloderm. The periderm forms from the phellogen which serves as a lateral meristem. The periderm replaces the epidermis, and acts as a protective covering like the epidermis. Mature phellem cells have suberin in their walls to protect the stem from desiccation and pathogen attack. Older phellem cells are dead, as is the case with woody stems. The skin on the potato tuber (which is an underground stem) constitutes the cork of the periderm.
In woody plants, the epidermis of newly grown stems is replaced by the periderm later in the year. As the stems grow, a layer of cells called the cork cambium forms under the epidermis; these cells produce cork cells that mature into cork. A limited number of cell layers may form interior to the cork cambium, called the phelloderm.
As the stem grows, the cork cambium produces new layers of cork which are impermeable to gases and water and the cells outside the periderm, namely the epidermis, cortex and older secondary phloem die.
Within the periderm are lenticels, which form during the production of the first periderm layer. Since there are living cells within the cambium layers that need to exchange gases during metabolism, these lenticels, because they have numerous intercellular spaces, allow gaseous exchange with the outside atmosphere. As the bark develops, new lenticels are formed within the cracks of the cork layers.
Rhytidome
The rhytidome is the most familiar part of bark, being the outer layer that covers the trunks of trees. It is composed mostly of dead cells and is produced by the formation of multiple layers of suberized periderm, cortical and phloem tissue. The rhytidome is especially well developed in older stems and roots of trees. In shrubs, older bark is quickly exfoliated and thick rhytidome accumulates. It is generally thickest and most distinctive at the trunk or bole (the area from the ground to where the main branching starts) of the tree.
Chemical composition
Bark tissues make up between 10 and 20% of woody vascular plants by weight and consist of various biopolymers, tannins, lignin, suberin and polysaccharides. Up to 40% of the bark tissue is made of lignin, which forms an important part of a plant, providing structural support by crosslinking between different polysaccharides, such as cellulose.
Condensed tannin, which is in fairly high concentration in bark tissue, is thought to inhibit decomposition. It could be due to this factor that the degradation of lignin is far less pronounced in bark tissue than it is in wood.
It has been proposed that, in the cork layer (the phellogen), suberin acts as a barrier to microbial degradation and so protects the internal structure of the plant.
Analysis of the lignin in the bark wall during decay by the white-rot fungus Lentinula edodes (shiitake mushroom), using 13C NMR, revealed that the lignin polymers contained more guaiacyl lignin units than syringyl units compared to the interior of the plant. Guaiacyl units are less susceptible to degradation because, compared to syringyl units, they contain fewer aryl-aryl bonds, can form a condensed lignin structure, and have a lower redox potential. This could mean that the concentration and type of lignin units provide additional resistance to fungal decay for plants protected by bark.
Damage and repair
Bark can sustain damage from environmental factors, such as frost crack and sun scald, as well as biological factors, such as woodpecker and boring beetle attacks. Male deer and other male members of the Cervidae (deer family) can cause extensive bark damage during the rutting season by rubbing their antlers against the tree to remove their velvet.
The bark is often damaged by being bound to stakes or wrapped with wires. In the past, this damage was called bark-galling and was treated by laying clay on the galled place and binding it up with hay. In modern usage, "galling" most typically refers to a type of abnormal growth on a plant caused by insects or pathogens.
Bark damage can have several detrimental effects on the plant. Bark serves as a physical barrier to disease pressure, especially from fungi, so its removal makes the plant more susceptible to disease. Damage to or destruction of the phloem impedes the transport of photosynthetic products throughout the plant; in extreme cases, when a band of phloem all the way around the stem is removed, the plant will usually quickly die. In horticultural applications, as in gardening and public landscaping, bark damage also often results in unwanted aesthetic damage.
The degree to which woody plants are able to repair gross physical damage to their bark is quite variable across species and type of damage. Some are able to produce a callus growth which heals over the wound rapidly, but leaves a clear scar, whilst others such as oaks do not produce an extensive callus repair. Sap is sometimes produced to seal the damaged area against disease and insect intrusion.
A number of living organisms live in or on bark, including insects, fungi and other plants like mosses, algae and other vascular plants. Many of these organisms are pathogens or parasites but some also have symbiotic relationships.
Uses
The inner bark (phloem) of some trees is edible. In hunter-gatherer societies and in times of famine, it is harvested and used as a food source. In Scandinavia, bark bread is made from rye to which the toasted and ground innermost layer of bark of Scots pine or birch is added. The Sami people of far northern Europe use large sheets of Pinus sylvestris bark that are removed in the spring, prepared and stored for use as a staple food resource. The inner bark is eaten fresh, dried or roasted.
Bark can be used as a construction material, and was used widely in pre-industrial societies. Some barks, particularly birch bark, can be removed in long sheets and other mechanically cohesive structures, allowing the bark to be used in the construction of canoes, as the drainage layer in roofs, and for shoes, backpacks, and other useful items. Bark was also used as a construction material in settler colonial societies, particularly Australia, both as exterior wall cladding and as a roofing material.
In the cork oak (Quercus suber) the bark is thick enough to be harvested as a cork product without killing the tree; in this species the bark may get very thick (e.g. more than 20 cm has been reported).
Some barks have significantly different phytochemical content from other parts of the plant. Some of these phytochemicals have pesticidal, culinary, or medicinally and culturally important ethnopharmacological properties.
Bark contains strong fibres known as bast, and there is a long tradition in northern Europe of using bark from coppiced young branches of the small-leaved lime (Tilia cordata) to produce cordage and rope, used for example in the rigging of Viking Age longships.
Among the commercial products made from bark are cork, cinnamon, quinine (from the bark of Cinchona) and aspirin (from the bark of willow trees). The bark of some trees, notably oak (Quercus robur), is a source of tannic acid, which is used in tanning. Bark chips generated as a by-product of lumber production are often used in bark mulch. Bark is important to the horticultural industry since, in shredded form, it is used for plants that do not thrive in ordinary soil, such as epiphytes.
Wood bark contains lignin, which, when pyrolyzed, yields a liquid bio-oil product rich in natural phenol derivatives. These are used as a replacement for fossil-based phenols in phenol-formaldehyde (PF) resins used in oriented strand board (OSB) and plywood.
Gallery
See also
Bark beetle
Bark painting
Trunk (botany)
Bark isolate
Bark-binding, a diseased condition of tree bark
References
Other references
Cédric Pollet, Bark: An Intimate Look at the World's Trees. London, Frances Lincoln, 2010. (Translated by Susan Berry)
Plant physiology
Plant morphology | Bark (botany) | [
"Biology"
] | 2,583 | [
"Plant morphology",
"Plant physiology",
"Plants"
] |
170,417 | https://en.wikipedia.org/wiki/T%20cell | T cells are one of the important types of white blood cells of the immune system and play a central role in the adaptive immune response. T cells can be distinguished from other lymphocytes by the presence of a T-cell receptor (TCR) on their cell surface.
T cells are born from hematopoietic stem cells, found in the bone marrow. Developing precursor cells then migrate to the thymus gland to mature; T cells derive their name from the thymus. After migration to the thymus, the precursor cells mature into several distinct types of T cells. T cell differentiation also continues after they have left the thymus. Groups of specific, differentiated T cell subtypes have a variety of important functions in controlling and shaping the immune response.
One of these functions is immune-mediated cell death, and it is carried out by two major subtypes: CD8+ "killer" (cytotoxic) and CD4+ "helper" T cells. (These are named for the presence of the cell surface proteins CD8 or CD4.) CD8+ T cells, also known as "killer T cells", are cytotoxic – this means that they are able to directly kill virus-infected cells, as well as cancer cells. CD8+ T cells are also able to use small signalling proteins, known as cytokines, to recruit other types of cells when mounting an immune response. A different population of T cells, the CD4+ T cells, function as "helper cells". Unlike CD8+ killer T cells, the CD4+ helper T (TH) cells function by further activating memory B cells and cytotoxic T cells, which leads to a larger immune response. The specific adaptive immune response regulated by the TH cell depends on its subtype (such as T-helper1, T-helper2, T-helper17, regulatory T-cell), which is distinguished by the types of cytokines they secrete.
Regulatory T cells are yet another distinct population of T cells that provide the critical mechanism of tolerance, whereby immune cells are able to distinguish invading cells from "self". This prevents immune cells from inappropriately reacting against one's own cells, known as an "autoimmune" response. For this reason, these regulatory T cells have also been called "suppressor" T cells. These same regulatory T cells can also be co-opted by cancer cells to prevent the recognition of, and an immune response against, tumor cells.
Development
Origin, early development and migration to the thymus
All T cells originate from c-kit+Sca1+ haematopoietic stem cells (HSC) which reside in the bone marrow. In some cases, the origin might be the foetal liver during embryonic development. The HSC then differentiate into multipotent progenitors (MPP) which retain the potential to become both myeloid and lymphoid cells. The process of differentiation then proceeds to a common lymphoid progenitor (CLP), which can only differentiate into T, B or NK cells. These CLP cells then migrate via the blood to the thymus, where they engraft. Henceforth they are known as thymocytes, the immature stage of a T cell.
The earliest cells to arrive in the thymus are commonly termed double-negative, as they express neither the CD4 nor the CD8 co-receptor. The newly arrived CLP cells are CD4−CD8−CD44+CD25−ckit+ cells, and are termed early thymic progenitor (ETP) cells. These cells then undergo a round of division, downregulate c-kit, and are termed double-negative one (DN1) cells. To become T cells, the thymocytes must undergo multiple DN stages as well as positive selection and negative selection.
Double-negative thymocytes can be identified by the surface expression of CD2, CD5 and CD7. Still during the double-negative stages, CD34 expression stops and CD1 is expressed. Expression of both CD4 and CD8 then makes them double-positive cells, which mature into either CD4+ or CD8+ single-positive cells.
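The developmental stages above are defined operationally by combinations of surface markers. Purely as an illustration of that marker logic, the following Python sketch encodes the simplified profiles mentioned in the text (ETP through double-positive) as a lookup table and classifies a hypothetical cell; the matching rule, function names and reduced marker sets are assumptions made for illustration, not a laboratory gating scheme.

```python
# Illustrative sketch (not a lab tool): classify a thymocyte's developmental
# stage from a set of surface markers, using the simplified marker profiles
# described in the text above. The matching rule and names are assumptions.

STAGE_MARKERS = {
    # stage: (markers required present, markers required absent) -- simplified
    "ETP": ({"CD44", "c-kit"}, {"CD4", "CD8", "CD25"}),
    "DN1": ({"CD44"},          {"CD4", "CD8", "CD25", "c-kit"}),
    "DN2": ({"CD44", "CD25"},  {"CD4", "CD8"}),
    "DN3": ({"CD25"},          {"CD4", "CD8", "CD44"}),
    "DN4": (set(),             {"CD4", "CD8", "CD44", "CD25"}),
    "DP":  ({"CD4", "CD8"},    set()),
}

def classify_stage(markers: set) -> str:
    """Return the first stage whose required markers are all expressed
    and whose excluded markers are all missing."""
    for stage, (present, absent) in STAGE_MARKERS.items():
        if present <= markers and not (absent & markers):
            return stage
    return "unclassified"

print(classify_stage({"CD44", "CD25"}))  # -> DN2
print(classify_stage({"CD4", "CD8"}))    # -> DP
```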
TCR development
A critical step in T cell maturation is making a functional T cell receptor (TCR). Each mature T cell will ultimately contain a unique TCR that reacts to a random pattern, allowing the immune system to recognize many different types of pathogens. This process is essential in developing immunity to threats that the immune system has not encountered before, since due to random variation there will always be at least one TCR to match any new pathogen.
A thymocyte can only become an active T cell when it survives the process of developing a functional TCR. The TCR consists of two major components, the alpha and beta chains. These both contain random elements designed to produce a wide variety of different TCRs, but because of this huge variety they must be tested to make sure they work at all. First, the thymocytes attempt to create a functional beta chain, testing it against a 'mock' alpha chain. Then they attempt to create a functional alpha chain. Once a working TCR has been produced, the cells must then test whether their TCR will identify threats correctly; to do this it is required to recognize the body's major histocompatibility complex (MHC) in a process known as positive selection. The thymocyte must also ensure that it does not react adversely to "self" antigens, in a process called negative selection. If both positive and negative selection are successful, the TCR becomes fully operational and the thymocyte becomes a T cell.
TCR β-chain selection
At the DN2 stage (CD44+CD25+), cells upregulate the recombination genes RAG1 and RAG2 and re-arrange the TCRβ locus, combining V-D-J recombination and constant region genes in an attempt to create a functional TCRβ chain. As the developing thymocyte progresses through to the DN3 stage (CD44−CD25+), the thymocyte expresses an invariant α-chain called pre-Tα alongside the TCRβ gene. If the rearranged β-chain successfully pairs with the invariant α-chain, signals are produced which cease rearrangement of the β-chain (and silence the alternate allele). Although these signals require the pre-TCR at the cell surface, they are independent of ligand binding to the pre-TCR. If the chains successfully pair, a pre-TCR forms, and the cell downregulates CD25 and is termed a DN4 cell (CD25−CD44−). These cells then undergo a round of proliferation, and begin to re-arrange the TCRα locus during the double-positive stage.
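V(D)J recombination produces diversity combinatorially: the number of possible chains scales roughly with the product of the available V, (D) and J segments, before junctional diversity is even considered. The short sketch below illustrates that arithmetic only; the segment counts used are placeholder assumptions rather than figures taken from this article, and junctional (N/P nucleotide) diversity is ignored entirely.

```python
# Back-of-envelope sketch of combinatorial TCR diversity.
# Segment counts below are illustrative assumptions, not figures from this
# article; junctional (N/P nucleotide) diversity is not modeled.

def pairings(v: int, d: int, j: int) -> int:
    """Number of distinct segment combinations for one chain."""
    return v * d * j

beta_combinations  = pairings(v=50, d=2, j=13)   # beta chain uses V, D and J segments
alpha_combinations = pairings(v=45, d=1, j=50)   # alpha chain has no D segment (d=1 as placeholder)

# Each receptor pairs one alpha chain with one beta chain.
receptor_combinations = beta_combinations * alpha_combinations

print(f"beta segment combinations : {beta_combinations:,}")
print(f"alpha segment combinations: {alpha_combinations:,}")
print(f"alpha x beta pairings     : {receptor_combinations:,}")
```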
Positive selection
The process of positive selection takes 3 to 4 days and occurs in the thymic cortex. Double-positive thymocytes (CD4+/CD8+) migrate deep into the thymic cortex, where they are presented with self-antigens. These self-antigens are expressed by thymic cortical epithelial cells on MHC molecules, which reside on the surface of cortical epithelial cells. Only thymocytes that interact well with MHC-I or MHC-II will receive a vital "survival signal", while those that cannot interact strongly enough will receive no signal and die from neglect. This process ensures that the surviving thymocytes will have an 'MHC affinity' that means they will exhibit stronger binding affinity for specific MHC alleles in that organism. The vast majority of developing thymocytes will not pass positive selection, and die during this process.
A thymocyte's fate is determined during positive selection. Double-positive cells (CD4+/CD8+) that interact well with MHC class II molecules will eventually become CD4+ "helper" cells, whereas thymocytes that interact well with MHC class I molecules mature into CD8+ "killer" cells. A thymocyte becomes a CD4+ cell by down-regulating expression of its CD8 cell surface receptors. If the cell does not lose its signal, it will continue downregulating CD8 and become a CD4+ cell; both the resulting CD8+ and CD4+ cells are now single-positive cells.
This process does not filter for thymocytes that may cause autoimmunity. The potentially autoimmune cells are removed by the following process of negative selection, which occurs in the thymic medulla.
Negative selection
Negative selection removes thymocytes that are capable of strongly binding with "self" MHC molecules. Thymocytes that survive positive selection migrate towards the boundary of the cortex and medulla in the thymus. While in the medulla, they are again presented with a self-antigen presented on the MHC complex of medullary thymic epithelial cells (mTECs). mTECs must be Autoimmune regulator positive (AIRE+) to properly express tissue-specific antigens on their MHC class I peptides. Some mTECs are phagocytosed by thymic dendritic cells; this makes them AIRE− antigen presenting cells (APCs), allowing for presentation of self-antigens on MHC class II molecules (positively selected CD4+ cells must interact with these MHC class II molecules, thus APCs, which possess MHC class II, must be present for CD4+ T-cell negative selection). Thymocytes that interact too strongly with the self-antigen receive an apoptotic signal that leads to cell death. However, some of these cells are selected to become Treg cells. The remaining cells exit the thymus as mature naive T cells, also known as recent thymic emigrants. This process is an important component of central tolerance and serves to prevent the formation of self-reactive T cells that are capable of inducing autoimmune diseases in the host.
TCR development summary
β-selection is the first checkpoint, where thymocytes that are able to form a functional pre-TCR (with an invariant alpha chain and a functional beta chain) are allowed to continue development in the thymus. Next, positive selection checks that thymocytes have successfully rearranged their TCRα locus and are capable of recognizing MHC molecules with appropriate affinity. Negative selection in the medulla then eliminates thymocytes that bind too strongly to self-antigens expressed on MHC molecules. These selection processes allow for tolerance of self by the immune system. Typical naive T cells that leave the thymus (via the corticomedullary junction) are self-restricted, self-tolerant, and single positive.
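These checkpoints behave like sequential filters: a thymocyte must pass β-selection, then positive selection, then negative selection before it can leave the thymus. A minimal sketch of that ordering follows, purely to make the pass/fail logic explicit; the affinity thresholds, field names and rules are invented for illustration.

```python
# Minimal sketch of the three selection checkpoints as sequential filters.
# The Thymocyte fields and affinity thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Thymocyte:
    functional_beta_chain: bool  # able to form a working pre-TCR?
    mhc_affinity: float          # 0 = no binding to self-MHC, 1 = very strong binding

def beta_selection(cell: Thymocyte) -> bool:
    return cell.functional_beta_chain

def positive_selection(cell: Thymocyte) -> bool:
    # must recognize self-MHC at least weakly, or it dies by neglect
    return cell.mhc_affinity > 0.2

def negative_selection(cell: Thymocyte) -> bool:
    # must NOT bind self-antigen/MHC too strongly, or it is deleted
    return cell.mhc_affinity < 0.8

def survives_thymus(cell: Thymocyte) -> bool:
    checks = (beta_selection, positive_selection, negative_selection)
    return all(check(cell) for check in checks)

print(survives_thymus(Thymocyte(True, 0.5)))   # True: useful and self-tolerant
print(survives_thymus(Thymocyte(True, 0.95)))  # False: deleted as autoreactive
```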
Thymic output
About 98% of thymocytes die during the development processes in the thymus by failing either positive selection or negative selection, whereas the other 2% survive and leave the thymus to become mature immunocompetent T cells.
The thymus contributes fewer cells as a person ages. As the thymus shrinks by about 3% a year throughout middle age, a corresponding fall in the thymic production of naive T cells occurs, leaving peripheral T cell expansion and regeneration to play a greater role in protecting older people.
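As a rough illustration of the figure quoted above, a constant 3% annual shrinkage compounds over time; the short calculation below shows the fraction of thymic tissue remaining after a given number of years under that simplifying assumption (a constant-rate model is itself a simplification).

```python
# Compound a constant 3% annual shrinkage of the thymus (simplified model,
# assuming the rate quoted in the text applies uniformly every year).
annual_shrinkage = 0.03

for years in (10, 20, 30, 40):
    remaining = (1 - annual_shrinkage) ** years
    print(f"after {years:2d} years: about {remaining:.0%} of thymic tissue remains")
```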
Types of T cell
T cells are grouped into a series of subsets based on their function. CD4 and CD8 T cells are selected in the thymus, but undergo further differentiation in the periphery to specialized cells which have different functions. T cell subsets were initially defined by function, but also have associated gene or protein expression patterns.
Conventional adaptive T cells
Helper CD4+ T cells
T helper cells (TH cells) assist other lymphocytes, including the maturation of B cells into plasma cells and memory B cells, and activation of cytotoxic T cells and macrophages. These cells are also known as CD4+ T cells as they express the CD4 glycoprotein on their surfaces. Helper T cells become activated when they are presented with peptide antigens by MHC class II molecules, which are expressed on the surface of antigen-presenting cells (APCs). Once activated, they divide rapidly and secrete cytokines that regulate or assist the immune response. These cells can differentiate into one of several subtypes, which have different roles. Cytokines direct T cells into particular subtypes.
Cytotoxic CD8+ T cells
Cytotoxic T cells (TC cells, CTLs, T-killer cells, killer T cells) destroy virus-infected cells and tumor cells, and are also implicated in transplant rejection. These cells are defined by the expression of the CD8 protein on their cell surface. Cytotoxic T cells recognize their targets by binding to short peptides (8-11 amino acids in length) associated with MHC class I molecules, present on the surface of all nucleated cells. Cytotoxic T cells also produce the key cytokines IL-2 and IFNγ. These cytokines influence the effector functions of other cells, in particular macrophages and NK cells.
Memory T cells
Antigen-naive T cells expand and differentiate into memory and effector T cells after they encounter their cognate antigen within the context of an MHC molecule on the surface of a professional antigen presenting cell (e.g. a dendritic cell). Appropriate co-stimulation must be present at the time of antigen encounter for this process to occur. Historically, memory T cells were thought to belong to either the effector or central memory subtypes, each with their own distinguishing set of cell surface markers (see below). Subsequently, numerous new populations of memory T cells were discovered including tissue-resident memory T (Trm) cells, stem memory TSCM cells, and virtual memory T cells. The single unifying theme for all memory T cell subtypes is that they are long-lived and can quickly expand to large numbers of effector T cells upon re-exposure to their cognate antigen. By this mechanism they provide the immune system with "memory" against previously encountered pathogens. Memory T cells may be either CD4+ or CD8+ and usually express CD45RO.
Memory T cell subtypes:
Central memory T cells (TCM cells) express CD45RO, C-C chemokine receptor type 7 (CCR7), and L-selectin (CD62L). Central memory T cells also have intermediate to high expression of CD44. This memory subpopulation is commonly found in the lymph nodes and in the peripheral circulation. (Note: CD44 expression is usually used to distinguish murine naive from memory T cells.)
Effector memory T cells (TEM cells and TEMRA cells) express CD45RO but lack expression of CCR7 and L-selectin. They also have intermediate to high expression of CD44. These memory T cells lack lymph node-homing receptors and are thus found in the peripheral circulation and tissues. TEMRA stands for terminally differentiated effector memory cells re-expressing CD45RA, which is a marker usually found on naive T cells.
Tissue-resident memory T cells (TRM) occupy tissues (skin, lung, etc.) without recirculating. One cell surface marker that has been associated with TRM is the integrin αeβ7, also known as CD103.
Virtual memory T cells (TVM) differ from the other memory subsets in that they do not originate following a strong clonal expansion event. Thus, although this population as a whole is abundant within the peripheral circulation, individual virtual memory T cell clones reside at relatively low frequencies. One theory is that homeostatic proliferation gives rise to this T cell population. Although CD8 virtual memory T cells were the first to be described, it is now known that CD4 virtual memory cells also exist.
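The memory subsets listed above are distinguished largely by which surface markers they express (CD45RO, CCR7, L-selectin/CD62L, CD103, CD45RA). Purely to illustrate that marker logic, the sketch below assigns a subset label from a marker set; the simplified rules follow the descriptions above, and the function name and decision order are assumptions rather than a validated gating strategy.

```python
# Illustrative marker-based classification of memory T cell subsets,
# following the simplified descriptions above. Not a validated gating scheme.

def classify_memory_subset(markers: set) -> str:
    if "CD103" in markers:
        return "tissue-resident memory (TRM)"
    if "CD45RO" in markers:
        if {"CCR7", "CD62L"} <= markers:
            return "central memory (TCM)"
        return "effector memory (TEM)"
    if "CD45RA" in markers:
        # CD45RA on an antigen-experienced cell suggests TEMRA;
        # on an antigen-inexperienced cell it marks a naive T cell.
        return "TEMRA (or naive, if antigen-inexperienced)"
    return "unclassified"

print(classify_memory_subset({"CD45RO", "CCR7", "CD62L"}))  # central memory
print(classify_memory_subset({"CD45RO"}))                   # effector memory
print(classify_memory_subset({"CD45RA"}))                   # TEMRA / naive
```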
Regulatory CD4+ T cells
Regulatory T cells are crucial for the maintenance of immunological tolerance. Their major role is to shut down T cell–mediated immunity toward the end of an immune reaction and to suppress autoreactive T cells that escaped the process of negative selection in the thymus.
Two major classes of CD4+ Treg cells have been described—FOXP3+ Treg cells and FOXP3− Treg cells.
Regulatory T cells can develop either during normal development in the thymus, in which case they are known as thymic Treg cells, or can be induced peripherally, in which case they are called peripherally derived Treg cells. These two subsets were previously called "naturally occurring" and "adaptive" (or "induced"), respectively. Both subsets require the expression of the transcription factor FOXP3, which can be used to identify the cells. Mutations of the FOXP3 gene can prevent regulatory T cell development, causing the fatal autoimmune disease IPEX.
Several other types of T cells have suppressive activity, but do not express FOXP3 constitutively. These include Tr1 and Th3 cells, which are thought to originate during an immune response and act by producing suppressive molecules. Tr1 cells are associated with IL-10, and Th3 cells are associated with TGF-beta. Recently, Th17 cells have been added to this list.
Innate-like T cells
Innate-like T cells or unconventional T cells represent some subsets of T cells that behave differently in immunity. They trigger rapid immune responses, regardless of major histocompatibility complex (MHC) expression, unlike their conventional counterparts (CD4 T helper cells and CD8 cytotoxic T cells), which are dependent on the recognition of peptide antigens in the context of the MHC molecule. Overall, there are three large populations of unconventional T cells: NKT cells, MAIT cells, and gamma delta T cells. Their functional roles are now well established in the context of infections and cancer. Furthermore, these T cell subsets are being translated into many therapies against malignancies such as leukemia.
Natural killer T cell
Natural killer T cells (NKT cells – not to be confused with natural killer cells of the innate immune system) bridge the adaptive immune system with the innate immune system. Unlike conventional T cells that recognize protein peptide antigens presented by major histocompatibility complex (MHC) molecules, NKT cells recognize glycolipid antigens presented by CD1d. Once activated, these cells can perform functions ascribed to both helper and cytotoxic T cells: cytokine production and release of cytolytic/cell killing molecules. They are also able to recognize and eliminate some tumor cells and cells infected with herpes viruses.
Mucosal associated invariant T cells
Mucosal associated invariant T (MAIT) cells display innate, effector-like qualities. In humans, MAIT cells are found in the blood, liver, lungs, and mucosa, defending against microbial activity and infection. The MHC class I-like protein, MR1, is responsible for presenting bacterially-produced vitamin B metabolites to MAIT cells. After the presentation of foreign antigen by MR1, MAIT cells secrete pro-inflammatory cytokines and are capable of lysing bacterially-infected cells. MAIT cells can also be activated through MR1-independent signaling. In addition to possessing innate-like functions, this T cell subset supports the adaptive immune response and has a memory-like phenotype. Furthermore, MAIT cells are thought to play a role in autoimmune diseases, such as multiple sclerosis, arthritis and inflammatory bowel disease, although definitive evidence is yet to be published.
Gamma delta T cells
Gamma delta T cells (γδ T cells) represent a small subset of T cells which possess a γδ TCR rather than the αβ TCR on the cell surface. The majority of T cells express αβ TCR chains. This group of T cells is much less common in humans and mice (about 2% of total T cells) and is found mostly in the gut mucosa, within a population of intraepithelial lymphocytes. In rabbits, sheep, and chickens, the number of γδ T cells can be as high as 60% of total T cells. The antigenic molecules that activate γδ T cells are still mostly unknown. However, γδ T cells are not MHC-restricted and seem to be able to recognize whole proteins rather than requiring peptides to be presented by MHC molecules on APCs. Some murine γδ T cells recognize MHC class IB molecules. Human γδ T cells that use the Vγ9 and Vδ2 gene fragments constitute the major γδ T cell population in peripheral blood. These cells are unique in that they specifically and rapidly respond to a set of nonpeptidic phosphorylated isoprenoid precursors, collectively named phosphoantigens, which are produced by virtually all living cells. The most common phosphoantigens from animal and human cells (including cancer cells) are isopentenyl pyrophosphate (IPP) and its isomer dimethylallyl pyrophosphate (DMAPP). Many microbes produce the active compound hydroxy-DMAPP (HMB-PP) and corresponding mononucleotide conjugates, in addition to IPP and DMAPP. Plant cells produce both types of phosphoantigens. Drugs activating human Vγ9/Vδ2 T cells comprise synthetic phosphoantigens and aminobisphosphonates, which upregulate endogenous IPP/DMAPP.
Activation
Activation of CD4+ T cells occurs through the simultaneous engagement of the T-cell receptor and a co-stimulatory molecule (like CD28, or ICOS) on the T cell by the major histocompatibility complex (MHCII) peptide and co-stimulatory molecules on the APC. Both are required for production of an effective immune response; in the absence of co-stimulation, T cell receptor signalling alone results in anergy. The signalling pathways downstream of co-stimulatory molecules usually engage the PI3K pathway, generating PIP3 at the plasma membrane and recruiting PH domain-containing signaling molecules like PDK1 that are essential for the activation of PKC-θ and eventual IL-2 production. An optimal CD8+ T cell response relies on CD4+ signalling. CD4+ cells are useful in the initial antigenic activation of naive CD8 T cells, and in sustaining memory CD8+ T cells in the aftermath of an acute infection. Therefore, activation of CD4+ T cells can be beneficial to the action of CD8+ T cells.
The first signal is provided by binding of the T cell receptor to its cognate peptide presented on MHCII on an APC. MHCII is restricted to so-called professional antigen-presenting cells, like dendritic cells, B cells, and macrophages, to name a few. The peptides presented to CD8+ T cells by MHC class I molecules are 8–13 amino acids in length; the peptides presented to CD4+ cells by MHC class II molecules are longer, usually 12–25 amino acids in length, as the ends of the binding cleft of the MHC class II molecule are open.
The second signal comes from co-stimulation, in which surface receptors on the APC are induced by a relatively small number of stimuli, usually products of pathogens, but sometimes breakdown products of cells, such as necrotic bodies or heat shock proteins. The only co-stimulatory receptor expressed constitutively by naive T cells is CD28, so co-stimulation for these cells comes from the CD80 and CD86 proteins, which together constitute the B7 proteins (B7.1 and B7.2, respectively) on the APC. Other receptors are expressed upon activation of the T cell, such as OX40 and ICOS, but these largely depend upon CD28 for their expression. The second signal licenses the T cell to respond to an antigen. Without it, the T cell becomes anergic, and it becomes more difficult for it to activate in future. This mechanism prevents inappropriate responses to self, as self-peptides will not usually be presented with suitable co-stimulation. Once a T cell has been appropriately activated (i.e. has received signal one and signal two) it alters its cell surface expression of a variety of proteins. Markers of T cell activation include CD69, CD71 and CD25 (also a marker for Treg cells), and HLA-DR (a marker of human T cell activation). CTLA-4 expression is also up-regulated on activated T cells, which in turn outcompetes CD28 for binding to the B7 proteins. This is a checkpoint mechanism to prevent overactivation of the T cell. Activated T cells also change their cell surface glycosylation profile.
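The "two-signal" requirement described above lends itself to a very small decision rule: TCR engagement plus co-stimulation yields activation, while TCR engagement alone yields anergy. The sketch below states that rule in code purely for clarity; the outcome labels paraphrase the text, and the function itself is an illustrative assumption.

```python
# Minimal sketch of the two-signal activation rule described above.
# Signal 1 = TCR engagement of peptide-MHC; signal 2 = co-stimulation (e.g. CD28-B7).
# Outcome labels paraphrase the text; the function is illustrative only.

def t_cell_outcome(tcr_engaged: bool, costimulation: bool) -> str:
    if tcr_engaged and costimulation:
        return "activated (proliferation, IL-2 production)"
    if tcr_engaged:
        return "anergic (unresponsive, harder to activate later)"
    return "resting (no response)"

print(t_cell_outcome(True, True))    # both signals   -> activated
print(t_cell_outcome(True, False))   # signal 1 only  -> anergic
print(t_cell_outcome(False, False))  # neither signal -> resting
```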
The T cell receptor exists as a complex of several proteins. The actual T cell receptor is composed of two separate peptide chains, which are produced from the independent T cell receptor alpha and beta (TCRα and TCRβ) genes. The other proteins in the complex are the CD3 proteins: CD3εγ and CD3εδ heterodimers and, most important, a CD3ζ homodimer, which has a total of six ITAM motifs. The ITAM motifs on the CD3ζ can be phosphorylated by Lck and in turn recruit ZAP-70. Lck and/or ZAP-70 can also phosphorylate the tyrosines on many other molecules, not least CD28, LAT and SLP-76, which allows the aggregation of signalling complexes around these proteins.
Phosphorylated LAT recruits SLP-76 to the membrane, where it can then bring in PLC-γ, VAV1, Itk and potentially PI3K. PLC-γ cleaves PI(4,5)P2 on the inner leaflet of the membrane to create the active intermediates diacylglycerol (DAG) and inositol-1,4,5-trisphosphate (IP3); PI3K also acts on PIP2, phosphorylating it to produce phosphatidylinositol-3,4,5-trisphosphate (PIP3). DAG binds and activates some PKCs. Most important in T cells is PKC-θ, critical for activating the transcription factors NF-κB and AP-1. IP3 is released from the membrane by PLC-γ and diffuses rapidly to activate calcium channel receptors on the ER, which induces the release of calcium into the cytosol. Low calcium in the endoplasmic reticulum causes STIM1 clustering on the ER membrane and leads to activation of cell membrane CRAC channels that allow additional calcium to flow into the cytosol from the extracellular space. This aggregated cytosolic calcium binds calmodulin, which can then activate calcineurin. Calcineurin, in turn, activates NFAT, which then translocates to the nucleus. NFAT is a transcription factor that activates the transcription of a pleiotropic set of genes, most notably IL-2, a cytokine that promotes long-term proliferation of activated T cells.
PLC-γ can also initiate the NF-κB pathway. DAG activates PKC-θ, which then phosphorylates CARMA1, causing it to unfold and function as a scaffold. The cytosolic domains bind an adapter, BCL10, via CARD (caspase activation and recruitment domain) domains; that then binds TRAF6, which is ubiquitinated at K63. This form of ubiquitination does not lead to degradation of target proteins. Rather, it serves to recruit NEMO, IKKα and -β, and TAB1-2/TAK1. TAK1 phosphorylates IKK-β, which then phosphorylates IκB, allowing for K48 ubiquitination that leads to proteasomal degradation. Rel A and p50 can then enter the nucleus and bind the NF-κB response element. This, coupled with NFAT signaling, allows for complete activation of the IL-2 gene.
While in most cases activation is dependent on TCR recognition of antigen, alternative pathways for activation have been described. For example, cytotoxic T cells have been shown to become activated when targeted by other CD8 T cells leading to tolerization of the latter.
In spring 2014, the T-Cell Activation in Space (TCAS) experiment was launched to the International Space Station on the SpaceX CRS-3 mission to study how "deficiencies in the human immune system are affected by a microgravity environment".
T cell activation is modulated by reactive oxygen species.
Antigen discrimination
A unique feature of T cells is their ability to discriminate between healthy and abnormal (e.g. infected or cancerous) cells in the body. Healthy cells typically express a large number of self-derived pMHC on their cell surface, and although the T cell antigen receptor can interact with at least a subset of these self pMHC, the T cell generally ignores these healthy cells. However, when these very same cells contain even minute quantities of pathogen-derived pMHC, T cells are able to become activated and initiate immune responses. The ability of T cells to ignore healthy cells but respond when these same cells contain pathogen (or cancer) derived pMHC is known as antigen discrimination. The molecular mechanisms that underlie this process are controversial.
Clinical significance
Deficiency
Causes of T cell deficiency include lymphocytopenia of T cells and/or defects on function of individual T cells. Complete insufficiency of T cell function can result from hereditary conditions such as severe combined immunodeficiency (SCID), Omenn syndrome, and cartilage–hair hypoplasia. Causes of partial insufficiencies of T cell function include acquired immune deficiency syndrome (AIDS), and hereditary conditions such as DiGeorge syndrome (DGS), chromosomal breakage syndromes (CBSs), and B cell and T cell combined disorders such as ataxia-telangiectasia (AT) and Wiskott–Aldrich syndrome (WAS).
The main pathogens of concern in T cell deficiencies are intracellular pathogens, including Herpes simplex virus, Mycobacterium and Listeria. Fungal infections are also more common and severe in T cell deficiencies.
Cancer
Cancer of T cells is termed T-cell lymphoma, and accounts for perhaps one in ten cases of non-Hodgkin lymphoma. The main forms of T cell lymphoma are:
Extranodal T cell lymphoma
Cutaneous T cell lymphomas: Sézary syndrome and Mycosis fungoides
Anaplastic large cell lymphoma
Angioimmunoblastic T cell lymphoma
Exhaustion
T cell exhaustion is a poorly defined or ambiguous term. There are three approaches to its definition. "The first approach primarily defines as exhausted the cells that present the same cellular dysfunction (typically, the absence of an expected effector response). The second approach primarily defines as exhausted the cells that are produced by a given cause (typically, but not necessarily, chronic exposure to an antigen). Finally, the third approach primarily defines as exhausted the cells that present the same molecular markers (typically, programmed cell death protein 1 [PD-1])."
Dysfunctional T cells are characterized by progressive loss of function, changes in transcriptional profiles and sustained expression of inhibitory receptors. At first, cells lose their ability to produce IL-2 and TNFα, which is followed by the loss of high proliferative capacity and cytotoxic potential, and eventually leads to their deletion. Exhausted T cells typically express higher levels of CD43, CD69 and inhibitory receptors combined with lower expression of CD62L and CD127. Exhaustion can develop during chronic infections, sepsis and cancer. Exhausted T cells preserve their functional exhaustion even after repeated antigen exposure.
During chronic infection and sepsis
T cell exhaustion can be triggered by several factors, such as persistent antigen exposure and lack of CD4 T cell help. Antigen exposure also has an effect on the course of exhaustion, because longer exposure time and higher viral load increase the severity of T cell exhaustion. At least 2–4 weeks of exposure is needed to establish exhaustion. Other factors able to induce exhaustion are inhibitory receptors, including programmed cell death protein 1 (PD1), CTLA-4, T cell membrane protein-3 (TIM3), and lymphocyte activation gene 3 protein (LAG3). Soluble molecules such as the cytokines IL-10 or TGF-β are also able to trigger exhaustion. Regulatory cells are the last known factor that can play a role in T cell exhaustion: Treg cells can be a source of IL-10 and TGF-β, and therefore they can contribute to T cell exhaustion. Furthermore, T cell exhaustion is reverted after depletion of Treg cells and blockade of PD1. T cell exhaustion can also occur during sepsis as a result of cytokine storm. Later, after the initial septic encounter, anti-inflammatory cytokines and pro-apoptotic proteins take over to protect the body from damage. Sepsis also carries high antigen load and inflammation. In this stage of sepsis, T cell exhaustion increases. Currently there are studies aiming to utilize inhibitory receptor blockades in the treatment of sepsis.
During transplantation
While T cell exhaustion can develop during infection following persistent antigen exposure, a similar situation arises after graft transplantation with the continued presence of alloantigen. It was shown that the T cell response diminishes over time after kidney transplant. These data suggest T cell exhaustion plays an important role in tolerance of a graft, mainly by depletion of alloreactive CD8 T cells. Several studies showed a positive effect of chronic infection on graft acceptance and its long-term survival, mediated partly by T cell exhaustion. It was also shown that recipient T cell exhaustion provides sufficient conditions for NK cell transfer. While there are data showing that induction of T cell exhaustion can be beneficial for transplantation, it also carries disadvantages, including an increased number of infections and the risk of tumor development.
During cancer
During cancer, T cell exhaustion plays a role in protecting the tumor. According to research, some cancer-associated cells as well as tumor cells themselves can actively induce T cell exhaustion at the site of the tumor. T cell exhaustion can also play a role in cancer relapses, as was shown in leukemia. Some studies have suggested that it is possible to predict relapse of leukemia based on expression of the inhibitory receptors PD-1 and TIM-3 by T cells. Many experiments and clinical trials have focused on immune checkpoint blockers in cancer therapy, with some of these approved as valid therapies that are now in clinical use. Inhibitory receptors targeted by those medical procedures are vital in T cell exhaustion, and blocking them can reverse these changes.
See also
Chimeric antigen receptor T cell
Gut-specific homing
Immunoblast
Immunosenescence
Parafollicular cell also called C cell
References
Further reading
T cells
Human cells
Immune system
Immunology | T cell | [
"Biology"
] | 7,428 | [
"Organ systems",
"Immunology",
"Immune system"
] |
170,429 | https://en.wikipedia.org/wiki/Isotropic%20etching | In semiconductor manufacturing, isotropic etching is a method commonly used to remove material from a substrate via a chemical process using an etchant substance. The etchant may be in liquid-, gas- or plasma-phase, although liquid etchants such as buffered hydrofluoric acid (BHF) for silicon dioxide etching are more often used. Unlike anisotropic etching, isotropic etching does not etch in a single direction, but rather etches in multiple directions within the substrate. Any horizontal component of the etch direction may therefore result in undercutting of patterned areas, and significant changes to device characteristics. Isotropic etching may occur unavoidably, or it may be desirable for process reasons.
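Because an isotropic etch advances at roughly the same rate in every direction, material is also removed laterally under the mask edge by about the same amount as the vertical etch depth. The sketch below makes that geometry concrete for a masked opening; the perfect 1:1 lateral-to-vertical ratio and the variable names are simplifying assumptions, since real etches only approximate ideal isotropy.

```python
# Simplified isotropic-etch geometry: lateral undercut is assumed equal to
# the vertical etch depth (an ideal 1:1 isotropic etch, which real processes
# only approximate).

def isotropic_profile(mask_opening_um: float, etch_depth_um: float):
    undercut_each_side = etch_depth_um                 # 1:1 assumption
    cavity_width = mask_opening_um + 2 * undercut_each_side
    return undercut_each_side, cavity_width

undercut, width = isotropic_profile(mask_opening_um=2.0, etch_depth_um=1.0)
print(f"undercut per side: {undercut} um, cavity width at surface: {width} um")
```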
References
Semiconductors | Isotropic etching | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 162 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
170,479 | https://en.wikipedia.org/wiki/UFO%20conspiracy%20theories | UFO conspiracy theories are a subset of conspiracy theories which argue that various governments and politicians globally, in particular the United States government, are suppressing evidence that unidentified flying objects are controlled by a non-human intelligence or built using alien technology. Such conspiracy theories usually argue that Earth governments are in communication or cooperation with extraterrestrial visitors despite public disclaimers, and further that some of these theories claim that the governments are explicitly allowing alien abduction.
Individuals who have suggested that UFO evidence is being suppressed include Stanford University immunologist Garry Nolan, United States Senator Barry Goldwater, British Admiral Lord Hill-Norton (former NATO head and chief of the British Defence Staff), American Vice Admiral Roscoe H. Hillenkoetter (first CIA director), Israeli brigadier general Haim Eshed (former director of space programs for the Israel Ministry of Defense), astronauts Gordon Cooper and Edgar Mitchell, and former Canadian Defence Minister Paul Hellyer. Beyond their testimonies and reports, they have presented no evidence to substantiate their statements and claims. According to the Committee for Skeptical Inquiry, little or no evidence exists to support them, despite significant research on the subject by non-governmental scientific agencies.
Scholars of religion have identified some new religious movements among the proponents of UFO conspiracy theories, most notably Heaven's Gate, the Nation of Islam, and Scientology.
Overview
The earliest flying disc conspiracy theories claimed that the objects were secret American or Soviet technology. By December 1949, author Donald Keyhoe promoted the idea that the Air Force was withholding knowledge of interplanetary spaceships, culminating in his 1955 work The Flying Saucer Conspiracy. The following year, the book They Knew Too Much About Flying Saucers introduced the concept of the Men in Black.
Beginning in the late 1960s, theories began to include stories of crash retrievals, culminating in the 1980 book titled The Roswell Incident. In the 1970s, a supposed cover up was termed a "Cosmic Watergate".
Background
Personnel in the mid-1940s reported unidentified objects under various names.
Roswell balloon and 'recovered disc' hoaxes
On July 8, 1947, Roswell Army Air Field issued a press release stating that they had recovered a "flying disc". The Army quickly retracted the statement and clarified that the crashed object was a conventional weather balloon. The Roswell incident did not surface again until the late 1970s, when it was incorporated into conspiracy literature.
The Roswell balloon was far from the only misidentified "disc". One potential disc, recovered from the yard of a priest in Grafton, Wisconsin, was identified as an ordinary circular saw blade. More elaborate hoax saucers were found in Shreveport, Louisiana; in Black River Falls, Wisconsin; and in Clearwater, Florida. On July 9, press reported the recovery of a thirty-inch disc from a Hollywood back yard; the hoaxer was never identified.
On July 11, press reported the recovery of a 30-inch disc from the yard of a Twin Falls, Idaho home. On July 12, it was reported nationally that the Twin Falls disc was a hoax. Photos of the object were publicly released. The object was described as containing radio tubes, electric coils, and wires underneath a Plexiglas dome. Press reported that four teenagers had confessed to creating the disc.
Chronology of UFO conspiracy theories
Richard Shaver, Fred Crisman, and The Shaver Mystery
In the October 1947 issue of Amazing Stories, editor Raymond Palmer argued the flying disc flap was proof of Richard Sharpe Shaver's claims. That same issue carried a letter from Shaver in which he argued the truth behind the discs would remain a secret.
Wrote Shaver: "The discs can be a space invasion, a secret new army plane — or a scouting trip by an enemy country...OR, they can be Shaver's space ships, taking off and landing regularly on earth for centuries past, and seen today as they have always been — as a mystery. They could be leaving earth with cargos of wonder-mech that to us would mean emancipation from a great many of our worst troubles— and we'll never see those cargos...I predict that nothing more will be seen, and the truth of what the strange disc ships really are will never be disclosed to the common people. We just don't count to the people who do know about such things. It isn't necessary to tell us anything."
Fred Crisman, Kenneth Arnold, and the Maury Island hoax
Raymond Palmer contacted original saucer witness Kenneth Arnold and requested he travel to Tacoma to investigate a story of two harbor patrolmen in Tacoma who reportedly possessed fragments of a "flying saucer". On July 28, Palmer wired $200 to Arnold to fund the investigation.
Arnold met Fred Crisman, who told a story of a flying saucer which spewed debris. Arnold relayed the details of the case to Lieutenant Frank Brown, the military's flying disc investigator, who had previously interviewed Arnold about his sighting. Brown and an aide flew to Tacoma, where they met with Crisman, who gave them a cereal box containing alleged flying disc debris. Returning to base in California, the pair were killed when their plane crashed near Kelso, Washington. The following day, press accounts revealed that a "mysterious telephone informant with uncannily accurate information" had contacted the United Press of Tacoma. The caller claimed that the crashed B-25 had been loaded with flying disc fragments; the caller further claimed that flying saucer witnesses Kenneth Arnold and E.J. Smith had been "in secret conference" at Tacoma's Hotel Winthrop. General Ned Schramm of the Fourth Air Force publicly acknowledged that the deceased pilots were intelligence officers who had traveled to Tacoma to meet with original saucer witness Kenneth Arnold.
By August 3, press reported a detailed narrative about the events: original saucer witness Kenneth Arnold had travelled to Tacoma to investigate claims by Fred Crisman and Harold Dahl, who reported recovering debris dropped by a flying saucer at Maury Island. Flight 105 pilot E.J. Smith joined the investigation, which received "lava rocks" from Crisman who claimed they were "flying disc debris". Army Air Force investigators were contacted, and two investigators flew to Tacoma where they took possession of the debris and departed in their B-25 to return to Hamilton Field.
Aftermath
In 1952, Arnold would author Coming of the Saucers detailing his 1947 investigation of Fred Crisman's claims. In 1968, Crisman would be subpoenaed by a New Orleans grand jury in the prosecution of a local man for the assassination of President John F. Kennedy—a prosecution that would later be dramatized in the 1991 Oliver Stone film JFK. In the late 1970s, the United States House Select Committee on Assassinations considered the possibility that Crisman may have been one of the "tramps" detained and photographed in the aftermath of the JFK assassination. Later authors like Bill Cooper would allege that Kennedy was assassinated because he intended to disclose the reality of UFOs.
1949
Winchell and the Soviets
On April 3, 1949, radio personality Walter Winchell broadcast the claim that it had been definitively established that the flying saucers were guided missiles fired from Russia. In response, the Air Force denied any such conclusion. The Air Force reportedly requested an FBI investigation into Winchell's claims, a request that was denied.
Keyhoe and the Air Force knowledge of UFOs
On December 26, 1949, True magazine published an article by Donald Keyhoe titled "The Flying Saucers Are Real". Keyhoe, a former Major in the US Marines, claimed that elements within the Air Force knew that saucers existed and had concluded they were likely 'inter-planetary'.
The article examined the Mantell UFO incident and quoted an unnamed pilot who opined that the Air Force's explanation "looks like a cover up to me". The Gorman Dogfight and the Chiles-Whitted UFO encounter were also described. The article cited a supposed report from Air Material Command and claimed a "rocket authority at Wright field" had concluded saucers were interplanetary. Concern over a public panic, of the kind that supposedly occurred after the 1938 War of the Worlds broadcast, is cited in the article as a possible motive for the cover up. Citing historic sources, Keyhoe speculated that similar sightings have likely occurred for at least several centuries.
The True article caused a sensation. Though such figures are always difficult to verify, Captain Edward J. Ruppelt, the first head of Project Blue Book, reported that "It is rumored among magazine publishers that Don Keyhoe's article in True was one of the most widely read and widely discussed magazine articles in history." When Keyhoe expanded the article into a book, The Flying Saucers Are Real (1950), it sold over half a million copies in paperback.
In March 1950, the Air Force denied "flying saucers" exist and further denied that they were US technology being covered-up.
Scully and alien bodies
In October and November 1949, journalist Frank Scully published two columns in Variety, claiming that dead extraterrestrial beings were recovered from a flying saucer crash, based on what he said was reported to him by a scientist involved. His 1950 book Behind the Flying Saucers expanded on the theme, adding that there had been two such incidents in Arizona and one in New Mexico, a 1948 incident that involved a saucer that was nearly in diameter. In January 1950, Time Magazine skeptically repeated stories of crashed saucers with humanoid occupants.
It was later revealed that Scully had been the victim of "two veteran confidence artists".
In 1952 and 1956, True magazine published articles by San Francisco Chronicle reporter John Philip Cahn that exposed Newton and "Dr. Gee" (identified as Leo A. GeBauer) as oil con artists who had hoaxed Scully.
1950s
The 1950s saw an increase in both governmental and civilian investigative efforts and reports of public disinformation and suppression of evidence.
The UK Ministry of Defence's UFO Project has its roots in a study commissioned in 1950 by the MOD's then Chief Scientific Adviser, radar scientist Henry Tizard. As a result of his insistence that UFO sightings should not be dismissed without some form of proper scientific study, the department set up the Flying Saucer Working Party (or FSWP).
In August 1950, Montanan baseball manager Nicholas Mariana filmed several UFOs with his color 16mm camera. Project Blue Book was called in and, after inspecting the film, Mariana claimed it was returned to him with critical footage removed, clearly showing the objects as disc-shaped. The incident sparked nationwide media attention.
In April 1952, Life Magazine published "Have We Visitors From Space?", which was sympathetic to the extraterrestrial hypothesis. The article is thought to have contributed to the 1952 UFO flap.
Canadian radio engineer Wilbert B. Smith, who worked for the Canadian Department of Transport, was interested in flying saucer propulsion technology and wondered if the assertions in the just-published Scully and Keyhoe books were factual. In September 1950, he had the Canadian embassy in Washington D.C. arrange contact with U.S. officials to try to discover the truth of the matter. Smith was briefed by Robert Sarbacher, a physicist and consultant to the Defense Department's Research and Development Board. Other correspondence, having to do with Keyhoe needing to get clearance to publish another article on Smith's theories of UFO propulsion, indicated that Vannevar Bush and his group were operating out of the Research and Development Board. Smith then briefed superiors in the Canadian government, leading to the establishment of Project Magnet, a small Canadian government UFO research effort. Canadian documents and Smith's private papers were uncovered in the late 1970s, and by 1984, other alleged documents emerged claiming the existence of a highly secret UFO oversight committee of scientists and military people called Majestic 12, again naming Vannevar Bush. Sarbacher was also interviewed in the 1980s and corroborated the information in Smith's memos and correspondence. Throughout the 1950s and early 1960s, Smith granted public interviews, and among other things stated that he had been lent crashed UFO material for analysis by a highly secret U.S. government group which he wouldn't name.
A few weeks after the Robertson Panel, the Air Force issued Regulation 200–2, ordering air base officers to publicly discuss UFO incidents only if they were judged to have been solved, and to classify all the unsolved cases to keep them out of the public eye. In addition, UFO investigative duties started to be taken on by the newly formed 4602nd Air Intelligence Squadron (AISS) of the Air Defense Command. The 4602nd AISS was tasked with investigating only the most important UFO cases having intelligence or national security implications. These were deliberately siphoned away from Blue Book, leaving Blue Book to deal with the more trivial reports.
Keyhoe and The Flying Saucer Conspiracy
In 1955, Donald Keyhoe authored a new book that pointedly accused elements of the United States government of engaging in a conspiracy to cover up knowledge of flying saucers. Keyhoe claims the existence of a "silence group" orchestrating this conspiracy. Historian of Folklore Curtis Peebles argues: "The Flying Saucer Conspiracy marked a shift in Keyhoe's belief system. No longer were flying saucers the central theme; that now belonged to the silence group and its coverup. For the next two decades Keyhoe's beliefs about this would dominate the flying saucer myth."
The book features claims of a possible discovery of an "orbiting space base" or a "moon base", knowledge of which might trigger a public panic. The Flying Saucer Conspiracy also incorporated legends of the Bermuda Triangle disappearances. Keyhoe sensationalized claims, ultimately stemming from optical illusions, of unusual structures on the moon.
Morris Jessup, Carl Allen and the Philadelphia Experiment
In 1955, Morris K. Jessup achieved some notoriety with his book The Case for the UFO, in which he argued that UFOs represented a mysterious subject worthy of further study. Jessup speculated that UFOs were "exploratory craft of 'solid' and 'nebulous' character." Jessup also "linked ancient monuments with prehistoric superscience".
In January 1956, Jessup began receiving a series of letters from "Carlos Miguel Allende", later identified as Carl Meredith Allen. "Allende" warned Jessup not to investigate the levitation of UFOs and spun a tale of a dangerous experiment in which a Navy ship was successfully made invisible, only to inexplicably teleport from Philadelphia to Norfolk, Virginia, before reappearing back in Philadelphia. The ship's crew was supposed to have suffered various side effects, including insanity, intangibility, and being "frozen" in place. By 1975, the Philadelphia Experiment was being promoted by paranormal author Charles Berlitz and in 1984, the legend was adapted into a fictional film.
In 1957, Jessup was invited to the Office of Naval Research where he was shown an annotated copy of his book that was filled with handwritten notes in its margins, written with three different shades of blue ink, appearing to detail a debate among three individuals. They discussed ideas about the propulsion for flying saucers, alien races, and express concern that Jessup was too close to discovering their technology. Jessup noticed the handwriting of the annotations resembled the letters he received from Allen. (Twelve years later, Allen would say that he authored all of the annotations in order "to scare the hell out of Jessup.")
The Jessup book with Allen's scribbled commentaries gained a life of its own when the Varo Manufacturing Corporation of Garland, Texas, who did contract work for ONR, began producing mimeographed copies of the book with Allen's annotations and Allen's letters to Jessup. These copies came to be known as the "Varo edition." This became the heart of many "Philadelphia Experiment" books, documentaries, and movies to come. Over the years various writers and researchers who tried to get more information from Carl Allen found his responses elusive, or could not find him at all.
Edward Ruppelt and The Report on Unidentified Flying Objects
Ruppelt was a captain in the US Air Force who served as director of official investigations into UFOs: Project Grudge and Project Bluebook.
In 1956, Ruppelt authored The Report on Unidentified Flying Objects, a book that has been called the "most significant" of its era. The book discussed the Twining memo which initiated UFO investigation and the rejected 1948 "Estimate of the Situation". Ruppelt criticized the Air Force's handling of UFO investigations. Historian Curtis Peebles concludes that the book "should have ended the speculation about an Air Force cover-up. In fact, Ruppelt's statements were converted into support for the cover-up idea."
Al Chop and The True Story of Flying Saucers
In 1956, a film titled Unidentified Flying Objects: The True Story of Flying Saucers dramatized the events of the early 1950s from the point of view of Air Force press officer Albert M. Chop. Chop had served as the Press Chief for Air Materiel Command in Dayton, Ohio until 1951 when he transferred to the Pentagon to serve as the press spokesman for Project Bluebook. The film incorporates interviews with actual eyewitnesses and historic footage of unidentified objects, concluding with a dramatization of the 1952 UFO flap that featured repeated sightings over Washington D.C.
Gray Barker and the 'Men in Black'
1956 saw the publication of Gray Barker's They Knew Too Much About Flying Saucers, the book which publicized the idea of Men in Black who appear to UFO witnesses and warn them to keep quiet. There has been continued speculation that the men in black are government agents who harass and threaten UFO witnesses.
According to the Skeptical Inquirer article "Gray Barker: My Friend, the Myth-Maker", there may have been "a grain of truth" to Barker's writings on the Men in Black, in that government agencies did attempt to discourage public interest in UFOs during the 1950s. However, Barker is thought to have greatly embellished the facts of the situation. In the same Skeptical Inquirer article, Sherwood revealed that, in the late 1960s, he and Barker collaborated on a brief fictional notice alluding to the Men in Black, which was published as fact first in Raymond A. Palmer's Flying Saucers magazine and some of Barker's own publications. In the story, Sherwood (writing as "Dr. Richard H. Pratt") claimed he was ordered to silence by the "blackmen" after learning that UFOs were time-travelling vehicles. Barker later wrote to Sherwood, "Evidently the fans swallowed this one with a gulp."
Keyhoe and 'Enigma of the Skies'
On January 22, 1958, Donald Keyhoe appeared on the CBS television show Armstrong Circle Theatre in an episode titled "UFO: Enigma of the Skies". During the live broadcast, Keyhoe deviated from the pre-approved script, announcing "now I’m going to reveal something that has never been disclosed before". At this point in the broadcast, Keyhoe's microphone was cut. CBS later explained: "This program had been carefully cleared for security reasons. Therefore, it was the responsibility of this network to insure [sic] performance in accordance with pre-determined security standards. Any indication that there would be a deviation might lead to statements that neither this network nor the individuals on the program were authorized to release."
1960s
Throughout much of the 1960s, atmospheric physicist James E. McDonald suggested—via lectures, articles and letters—that the U.S. Government was mishandling evidence that would support the extraterrestrial hypothesis.
Jacques Vallée and the "Pentacle Memorandum"
In June 1967, researcher Jacques Vallée was tasked with organizing files collected by Project Bluebook investigator J. Allen Hynek. Among those files, Vallée found a memo dated 9 January 1953 addressed to an assistant of Edward J. Ruppelt, an Air Force officer assigned to Bluebook. The memo was signed "H.C. Cross", but Vallée elected to refer to the author under the pseudonym "Pentacle".
The memo referred to a previously unknown analysis of several thousand UFO reports, along with calls for agreements about "what can and what cannot be discussed" with the 1953 Robertson Panel. Writing in his 1967 journal, Vallée expressed the opinion that the memo, if it were published, "would cause an even bigger uproar among foreign scientists than among Americans: it would prove the devious nature of the statements made by the Pentagon all these years about the non-existence of UFOs".
1970s
Emenegger documentary and the landing at Holloman Air Force Base
Jerome Clark cites a 1973 encounter as perhaps the earliest suggestion that the U.S. government was involved with ETs. That year, Robert Emenegger and Allan Sandler of Los Angeles, California, were in contact with officials at Norton Air Force Base in order to make a documentary film. Emenegger and Sandler report that Air Force officials (including Paul Shartle) suggested incorporating UFO information in the documentary, including as its centerpiece genuine footage of a 1971 UFO landing at Holloman Air Force Base in New Mexico. Furthermore, says Emenegger, he was given a tour of Holloman AFB and was shown where officials conferred with aliens. This was supposedly not the first time the U.S. had met these aliens, as Emenegger reported that his U.S. military sources had "been monitoring signals from an alien group with which they were unfamiliar, and did their ET guests know anything about them? The ETs said no." The documentary was released in 1974 as UFOs: Past, Present, and Future (narrated by Rod Serling), containing only a few seconds of the Holloman UFO footage, with the remainder of the landing depicted with illustrations and re-enactments.
In 1988, Shartle said that the film in question was genuine, and that he had seen it several times.
In 1976, a televised documentary report, UFOs: It Has Begun, written by Robert Emenegger, was presented by Rod Serling, Burgess Meredith and José Ferrer. Some sequences were recreated based upon the statements of eyewitness observers, together with the findings and conclusions of governmental civil and military investigations. The documentary uses a hypothetical UFO landing at Holloman AFB as a backdrop.
Emenegger's 1973 depiction of a landing at Holloman is widely noted for its "striking" similarities to Steven Spielberg's 1977 depiction of a landing at Devil's Tower in the film Close Encounters of the Third Kind. In the 2013 documentary Mirage Men, ufologist Richard Dolan discussed the Emenegger documentary, saying, "I have wondered [if] that film, I think as many people have wondered, was an abortive attempt at some kind of 'Disclosure'."
J. Allen Hynek and "Cosmic Watergate"
J. Allen Hynek was an American astronomer who served as scientific advisor to UFO studies undertaken by the U.S. Air Force under three projects: Project Sign (1947–1949), Project Grudge (1949–1951) and Project Blue Book (1952–1969). Hynek had drawn ridicule for his most famous debunking, in which he suggested that a mass sighting over Michigan may have been caused by "swamp gas".
By 1974, the former skeptic was publicly charging that Bluebook was "a Cosmic Watergate". Hynek claimed 20% of Bluebook cases were unexplained. Fellow ufologists such as Stanton Friedman echoed Hynek's "Cosmic Watergate" accusations. In 1976, pulp publisher Ray Palmer argued "there is a definite link between flying saucers, The Shaver Mystery, The Kennedy’s assassinations, Watergate and Fred Crisman."
Alternative 3 and a secret space program
Jerome Clark comments that many UFO conspiracy theory tales "can be traced to a mock documentary Alternative 3, broadcast on British television on June 20, 1977 (but intended for April Fools' Day), and subsequently turned into a paperback book."
According to the fictional research presented in the episode, missing scientists were involved in a secret American–Soviet plan in outer space, and interplanetary space travel had been possible for much longer than was commonly accepted. The episode featured a fictional Apollo astronaut who claimed to have stumbled on a mysterious lunar base during his moonwalk.
It was claimed that scientists had determined that the Earth's surface would be unable to support life for much longer, due to pollution leading to catastrophic climate change. Physicist "Dr Carl Gerstein" (played by Richard Marner) claimed to have proposed in 1957 that there were three alternatives to this problem. The first alternative was the drastic reduction of the human population on Earth. The second alternative was the construction of vast underground shelters to house government officials and a cross section of the population until the climate had stabilized. The third alternative, the so-called "Alternative 3", was to populate Mars via a way station on the Moon. The final moments of the film feature the discovery of animal life on the surface of Mars.
Paul Bennewitz
The late 1970s also saw the beginning of controversy centered on Paul Bennewitz of Albuquerque, New Mexico, a businessman who came to believe he was intercepting extraterrestrial communications and who was reportedly fed fabricated UFO material as part of an Air Force disinformation effort.
1980s
Jesse Marcel and Roswell conspiracy theories
In February 1978, UFO researcher Stanton Friedman interviewed Jesse Marcel, the only person known to have accompanied the Roswell debris from where it was recovered to Fort Worth where reporters saw material that was claimed to be part of the recovered object. Marcel's statements contradicted those he made to the press in 1947.
In November 1979, Marcel's first filmed interview was featured in a documentary titled UFOs Are Real, co-written by Friedman. The film had a limited release but was later syndicated for broadcast.
On February 28, 1980, sensationalist tabloid the National Enquirer brought large-scale attention to the Marcel story. On September 20, 1980, the TV series In Search of... aired an interview where Marcel described his participation in the 1947 press conference:
"They wanted some comments from me, but I wasn't at liberty to do that. So, all I could do is keep my mouth shut. And General Ramey is the one who discussed – told the newspapers, I mean the newsman, what it was, and to forget about it. It is nothing more than a weather observation balloon. Of course, we both knew differently."
Marcel gave a final interview to HBO's America Undercover which aired in August 1985. In all his statements, Marcel consistently denied the presence of bodies. Between 1978 and the early 1990s, UFO researchers such as Stanton T. Friedman, William Moore, Karl T. Pflock, and the team of Kevin D. Randle and Donald R. Schmitt interviewed several dozen people who claimed to have had a connection with the events at Roswell in 1947.
In the 1990s, the US military published two reports disclosing the true nature of the crashed aircraft: a surveillance balloon from Project Mogul. Nevertheless, the Roswell incident continues to be of interest to the media, and conspiracy theories surrounding the event persist. Roswell has been described as "the world's most famous, most exhaustively investigated and most thoroughly debunked UFO claim".
Gordon Cooper
By 1981, astronaut Gordon Cooper reported suppression of a flying saucer movie filmed in high clarity by two Edwards AFB range photographers on May 3, 1957. Cooper said he viewed developed negatives of the object, clearly showing a dish-like object with a dome on top and something like holes or ports in the dome. When later interviewed by James McDonald, the photographers and another witness confirmed the story. Cooper said military authorities then picked up the film and neither he nor the photographers ever heard what happened to it. The incident was also reported in a few newspapers, such as the Los Angeles Times. The official explanation was that the photographers had filmed a weather balloon distorted by hot desert air.
Majestic 12
The so-called Majestic 12 documents surfaced in 1984, suggesting that there was secret, high-level U.S. government interest in UFOs dating to the 1940s. Upon examination, the Federal Bureau of Investigation (FBI) declared the documents to be "completely bogus", and many ufologists consider them to be an elaborate hoax.
The term "Extraterrestrial Biological Entities" (or EBEs) was used in the MJ-12 documents.
Linda Moulton Howe and cattle mutilations
Linda Moulton Howe is an advocate of conspiracy theories that cattle mutilations are of extraterrestrial origin and of speculation that the U.S. government is involved with aliens.
George C. Andrews and Milton William Cooper
In 1986, conspiracy theorist George C. Andrews authored Extra-Terrestrials Among Us, accusing the CIA of the Kennedy assassination. Scholar of extremism Michael Barkun notes that "Andrews's political views are almost indistinguishable from those associated with militias, only his placement of extraterrestrials at the pinnacle of conspiracies identifies him as a ufologist." According to Barkun, "the publication of Extra-Terrestrials Among Us marked the beginning of a feverish period of UFO conspiracism, from 1986 to 1989."
Citing Andrews as a source, in 1991 the UFO conspiracy author Bill Cooper published the influential conspiracy work Behold a Pale Horse which claimed that Kennedy was killed after he "informed Majestic 12 that he intended to reveal the presence of aliens to the American people". Behold a Pale Horse became 'wildly popular' with conspiracy theorists and went on to be one of the most-read books in the US prison system. According to Michael Barkun, the theories of Andrews and Cooper helped create "a conspiracist form of UFO speculation, which Jerome Clark refers to as ufology's 'dark side'."
UFO Cover-Up?: Live!
In 1980, the term "Area 51" was used in the popular press after Delta Force trained there for Operation Eagle Claw, the failed attempt to rescue American hostages in Iran. Press again discussed the site in 1984 after the government seized adjacent land.
On October 14, 1988, actor Mike Farrell hosted UFO Cover Up? Live, a two-hour television special "focusing on the government's handling of information regarding UFOs" and "whether there has been any suppression of evidence supporting the existence of UFOs".
Bob Lazar and Area 51
In November 1989, Bob Lazar appeared in a special interview with investigative reporter George Knapp on Las Vegas TV station KLAS to discuss his alleged employment at S-4. In his interview with Knapp, Lazar said he first thought the saucers were secret, terrestrial aircraft, whose test flights must have been responsible for many UFO reports. Gradually, on closer examination and from having been shown multiple briefing documents, Lazar came to the conclusion that the discs must have been of extraterrestrial origin. He claims that they use moscovium, an element that decays in a fraction of a second, to warp space, and that "Grey" aliens are from the Zeta Reticuli star system. According to the Los Angeles Times, he never obtained the degrees he claims to hold from MIT and Caltech. By 1991, Nevada press reported tourists traveling to the Groom Lake region in hopes of glimpsing UFOs.
1990s
The Branton Files are a series of documents espousing various conspiracy theories circulated on the internet since at least the mid-1990s. They are most often attributed to Bruce Alan Walton who claims to have been a victim of alien abduction and had contact through "altered states of consciousness" with humans "living in the inner earth". The files have been characterized as "high fantasy" filled with "complex and convoluted conspiracism".
Phil Schneider and Dulce Base
In 1995, a man calling himself Philip Schneider made a few appearances at UFO conventions, espousing essentially a new version of the theories mentioned above. Schneider claimed to be the son of a U-boat commander who was captured by the Allies and switched sides. According to Schneider, his father had been part of the Philadelphia Experiment. Schneider claimed to have played a role in the construction of Deep Underground Military Bases (DUMBs) across the United States, and as a result he said that he had been exposed to classified information of various sorts as well as having personal experiences with EBEs. He claimed to have survived the Dulce Base catastrophe and decided to tell his tale.
Schneider died on January 17, 1996; the death was ruled a suicide, though folklore among some of his followers holds that he may have been murdered.
2000s
2003 saw the publication of Alien Encounters, by Chuck Missler and Mark Eastman, which primarily re-stated the notions presented above (especially Cooper's) and presented them as fact.
MoD secret files
Eight files on UFO sightings from 1978 to 1987 were first released on May 14, 2008, to the National Archives' website by the British Ministry of Defence, with two hundred files set to be made public by 2012. The files consist of correspondence from the public sent to government officials, such as the MoD and Margaret Thatcher, including reports of "lights in the sky" from Britons, and can be downloaded. The MoD released the files in response to requests under the Freedom of Information Act. Copies of Lt. Col. Halt's letter to the U.K. Ministry of Defence regarding the sighting at RAF Woodbridge (see above) had been routinely released (without additional comment) by the USA's base public affairs staff throughout the 1980s until the base closed.
Disclosure
In the early 2000s, the concept of "disclosure" became increasingly popular in the UFO conspiracy community: the belief that the government had classified and withheld information on alien contact and that full disclosure was needed. The cause was taken up by activist lobbying groups.
In 1993, Steven M. Greer founded the Disclosure Project to promote the concept. In May 2001, Greer held a press conference at the National Press Club in Washington, D.C. that demanded Congress hold hearings on "secret U.S. involvement with UFOs and extraterrestrials". It was described by an attending BBC reporter as "the strangest ever news conference hosted by Washington's august National Press Club". The Disclosure Project's claims were met with derision by skeptics and spokespeople for the United States Air Force.
In 2013, the production company CHD2, LLC held a "Citizen Hearing on Disclosure" at the National Press Club in Washington, D.C. from 29 April to 3 May 2013. The group paid former U.S. Senator Mike Gravel and former Representatives Carolyn Cheeks Kilpatrick, Roscoe Bartlett, Merrill Cook, Darlene Hooley, and Lynn Woolsey $20,000 each to participate, and to preside over panels of academics and former government and military officials discussing UFOs and extraterrestrials.
Other such groups include Citizens Against UFO Secrecy, founded in 1977.
The name of the German website Disclose.tv, which was initially a conspiracy forum focused on UFOs, ghosts and paranormal phenomena, references the concept.
On December 16, 2017, The New York Times broke the story of the Advanced Aerospace Threat Identification Program, a Defense Intelligence Agency program to study "unidentified aerial phenomenon". The program's director, Luis Elizondo, has claimed there is a government conspiracy to suppress evidence that UFOs are of non-human origin. From 2019 to 2021, Dave Grusch was the representative of the National Reconnaissance Office to the Unidentified Aerial Phenomena Task Force. Beginning in 2023, Grusch publicly claimed that elements of the US government and its contractors were covering up evidence of UFOs and their reverse-engineering.
Allegations of evidence suppression
Allegations of suppression of UFO-related evidence have persisted for many decades. Some conspiracy theories also claim that some governments might have removed, destroyed, or suppressed physical evidence; some examples follow.
On July 7, 1947, William Rhodes photographed an unusual object over Phoenix, Arizona. The photos appeared in a Phoenix newspaper and a few other papers. An Army Air Force intelligence officer and an FBI agent interviewed Rhodes on August 29 and convinced him to surrender the negatives, which he did the next day. He was informed he would not get them back, but later he tried, unsuccessfully, to retrieve them. The photos were analyzed and subsequently appeared in some classified Air Force UFO intelligence reports. (Randle, 34–45, full account)
A June 27, 1950, movie of a "flying disk" over Louisville, Kentucky, taken by a Louisville Courier-Journal photographer, had the USAF directors of counterintelligence (AFOSI) and intelligence discussing in memos how best to obtain the movie and interview the photographer without revealing Air Force interest. One memo suggested the FBI be used, but FBI involvement was then precluded. Another memo said "it would be nice if OSI could arrange to secure a copy of the film in some covert manner," but if that was not feasible, one of the Air Force scientists might have to negotiate directly with the newspaper. In a later interview, the photographer confirmed meeting with military intelligence and said he still had the film in his possession until then, but refused to say what happened to the film after that.
In another 1950 movie incident from Montana, Nicholas Mariana filmed some unusual aerial objects and eventually turned the film over to the U.S. Air Force, but insisted that the first part of the film, clearly showing the objects as spinning discs, had been removed when it was returned to him.
On January 22, 1958, when NICAP director Donald Keyhoe appeared on CBS television, his statements on UFOs were censored by the Air Force. During the show when Keyhoe tried to depart from the censored script to "reveal something that has never been disclosed before," CBS cut the sound, later stating Keyhoe was about to violate "predetermined security standards" and about to say something he was not "authorized to release." Conspiracy theorists claim that what Keyhoe was about to reveal were four publicly unknown military studies concluding UFOs were interplanetary (including the 1948 Project Sign Estimate of the Situation and Blue Book's 1952 engineering analysis of UFO motion). (Good, 286–287; Dolan 293–295)
A March 1, 1967 memo directed to all USAF divisions, from USAF Lt. General Hewitt Wheless, Assistant Vice Chief of Staff, stated that unverified information indicated that unknown individuals, impersonating USAF officers and other military personnel, had been harassing civilian UFO witnesses, warning them not to talk, and also confiscating film, referring specifically to the Heflin incident. AFOSI was to be notified if any personnel were to become aware of any other incidents. (Document in Fawcett & Greenwood, 236.)
According to one theory related to the assassination of President John F. Kennedy, the CIA killed Kennedy in order to prevent him from leaking information to the Soviet Union about a covert program to reverse-engineer alien technology (i.e., Majestic 12).
Nick Cook, an aviation investigative journalist for Jane's Information Group, researcher of Billion Dollar Secret and author of The Hunt for Zero Point, claims to have uncovered documentary evidence that top-secret US defense technology has been developed by government-backed defense-industry programs since the 1940s, drawing on research conducted by Nazi scientists during World War II. According to Cook, this research was recovered by Allied military intelligence, taken to the U.S., and developed further with the collaboration of the same former German scientists at top-secret facilities established at White Sands, New Mexico, and later at Area 51. Allegedly, this resulted in the production of a real-world prototype operational supersonic craft, which was tested and used in clandestine military exercises, with other developments later incorporated into spy aircraft tasked with overflying hostile countries. In Cook's account, the UFO story that evidence of alien technology is being suppressed, removed, or destroyed was generated and then promoted by the CIA, beginning in 1947, as false-lead disinformation to cover it all up for the sake of national security, particularly during the Cold War, a time when (his investigations found) the Soviet Union too was developing its own top-secret high-tech UFO craft. Cook's conclusions, alleging suppression of evidence of advanced human technology rather than alien technology, together with what he presents as declassified top-secret documents and blueprints, and his interviews of various experts (some of doubtful reliability), were developed and broadcast as a feature documentary on British television in 2005 as "UFOs: The Secret Evidence" and in the US in 2006 as a two-part episode of the History Channel's UFO Files, retitled "An Alien History of Planet Earth", with an added introduction by actor William Shatner. The History Channel program teaser promised "...a look at rumors of classified military aircraft incorporating alien technology into their designs."
In 2013, Sen. Mike Gravel claimed that the government was suppressing evidence of extraterrestrials.
Benjamin Radford has pointed out how unlikely such suppression of evidence is given that "[t]he UFO coverup conspiracy would have to span decades, cross international borders, and transcend political administrations" and that "all of the world's governments, in perpetuity, regardless of which political party is in power and even among enemies, [would] have colluded to continue the coverup."
In popular fiction
Works of popular fiction have included premises and scenes in which a government intentionally prevents disclosure to its populace of the discovery of non-human, extraterrestrial intelligence. Motion picture examples include 2001: A Space Odyssey (as well as the earlier novel by Arthur C. Clarke), Easy Rider, the Steven Spielberg films Close Encounters of the Third Kind and E.T. the Extra-Terrestrial, Hangar 18, Total Recall, Men in Black, and Independence Day. Television series and films including The X-Files, Dark Skies, and Stargate have also featured efforts by governments to conceal information about extraterrestrial beings. The plot of the Sidney Sheldon novel The Doomsday Conspiracy involves a UFO conspiracy.
In March 2001, former astronaut and United States Senator John Glenn appeared on an episode of the TV series Frasier playing a fictional version of himself who confesses to a UFO coverup.
Timeline
March 1946 - Palmer's Amazing Stories publishes Shaver's "A Warning to Future Humanity"
June 1946 - Palmer's Amazing Stories publishes Crisman's letter corroborating Shaver's claims
July 1947 - Palmer hires original saucer witness Kenneth Arnold to investigate Crisman's Maury Island incident; USAF investigators killed in plane crash
October 1947 - Palmer's Amazing Stories publishes a letter by Shaver saying the truth behind the discs "will never be disclosed to common people".
April 3, 1949 - Winchell alleges cover-up of saucers being Soviet
October 1949 - Scully's article on Aztec hoax introduces alien bodies
December 26, 1949 - Keyhoe's article "The Flying Saucers Are Real" published in True
1952 - Arnold's The Coming of the Saucers introduces Maury Island hoax to wider audience
April 1952 - "Have We Visitors From Space" published in Life Magazine
July 31, 1952 - Samford press conference
1955 - Keyhoe authors The Flying Saucer Conspiracy, linking UFOs and the Bermuda Triangle
1955 - Morris Jessup authors The Case for the UFO, an anonymously annotated copy of which introduces "The Philadelphia Experiment".
1956 - Ruppelt authors The Report on Unidentified Flying Objects
1956 - Chop's film Unidentified Flying Objects: The True Story of Flying Saucers released
1956 - Barker authors They Knew Too Much About Flying Saucers, introducing the Men in Black
January 22, 1958 - Keyhoe mic cut on live TV
March 20-21, 1966 - Michigan "swamp gas" UFO reports occur; Hynek's explanation is ridiculed
April 3, 1968 - 2001: A Space Odyssey released
October 31, 1968 - Crisman subpoenaed in Clay Shaw JFK assassination case
January 9, 1969 - Crisman accused of being one of the three tramps
1974 - Emenegger's film UFOs: Past, Present, and Future is released, introducing the summoned landing
1974 - Hynek alleges a 'Cosmic Watergate'
1976 - Palmer links "flying saucers, The Shaver Mystery, The Kennedy’s assassinations, Watergate and Fred Crisman"
November 16, 1977 - Close Encounters of the Third Kind released
November 1979 - Jesse Marcel suggests Roswell was extraterrestrial in Friedman documentary
1980 - Moulton Howe's documentary A Strange Harvest links cattle mutilations to UFOs
October 14, 1988 - UFO Cover Up? Live introduces Majestic 12 and Area 51
July 1, 1989 - Bill Moore addresses MUFON
November 1989 - Bob Lazar first televised interview
1991 - Cooper's Behold a Pale Horse published
September 10, 1993 - The X-Files premieres
July 3, 1996 - Independence Day premieres
July 2, 1997 - Men in Black premieres
December 16, 2017 - New York Times publishes story about AATIP and the Nimitz case
See also
Alien abduction
Area 51
Brookings Report
Cattle mutilation
Crop circle
Flying saucer
List of alleged extraterrestrial beings
List of major UFO sightings
Men in black
Planetary objects proposed in religion, astrology, ufology and pseudoscience
Project Blue Book
Rendlesham Forest incident
Roswell incident
Storm Area 51
Notes and references
Footnotes
Citations
Bibliography
Further reading
Peebles, Curtis. Watch the Skies! A Chronicle of the Flying Saucer Myth. Washington, DC: Smithsonian Institution, 1994.
External links
CIA's Role in the Study of UFOs, 1947–90
National Security Agency UFO Documents Index
20th Century UFO Conspiracies, a lecture by Emory University Professor Felix Harcourt
UFO
Government responses to UFOs
UFO culture
UFO-related phenomena
Ufology
Unidentified flying objects | UFO conspiracy theories | [
"Technology"
] | 9,431 | [
"Space and astronomy conspiracy theories",
"UFO conspiracy theories",
"Science and technology-related conspiracy theories"
] |
170,522 | https://en.wikipedia.org/wiki/Consumerism | Consumerism is a social and economic order in which the aspirations of many individuals include the acquisition of goods and services beyond those necessary for survival or traditional displays of status. It emerged in Western Europe before the Industrial Revolution and became widespread around 1900. In economics, consumerism refers to policies that emphasize consumption. It is the consideration that the free choice of consumers should strongly inform the choice by manufacturers of what is produced and how, and therefore influence the economic organization of a society.
Consumerism has been criticized by both individuals who choose other ways of participating in the economy (i.e. choosing simple living or slow living) and environmentalists concerned about its impact on the planet. Experts often assert that consumerism has physical limits, such as growth imperative and overconsumption, which have larger impacts on the environment. This includes direct effects like overexploitation of natural resources or large amounts of waste from disposable goods and significant effects like climate change. Similarly, some research and criticism focuses on the sociological effects of consumerism, such as reinforcement of class barriers and creation of inequalities.
Evolution of the term
The term "consumerism" has several definitions. These definitions may not be related to each other and confusingly, they conflict with each other.
In a 1955 speech, John Bugas, a vice president of the Ford Motor Company, coined the term "consumerism" as a substitute for "capitalism" to better describe the American economy:
Bugas's definition aligned with Austrian economics founder Carl Menger's conception of consumer sovereignty, as laid out in his 1871 book Principles of Economics, whereby consumer preferences, valuations, and choices control the economy entirely. This view stood in direct opposition to Karl Marx's critique of the capitalist economy as a system of exploitation.
For social critic Vance Packard, however, "consumerism" was not a positive term about consumer practices but rather a negative term, meaning excessive materialism and wastefulness. In the advertisements for his 1960 book The Waste Makers, the word "consumerism" was prominently featured in a negative way.
One sense of the term relates to efforts to support consumers' interests. By the early 1970s it had become the accepted term for the field and began to be used in these ways:
Consumerism is the concept that consumers should be informed decision makers in the marketplace. In this sense consumerism is the study and practice of matching consumers with trustworthy information, such as product testing reports.
Consumerism is the concept that the marketplace itself is responsible for ensuring social justice through fair economic practices. Consumer protection policies and laws compel manufacturers to make products safe.
Consumerism refers to the field of studying, regulating, or interacting with the marketplace. The consumer movement is the social movement which refers to all actions and all entities within the marketplace which give consideration to the consumer.
While the above definitions were becoming established, other people began using the term consumerism to mean "high levels of consumption". This definition has gained popularity since the 1970s and began to be used in these ways:
Consumerism is the selfish and frivolous collecting of products, or economic materialism. In this sense consumerism is negative and in opposition to positive lifestyles of anti-consumerism and simple living.
Consumerism is a force from the marketplace which destroys individuality and harms society. It is related to globalization and in protest against this some people promote the "anti-globalization movement".
History
Origins
The consumer society developed throughout the late 17th century and the 18th century. In "Luxury and War: Reconsidering Luxury Consumption in Seventeenth-Century England", Peck addresses assertions made by consumption scholars about writers such as Nicholas Barbon and Bernard Mandeville, and how their emphasis on the financial worth of luxury changed society's perceptions of luxury. The article argues that a significant transformation occurred in the eighteenth century, when the focus shifted from court-centered luxury spending to consumer-driven luxury consumption, fueled by middle-class purchases of new products.
The English economy expanded significantly in the 17th century due to new methods of agriculture that rendered it feasible to cultivate a larger area. A time of heightened demand for luxury goods and increased cultural interaction was reflected in the wide range of luxury products that the aristocracy and affluent merchants imported from nations like Italy and the Low Countries. This expansion of luxury consumption in England was facilitated by state policies that encouraged cultural borrowing and import substitution, hence enabling the purchase of luxury items. Luxury goods included sugar, tobacco, tea, and coffee; these were increasingly grown on vast plantations (historically by slave labor) in the Caribbean as demand steadily rose. In particular, sugar consumption in Britain increased by a factor of 20 during the 18th century.
Furthermore, the non-importation movement commenced in the 18th century, more precisely from 1764 to 1776, as Witkowski's article "Colonial Consumers in Revolt: Buyer Values and Behavior during the Nonimportation Movement, 1764–1776" discusses. He describes the developing consumer culture of colonial America, in which an emphasis on efficiency and economical consumption gave way to a preference for comfort, convenience, and imported products. During this time of transformation, colonial consumers had to choose between rising material desires and conventional values.
Culture of consumption
The pattern of intensified consumption became particularly visible in the 17th century in London, where the gentry and prosperous merchants took up residence and promoted a culture of luxury and consumption that slowly extended across socio-economic boundaries. Marketplaces expanded as shopping centres, such as the New Exchange, opened in 1609 by Robert Cecil in the Strand. Shops started to become important as places for Londoners to meet and socialize and became popular destinations alongside the theatre. From 1660, Restoration London also saw the growth of luxury buildings as advertisements for social position, with speculative architects like Nicholas Barbon and Lionel Cranfield operating. This then-scandalous line of thought caused great controversy with the publication of the influential work Fable of the Bees in 1714, in which Bernard Mandeville argued that a country's prosperity ultimately lay in the self-interest of the consumer.
The pottery entrepreneur and inventor, Josiah Wedgwood, noticed the way that aristocratic fashions, themselves subject to periodic changes in direction, slowly filtered down through different classes of society. He pioneered the use of marketing techniques to influence and manipulate the movement of prevailing tastes and preferences to cause the aristocracy to accept his goods; it was only a matter of time before the middle classes also rapidly bought up his goods. Other producers of a wide range of other products followed his example, and the spread and importance of consumption fashions became steadily more important. Since then, advertising has played a major role in fostering a consumerist society, marketing goods through various platforms in nearly all aspects of human life, and pushing the message that the potential customer's personal life requires some product.
Mass production
The Industrial Revolution dramatically increased the availability of consumer goods, although it was still primarily focused on the capital goods sector and industrial infrastructure (i.e., mining, steel, oil, transportation networks, communications networks, industrial cities, financial centers, etc.). The advent of the department store represented a paradigm shift in the experience of shopping. Customers could now buy an astonishing variety of goods, all in one place, and shopping became a popular leisure activity. While previously the norm had been the scarcity of resources, the industrial era created an unprecedented economic situation. For the first time in history products were available in outstanding quantities, at outstandingly low prices, therefore available to virtually everyone in the industrialized West.
By the turn of the 20th century, the average worker in Western Europe or the United States still spent approximately 80–90% of their income on food and other necessities. What was needed to propel consumerism was a system of mass production and consumption, exemplified by Henry Ford, an American car manufacturer. After observing the assembly lines in the meat-packing industry, Frederick Winslow Taylor brought his theory of scientific management to the organization of the assembly line in other industries; this unleashed incredible productivity and reduced the costs of commodities produced on assembly lines around the world.
Consumerism has long had intentional underpinnings, rather than just developing out of capitalism. As an example, Earnest Elmo Calkins noted to fellow advertising executives in 1932 that "consumer engineering must see to it that we use up the kind of goods we now merely use", while the domestic theorist Christine Frederick observed in 1929 that "the way to break the vicious deadlock of a low standard of living is to spend freely, and even waste creatively".
The older term and concept of "conspicuous consumption" originated at the turn of the 20th century in the writings of sociologist and economist Thorstein Veblen. The term describes an apparently irrational and confounding form of economic behaviour. Veblen's scathing proposal that this unnecessary consumption is a form of status display is made in darkly humorous observations like the following:
The term "conspicuous consumption" spread to describe consumerism in the United States in the 1960s, but was soon linked to debates about media theory, culture jamming, and its corollary productivism.
Television and American consumerism
The advent of the television in the late 1940s proved to be an attractive opportunity for advertisers, who could reach potential consumers in the home using lifelike images and sound. The introduction of mass commercial television positively impacted retail sales. The television motivated consumers to purchase more products and upgrade whatever they currently had. In the United States, a new consumer culture developed centered around buying products, especially automobiles and other durable goods, to increase their social status. Woojin Kim of the University of California, Berkeley, argues that sitcoms of this era also helped to promote the idea of suburbia.
According to Kim, the appeal of television advertising contributed to Americans' pursuit of higher social status, and watching television programs became an important part of people's cultural life. Television advertising combines sound and realistic imagery, making it easy for viewers to take an interest in, and develop a desire to buy, the advertised goods. At the same time, audiences intentionally or unintentionally compare and comment on advertised goods while watching, which attracts their attention, forms purchase intentions, and strengthens buying confidence. Television can therefore act as a medium that accelerates and shapes people's desire to buy products.
In the 21st century
Madeline Levine criticized what she saw as a large change in American culture – "a shift away from values of community, spirituality, and integrity, and toward competition, materialism and disconnection."
Businesses have realized that wealthy consumers are the most attractive targets of marketing. The upper class's tastes, lifestyles, and preferences trickle down to become the standard for all consumers. The not-so-wealthy consumers can "purchase something new that will speak of their place in the tradition of affluence". A consumer can have the instant gratification of purchasing an expensive item to improve social status.
Emulation is also a core component of 21st century consumerism. As a general trend, regular consumers seek to emulate those who are above them in the social hierarchy. The poor strive to imitate the wealthy and the wealthy imitate celebrities and other icons. The celebrity endorsement of products can be seen as evidence of the desire of modern consumers to purchase products partly or solely to emulate people of higher social status. This purchasing behavior may co-exist in the mind of a consumer with an image of oneself as being an individualist.
Cultural capital, the intangible social value of goods, is not solely generated by cultural pollution. Subcultures also manipulate the value and prevalence of certain commodities through the process of bricolage. Bricolage is the process by which mainstream products are adopted and transformed by subcultures. These items develop a function and meaning that differs from their corporate producer's intent. In many cases, commodities that have undergone bricolage often develop political meanings.
For example, Doc Martens, originally marketed as workers' boots, gained popularity with the punk movement and AIDS activism groups and became symbols of an individual's place in that social group. When corporate America recognized the growing popularity of Doc Martens, they underwent another change in cultural meaning through counter-bricolage. The widespread sale and marketing of Doc Martens brought the boots back into the mainstream. While corporate America reaped the ever-growing profits of the increasingly expensive boot and those modeled after its style, Doc Martens lost their original political association. Mainstream consumers used Doc Martens and similar items to create an "individualized" sense of identity by appropriating statement items from subcultures they admired.
When consumerism is considered as a movement to improve rights and powers of buyers in relation to sellers, there are certain traditional rights and powers of sellers and buyers.
The American Dream has long been associated with consumerism. According to Sierra Club's Dave Tilford, "With less than 5 percent of world population, the U.S. uses one-third of the world's paper, a quarter of the world's oil, 23 percent of the coal, 27 percent of the aluminum, and 19 percent of the copper."
China is the world's fastest-growing consumer market. According to biologist Paul R. Ehrlich, "If everyone consumed resources at the US level, you will need another four or five Earths."
With the development of the economy, consumers' awareness of protecting their rights and interests is growing, and consumer demand is growing. Online commerce has expanded the consumer market and enhanced consumer information and market transparency. The digital marketplace not only brings advantages and convenience but also causes many problems and increases the opportunities for consumers to suffer harm.
In the online environment, consumers' privacy is vulnerable to infringement, driven by the development of hacking techniques and the growth of the Internet. At the same time, the right to know is a basic consumer right: when purchasing goods and receiving services, consumers need accurate information about what is actually being provided. As consumer demand grows in the Internet era, protecting consumers' rights and interests becomes increasingly important to the healthy operation of the market.
Socially mediated political consumerism
Today's society has entered the era of entertainment and the Internet, and most people spend more time browsing on mobile phones than interacting face-to-face. The convenience of social media has a subtle impact on the public and changes people's consumption habits without their noticing. As the social Internet develops, with platforms such as Twitter, websites, and news and social media built around sharing and participation, consumers share product information and opinions through social media. At the same time, by learning a brand's reputation on social media, consumers can easily change their original attitude towards the brand. The information provided by social media helps consumers shorten the time spent evaluating products and making decisions, improving their initiative in purchase decisions and, to a certain extent, the quality of their shopping and decision-making.
Criticism
Andreas Eisingerich discusses in his article "Vision statement: Behold the extreme consumers...and learn to embrace them" that "In many critical contexts, consumerism is used to describe the tendency of people to identify strongly with products or services they consume, especially those with commercial brand-names and perceived status-symbolism appeal, e.g. a luxury car, designer clothing, or expensive jewelry". A major criticism of consumerism is that it serves the interests of capitalism.
Consumerism can take extreme forms, to the extent that consumers will sacrifice significant time and income not only to make purchases, but also to actively support a certain firm or brand. As stated by Gary Cross in his book "All Consuming Century: Why Consumerism Won in Modern America", "consumerism succeeded where other ideologies failed because it concretely expressed the cardinal political ideals of the century – liberty and democracy – and with relatively little self-destructive behavior or personal humiliation." He discusses how consumerism won in its forms of expression.
Tim Kasser, in his book The High Price of Materialism, examines how the culture of consumerism and materialism affects our happiness and well-being. The book argues that people who value wealth and possessions more than other things tend to have lower levels of satisfaction, self-esteem, and intimacy, and higher levels of anxiety, depression, and insecurity. The book also explores how materialistic values harm our relationships, our communities, and our environment, and suggests ways to reduce materialism and increase our quality of life.
Opponents of consumerism argue that many luxuries and unnecessary consumer-products may act as a social mechanism allowing people to identify like-minded individuals through the display of similar products, again utilizing aspects of status-symbolism to judge socioeconomic status and social stratification. Some people believe relationships with a product or brand name are substitutes for healthy human relationships lacking in societies, and along with consumerism, create a cultural hegemony, and are part of a general process of social control in modern society.
In 1955, economist Victor Lebow stated:
Figures who arguably do not wholly buy into consumerism include German historian Oswald Spengler (1880–1936), who said: "Life in America is exclusively economic in structure and lacks depth", and French writer Georges Duhamel (1884–1966), who held American materialism up as "a beacon of mediocrity that threatened to eclipse French civilization". Francis Fukuyama blames consumerism for moral compromises.
Moreover, some critics have expressed concern about the role commodities play in the definition of one's self. In his 1976 book Captains of Consciousness: Advertising and the Social Roots of the Consumer Culture, historian and media theorist Stuart Ewen introduced what he referred to as the "commodification of consciousness", and coined the term "commodity self" to describe an identity built by the goods we consume.
For example, people often identify as PC or Mac users, or define themselves as a Coke drinker rather than a Pepsi drinker. The ability to choose one product out of a great number of others allows a person to build a sense of "unique" individuality, despite the prevalence of Mac users or the nearly identical tastes of Coke and Pepsi. By owning a product from a certain brand, one's ownership becomes a vehicle of presenting an identity that is associated with the attitude of the brand. The idea of individual choice is exploited by corporations that claim to sell "uniqueness" and the building blocks of an identity. The invention of the commodity self is a driving force of consumerist societies, preying upon the deep human need to build a sense of self.
Environmental impact
Critics of consumerism point out that consumerist societies are more prone to damage the environment, contribute to global warming and use resources at a higher rate than other societies. Jorge Majfud says that "Trying to reduce environmental pollution without reducing consumerism is like combatting drug trafficking without reducing the drug addiction."
Pope Francis also critiques consumerism in his encyclical Laudato Si': On Care For Our Common Home. He critiques the harm consumerism does to the environment and states, "The analysis of environmental problems cannot be separated from the analysis of human, family, work-related and urban contexts, nor from how individuals relate to themselves, which leads in turn to how they relate to others and to the environment." Pope Francis believes the obsession with consumerism leads individuals further away from their humanity and obscures the interrelated nature between humans and the environment.
Another critic is James Gustave Speth. He argues that the growth imperative represents the main goal of capitalistic consumerism. In his book The Bridge at the Edge of the World he notes, "Basically, the economic system does not work when it comes to protecting environmental resources, and the political system does not work when it comes to correcting the economic system".
In an opinion segment of New Scientist magazine published in August 2009, reporter Andy Coghlan cited William Rees of the University of British Columbia and epidemiologist Warren Hern of the University of Colorado at Boulder saying that human beings, despite considering themselves civilized thinkers, are "subconsciously still driven by an impulse for survival, domination and expansion ... an impulse which now finds expression in the idea that inexorable economic growth is the answer to everything, and, given time, will redress all the world's existing inequalities."
According to figures presented by Rees at the annual meeting of the Ecological Society of America, human society is in a "global overshoot", consuming 30% more material than is sustainable from the world's resources. Rees went on to state that at present, 85 countries are exceeding their domestic "bio-capacities", and compensate for their lack of local material by depleting the stocks of other countries, which have a material surplus due to their lower consumption. In addition, McCracken indicates that how consumer goods and services are bought, created and used should be taken into consideration when studying consumption.
Not all anti-consumerists oppose consumption in itself, but they argue against increasing the consumption of resources beyond what is environmentally sustainable. Jonathan Porritt writes that consumers are often unaware of the negative environmental impacts of producing many modern goods and services, and that the extensive advertising industry only serves to reinforce increasing consumption.
Conservation scientists Lian Pin Koh and Tien Ming Lee argue that in the 21st century, the damage to forests and biodiversity cannot be dealt with only by the shift towards "Green" initiatives such as "sustainable production, green consumerism, and improved production practices". They argue that consumption in developing and emerging countries needs to be less excessive. Likewise, other ecological economists such as Herman Daly and Tim Jackson recognize the inherent conflict between consumer-driven consumption and planet-wide ecological degradation.
American environmental historian and sociologist Jason W. Moore, in his book Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism points out that the challenge of addressing both underconsumption and overconsumption of resources lies at the heart of the world’s primary sustainability dilemma. While significant portions of the global population struggle to meet basic needs, the resource-intensive lifestyles of affluent societies — characterized by car dependency, frequent air travel, high meat consumption, and an apparently limitless appetite for consumer goods like clothing and technological devices — are key drivers of the unsustainable practices.
Consumerism as cultural ideology
In the 21st century's globalized economy, consumerism has become a noticeable part of the culture. Critics of the phenomenon have challenged not only what is environmentally sustainable but also the spread of consumerism into cultural life. However, several scholars have written about the relationship between environmentalism and consumerism in a market economy society.
The environmental implications of consumerist ideologies are discussed in works by James Gustave Speth and Naomi Klein, and by consumer cultural historian Gary Cross. Leslie Sklair advances this criticism through the idea of the culture-ideology of consumerism in his works. He says that,
Today, people are universally and continuously exposed to mass consumerism and product placement in the media and even in their daily lives. The line between information, entertainment, and promotion of products has been blurred, which helps explain how people are drawn into consumerist behaviours. Shopping centers are a representative example of a place where people are explicitly exposed to an environment that welcomes and encourages consumption.
For example, in 1993, Goss wrote that the shopping center designers "strive to present an alternative rationale for the shopping center's existence, manipulate shoppers' behavior through the configuration of space, and consciously design a symbolic landscape that provokes associative moods and dispositions in the shopper". On the prevalence of consumerism in daily life, historian Gary Cross says that "The endless variation of clothing, travel, and entertainment provided opportunity for practically everyone to find a personal niche, no matter their race, age, gender or class."
Arguably, the success of the consumerist cultural ideology can be witnessed all around the world. People who rush to the mall to buy products and end up spending money with their credit cards can easily become entrenched in the financial system of capitalist globalization.
Alternatives
Since consumerism began, various individuals and groups have consciously sought an alternative lifestyle. These movements range on a spectrum from moderate "simple living", "eco-conscious shopping", and "localvore"/"buying local", to Freeganism on the extreme end. Building on these movements, the discipline of ecological economics addresses the macro-economic, social and ecological implications of a primarily consumer-driven economy.
See also
""
References
External links
"Consumer Culture", by Ginny Wilmerding.
"Consumers may not realize the full impact of their choices"
"Globalizing consumption" by Paul James and Andy Scerri
"Obedience, Consumerism, and Climate Change", by Yosef Brody
A Global Consumer Solidarity Movement
AdBusters, an anti-consumerism magazine
Center for the Advancement of the Steady State Economy, a post-consumerist macro-economic framework
Circles of Sustainability, website for the Circles of Sustainability approach
Consumerium Development Wiki, a wiki related to consumer activism
Global-local consumption, by Imre Szeman and Paul James
Peter Medlin, WNIJ, "Illinois Is the First State to Have High Schools Teach News Literacy," National Public Radio, August 12, 2021
Postconsumers, moving beyond addictive consumerism
Renegade Consumer, an actively anti-consumerism organization
The Human Being Lost in Consumerism: A Polish Perspective and Challenges in Religious Education, by Elżbieta Osewska and Józef Stala
Consumer behaviour
Economic ideologies
Economic sociology | Consumerism | [
"Biology"
] | 5,315 | [
"Behavior",
"Consumer behaviour",
"Human behavior"
] |
170,537 | https://en.wikipedia.org/wiki/Starpath%20Supercharger | The Starpath Supercharger (originally called the Arcadia Supercharger) is an expansion peripheral cartridge created by Starpath, for playing cassette-based proprietary games on the Atari 2600 video game console.
The device consists of a long cartridge with a handle on the end, and an audio cassette cable. It adds 6 KB to the Atari 2600's 128 bytes of RAM (increasing it 49-fold to 6,272 bytes of RAM), allowing for the creation of specially compatible games which are larger and have higher resolution graphics than normal cartridges. A cable coming out of the side of the cartridge plugs into the earphone jack of any standard cassette player, for loading all Supercharger games from standard audio cassettes.
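As a rough check of the figures above (a worked calculation, not from the source; it assumes the conventional 1 KB = 1,024 bytes):

\[
\begin{aligned}
6\ \text{KB} &= 6 \times 1024 = 6144\ \text{bytes},\\
6144 + 128 &= 6272\ \text{bytes of total RAM},\\
6272 \div 128 &= 49 \quad \text{(a 49-fold increase over the console's 128 bytes)}.
\end{aligned}
\]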
Games
All Supercharger games were developed by Starpath.
Initial releases
Listed in order of release:
Phaser Patrol
Communist Mutants from Space
Fireball
Suicide Mission
Escape from the Mindmaster (prototype is called Labyrinth)
Dragonstomper (prototype is called Excalibur)
Killer Satellites
Rabbit Transit
Frogger, The Official
Party Mix
Mail order releases
These games were available only via mail order after Starpath declared bankruptcy.
Sword of Saros
Survival Island
Prototypes
Sweat: The Decathlon Game
Going Up??
Compatibility
The Supercharger is compatible with Atari 2600, Atari 2600 Jr., and the Sears Video Arcade consoles.
Due to the shape of the Supercharger, it does not normally fit into the ColecoVision's Expansion Module #1, which is an adapter that allows the ColecoVision to play Atari 2600 games. However, if the cover of the expansion module is removed or an extender is used, the Supercharger will work. Extenders were sent to customers who called Starpath about such issues.
The Supercharger does not work on many Atari 7800 systems (which are typically backward compatible with the Atari 2600), although it does work with some early models of the system. Atari installed a circuit to fix a compatibility issue with the 2600 version of Dark Chambers, and this change subsequently caused incompatibility with the Supercharger and some other games that use the FE bank-switching method.
Reception
Danny Goodman of Creative Computing Video & Arcade Games said that the Supercharger's "graphics are something else", reporting that the diagonal lines in one game under development were among the smoothest he had seen in any console.
Legacy
The complete library of games, including the prototype Sweat, was also released on audio CD as Stella Gets A New Brain by CyberPuNKS (Jim Nitchals, Dan Skelton, Glenn Saunders and Russ Perry Jr.). There are two releases, both sanctioned by Atari and Bridgestone Multimedia, which had previously obtained the rights to the Starpath library. The first release is a limited-run, not-for-profit product, which also includes the previously unreleased Atari prototype Polo by Carol Shaw. The second release includes the Supercharger prototypes Meteoroid (an early version of Suicide Mission) and Excalibur (an early version of Dragonstomper), in addition to a number of homebrew games included by permission of their respective authors, and the song Atari 2600 by Splitsville, fully licensed from the band.
References
External links
STARPATH/ARCADIA FAQ, last modified 1/5/1995
GENERAL STARPATH SUPERCHARGER QUESTIONS, CyberPuNKS' Project FAQ, last modified 3/5/2000 by Glenn Saunders
Atari 2600
Video game accessories | Starpath Supercharger | [
"Technology"
] | 706 | [
"Video game accessories",
"Components"
] |
170,567 | https://en.wikipedia.org/wiki/Toxicity | Toxicity is the degree to which a chemical substance or a particular mixture of substances can damage an organism. Toxicity can refer to the effect on a whole organism, such as an animal, bacterium, or plant, as well as the effect on a substructure of the organism, such as a cell (cytotoxicity) or an organ such as the liver (hepatotoxicity). Sometimes the word is more or less synonymous with poisoning in everyday usage.
A central concept of toxicology is that the effects of a toxicant are dose-dependent; even water can lead to water intoxication when taken in too high a dose, whereas for even a very toxic substance such as snake venom there is a dose below which there is no detectable toxic effect. Toxicity is species-specific, making cross-species analysis problematic. Newer paradigms and metrics are evolving to bypass animal testing, while maintaining the concept of toxicity endpoints.
Etymology
In Ancient Greek medical literature, the adjective τοξικόν (meaning "toxic") was used to describe substances which had the ability of "causing death or serious debilitation or exhibiting symptoms of infection." The word draws its origins from the Greek noun τόξον (meaning "bow"), in reference to the use of bows and poisoned arrows as weapons.
English-speaking American culture has adopted several figurative usages for toxicity, often when describing harmful inter-personal relationships or character traits (e.g. "toxic masculinity").
History
Humans have a deeply rooted history of not only being aware of toxicity, but also taking advantage of it as a tool. Archaeologists studying bone arrows from caves of Southern Africa have noted the likelihood that some, dating to 72,000 to 80,000 years old, were dipped in specially prepared poisons to increase their lethality. Although scientific instrumentation limitations make it difficult to prove conclusively, archaeologists hypothesize that the practice of making poison arrows was widespread in cultures as early as the Paleolithic era. The San people of Southern Africa have preserved this practice into the modern era, with the knowledge base to form complex mixtures from poisonous beetles and plant-derived extracts, yielding an arrow-tip product with a shelf life of several months to a year.
Types
There are generally five types of toxicities: chemical, biological, physical, radioactive and behavioural.
Disease-causing microorganisms and parasites are toxic in a broad sense but are generally called pathogens rather than toxicants. The biological toxicity of pathogens can be difficult to measure because the threshold dose may be a single organism. Theoretically one virus, bacterium or worm can reproduce to cause a serious infection. If a host has an intact immune system, the inherent toxicity of the organism is balanced by the host's response; the effective toxicity is then a combination. In some cases, e.g. cholera toxin, the disease is chiefly caused by a nonliving substance secreted by the organism, rather than the organism itself. Such nonliving biological toxicants are generally called toxins if produced by a microorganism, plant, or fungus, and venoms if produced by an animal.
Physical toxicants are substances that, due to their physical nature, interfere with biological processes. Examples include coal dust, asbestos fibres or finely divided silicon dioxide, all of which can ultimately be fatal if inhaled. Corrosive chemicals possess physical toxicity because they destroy tissues, but are not directly poisonous unless they interfere directly with biological activity. Water can act as a physical toxicant if taken in extremely high doses because the concentration of vital ions decreases dramatically with too much water in the body. Asphyxiant gases can be considered physical toxicants because they act by displacing oxygen in the environment but they are inert, not chemically toxic gases.
Radiation can have a toxic effect on organisms.
Behavioral toxicity refers to the undesirable effects of essentially therapeutic levels of medication clinically indicated for a given disorder (DiMascio, Soltys and Shader, 1970). These undesirable effects include anticholinergic effects, alpha-adrenergic blockade, and dopaminergic effects, among others.
Measuring
Toxicity can be measured by its effects on the target (organism, organ, tissue or cell). Because individuals typically have different levels of response to the same dose of a toxic substance, a population-level measure of toxicity is often used which relates the probabilities of an outcome for a given individual in a population. One such measure is the median lethal dose (LD50). When such data do not exist, estimates are made by comparison to similar known toxic substances, or to similar exposures in similar organisms. Then, "safety factors" are added to account for uncertainties in data and evaluation processes. For example, if a dose of a toxic substance is safe for a laboratory rat, one might assume that one-tenth that dose would be safe for a human, allowing a safety factor of 10 to allow for interspecies differences between two mammals; if the data are from fish, one might use a factor of 100 to account for the greater difference between two chordate classes (fish and mammals). Similarly, an extra protection factor may be used for individuals believed to be more susceptible to toxic effects, such as in pregnancy or with certain diseases. Or, a newly synthesized and previously unstudied chemical that is believed to be very similar in effect to another compound could be assigned an additional protection factor of 10 to account for possible differences in effects that are probably much smaller. This approach is very approximate, but such protection factors are deliberately very conservative, and the method has been found to be useful in a wide variety of applications.
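As a hedged illustration of how these stacked protection factors combine, the minimal Python sketch below (the function name and all numbers are hypothetical, not a regulatory method) divides an experimentally observed safe dose by the product of the chosen factors:

```python
from functools import reduce

def conservative_dose(safe_dose_mg_per_kg, safety_factors):
    """Divide an experimentally observed safe dose by the product of all safety factors."""
    divisor = reduce(lambda acc, factor: acc * factor, safety_factors, 1)
    return safe_dose_mg_per_kg / divisor

# Hypothetical example: a dose found safe in laboratory rats, with a factor of 10
# for rat-to-human differences and a further 10 for more susceptible individuals.
rat_safe_dose = 50.0  # mg per kg body weight (made-up value)
print(conservative_dose(rat_safe_dose, [10, 10]))  # 0.5 mg/kg, 100-fold below the rat dose
```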
Assessing all aspects of the toxicity of cancer-causing agents involves additional issues, since it is not certain if there is a minimal effective dose for carcinogens, or whether the risk is just too small to see. In addition, it is possible that a single cell transformed into a cancer cell is all it takes to develop the full effect (the "one hit" theory).
It is more difficult to determine the toxicity of chemical mixtures than a pure chemical because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline, cigarette smoke, and industrial waste. Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.
The preclinical toxicity testing on various biological systems reveals the species-, organ- and dose-specific toxic effects of an investigational product. The toxicity of substances can be observed by (a) studying accidental exposures to a substance, (b) in vitro studies using cells or cell lines, and (c) in vivo exposure of experimental animals. Toxicity tests are mostly used to examine specific adverse events or specific endpoints such as cancer, cardiotoxicity, and skin/eye irritation. Toxicity testing also helps calculate the No Observed Adverse Effect Level (NOAEL) dose and is helpful for clinical studies.
Classification
For substances to be regulated and handled appropriately they must be properly classified and labelled. Classification is determined by approved testing measures or calculations and has determined cut-off levels set by governments and scientists (for example, no-observed-adverse-effect levels, threshold limit values, and tolerable daily intake levels). Pesticides provide the example of well-established toxicity class systems and toxicity labels. While currently many countries have different regulations regarding the types of tests, numbers of tests and cut-off levels, the implementation of the Globally Harmonized System has begun unifying these countries.
Global classification looks at three areas: physical hazards (explosions and pyrotechnics), health hazards, and environmental hazards.
Health hazards
These are the types of toxicity in which substances may cause lethality to the entire body, lethality to specific organs, major or minor damage, or cancer. They are globally accepted definitions of what toxicity is; anything falling outside of these definitions cannot be classified as that type of toxicant.
Acute toxicity
Acute toxicity looks at lethal effects following oral, dermal or inhalation exposure. It is split into five categories of severity where Category 1 requires the least amount of exposure to be lethal and Category 5 requires the most exposure to be lethal. The table below shows the upper limits for each category.
Note: The undefined values are expected to be roughly equivalent to the category 5 values for oral and dermal administration.
Other methods of exposure and severity
Skin corrosion and irritation are determined through a skin patch test analysis, similar to an allergic inflammation patch test. This examines the severity of the damage done; when it is incurred and how long it remains; whether it is reversible and how many test subjects were affected.
Skin corrosion from a substance must penetrate through the epidermis into the dermis within four hours of application and must not reverse the damage within 14 days. Skin irritation shows damage less severe than corrosion if: the damage occurs within 72 hours of application; or for three consecutive days after application within a 14-day period; or causes inflammation which lasts for 14 days in two test subjects. Mild skin irritation is minor damage (less severe than irritation) within 72 hours of application or for three consecutive days after application.
Serious eye damage involves tissue damage or degradation of vision which does not fully reverse in 21 days. Eye irritation involves changes to the eye which do fully reverse within 21 days.
Other categories
Respiratory sensitizers cause breathing hypersensitivity when the substance is inhaled.
A substance which is a skin sensitizer causes an allergic response from a dermal application.
Carcinogens induce cancer, or increase the likelihood of cancer occurring.
Neurotoxicity is a form of toxicity in which a biological, chemical, or physical agent produces an adverse effect on the structure or function of the central or peripheral nervous system. It occurs when exposure to a substance – specifically, a neurotoxin or neurotoxicant – alters the normal activity of the nervous system in such a way as to cause permanent or reversible damage to nervous tissue.
Reproductively toxic substances cause adverse effects in either sexual function or fertility to either a parent or the offspring.
Specific-target organ toxins damage only specific organs.
Aspiration hazards are solids or liquids which can cause damage through inhalation.
Environmental hazards
An environmental hazard can be defined as any condition, process, or state adversely affecting the environment. These hazards can be physical or chemical, and present in air, water, and/or soil. These conditions can cause extensive harm to humans and other organisms within an ecosystem.
Common types of environmental hazards
Water: detergents, fertilizer, raw sewage, prescription medication, pesticides, herbicides, heavy metals, PCBs
Soil: heavy metals, herbicides, pesticides, PCBs
Air: particulate matter, carbon monoxide, sulfur dioxide, nitrogen dioxide, asbestos, ground-level ozone, lead (from aircraft fuel, mining, and industrial processes)
The EPA maintains a list of priority pollutants for testing and regulation.
Occupational hazards
Workers in various occupations may be at a greater level of risk for several types of toxicity, including neurotoxicity. The expression "Mad as a hatter" and the "Mad Hatter" of the book Alice in Wonderland derive from the known occupational toxicity of hatters, who used a toxic mercury compound to control the shape of felt hats. Chemical exposures in the workplace environment may require evaluation by industrial hygiene professionals.
Hazards for small businesses
Hazards from medical waste and prescription disposal
Hazards in the arts
Hazards in the arts have been an issue for artists for centuries, even though the toxicity of their tools, methods, and materials was not always adequately realized. Lead and cadmium, among other toxic elements, were often incorporated into the names of artist's oil paints and pigments, for example, "lead white" and "cadmium red".
20th-century printmakers and other artists began to be aware of the toxic substances, toxic techniques, and toxic fumes in glues, painting mediums, pigments, and solvents, many of which gave no indication of their toxicity in their labelling. An example was the use of xylol for cleaning silk screens. Painters began to notice the dangers of breathing painting mediums and thinners such as turpentine. Aware of toxicants in studios and workshops, in 1998 printmaker Keith Howard published Non-Toxic Intaglio Printmaking, which detailed twelve innovative Intaglio-type printmaking techniques, including photo etching, digital imaging, and acrylic-resist hand-etching methods, and introduced a new method of non-toxic lithography.
Mapping environmental hazards
There are many environmental health mapping tools. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund programs. TOXMAP is a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Aquatic toxicity
Aquatic toxicity testing subjects key indicator species of fish or crustacea to certain concentrations of a substance in their environment to determine the lethality level. Fish are exposed for 96 hours while crustacea are exposed for 48 hours. While GHS does not define toxicity past 100 mg/L, the EPA currently lists aquatic toxicity as "practically non-toxic" in concentrations greater than 100 ppm.
Note: A category 4 is established for chronic exposure, but simply contains any toxic substance which is mostly insoluble, or has no data for acute toxicity.
Factors influencing toxicity
Toxicity of a substance can be affected by many different factors, such as the pathway of administration (whether the toxicant is applied to the skin, ingested, inhaled, injected), the time of exposure (a brief encounter or long term), the number of exposures (a single dose or multiple doses over time), the physical form of the toxicant (solid, liquid, gas), the concentration of the substance, and in the case of gases, the partial pressure (at high ambient pressure, partial pressure will increase for a given concentration as a gas fraction), the genetic makeup of an individual, an individual's overall health, and many others. Several of the terms used to describe these factors have been included here.
Acute exposure: a single exposure to a toxic substance which may result in severe biological harm or death; acute exposures are usually characterized as lasting no longer than a day.
Chronic exposure: continuous exposure to a toxicant over an extended period of time, often measured in months or years; it can cause irreversible side effects.
Alternatives to dose-response framework
Considering the limitations of the dose-response concept, a novel Drug Toxicity Index (DTI) has been proposed recently. DTI redefines drug toxicity, identifies hepatotoxic drugs, gives mechanistic insights, predicts clinical outcomes and has potential as a screening tool.
See also
Agency for Toxic Substances and Disease Registry (ATSDR)
Biological activity
Biological warfare
California Proposition 65 (1986)
Carcinogen
Drunkenness
Indicative limit value
List of highly toxic gases
Material safety data sheet (MSDS)
Mutagen
Hepatotoxicity
Nephrotoxicity
Neurotoxicity
Ototoxicity
Paracelsus
Physiologically-based pharmacokinetic modelling.
Poison
Reference dose
Registry of Toxic Effects of Chemical Substances (RTECS) – toxicity database
Soil contamination
Teratogen
Toxic tort
Toxication
Toxicophore
Toxin
Toxica, a disambiguation page
References
External links
Agency for Toxic Substances and Disease Registry
Whole Effluent, Aquatic Toxicity Testing FAQ
TOXMAP Environmental Health e-Maps from the United States National Library of Medicine
Toxseek: meta-search engine in toxicology and environmental health
Pharmacology
Toxicology
Chemical hazards | Toxicity | [
"Chemistry",
"Environmental_science"
] | 3,308 | [
"Chemical hazards",
"Pharmacology",
"Toxicology",
"Medicinal chemistry"
] |
170,586 | https://en.wikipedia.org/wiki/Vermicompost | Vermicompost (vermi-compost) is the product of the decomposition process using various species of worms, usually red wigglers, white worms, and other earthworms, to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture.
Vermicast (also called worm castings, worm humus, worm poop, worm manure, or worm faeces) is the end-product of the breakdown of organic matter by earthworms. These excreta have been shown to contain reduced levels of contaminants and a higher saturation of nutrients than the organic materials before vermicomposting.
Vermicompost contains water-soluble nutrients which may be extracted as vermiwash and is an excellent, nutrient-rich organic fertilizer and soil conditioner. It is used in gardening and sustainable, organic farming.
Vermicomposting can also be applied for treatment of sewage. A variation of the process is vermifiltration (or vermidigestion) which is used to remove organic matter, pathogens, and oxygen demand from wastewater or directly from blackwater of flush toilets.
Overview
Vermicomposting has gained popularity in both industrial and domestic settings because, as compared with conventional composting, it provides a way to treat organic wastes more quickly. In manure composting, the use of vermicomposting generates products that have lower salinity levels, as well as a more neutral pH.
The earthworm species (or composting worms) most often used are red wigglers (Eisenia fetida or Eisenia andrei), though European nightcrawlers (Eisenia hortensis, synonym Dendrobaena veneta) and red earthworm (Lumbricus rubellus) could also be used. Red wigglers are recommended by most vermicomposting experts, as they have some of the best appetites and breed very quickly. Users refer to European nightcrawlers by a variety of other names, including dendrobaenas, dendras, Dutch nightcrawlers, and Belgian nightcrawlers.
Containing water-soluble nutrients, vermicompost is a nutrient-rich organic fertilizer and soil conditioner in a form that is relatively easy for plants to absorb. Worm castings are sometimes used as an organic fertilizer. Because the earthworms grind and uniformly mix minerals in simple forms, plants need only minimal effort to obtain them. The worms' digestive systems create environments that allow certain species of microbes to thrive to help create a "living" soil environment for plants. The fraction of soil which has gone through the digestive tract of earthworms is called the drilosphere.
Vermicomposting is a common practice in permaculture.
Vermiwash can also be obtained from the liquid portion of vermicompost. Vermiwash has been found to contain an enzyme cocktail of proteases, amylases, urease and phosphatase. Microbiological study of vermiwash reveals that it contains nitrogen-fixing bacteria such as Azotobacter sp., Agrobacterium sp. and Rhizobium sp., and some phosphate-solubilizing bacteria. Laboratory-scale trials show the effectiveness of vermiwash on plant growth.
Design considerations
Suitable worm species
All worms make compost but some species are not suitable for this purpose. Vermicompost worms are generally epigean. Species most often used for composting include:
Eisenia fetida (Europe), the red wiggler or tiger worm. Closely related to Eisenia andrei, which is also usable.
Eisenia hortensis (Europe), European nightcrawlers, prefers high C:N material.
Eudrilus eugeniae (West Africa), African Nightcrawlers. Useful in the tropics.
Perionyx excavatus (South and East Asia), blueworms. May be used in the tropics and subtropics.
Lampito mauritii (Southern Asia), used locally.
These species commonly are found in organic-rich soils throughout Europe and North America and live in rotting vegetation, compost, and manure piles. As they are shallow-dwelling and feed on decomposing plant matter in the soil, they adapt easily to live on food or plant waste in the confines of a worm bin. Some species are considered invasive in some areas, so they should be avoided (see earthworms as invasive species for a list).
Composting worms are available to order online, from nursery mail-order suppliers or angling shops where they are sold as bait. They can also be collected from compost and manure piles. These species are not the same worms that are found in ordinary soil or on pavement when the soil is flooded by water.
The following species are not recommended:
Lumbricus rubellus and Lumbricus terrestris (Europe). The two closely related species are anecic: they like to burrow underground and come up for food. As a result, they adapt poorly to shallow compost bins and should be avoided. They are also invasive in North America.
Large scale
Large-scale vermicomposting is practiced in Canada, Italy, Japan, India, Malaysia, the Philippines, and the United States. The vermicompost may be used for farming, landscaping, to create compost tea, or for sale. Some of these operations produce worms for bait and/or home vermicomposting.
There are two main methods of large-scale vermicomposting, windrow and raised bed. Some systems use a windrow, which consists of bedding materials for the earthworms to live in and acts as a large bin; organic material is added to it. Although the windrow has no physical barriers to prevent worms from escaping, in theory they should not, due to an abundance of organic matter for them to feed on. Often windrows are used on a concrete surface to prevent predators from gaining access to the worm population.
The windrow method and compost windrow turners were developed by Fletcher Sims Jr. of the Compost Corporation in Canyon, Texas. The Windrow Composting system is noted as a sustainable, cost-efficient way for farmers to manage dairy waste.
The second type of large-scale vermicomposting system is the raised bed or flow-through system. Here the worms are fed an inch of "worm chow" across the top of the bed, and an inch of castings are harvested from below by pulling a breaker bar across the large mesh screen which forms the base of the bed.
Because red worms are surface dwellers constantly moving towards the new food source, the flow-through system eliminates the need to separate worms from the castings before packaging. Flow-through systems are well suited to indoor facilities, making them the preferred choice for operations in colder climates.
Small scale
For vermicomposting at home, a large variety of bins are commercially available, or a variety of adapted containers may be used. They may be made of old plastic containers, wood, Styrofoam, or metal containers. The design of a small bin usually depends on where an individual wishes to store the bin and how they wish to feed the worms.
Some materials are less desirable than others in worm bin construction. Metal containers often conduct heat too readily, are prone to rusting, and may release heavy metals into the vermicompost. Styrofoam containers may release chemicals into the organic material. Some cedars, yellow cedar, and redwood contain resinous oils that may harm worms, although western red cedar has excellent longevity in composting conditions. Hemlock is another inexpensive and fairly rot-resistant wood species that may be used to build worm bins.
Bins need holes or mesh for aeration. Some people add a spout or holes in the bottom for excess liquid to drain into a tray for collection. The most common materials used are plastic (recycled polyethylene and polypropylene) and wood. Worm compost bins made from plastic are ideal, but require more drainage than wooden ones because they are non-absorbent. However, wooden bins will eventually decay and need to be replaced.
Small-scale vermicomposting is well-suited to turn kitchen waste into high-quality soil amendments, where space is limited. Worms can decompose organic matter without the additional human physical effort (turning the bin) that bin composting requires.
Composting worms which are detritivorous (eaters of trash), such as the red wiggler Eisenia fetida, are epigeic (surface dwellers) and together with symbiotic associated microbes are the ideal vectors for decomposing food waste. Common earthworms such as Lumbricus terrestris are anecic (deep burrowing) species and hence unsuitable for use in a closed system. Other soil species that contribute include insects, other worms and molds.
Climate and temperature
There may be differences in vermicomposting method depending on the climate. It is necessary to monitor the temperatures of large-scale bin systems (which can have high heat-retentive properties), as the raw materials or feedstocks used can compost, heating up the worm bins as they decay and killing the worms.
The most common worms used in composting systems, redworms (Eisenia fetida, Eisenia andrei, and Lumbricus rubellus) feed most rapidly at temperatures of . They can survive at . Temperatures above may harm them. This temperature range means that indoor vermicomposting with redworms is possible in all but tropical climates. Other worms like Perionyx excavatus are suitable for warmer climates. If a worm bin is kept outside, it should be placed in a sheltered position away from direct sunlight and insulated against frost in winter.
Feedstock
There are few food wastes that vermicomposting cannot compost, although meat waste and dairy products are likely to putrefy, and in outdoor bins can attract vermin. Green waste should be added in moderation to avoid heating the bin.
Small-scale or home systems
Such systems usually use kitchen and garden waste, using "earthworms and other microorganisms to digest organic wastes, such as kitchen scraps".
This includes:
All fruits and vegetables (including citrus, in limited quantities)
Vegetable and fruit peels and ends
Coffee grounds and filters
Tea bags (even those with high tannin levels)
Grains such as bread, cracker and cereal (including moldy and stale)
Eggshells (rinsed off)
Leaves and grass clippings (not sprayed with pesticides)
Newspapers (most inks used in newspapers are not toxic)
Paper toweling (which has not been used with cleaners or chemicals)
Large-scale or commercial
Such vermicomposting systems need reliable sources of large quantities of food.
Systems presently operating use:
Dairy cow or pig manure
Sewage sludge
Brewery waste
Cotton mill waste
Agricultural waste
Food processing and grocery waste
Cafeteria waste
Grass clippings and wood chips
Harvesting
Factors affecting the speed of composting include the climate and the method of composting. There are signs to look for to determine whether compost is finished. The finished compost has an ambient temperature, a dark color, and is as moist as a damp sponge. Towards the end of the process, bacteria slow down the rate of metabolizing food or stop completely. Some solid organic matter may still be present in the compost at this point, and it could stay in and continue decomposing for the next couple of years unless removed. Once finished, the compost should be allowed to cure so that acids are removed over time and it becomes more neutral; this can take up to three months and results in compost that is more consistent in size. Elevating the maturing compost off the ground can prevent unwanted plant growth. The curing compost should be kept consistently slightly damp and aerated, but does not need to be turned. The curing process can be done in a storage bin or on a tarp.
Methods
Vermicompost is ready for harvest when it contains few-to-no scraps of uneaten food or bedding. There are several methods of harvesting from small-scale systems: "dump and hand sort", "let the worms do the sorting", "alternate containers" and "divide and dump." These differ on the amount of time and labor involved and whether the vermicomposter wants to save as many worms as possible from being trapped in the harvested compost.
The pyramid method of harvesting worm compost is commonly used in small-scale vermicomposting, and is considered the simplest method for single layer bins. In this process, compost is separated into large clumps, which is placed back into composting for further breakdown, and lighter compost, with which the rest of the process continues. This lighter mix is placed into small piles on a tarp under the sunlight. The worms instinctively burrow to the bottom of the pile. After a few minutes, the top of the pyramid is removed repeatedly, until the worms are again visible. This repeats until the mound is composed mostly of worms.
When harvesting the compost, it is possible to separate eggs and cocoons and return them to the bin, thereby ensuring new worms are hatched. Cocoons are small, lemon-shaped yellowish objects that can usually be seen with the naked eye. The cocoons can hold up to 20 worms (though 2–3 is most common). Cocoons can lay dormant for as long as two years if conditions are not conducive for hatching.
Properties
Vermicompost has been shown to be richer in many nutrients than compost produced by other composting methods. It has also outperformed a commercial plant medium with nutrients added, but levels of magnesium required adjustment, as did pH.
However, in one study it has been found that homemade backyard vermicompost was lower in microbial biomass, soil microbial activity, and yield of a species of ryegrass than municipal compost.
It is rich in microbial life which converts nutrients already present in the soil into plant-available forms.
Unlike other compost, worm castings also contain worm mucus which helps prevent nutrients from washing away with the first watering and holds moisture better than plain soil.
Increases in the total nitrogen content in vermicompost, an increase in available nitrogen and phosphorus, a decrease in potassium, as well as the increased removal of heavy metals from sludge and soil have been reported. The reduction in the bioavailability of heavy metals has been observed in a number of studies.
Benefits of vermicomposting
Soil
Improves soil aeration
Enriches soil with micro-organisms (adding enzymes such as phosphatase and cellulase)
Microbial activity in worm castings is 10 to 20 times higher than in the soil and organic matter that the worm ingests
Attracts deep-burrowing earthworms already present in the soil
Improves water holding capacity
Plant growth
Enhances germination, plant growth and crop yield
It helps in root and plant growth
Enriches soil organisms (adding plant hormones such as auxins and gibberellic acid)
Economic
Biowaste conversion reduces waste flow to landfills
Elimination of biowastes from the waste stream reduces contamination of other recyclables collected in a single bin (a common problem in communities practicing single-stream recycling)
Creates low-skill jobs at local level
Low capital investment and relatively simple technologies make vermicomposting practical for less-developed agricultural regions
Environmental
Helps to close the "metabolic gap" through recycling waste on-site
Large systems often use temperature control and mechanized harvesting, however other equipment is relatively simple and does not wear out quickly
Production reduces greenhouse gas emissions such as methane and nitrous oxide (produced in landfills or incinerators when not composted).
Uses
Soil conditioner
Vermicompost can be mixed directly into the soil, or mixed with water to make a liquid fertilizer known as worm tea.
The light brown waste liquid, or leachate, that drains into the bottom of some vermicomposting systems is not to be confused with worm tea. It is an uncomposted byproduct from when water-rich foods break down and may contain pathogens and toxins. It is best discarded or applied back to the bin when added moisture is needed for further processing.
The pH, nutrient, and microbial content of these fertilizers varies upon the inputs fed to worms. Pulverized limestone, or calcium carbonate can be added to the system to raise the pH.
Operation and maintenance
Smells
When closed, a well-maintained bin is odorless; when opened, it should have little smell—if any smell is present, it is earthy. The smell may also depend on the type of composted material added to the bin. An unhealthy worm bin may smell, potentially due to low oxygen conditions. Worms require gaseous oxygen. Oxygen can be provided by airholes in the bin, occasional stirring of bin contents, and removal of some bin contents if they become too deep or too wet. If decomposition becomes anaerobic from excess wet feedstock added to the bin, or the layers of food waste have become too deep, the bin will begin to smell of ammonia.
Moisture
Moisture must be maintained above 50%, as lower moisture content will not support worm respiration and can increase worm mortality. Operating moisture-content range should be between 70 and 90%, with a suggested content of 70–80% for vermicomposting operations. If decomposition has become anaerobic, to restore healthy conditions and prevent the worms from dying, excess waste water must be reduced and the bin returned to a normal moisture level. To do this, first reduce addition of food scraps with a high moisture content and second, add fresh, dry bedding such as shredded newspaper to your bin, mixing it in well.
Pest species
Pests such as rodents and flies are attracted by certain materials and odors, usually from large amounts of kitchen waste, particularly meat. Eliminating the use of meat or dairy product in a worm bin decreases the possibility of pests.
Predatory ants can be a problem in African countries.
In warm weather, fruit and vinegar flies breed in the bins if fruit and vegetable waste is not thoroughly covered with bedding. This problem can be avoided by thoroughly covering the waste by at least of bedding. Maintaining the correct pH (close to neutral) and water content of the bin (just enough water where squeezed bedding drips a couple of drops) can help avoid these pests as well.
Worms escaping
Worms generally stay in the bin, but may try to leave the bin when first introduced, or often after a rainstorm when the humidity outside is high. Maintaining adequate conditions in the worm bin and putting a light over the bin when first introducing worms should eliminate this problem.
Nutrient levels
Commercial vermicomposters test and may amend their products to produce consistent quality and results. Because the small-scale and home systems use a varied mix of feedstocks, the nitrogen, phosphorus, and potassium (NPK) content of the resulting vermicompost will also be inconsistent. NPK testing may be helpful before the vermicompost or tea is applied to the garden.
In order to avoid over-fertilization issues, such as nitrogen burn, vermicompost can be diluted as a tea 50:50 with water, or as a solid can be mixed in 50:50 with potting soil.
Additionally, the mucous layer created by worms which surrounds their castings allows for a "time release" effect, meaning not all nutrients are released at once. This also reduces the risk of burning the plants, as is common with the use and overuse of commercial fertilizers.
Application examples
Vermicomposting is widely used in North America for medium-scale, on-site institutional processing of food scraps, such as in hospitals, universities, shopping malls, and correctional facilities. It is selected either as a more environmentally friendly choice than conventional disposal, or to reduce the cost of commercial waste removal.
From 20 July 2020, the State Government of Chhattisgarh India started buying cow dung under the "Godhan Nyay Yojana" Scheme. Cow dung procured under this scheme will be utilised for the production of vermicompost fertilizer.
See also
Fertilizer
Home composting
Maggot farming
Mary Arlene Appelhof
Vermifilter
Vermiponics, use of wormbin leachate in hydroponics
Waste management
References
Further reading
External links
Biodegradable waste management
Organic gardening
Composting
Home composting
Feces
Worms (obsolete taxon) | Vermicompost | [
"Chemistry",
"Biology"
] | 4,376 | [
"Biodegradable waste management",
"Excretion",
"Animal waste products",
"Biodegradation",
"Feces"
] |
170,586 | https://en.wikipedia.org/wiki/Jealousy | Jealousy generally refers to the thoughts or feelings of insecurity, fear, and concern over a relative lack of possessions or safety.
Jealousy can consist of one or more emotions such as anger, resentment, inadequacy, helplessness or disgust. In its original meaning, jealousy is distinct from envy, though the two terms have popularly become synonymous in the English language, with jealousy now also taking on the definition originally used for envy alone. These two emotions are often confused with each other, since they tend to appear in the same situation.
Jealousy is a typical experience in human relationships, and it has been observed in infants as young as five months. Some researchers claim that jealousy is seen in all cultures and is a universal trait. However, others claim jealousy is a culture-specific emotion.
Jealousy can either be suspicious or reactive, and it is often reinforced as a series of particularly strong emotions and constructed as a universal human experience. Psychologists have proposed several models to study the processes underlying jealousy and have identified factors that result in jealousy. Sociologists have demonstrated that cultural beliefs and values play an important role in determining what triggers jealousy and what constitutes socially acceptable expressions of jealousy. Biologists have identified factors that may unconsciously influence the expression of jealousy.
Throughout history, artists have also explored the theme of jealousy in paintings, films, songs, plays, poems, and books, and theologians have offered religious views of jealousy based on the scriptures of their respective faiths.
Etymology
The word stems from the French jalousie, formed from jaloux (jealous), and further from Low Latin zelosus (full of zeal), in turn from the Greek word ζῆλος (zēlos), sometimes "jealousy", but more often in a positive sense "emulation, ardour, zeal" (with a root connoting "to boil, ferment" or "yeast"). In biblical language, zeal carries the sense of "tolerating no unfaithfulness", whereas in Middle English zealous had a positive sense. An earlier form, gelus, meant "possessive and suspicious" before the word turned into jelus.
Since William Shakespeare's use of terms like "green-eyed monster", the color green has been associated with jealousy and envy, from which the expression "green with envy", is derived.
Theories
Scientific examples
People do not express jealousy through a single emotion or a single behavior.
They instead express jealousy through diverse emotions and behaviors, which makes it difficult to form a scientific definition of jealousy. Scientists instead define it in their own words, as illustrated by the following examples:
"Romantic jealousy is here defined as a complex of thoughts, feelings, and actions which follow threats to self-esteem and/or threats to the existence or quality of the relationship, when those threats are generated by the perception of potential attraction between one's partner and a (perhaps imaginary) rival."
"Jealousy, then, is any aversive reaction that occurs as the result of a partner's extradyadic relationship that is considered likely to occur."
"Jealousy is conceptualized as a cognitive, emotional, and behavioral response to a relationship threat. In the case of sexual jealousy, this threat emanates from knowing or suspecting that one's partner has had (or desires to have) sexual activity with a third party. In the case of emotional jealousy, an individual feels threatened by her or his partner's emotional involvement with and/or love for a third party."
"Jealousy is defined as a defensive reaction to a perceived threat to a valued relationship, arising from a situation in which the partner's involvement with an activity and/or another person is contrary to the jealous person's definition of their relationship."
"Jealousy is triggered by the threat of separation from, or loss of, a romantic partner, when that threat is attributed to the possibility of the partner's romantic interest in another person."
These definitions of jealousy share two basic themes. First, all the definitions imply a triad composed of a jealous individual, a partner, and a perception of a third party or rival. Second, all the definitions describe jealousy as a reaction to a perceived threat to the relationship between two people, or a dyad. Jealous reactions typically involve aversive emotions and/or behaviors that are assumed to be protective for their attachment relationships. These themes form the essential meaning of jealousy in most scientific studies.
Comparison with envy
Popular culture uses the word jealousy as a synonym for envy. Many dictionary definitions include a reference to envy or envious feelings. In fact, the overlapping use of jealousy and envy has a long history.
The terms are used indiscriminately in such popular 'feel-good' books as Nancy Friday's Jealousy, where the expression 'jealousy' applies to a broad range of passions, from envy to lust and greed. While this kind of usage blurs the boundaries between categories that are intellectually valuable and psychologically justifiable, such confusion is understandable in that historical explorations of the term indicate that these boundaries have long posed problems. Margot Grzywacz's fascinating etymological survey of the word in Romance and Germanic languages asserts, indeed, that the concept was one of those that proved to be the most difficult to express in language and was therefore among the last to find an unambiguous term. Classical Latin used invidia, without strictly differentiating between envy and jealousy. It was not until the postclassical era that Latin borrowed the late and poetic Greek word zelotypia and the associated adjective zelosus. It is from this adjective that are derived French jaloux, Provençal gelos, Italian geloso, Spanish celoso, and Portuguese cioso.
Perhaps the overlapping use of jealousy and envy occurs because people can experience both at the same time. A person may envy the characteristics or possessions of someone who also happens to be a romantic rival. In fact, one may even interpret romantic jealousy as a form of envy. A jealous person may envy the affection that their partner gives to a rival – affection the jealous person feels entitled to themselves. People often use the word jealousy as a broad label that applies to both experiences of jealousy and experiences of envy.
Although popular culture often uses jealousy and envy as synonyms, modern philosophers and psychologists have argued for conceptual distinctions between jealousy and envy. For example, philosopher John Rawls distinguishes between jealousy and envy on the ground that jealousy involves the wish to keep what one has, and envy the wish to get what one does not have. Thus, a child is jealous of her parents' attention to a sibling, but envious of her friend's new bicycle. Psychologists Laura Guerrero and Peter Andersen have proposed the same distinction. They claim the jealous person "perceives that he or she possesses a valued relationship, but is in danger of losing it or at least of having it altered in an undesirable manner," whereas the envious person "does not possess a valued commodity, but wishes to possess it." Gerrod Parrott draws attention to the distinct thoughts and feelings that occur in jealousy and envy.
The common experience of jealousy for many people may involve:
Fear of loss
Suspicion of or anger about a perceived betrayal
Low self-esteem and sadness over perceived loss
Uncertainty and loneliness
Fear of losing an important person to another
Distrust
The experience of envy involves:
Feelings of inferiority
Longing
Resentment of circumstances
Ill will towards envied person often accompanied by guilt about these feelings
Motivation to improve
Desire to possess the attractive rival's qualities
Disapproval of feelings
Sadness towards other's accomplishments
Parrott acknowledges that people can experience envy and jealousy at the same time. Feelings of envy about a rival can even intensify the experience of jealousy. Still, the differences between envy and jealousy in terms of thoughts and feelings justify their distinction in philosophy and science.
In psychology
Jealousy involves an entire "emotional episode" including a complex narrative. This includes the circumstances that lead up to jealousy, jealousy itself as emotion, any attempt at self regulation, subsequent actions and events, and ultimately the resolution of the episode. The narrative can originate from experienced facts, thoughts, perceptions, memories, but also imagination, guesses and assumptions. The more society and culture matter in the formation of these factors, the more jealousy can have a social and cultural origin. By contrast, jealousy can be a "cognitively impenetrable state", where education and rational belief matter very little.
One possible explanation of the origin of jealousy in evolutionary psychology is that the emotion evolved in order to maximize the success of our genes: it is a biologically based emotion selected to foster certainty about the paternity of one's own offspring. Jealous behavior, in men, is directed toward avoiding sexual betrayal and a consequent waste of resources and effort in taking care of someone else's offspring. There are, additionally, cultural or social explanations of the origin of jealousy. According to one, the narrative from which jealousy arises can be in great part made by the imagination. Imagination is strongly affected by a person's cultural milieu. The pattern of reasoning, the way one perceives situations, depends strongly on cultural context. It has elsewhere been suggested that jealousy is in fact a secondary emotion in reaction to one's needs not being met, be those needs for attachment, attention, reassurance or any other form of care that would otherwise be expected to arise from that primary romantic relationship.
While mainstream psychology considers sexual arousal through jealousy a paraphilia, some authors on sexuality have argued that jealousy in manageable dimensions can have a definite positive effect on sexual function and sexual satisfaction. Studies have also shown that jealousy sometimes heightens passion towards partners and increases the intensity of passionate sex.
Jealousy in children and teenagers has been observed more often in those with low self-esteem and can evoke aggressive reactions. One such study suggested that developing intimate friends can be followed by emotional insecurity and loneliness in some children when those intimate friends interact with others. Jealousy is linked to aggression and low self-esteem. Research by Sybil Hart, PhD, at Texas Tech University indicates that children are capable of feeling and displaying jealousy at as young as six months. Infants showed signs of distress when their mothers focused their attention on a lifelike doll. This research could explain why children and infants show distress when a sibling is born, creating the foundation for sibling rivalry.
In addition to traditional jealousy, obsessive jealousy can occur as a form of obsessive-compulsive disorder, characterized by intrusive, obsessional thoughts about the partner.
In sociology
Anthropologists have claimed that jealousy varies across cultures. Cultural learning can influence the situations that trigger jealousy and the manner in which jealousy is expressed. Attitudes toward jealousy can also change within a culture over time. For example, attitudes toward jealousy changed substantially during the 1960s and 1970s in the United States. People in the United States adopted much more negative views about jealousy. As men and women became more equal it became less appropriate or acceptable to express jealousy.
Romantic jealousy
Romantic jealousy arises as a result of romantic interest.
It is defined as “a complex of thoughts, feelings, and actions that follow threats to self-esteem and/or threats to the existence or quality of the relationship when those threats are generated by the perception of a real or potential romantic attraction between one's partner and a (perhaps imaginary) rival.” Different from sexual jealousy, romantic jealousy is triggered by threats to self and relationship (rather than sexual interest in another person). Factors, such as feelings of inadequacy as a partner, sexual exclusivity, and having put relatively more effort into the relationship, are positively correlated to relationship jealousy in both genders.
Communicative responses
As romantic jealousy is a complicated reaction that has multiple components, i.e., thoughts, feelings, and actions, one aspect of romantic jealousy that is under study is communicative responses. Communicative responses serve three critical functions in a romantic relationship, i.e., reducing uncertainty, maintaining or repairing relationship, and restoring self-esteem. If done properly, communicative responses can lead to more satisfying relationships after experiencing romantic jealousy.
There are two subsets of communicative responses: interactive responses and general behavior responses. Interactive responses are face-to-face and partner-directed, while general behavior responses may not occur interactively. Guerrero and colleagues further categorize multiple types of communicative responses of romantic jealousy. Interactive responses can be broken down into six types falling in different places on continua of threat and directness:
Avoidance/Denial (low threat and low directness). Example: becoming silent; pretending nothing is wrong.
Integrative Communication (low threat and high directness). Example: explaining feelings; calmly questioning partner.
Active Distancing (medium threat and medium directness). Example: decreasing affection.
Negative Affect Expression (medium threat and medium directness). Example: venting frustration; crying or sulking.
Distributive Communication (high threat and high directness). Example: acting rude; making hurtful or abrasive comments.
Violent Communication/Threats (high threat and high directness). Example: using physical force.
Guerrero and colleagues have also proposed five general behavior responses. The five sub-types differ in whether a response is 1) directed at partner or rival(s), 2) directed at discovery or repair, and 3) positively or negatively valenced:
Surveillance/ Restriction (rival-targeted, discovery-oriented, commonly negatively valenced). Example: observing rival; trying to restrict contact with partner.
Rival Contacts (rival-targeted, discovery-oriented/repair-oriented, commonly negatively valenced). Example: confronting rival.
Manipulation Attempts (partner-targeted, repair-oriented, negatively valenced). Example: tricking partner to test loyalty; trying to make partner feel guilty.
Compensatory Restoration (partner-targeted, repair-oriented, commonly positively valenced). Example: sending flowers to partner.
Violent Behavior (-, -, negatively valenced). Example: slamming doors.
While some of these communicative responses are destructive and aggressive, e.g., distributive communication and active distancing, some individuals respond to jealousy in a more constructive way. Integrative communication, compensatory restoration, and negative affect expression have been shown to lead to positive relation outcomes. One factor that affects the type of communicative responses elicited in an individual is emotions. Jealousy anger is associated with more aggressive communicative response while irritation tends to lead to more constructive communicative behaviors.
Researchers also believe that when jealousy is experienced it can be caused by differences in understanding the commitment level of the couple, rather than directly being caused by biology alone. The research identified that if a person valued long-term relationships more than being sexually exclusive, those individuals were more likely to demonstrate jealousy over emotional rather than physical infidelity.
Through a study conducted in three Spanish-speaking countries, it was determined that Facebook jealousy also exists. This Facebook jealousy ultimately leads to increased relationship jealousy, and study participants also displayed decreased self-esteem as a result of the Facebook jealousy.
Sexual jealousy
Sexual jealousy may be triggered when a person's partner displays sexual interest in another person. The feeling of jealousy may be just as powerful if one partner suspects the other is guilty of infidelity. Fearing that their partner will experience sexual jealousy, the person who has been unfaithful may lie about their actions in order to protect their partner. Experts often believe that sexual jealousy is in fact a biological imperative. It may be part of a mechanism by which humans and other animals ensure access to the best reproductive partners.
It seems that male jealousy in heterosexual relationships may be influenced by the female partner's phase in her menstrual cycle. In the period around and shortly before ovulation, males are found to display more mate-retention tactics, which are linked to jealousy. Furthermore, a male is more likely to employ mate-retention tactics if his partner shows more interest in other males, which is more likely to occur in the pre-ovulation phase.
Contemporary views on gender-based differences
According to Rebecca L. Ammon in The Osprey Journal of Ideas and Inquiry at UNF Digital Commons (2004), the Parental Investment Model, based on parental investment theory, posits that more men than women ratify sex differences in jealousy. In addition, more women than men consider emotional infidelity (fear of abandonment) more distressing than sexual infidelity. According to attachment theory, sex and attachment style make significant and unique interactive contributions to the distress experienced. Security within the relationship also heavily contributes to one's level of distress. These findings imply that psychological and cultural mechanisms regarding sex differences may play a larger role than expected. Attachment theory also claims to reveal how infants' attachment patterns are the basis for self-report measures of adult attachment. Although there are no sex differences in childhood attachment, individuals with dismissing behavior were more concerned with the sexual aspect of relationships. As a coping mechanism, these individuals would report sexual infidelity as more harmful. Moreover, research shows that adult attachment styles correspond strongly with the type of infidelity that occurred. Thus, psychological and cultural mechanisms are implied as unvarying differences in jealousy that play a role in sexual attachment.
In 1906, The American Journal of Psychology had reported that "the weight of quotable (male) authority is to the effect that women are more susceptible to jealousy". This claim was accompanied in the journal by a quote from Confucius: "The five worst maladies that afflict the female mind are indocility, discontent, slander, jealousy and silliness."
Emotional jealousy was predicted to be nine times more responsive in females than in males. This prediction also held that females experiencing emotional jealousy are more violent than men experiencing emotional jealousy.
There are distinct emotional responses to gender differences in romantic relationships. For example, due to paternity uncertainty in males, jealousy in males increases over sexual rather than emotional infidelity. According to research, women are more likely to be upset by signs of resource withdrawal (i.e., toward another female) than by sexual infidelity. A large amount of data supports this notion. However, one must also consider the life stage or experience one encounters in reference to the diverse responses to infidelity available. Research states that a componential view of jealousy consists of a specific set of emotions that serve the reproductive role. However, research shows that both men and women would be equally angry and assign blame for sexual infidelity, but women would be more hurt by emotional infidelity. Despite this, anger surfaces when a party is seen as responsible for some type of controllable behavior, and sexual conduct is not exempt: some behaviors and actions, such as sexual behavior, are controllable. Hurt feelings, however, are activated by relationship deviation. No sexual dimorphism in these responses has been found in either college or adult convenience samples. The Jealousy Specific Innate Model (JSIM) proved not to be innate, but may be sensitive to situational factors, and as a result it may only activate at certain life stages. For example, it was predicted that male jealousy decreases as the female partner's reproductive value decreases.
A second possibility is that the JSIM effect is not innate but cultural. Differences have been highlighted across socio-economic status, such as the divide between high-school and collegiate individuals. Moreover, individuals of both genders were angrier and blamed their partners more for sexual infidelities but were more hurt by emotional infidelity. Jealousy is composed of lower-level emotional states (e.g., anger and hurt) which may be triggered by a variety of events, not by differences in individuals' life stage. Although research has recognized the importance of early childhood experiences for the development of competence in intimate relationships, the early family environment has only recently begun to be examined as we age. Research on self-esteem and attachment theory suggests that individuals internalize early experiences within the family, which subconsciously translates into their personal view of their own worth and the value of being close to other individuals, especially in an interpersonal relationship.
In animals
A study by researchers at the University of California, San Diego, replicated with canines jealousy studies previously done on humans. They reported, in a paper published in PLOS ONE in 2014, that a significant number of dogs exhibited jealous behaviors when their human companions paid attention to dog-like toys, compared to when their human companions paid attention to non-social objects.
In addition, jealousy has been speculated to be a potential factor in incidences of aggression or emotional tension in dogs. Mellissa Starling, an animal behavior consultant at the University of Sydney, noted that "dogs are social animals and they obey a group hierarchy. Changes in the home, like the arrival of a baby, can prompt a family pet to behave differently to what one might expect."
Applications
In fiction, film, and art
Artistic depictions of jealousy occur in fiction, films, and other art forms such as painting and sculpture. Jealousy is a common theme in literature, art, theatre, and film. Often, it is presented as a demonstration of particularly deep feelings of love, rather than a destructive obsession.
A study by Ferris, Smith, Greenberg, and Smith looked into the way people saw dating and romantic relationships based on how many reality dating shows they watched. People who spent a large amount of time watching these reality dating shows "endorsed" or supported the "dating attitudes" shown on them, while people who did not spend time watching reality dating shows did not mirror the same ideas. This means that if someone watches a reality dating show that displays men and women reacting violently or aggressively towards their partner due to jealousy, they can mirror that behavior. This is reflected in romantic movies as well. Jessica R. Frampton conducted a study looking into romantic jealousy in movies. The study found that "230 instances of romantic jealousy were identified in the 51 top-grossing romantic comedies from 2002–2014". Some of the films did not display romantic jealousy, because they did not contain a rival or romantic competition, while others featured many examples; Forgetting Sarah Marshall, for instance, was said to contain "19 instances of romantic jealousy." Of the 230 instances, 58% were reactive jealousy and 31% possessive jealousy, while the remaining 11% displayed anxious jealousy, which was seen the least. Of the 361 reactions to jealousy that were identified, 53% were "destructive responses," 19% were constructive, 10% were avoidant, and the remaining 18% were "rival-focused responses," which led to the finding that "there was a higher than expected number of rival-focused responses to possessive jealousy."
In religion
Jealousy in religion examines how the scriptures and teachings of various religions deal with the topic of jealousy. Religions may be compared and contrasted on how they deal with two issues: concepts of divine jealousy, and rules about the provocation and expression of human jealousy.
Cross culture
A study was done to cross-examine jealousy across four different cultures: Ireland, Thailand, India, and the United States. These cultures were chosen to demonstrate differences in the expression of jealousy across cultures. The study posits that male-dominant cultures are more likely to express and reveal jealousy. The survey found that Thais are less likely to express jealousy than respondents in the other three cultures. This is because, in male-dominant cultures, men are in a way rewarded for showing jealousy, since some women interpret it as love. This can also be seen in romantic comedies: when males show they are jealous of a rival, or are emotionally jealous, women perceive it as the men caring more.
See also
References
Lyhda, Belcher (2009). "Different Types of Jealousy". livestrong.com
Notes
Further reading
Peter Goldie. The Emotions, A Philosophical Exploration. Oxford University Press, 2000.
W. Gerrod Parrott. Emotions in Social Psychology. Psychology Press, 2001.
Jesse J. Prinz. Gut Reactions: A Perceptual Theory of Emotions. Oxford University Press, 2004.
Jealousy among the Sangha, quoting Jeremy Hayward from his book on Chögyam Trungpa Rinpoche, Warrior-King of Shambhala: Remembering Chögyam Trungpa.
Hart, S. L. & Legerstee, M. (Eds.). Handbook of Jealousy: Theory, Research, and Multidisciplinary Approaches. Wiley-Blackwell, 2010.
Levy, Kenneth N. & Kelly, Kristen M. (Feb 2010). "Sex Differences in Jealousy: A Contribution From Attachment Theory". Psychological Science, vol. 21, pp. 168–173.
External links
Emotions
Narcissism
Philosophy of love
Personal life
bo:མིག་སེར།
he:קנאה
sw:Kijicho
hu:Féltékenység
new:ईर्ष्या
tr:Kıskançlık
yi:קנאה | Jealousy | [
"Biology"
] | 5,106 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
170,634 | https://en.wikipedia.org/wiki/Wabi-sabi | In traditional Japanese aesthetics, wabi-sabi is centered on the acceptance of transience and imperfection. The aesthetic is sometimes described as one of appreciating beauty that is "imperfect, impermanent, and incomplete" in nature. It is prevalent in many forms of Japanese art.
Wabi-sabi is a composite of two interrelated aesthetic concepts, wabi and sabi. According to the Stanford Encyclopedia of Philosophy, wabi may be translated as "subdued, austere beauty," while sabi means "rustic patina." Wabi-sabi is derived from the Buddhist teaching of the three marks of existence, specifically impermanence, suffering, and emptiness or absence of self-nature.
Characteristics of wabi-sabi aesthetics and principles include asymmetry, roughness, simplicity, economy, austerity, modesty, intimacy, and the appreciation of both natural objects and the forces of nature.
Description
Wabi-sabi can be described as "the most conspicuous and characteristic feature of what we think of as traditional Japanese beauty. It occupies roughly the same position in the Japanese pantheon of aesthetic values as do the Greek ideals of beauty and perfection in the West." Another description of wabi-sabi by Andrew Juniper notes that, "If an object or expression can bring about, within us, a sense of serene melancholy and a spiritual longing, then that object could be said to be wabi-sabi." For Richard Powell, "wabi-sabi nurtures all that is authentic by acknowledging three simple realities: nothing lasts, nothing is finished, and nothing is perfect."
When it comes to thinking about an English definition or translation of the words wabi and sabi, Andrew Juniper explains that, "They have been used to express a vast range of ideas and emotions, and so their meanings are more open to personal interpretation than almost any other word in the Japanese vocabulary." Therefore, an attempt to directly translate wabi-sabi may take away from the ambiguity that is so important to understanding how the Japanese view it.
After centuries of incorporating artistic and Buddhist influences from China, wabi-sabi eventually evolved into a distinctly Japanese ideal. Over time, the meanings of wabi and sabi changed to be more lighthearted and hopeful. Around 700 years ago, particularly among the Japanese nobility, understanding emptiness and imperfection was honored as tantamount to the first step to satori, or enlightenment. In today's Japan, the meaning of wabi-sabi is often condensed to "wisdom in natural simplicity". In art books, it is typically defined as "flawed beauty". Wabi-sabi artworks often emphasize the process of making the piece and that it is ultimately incomplete.
From an engineering or design point of view, wabi may be interpreted as the imperfect quality of any object, due to inevitable limitations in design and construction/manufacture especially with respect to unpredictable or changing usage conditions; in this instance, sabi could be interpreted as the aspect of imperfect reliability, or the limited mortality of any object, hence the phonological and etymological connection with the Japanese word sabi, "to rust". Although the kanji characters for "rust" are not the same as the sabi in wabi-sabi, the original spoken word (pre-kanji, yamato-kotoba) is believed to be one and the same.
Wabi and sabi both suggest sentiments of desolation and solitude. In the Mahayana Buddhist view of the universe, these may be viewed as positive characteristics, representing liberation from a material world and transcendence to a simpler life. Since Mahayana philosophy predicates that genuine understanding is reached through experience rather than words, wabi-sabi may best be appreciated non-verbally.
Although the wabi and sabi concepts are religious in origin, actual usage of the words in Japanese is often quite casual, in keeping with the syncretic nature of Japanese belief.
Education
In one sense wabi-sabi is a training whereby the student of wabi-sabi learns to find the most basic, natural objects interesting, fascinating and beautiful. Fading autumn leaves would be an example. Wabi-sabi can change the student's perception of the world to the extent that a chip or crack in a vase makes it more interesting and gives the object greater meditative value. Similarly, materials that age such as bare wood, paper and fabric become more interesting as they exhibit changes that can be observed over time.
History
Wabi-sabi has roots in ancient Chinese Taoism and Zen Buddhism. It started to shape Japanese culture when the Zen priest Murata Jukō (村田珠光, 1423–1502) modified the tea ceremony. He introduced simple, rough, wooden and clay instruments to replace the gold, jade, and porcelain of the Chinese style tea service that was popular at the time. About one hundred years later, the tea master Sen no Rikyū (千利休, 1522 – April 21, 1591) introduced wabi-sabi to the royalty with his design of the teahouse. "He constructed a teahouse with a door so low that even the emperor would have to bow in order to enter, reminding everyone of the importance of humility before tradition, mystery, and spirit."
In Japanese arts
At first, something that exhibited wabi-sabi qualities could only be discovered; it could be "found in the simple dwellings of the farmers that dotted the landscape, epitomized in neglected stone lanterns overgrown with moss or in simple bowls and other household utensils used by the common folk." However, towards the end of the late medieval period, the ruling class began using these aesthetic values to intentionally create "tea ceremony utensils, handicrafts, tea ceremony rooms and cottages, homes, gardens, even food and sweets, and above all manners and etiquette."
Many forms of Japanese art have been influenced by Zen and Mahayana philosophy over the past thousand years, with the concepts of the acceptance and contemplation of imperfection, and constant flux and impermanence of all things being particularly important to Japanese arts and culture. Accordingly, many Japanese art forms can be seen to encapsulate and exemplify the ideals of wabi-sabi.
Garden design
Japanese gardens started out as very simple open spaces that were meant to encourage kami, or spirits, to visit. During the Kamakura period Zen ideals began to influence the art of garden design in Japan. Temple gardens were decorated with large rocks and other raw materials to build Karesansui or Zen rock gardens. "Their designs imbued the gardens with a sense of the surreal and beckoned viewers to forget themselves and become immersed in the seas of gravel and the forests of moss. By loosening the rigid sense of perception, the actual scales of the garden became irrelevant and the viewers were able to then perceive the huge landscapes deep within themselves."
Tea gardens
Due to the tea garden’s close relationship with the tea ceremony, "the tea garden became one of the richest expressions of wabi sabi." These small gardens would usually include many elements of wabi-sabi style design. They were designed in a way that set the scene for the visitor to make their own interpretations and put them in the state of mind in order to participate in the tea ceremony.
Poetry
Japanese poetry such as tanka and haiku are very short and focus on the defining attributes of a scene. "By withholding verbose descriptions the poem entices the reader to actively participate in the fulfillment of its meaning and, as with the Zen gardens, to become an active participant in the creative process." One of the most famous Japanese poets, Basho, was credited with establishing sabi as definitive emotive force in haiku. Many of his works, as with other wabi-sabi expressions, make no use of sentimentality or superfluous adjectives, only the "devastating imagery of solitude."
Ceramics
As the preference for the more simplistic and modest was on the rise, Zen masters found the ornate ceramics from China less and less attractive and too ostentatious. Potters began to experiment with a more free expression of beauty and strayed away from uniformity and symmetry. New kilns gave potters new colors, forms, and textures, allowing them to create pieces that were very unique and nonuniform. These potters used a specific kind of firing which was thought to produce the best ceramics due to the part played by nature and the organic ash glazes, a clear embodiment of wabi-sabi.
For example, Hon'ami Kōetsu's (本阿弥 光悦; 1558 – 27 February 1637) white raku bowl called "Mount Fuji" (Shiroraku-Chawan, Fujisan) is listed as a national treasure by the Japanese government.
Kintsugi, a specific technique that uses gold lacquer to repair broken pottery, is considered a wabi-sabi expression.
Flower arrangement
Sen no Rikyu saw the rikka style that was popular at the time and disliked its adherence to formal rules. He did away with the formalism and the opulent vases from China, using only the simplest vases for the flower displays (chabana) in his tea ceremonies. Instead of using more impressive flowers, he insisted on the use of wildflowers. "Ikebana, like the gardens, uses a living medium in the creative process, and it is this ingredient of life that brings a unique feel to flower arrangements."
Ikebana then became a very important part of the tea ceremony, and the flowers were treated with the utmost respect. "When a tea-master has arranged a flower to his satisfaction he will place it on the tokonoma, the place of honour in a Japanese room. It rests there like an enthroned prince, and the guests or disciples on entering the room will salute it with a profound bow before making their addresses to the host."
Other examples
Honkyoku (the traditional shakuhachi (bamboo flute) music of wandering Zen monks)
A contemporary Japanese exploration of the concept of wabi-sabi can be found in the influential essay In Praise of Shadows by Jun'ichirō Tanizaki.
Other examples include:
The cultivation of bonsai (miniature trees) – a typical bonsai design features wood with a rough texture, pieces of deadwood, and trees with hollow trunks, all intended to highlight the passage of time and nature. Bonsai are often displayed in the autumn or after they have shed leaves for the winter, in order to admire their bare branches.
Tea ceremony.
Influence upon the West
Wabi-sabi has been employed in the Western world in a variety of contexts, including in the arts, technology, media, and mental health, among others.
The arts
Many Western designers, writers, poets and artists have utilised wabi-sabi ideals within their work to varying degrees, with some considering the concept a key component of their art, and others using it only minimally.
Designer Leonard Koren (born 1948) published Wabi-Sabi: for Artists, Designers, Poets & Philosophers (1994) as an examination of wabi-sabi, contrasting it with Western ideals of beauty. According to Penelope Green, Koren's book subsequently "became a talking point for a wasteful culture intent on penitence and a touchstone for designers of all stripes." This is the book that first introduced the term "wabi-sabi" into Western aesthetic discourse.
Wabi-sabi concepts historically had extreme importance in the development of Western studio pottery; Bernard Leach (1887–1979) was deeply influenced by Japanese aesthetics and techniques, which is evident in his foundational book A Potter's Book.
The work of American artist John Connell (1940–2009) is also considered to be centered on the idea of wabi-sabi; other artists who have employed the idea include former Stuckist artist and remodernist filmmaker Jesse Richards (born 1975), who employs it in nearly all of his work, along with the concept of mono no aware.
Some haiku in English also adopt the wabi-sabi aesthetic in written style, creating spare, minimalist poems that evoke loneliness and transience, such as Nick Virgilio's "autumn twilight:/ the wreath on the door/ lifts in the wind".
Technology
During the 1990s, the concept was borrowed by computer software developers and employed in agile programming and wiki, used to describe acceptance of the ongoing imperfection of computer programming produced through these methods.
Mental health
Wabi-sabi has been evoked in a mental health context as a helpful concept for reducing perfectionist thinking.
In media
In 2009, Marcel Theroux presented "In Search of Wabi Sabi" on BBC Four, as part of the channel's Hidden Japan season of programming, travelling throughout Japan trying to understand the aesthetic tastes of its people. Theroux began by comically enacting a challenge from the book Living Wabi Sabi by Taro Gold, asking members of the public on a street in Tokyo to describe wabi-sabi – the results of which showed that, just as Gold predicted, "they will likely give you a polite shrug and explain that Wabi Sabi is simply unexplainable."
See also
Clinamen
Higashiyama Bunka in the Muromachi period
(a Japanese aesthetic ideal)
Teaism
(also known as )
Tao Te Ching
I Ching
Perfect is the enemy of good
References
Bibliography
Davies, Roger and Osamu Ikeno (Eds.) (2002). The Japanese Mind. Tuttle Publishing. pp. 223–231. .
Burnham, Robert Jr. (1978). Burnham's Celestial Handbook, an observer's guide to the universe beyond the solar system, Volume III: Pavo through Vulpecula, pages 1625-1626 (Dover books, ISBN 0-486-23673-0).
External links
In Search of Wabi Sabi with Marcel Theroux
Chadō
Concepts in aesthetics
Design
Japanese aesthetics
Japanese literary terminology
Japanese style of gardening
Japanese words and phrases
Landscape design history
Low-energy building
Sustainable building
Words and phrases with no direct English translation
Japanese crafts | Wabi-sabi | [
"Engineering"
] | 2,740 | [
"Construction",
"Sustainable building",
"Design",
"Building engineering"
] |
170,734 | https://en.wikipedia.org/wiki/Smegma | Smegma (from Ancient Greek σμῆγμα, 'soap') is a combination of shed skin cells, skin oils, and moisture. It occurs in both male and female mammalian genitalia. In females, it collects around the clitoris and in the folds of the labia minora; in males, smegma collects under the foreskin.
Females
The accumulation of sebum combined with dead skin cells forms smegma. Smegma clitoridis is defined as the secretion of the apocrine (sweat) and sebaceous (sebum) glands of the clitoris in combination with desquamating epithelial cells. Glands that are located around the clitoris, the labia minora, and the labia majora secrete sebum.
If smegma is not removed frequently it can lead to clitoral adhesion which can make clitoral stimulation (such as masturbation) painful (clitorodynia).
Males
In males, smegma helps keep the glans moist and facilitates sexual intercourse by acting as a lubricant.
Smegma was originally thought to be produced by sebaceous glands near the frenulum called Tyson's glands; however, subsequent studies have failed to find these glands. Joyce Wright states that smegma is produced from minute microscopic protrusions of the mucosal surface of the foreskin and that living cells constantly grow towards the surface, undergo fatty degeneration, separate off, and form smegma. Parkash et al. found that smegma contains 26.6% fats and 13.3% proteins, which they judged to be consistent with necrotic epithelial debris.
Newly produced smegma has a smooth, moist texture. It is thought to be rich in squalene and contain prostatic and seminal secretions, desquamated epithelial cells, and the mucin content of the urethral glands of Littré. Smegma contains cathepsin B, lysozymes, chymotrypsin, neutrophil elastase and cytokines, which aid the immune system.
According to Wright, the production of smegma, which is low in childhood, increases from adolescence until sexual maturity when the function of smegma for lubrication assumes its full value. From middle-age, production starts to decline and in old age virtually no smegma is produced. Jakob Øster reported that the incidence of smegma increased from 1% among 6- to 9-year-olds to 8% among 14- to 17-year-olds (amongst those who did not present with phimosis and could be examined).
Clinical significance and hygiene
The production of smegma, which increases during puberty, can only be of limited significance, as males and females learn to practice good genital hygiene.
In men, accumulated smegma can cause irritation and inflammation, which can increase the risk of penile cancer. In the past, some experts were concerned that smegma itself might cause cancer.
Other animals
In healthy animals, smegma helps clean and lubricate the genitals. In veterinary medicine, analysis of this smegma is sometimes used for detection of urogenital tract pathogens, such as Tritrichomonas foetus. Accumulation of smegma in the equine preputial folds and the urethral fossa and urethral diverticulum can form large "beans" and promote the carriage of Taylorella equigenitalis, the causative agent of contagious equine metritis. Some equine veterinarians have recommended periodic cleaning of male genitals to improve the health of the animal.
See also
Keratin pearl
List of cutaneous conditions
Mycobacterium smegmatis – found in smegma
References
External links
Excretion
Exocrine system
Mammal female reproductive system
Mammal male reproductive system | Smegma | [
"Biology"
] | 818 | [
"Exocrine system",
"Organ systems",
"Excretion"
] |
170,736 | https://en.wikipedia.org/wiki/Understory | In forestry and ecology, understory (American English), or understorey (Commonwealth English), also known as underbrush or undergrowth, includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor. Only a small percentage of light penetrates the canopy so understory vegetation is generally shade-tolerant. The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines and undergrowth. Small trees such as holly and dogwood are understory specialists.
In temperate deciduous forests, many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upwards to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns, mosses, and fungi and encourages nutrient recycling, which provides favorable habitats for many animals and plants.
Understory structure
The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor.
Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast understory shrubs complete their life cycles in the shade of the forest canopy. Some smaller tree species, such as dogwood and holly, rarely grow tall and generally are understory trees.
The canopy of a tropical forest is typically about thick, and intercepts around 95% of the sunlight. The understory therefore receives less intense light than plants in the canopy and such light as does penetrate is impoverished in wavelengths of light that are most effective for photosynthesis. Understory plants therefore must be shade tolerant—they must be able to photosynthesize adequately using such light as does reach their leaves. They often are able to use wavelengths that canopy plants cannot. In temperate deciduous forests towards the end of the leafless season, understory plants take advantage of the shelter of the still leafless canopy plants to "leaf out" before the canopy trees do. This is important because it provides the understory plants with a window in which to photosynthesize without the canopy shading them. This brief period (usually 1–2 weeks) is often a crucial period in which the plant can maintain a net positive carbon balance over the course of the year.
As a rule forest understories also experience higher humidity than exposed areas. The forest canopy reduces solar radiation, so the ground does not heat up or cool down as rapidly as open ground. Consequently, the understory dries out more slowly than more exposed areas do. The greater humidity encourages epiphytes such as ferns and mosses, and allows fungi and other decomposers to flourish. This drives nutrient cycling, and provides favorable microclimates for many animals and plants, such as the pygmy marmoset.
See also
Fire-stick farming
Layers of rainforests
Overgrazing
References
External links
https://www.eolss.net/sample-chapters/C10/E5-03-01-08.pdf
Biology terminology
Forest ecology
Forests
Habitat
Plants
Plants by habitat | Understory | [
"Biology"
] | 778 | [
"Plants",
"Plants by habitat",
"Forests",
"Organisms by habitat",
"Ecosystems",
"nan"
] |
170,757 | https://en.wikipedia.org/wiki/Cyclotron%20radiation | In particle physics, cyclotron radiation is electromagnetic radiation emitted by non-relativistic accelerating charged particles deflected by a magnetic field. The Lorentz force on the particles acts perpendicular to both the magnetic field lines and the particles' motion through them, creating an acceleration of charged particles that causes them to emit radiation as a result of the acceleration they undergo as they spiral around the lines of the magnetic field.
The name of this radiation derives from the cyclotron, a type of particle accelerator used since the 1930s to create highly energetic particles for study. The cyclotron makes use of the circular orbits that charged particles exhibit in a uniform magnetic field. Furthermore, the period of the orbit is independent of the energy of the particles, allowing the cyclotron to operate at a set frequency. Cyclotron radiation is emitted by all charged particles travelling through magnetic fields, not just those in cyclotrons. Cyclotron radiation from plasma in the interstellar medium or around black holes and other astronomical phenomena is an important source of information about distant magnetic fields.
Properties
The power (energy per unit time) of the emission of each electron can be calculated:

P = -dE/dt = σ_T B² v² / (c μ₀)

where E is energy, t is time, σ_T is the Thomson cross section (total, not differential), B is the magnetic field strength, v is the velocity perpendicular to the magnetic field, c is the speed of light and μ₀ is the permeability of free space.
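As a rough illustration of this formula (a sketch, not part of the original article), the snippet below evaluates the single-electron power for an assumed 1 T field and a non-relativistic electron moving at 1% of the speed of light; the field strength, velocity, and function name are example choices only.

```python
import math

# Physical constants (SI units)
MU_0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
C = 2.998e8                 # speed of light, m/s
SIGMA_T = 6.652e-29         # Thomson cross section, m^2

def cyclotron_power(b_field, v_perp):
    """Power radiated by one electron: -dE/dt = sigma_T * B^2 * v^2 / (c * mu_0)."""
    return SIGMA_T * b_field**2 * v_perp**2 / (C * MU_0)

# Example values (assumed): 1 T field, electron at 0.01 c
print(cyclotron_power(1.0, 0.01 * C))  # ~1.6e-18 W per electron
```

Even in a strong laboratory field the per-electron power is tiny, which is why observable cyclotron emission usually comes from very large numbers of electrons.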
Cyclotron radiation has a spectrum with its main spike at the same fundamental frequency as the particle's orbit, and harmonics at higher integral factors. Harmonics are the result of imperfections in the actual emission environment, which also create a broadening of the spectral lines. The most obvious source of line broadening is non-uniformities in the magnetic field; as an electron passes from one area of the field to another, its emission frequency will change with the strength of the field. Other sources of broadening include collisional broadening as the electron will invariably fail to follow a perfect orbit, distortions of the emission caused by interactions with the surrounding plasma, and relativistic effects if the charged particles are sufficiently energetic. When the electrons are moving at relativistic speeds, cyclotron radiation is known as synchrotron radiation.
The recoil experienced by a particle emitting cyclotron radiation is called radiation reaction. Radiation reaction acts as a resistance to motion in a cyclotron; and the work necessary to overcome it is the main energetic cost of accelerating a particle in a cyclotron. Cyclotrons are prime examples of systems which experience radiation reaction.
Examples
In the context of magnetic fusion energy, cyclotron radiation losses translate into a requirement for a minimum plasma energy density in relation to the magnetic field energy density.
Cyclotron radiation would likely be produced in a high altitude nuclear explosion. Gamma rays produced by the explosion would ionize atoms in the upper atmosphere and those free electrons would interact with the Earth's magnetic field to produce cyclotron radiation in the form of an electromagnetic pulse (EMP). This phenomenon is of concern to the military as the EMP may damage solid state electronic equipment.
See also
Auroral kilometric radiation (AKR)
Bremsstrahlung
Beamstrahlung
Synchrotron radiation
Free electron laser
Larmor formula
References
Electromagnetic radiation
Plasma phenomena
Experimental particle physics | Cyclotron radiation | [
"Physics"
] | 700 | [
"Physical phenomena",
"Plasma physics",
"Electromagnetic radiation",
"Plasma phenomena",
"Radiation",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
170,768 | https://en.wikipedia.org/wiki/Urban%20secession | Urban secession is a city's secession from its surrounding region to form a new political unit.
This new unit is usually a subdivision of the same country as its surroundings. Many cities around the world form a separate local government unit. The most common reason is that the population of the city is too large for the city to be subsumed into a larger local government unit.
However, in a few cases, full sovereignty may be attained, in which case the unit is usually called a city-state. It is an extreme form of urban autonomy, which can be expressed in less formal terms or by ordinary legislation such as a city charter.
History
Urban autonomy has a long history, going back to prehistoric urbanization and the original Mediterranean city-states of classical times, e.g. Ancient Athens and Ancient Rome. In medieval times, such measures as the Magdeburg rights established special status for cities and their residents in commercial relations. In general, urban autonomy receded as European cities were incorporated into nation-states, especially from the 17th to the 20th century, with cities eventually losing many special rights.
Theory of urban secession
Modern theorists of local civic economies, including Robert J. Oakerson and Jane Jacobs, argue that cities reflect a clash of values, especially of tolerances versus preferences, with views of the city varying from a pure community to that of a pure marketplace. Suburbanites have a strong tendency to view the city as a marketplace since they do not participate in its street life voluntarily, nor do they consider the city to be a safe and comfortable place to live in. By contrast, those who choose downtown living tend to see it as more of a community, but must pay careful attention to their tolerances (for smog, noise pollution, crime, taxation, etc.). Ethics and thus politics of these interest groups vastly differ.
Secession (the setup of entirely new legislative and executive entities) is advocated by certain urban theorists, notably Jane Jacobs, as the only way to deal politically with these vast differences in culture between modern cities and even their nearest suburbs and essential watersheds. She stated that "cities that wish to thrive in the next century must separate politically from their surrounding regions." She rejected the lesser "Charter" and less formal solutions, arguing that the full structure of real regional government was necessary, and applied to the urban area alone. In particular, she rejected the idea that suburban regions should have any say over the rules in the city: "they have left it, and aren't part of it." Jacobs herself lived in an urban neighborhood (The Annex, Toronto) which would have been paved over in the 1970s by a highway project to serve the suburbs, the Spadina Expressway, had the proponents of urban secession not stopped it. Jacobs likewise took part in blocking the development of the Lower Manhattan Expressway in the 1960s, opposing Robert Moses. These freeways are examples of the clash of urban community versus suburban market interests.
Advocates of highway development and suburban participation in urban government theorize that cities which protect themselves from the suburbs, forcing them to become self-sufficient small towns, cutting off the freeways, forcing commuters into subways, etc., are committing suicide by forcing business out into the suburbs. Advocates respond that cities depend more on their quality of life to attract migrants and professionals, and that remote work makes it possible for workers in the city to live anywhere, coming into town less frequently, without the rush.
Examples
City-states and semi-autonomous cities
Fully autonomous cities, or city-states, are the objective of urban secession. A modern instance of a city-state is the Republic of Singapore, which was expelled from Malaysia for a variety of political and social reasons. Currently, Singapore is the sole city which has total independence, an indigenous currency, and a substantial military. The other two existing and de jure city-states are Monaco and Vatican City, which operate as, nominally, politically independent urban areas.
Certain cities like Hong Kong and Macau have a degree of autonomy, but not full political independence. Specifically, the status of a special administrative regions (SAR) has been conferred upon Hong Kong and Macau in the People's Republic of China. The reason for their relative autonomy stems from their existence for more than a century as European colonies, and a resultant difficulty of full reintegration.
Asia
In China, both Beijing and Tianjin are independent of the surrounding province of Hebei, of which they were formerly a part. Similarly, Shanghai is now independent from Jiangsu and Chongqing from Sichuan.
In Japan, Tokyo, as well as being a city, forms a prefecture, falling into a special category of "metropolitan prefecture" having some of the attributes of a city and some of a prefecture. Within Tokyo, there are smaller units, "wards", "cities", "towns", etc., but some of the responsibilities normally assigned to cities and towns in other Japanese prefectures are handled by the Tokyo metropolitan government instead.
In both South Korea and North Korea, special cities are independent from their surrounding provinces and city-states under direct governance from the central government. Examples are Seoul, Busan, Daegu, Incheon, Gwangju, Daejeon and Ulsan in South Korea and Pyongyang and Rason in North Korea. In South Korea, the main criterion for granting secession from the province is a population reaching one million.
Taiwan, officially the Republic of China, administers six cities, formerly part of the Republic of China's Taiwan Province, as special municipalities: Kaohsiung, Taichung, Tainan, Taipei, New Taipei and Taoyuan. (The People's Republic of China, which claims Taiwan, continues to recognise these municipalities as an integral part of PRC's purported Taiwan Province; the People's Republic of China regards Taiwan as its 23rd province, with Taipei as its capital.)
In Indonesia, the capital Jakarta was once part of West Java until it gained special autonomy status and broke away from its former province in 1961. The mayoral position was replaced by a governor, making it a special autonomous province that operates independently of its surrounding provinces.
The Malaysian capitals Kuala Lumpur and Putrajaya, as well as Labuan island, were once part of Selangor and Sabah respectively. In 1974, Kuala Lumpur was declared the first Federal Territory in Malaysia in order to prevent a clash between the Selangor state government and the federal government; the state capital of Selangor was later moved to nearby Shah Alam. In 1984, Labuan was chosen by the federal government for the development of an offshore financial centre and was declared the second Federal Territory after Kuala Lumpur. Putrajaya was declared the third Federal Territory in 2001 after the federal government finished developing the city as the new federal capital, while Kuala Lumpur remains the royal capital.
In Thailand, the capital Bangkok operates independently of any province and is considered a special administrative area. It is a primate city in terms of its large population, having nearly 8% of Thailand's total population.
Europe
The Brussels capital region, a densely built-up area consisting of 19 communes including the capital city Brussels, became one of Belgium's three regions after the country was turned into a federation in 1970. (In Belgium there are special circumstances due to the country's language communities.)
In Bulgaria the capital Sofia is an oblast of its own - Sofia-grad, while the surrounding area is divided between the Pernik Oblast and the Sofia Oblast.
Madrid has its own autonomous region, including a regional parliament even though Spain is a unitary state.
Paris and the Lyon Metropolis are their own departments in France.
The capital city of Bucharest is also a county within Romania.
Moscow and Saint Petersburg, the biggest cities in Russia, have a federal city status. Following the 2014 annexation of Crimea by the Russian Federation, the city of Sevastopol is also administered as a federal city, though Ukraine and most of the UN member countries continue to regard Sevastopol as a city with special status within Ukraine.
In the United Kingdom, London secessionism has gathered momentum following the Brexit referendum, when the UK as a whole voted to leave the European Union, but Greater London, which is its own region (unlike other urban areas in the UK), voted to remain in the EU.
German-speaking countries
In Germany there are two cities — Berlin and Hamburg — that are Bundesländer in themselves (thus, they are city-states within a federal system). The Free Hanseatic City of Bremen is also a city-state, comprising two cities: Bremen and Bremerhaven. At the district level, many large and medium-sized cities form their own district-free cities (German: Kreisfreie Städte).
The city of Vienna is a federal state within the Republic of Austria. As in Germany, many large and medium-sized cities in Austria are separate from the regular districts, instead forming their own statutory cities (German: Statutarstädte).
One of the cantons of Switzerland, Basel-Stadt, is a city-state.
North America
There are no city-states in North America. The District of Columbia in the United States and Distrito Federal in Mexico are federal government districts and not ordinary municipalities. As such, they are subject to the direct authority, respectively, of the U.S. and Mexican federal governments. The residents of Washington, D.C. did not elect their own mayor and city council until 1972, when the United States Congress extended home rule to the city. However, the actions of the mayor and city council must still be approved, at least retroactively, by the Congress, and no legislation passed by the Government of the District of Columbia can take effect until and unless the U.S. Congress approves it.
Canada
Urban secession is one of many possible solutions pondered by some Canadian cities as they contemplate their problems. It is one that is considered politically useful because of the strong secessionist movement in Quebec, as well as the weaker secessionist movements in Newfoundland (formerly independent), Alberta and British Columbia.
In Quebec, with a secessionist movement and linguistic dichotomy, the division of a newly independent Quebec has been a strong undercurrent, with some having a Province of Montreal remaining in Canada, sometimes containing only the West Island and the West Shore of Montreal.
For many decades, the urban communities of Toronto, Montreal and Vancouver have been configured separately from their respective provinces, for purposes of apportioning Members of Parliament after the national censuses conducted every five years.
United States
Various proposals have been made for New York City to secede from New York State. On a lower level, some states permit or have permitted a city to secede from its county and become a county-equivalent jurisdiction in its own right. Whether the new county-equivalent jurisdiction is considered to be a consolidated city-county like Philadelphia, Pennsylvania or San Francisco, California or an independent city like St. Louis, Missouri is a matter for each such state to decide. In November 2018, the Georgia General Assembly allowed voters in a wealthy enclave of Stockbridge, Georgia to decide if they wanted to secede, which they then declined to do. In Ohio, hundreds of cities and villages have withdrawn from their surrounding townships by forming paper townships.
Oceania
The 2007 Royal Commission on Auckland Governance was set up by the New Zealand Government to investigate possible changes to the administration of Auckland. The city was in 2009 named as the country's only supercity with the merging of several former councils, and in 2010 the Auckland Region became a unitary authority governed by the Auckland Council. Suggestion has since been made that the region could become an independent city state.
See also
Libertarian municipalism
Free City of Danzig
Italian Regency of Carnaro
Localism (politics)
References
External links
BBC: Are cities the new countries?
Autonomy
Local government in the United States
Localism (politics)
Secession
Urban planning | Urban secession | [
"Engineering"
] | 2,389 | [
"Urban planning",
"Architecture"
] |
170,808 | https://en.wikipedia.org/wiki/Synchrotron%20radiation | Synchrotron radiation (also known as magnetobremsstrahlung) is the electromagnetic radiation emitted when relativistic charged particles are subject to an acceleration perpendicular to their velocity (a ⊥ v). It is produced artificially in some types of particle accelerators or naturally by fast electrons moving through magnetic fields. The radiation produced in this way has a characteristic polarization, and the frequencies generated can range over a large portion of the electromagnetic spectrum.
Synchrotron radiation is similar to bremsstrahlung radiation, which is emitted by a charged particle when the acceleration is parallel to the direction of motion. The general term for radiation emitted by particles in a magnetic field is gyromagnetic radiation, for which synchrotron radiation is the ultra-relativistic special case. Radiation emitted by charged particles moving non-relativistically in a magnetic field is called cyclotron emission. For particles in the mildly relativistic range (≈85% of the speed of light), the emission is termed gyro-synchrotron radiation.
In astrophysics, synchrotron emission occurs, for instance, due to ultra-relativistic motion of a charged particle around a black hole. When the source follows a circular geodesic around the black hole, the synchrotron radiation occurs for orbits close to the photosphere where the motion is in the ultra-relativistic regime.
History
Synchrotron radiation was first observed by technician Floyd Haber, on April 24, 1947, at the 70 MeV electron synchrotron of the General Electric research laboratory in Schenectady, New York. While this was not the first synchrotron built, it was the first with a transparent vacuum tube, allowing the radiation to be directly observed.
As recounted by Herbert Pollock:
Description
A direct consequence of Maxwell's equations is that accelerated charged particles always emit electromagnetic radiation. Synchrotron radiation is the special case of charged particles moving at relativistic speed undergoing acceleration perpendicular to their direction of motion, typically in a magnetic field. In such a field, the force due to the field is always perpendicular to both the direction of motion and to the direction of field, as shown by the Lorentz force law.
The power carried by the radiation is found (in SI units) by the relativistic Larmor formula:

P = q² a² γ⁴ / (6π ε₀ c³) = q² c β⁴ γ⁴ / (6π ε₀ ρ²)
where
ε₀ is the vacuum permittivity,
q is the particle charge,
a is the magnitude of the acceleration,
c is the speed of light,
γ is the Lorentz factor,
β = v/c,
ρ is the radius of curvature of the particle trajectory.
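To give a sense of the magnitudes involved, the sketch below (not from the original article; the beam energy, bending radius, and function name are assumed example values) evaluates the second form of the formula for a single ultra-relativistic electron.

```python
import math

# Physical constants (SI units)
EPS_0 = 8.854e-12   # vacuum permittivity, F/m
C = 2.998e8         # speed of light, m/s
Q_E = 1.602e-19     # elementary charge, C
M_E = 9.109e-31     # electron mass, kg

def synchrotron_power(q, gamma, rho, beta=1.0):
    """Radiated power P = q^2 * c * beta^4 * gamma^4 / (6 * pi * eps_0 * rho^2)."""
    return q**2 * C * beta**4 * gamma**4 / (6 * math.pi * EPS_0 * rho**2)

# Example (assumed values): a 5 GeV electron on a 100 m bending radius
energy_joules = 5e9 * Q_E
gamma = energy_joules / (M_E * C**2)          # Lorentz factor, roughly 1e4
print(synchrotron_power(Q_E, gamma, 100.0))   # ~4e-8 W per electron

# At equal energy, gamma scales as 1/mass, so the gamma^4 factor makes an electron
# radiate roughly (m_p / m_e)^4, about 1836^4 or 1e13, times more than a proton.
```

The final comment illustrates why electron machines are far more affected by synchrotron losses than proton machines of the same energy, as discussed in the next section.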
The force on the emitting electron is given by the Abraham–Lorentz–Dirac force.
When the radiation is emitted by a particle moving in a plane, the radiation is linearly polarized when observed in that plane, and circularly polarized when observed at a small angle. Considering quantum mechanics, however, this radiation is emitted in discrete packets of photons and has significant effects in accelerators called quantum excitation. For a given acceleration, the average energy of emitted photons is proportional to γ³ and the emission rate to γ.
From accelerators
Circular accelerators will always produce gyromagnetic radiation as the particles are deflected in the magnetic field. However, the quantity and properties of the radiation are highly dependent on the nature of the acceleration taking place. For example, due to the difference in mass, the factor of γ⁴ in the formula for the emitted power means that electrons radiate energy at approximately 10¹³ times the rate of protons.
Energy loss from synchrotron radiation in circular accelerators was originally considered a nuisance, as additional energy must be supplied to the beam in order to offset the losses. However, beginning in the 1980s, circular electron accelerators known as light sources have been constructed to deliberately produce intense beams of synchrotron radiation for research.
In astronomy
Synchrotron radiation is also generated by astronomical objects, typically where relativistic electrons spiral (and hence change velocity) through magnetic fields.
Two of its characteristics include power-law energy spectra and polarization. It is considered to be one of the most powerful tools in the study of extra-solar magnetic fields wherever relativistic charged particles are present. Most known cosmic radio sources emit synchrotron radiation. It is often used to estimate the strength of large cosmic magnetic fields as well as analyze the contents of the interstellar and intergalactic media.
History of detection
This type of radiation was first detected in the Crab Nebula in 1956 by Jan Hendrik Oort and Theodore Walraven, and a few months later in a jet emitted by Messier 87 by Geoffrey R. Burbidge. It was confirmation of a prediction by Iosif S. Shklovsky in 1953. However, it had been predicted earlier (1950) by Hannes Alfvén and Nicolai Herlofson. Solar flares accelerate particles that emit in this way, as suggested by R. Giovanelli in 1948 and described by J.H. Piddington in 1952.
T. K. Breus noted that questions of priority on the history of astrophysical synchrotron radiation are complicated, writing:
From supermassive black holes
It has been suggested that supermassive black holes produce synchrotron radiation in "jets", generated by the gravitational acceleration of ions in their polar magnetic fields. The nearest such observed jet is from the core of the galaxy Messier 87. This jet is interesting for producing the illusion of superluminal motion as observed from the frame of Earth. This phenomenon is caused because the jets are traveling very near the speed of light and at a very small angle towards the observer. Because at every point of their path the high-velocity jets are emitting light, the light they emit does not approach the observer much more quickly than the jet itself. Light emitted over hundreds of years of travel thus arrives at the observer over a much smaller time period, giving the illusion of faster than light travel, despite the fact that there is actually no violation of special relativity.
Pulsar wind nebulae
A class of astronomical sources where synchrotron emission is important is pulsar wind nebulae, also known as plerions, of which the Crab nebula and its associated pulsar are archetypal.
Pulsed emission gamma-ray radiation from the Crab has recently been observed up to ≥25 GeV, probably due to synchrotron emission by electrons trapped in the strong magnetic field around the pulsar.
Polarization in the Crab nebula at energies from 0.1 to 1.0 MeV illustrates this typical property of synchrotron radiation.
Interstellar and intergalactic media
Much of what is known about the magnetic environment of the interstellar medium and intergalactic medium is derived from observations of synchrotron radiation. Cosmic ray electrons moving through the medium interact with relativistic plasma and emit synchrotron radiation which is detected on Earth. The properties of the radiation allow astronomers to make inferences about the magnetic field strength and orientation in these regions. However, accurate calculations of field strength cannot be made without knowing the relativistic electron density.
In supernovae
When a star explodes in a supernova, the fastest ejecta move at semi-relativistic speeds approximately 10% the speed of light. This blast wave gyrates electrons in ambient magnetic fields and generates synchrotron emission, revealing the radius of the blast wave at the location of the emission. Synchrotron emission can also reveal the strength of the magnetic field at the front of the shock wave, as well as the circumstellar density it encounters, but strongly depends on the choice of energy partition between the magnetic field, proton kinetic energy, and electron kinetic energy. Radio synchrotron emission has allowed astronomers to shed light on mass loss and stellar winds that occur just prior to stellar death.
See also
Notes
References
Brau, Charles A. Modern Problems in Classical Electrodynamics. Oxford University Press, 2004. .
Jackson, John David. Classical Electrodynamics. John Wiley & Sons, 1999.
External links
Cosmic Magnetobremsstrahlung (synchrotron Radiation), by Ginzburg, V. L., Syrovatskii, S. I., ARAA, 1965
Developments in the Theory of Synchrotron Radiation and its Reabsorption, by Ginzburg, V. L., Syrovatskii, S. I., ARAA, 1969
Lightsources.org
BioSync – a structural biologist's resource for high energy data collection facilities
X-Ray Data Booklet
Particle physics
Synchrotron-related techniques
Electromagnetic radiation
Experimental particle physics | Synchrotron radiation | [
"Physics"
] | 1,798 | [
"Physical phenomena",
"Electromagnetic radiation",
"Radiation",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
170,853 | https://en.wikipedia.org/wiki/Supergiant | Supergiants are among the most massive and most luminous stars. Supergiant stars occupy the top region of the Hertzsprung–Russell diagram with absolute visual magnitudes between about −3 and −8. The temperature range of supergiant stars spans from about 3,400 K to over 20,000 K.
Definition
The title supergiant, as applied to a star, does not have a single concrete definition. The term giant star was first coined by Hertzsprung when it became apparent that the majority of stars fell into two distinct regions of the Hertzsprung–Russell diagram. One region contained larger and more luminous stars of spectral types A to M and received the name giant. Subsequently, as they lacked any measurable parallax, it became apparent that some of these stars were significantly larger and more luminous than the bulk, and the term super-giant arose, quickly adopted as supergiant.
Supergiants with spectral classes of O to A are typically referred to as blue supergiants, supergiants with spectral classes F and G are referred to as yellow supergiants, while those of spectral classes K to M are red supergiants. Another convention uses temperature: supergiants with effective temperatures below 4800 K are deemed red supergiants; those with temperatures between 4800 and 7500 K are yellow supergiants, and those with temperatures exceeding 7500 K are blue supergiants. These correspond approximately to spectral types M and K for red supergiants, G, F, and late A for yellow supergiants, and early A, B, and O for blue supergiants.
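A minimal sketch of the temperature convention just described, assuming only the 4,800 K and 7,500 K boundaries given above; the example temperatures are illustrative and not taken from the article.

```python
def supergiant_colour_class(t_eff_kelvin):
    """Classify a supergiant as red, yellow, or blue using the temperature
    convention described above (boundaries at 4,800 K and 7,500 K)."""
    if t_eff_kelvin < 4800:
        return "red supergiant"
    if t_eff_kelvin <= 7500:
        return "yellow supergiant"
    return "blue supergiant"

# Assumed, purely illustrative effective temperatures.
for label, t_eff in [("cool M-type", 3600), ("F/G-type", 6000), ("early B-type", 20000)]:
    print(f"{label:12s} ({t_eff:>6} K) -> {supergiant_colour_class(t_eff)}")
```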
Spectral luminosity class
Supergiant stars can be identified on the basis of their spectra, with distinctive lines sensitive to high luminosity and low surface gravity. In 1897, Antonia C. Maury divided stars based on the widths of their spectral lines, with her class "c" identifying stars with the narrowest lines. Although it was not known at the time, these were the most luminous stars. In 1943, Morgan and Keenan formalised the definition of spectral luminosity classes, with class I referring to supergiant stars. The same system of MK luminosity classes is still used today, with refinements based on the increased resolution of modern spectra. Supergiants occur in every spectral class from young blue class O supergiants to highly evolved red class M supergiants. Because they are enlarged compared to main-sequence and giant stars of the same spectral type, they have lower surface gravities, and changes can be observed in their line profiles. Supergiants are also evolved stars with higher levels of heavy elements than main-sequence stars. This is the basis of the MK luminosity system, which assigns stars to luminosity classes purely from observing their spectra.
In addition to the line changes due to low surface gravity and fusion products, the most luminous stars have high mass-loss rates and resulting clouds of expelled circumstellar materials which can produce emission lines, P Cygni profiles, or forbidden lines. The MK system assigns stars to luminosity classes: Ib for supergiants; Ia for luminous supergiants; and 0 (zero) or Ia+ for hypergiants. In reality there is much more of a continuum than well defined bands for these classifications, and classifications such as Iab are used for intermediate luminosity supergiants. Supergiant spectra are frequently annotated to indicate spectral peculiarities, for example B2 Iae or F5 Ipec.
Evolutionary supergiants
Supergiants can also be defined as a specific phase in the evolutionary history of certain stars. Stars with initial masses above quickly and smoothly initiate helium core fusion after they have exhausted their hydrogen, and continue fusing heavier elements after helium exhaustion until they develop an iron core, at which point the core collapses to produce a Type II supernova. Once these massive stars leave the main sequence, their atmospheres inflate, and they are described as supergiants. Stars initially under will never form an iron core and in evolutionary terms do not become supergiants, although they can reach luminosities thousands of times the sun's. They cannot fuse carbon and heavier elements after the helium is exhausted, so they eventually just lose their outer layers, leaving the core of a white dwarf. The phase where these stars have both hydrogen and helium burning shells is referred to as the asymptotic giant branch (AGB), as stars gradually become more and more luminous class M stars. Stars of may fuse sufficient carbon on the AGB to produce an oxygen-neon core and an electron-capture supernova, but astrophysicists categorise these as super-AGB stars rather than supergiants.
Categorisation of evolved stars
There are several categories of evolved stars that are not supergiants in evolutionary terms but may show supergiant spectral features or have luminosities comparable to supergiants.
Asymptotic-giant-branch (AGB) and post-AGB stars are highly evolved lower-mass red giants with luminosities that can be comparable to more massive red supergiants, but because of their low mass, being in a different stage of development (helium shell burning), and their lives ending in a different way (planetary nebula and white dwarf rather than supernova), astrophysicists prefer to keep them separate. The dividing line becomes blurred at around (or as high as in some models) where stars start to undergo limited fusion of elements heavier than helium. Specialists studying these stars often refer to them as super AGB stars, since they have many properties in common with AGB such as thermal pulsing. Others describe them as low-mass supergiants since they start to burn elements heavier than helium and can explode as supernovae. Many post-AGB stars receive spectral types with supergiant luminosity classes. For example, RV Tauri has an Ia (bright supergiant) luminosity class despite being less massive than the sun. Some AGB stars also receive a supergiant luminosity class, most notably W Virginis variables such as W Virginis itself, stars that are executing a blue loop triggered by thermal pulsing. A very small number of Mira variables and other late AGB stars have supergiant luminosity classes, for example α Herculis.
Classical Cepheid variables typically have supergiant luminosity classes, although only the most luminous and massive will actually go on to develop an iron core. The majority of them are intermediate mass stars fusing helium in their cores and will eventually transition to the asymptotic giant branch. δ Cephei itself is an example with a luminosity of and a mass of .
Wolf–Rayet stars are also high-mass luminous evolved stars, hotter than most supergiants and smaller, visually less bright but often more luminous because of their high temperatures. They have spectra dominated by helium and other heavier elements, usually showing little or no hydrogen, which is a clue to their nature as stars even more evolved than supergiants. Just as the AGB stars occur in almost the same region of the HR diagram as red supergiants, Wolf–Rayet stars can occur in the same region of the HR diagram as the hottest blue supergiants and main-sequence stars.
The most massive and luminous main-sequence stars are almost indistinguishable from the supergiants they quickly evolve into. They have almost identical temperatures and very similar luminosities, and only the most detailed analyses can distinguish the spectral features that show they have evolved away from the narrow early O-type main-sequence to the nearby area of early O-type supergiants. Such early O-type supergiants share many features with WNLh Wolf–Rayet stars and are sometimes designated as slash stars, intermediates between the two types.
Luminous blue variable (LBV) stars occur in the same region of the HR diagram as blue supergiants but are generally classified separately. They are evolved, expanded, massive, and luminous stars, often hypergiants, but they have very specific spectral variability, which defies the assignment of a standard spectral type. LBVs observed only at a particular time, or over a period of time when they are stable, may simply be designated as hot supergiants or as candidate LBVs due to their luminosity.
Hypergiants are frequently treated as a different category of star from supergiants, although in all important respects they are just a more luminous category of supergiant. They are evolved, expanded, massive and luminous stars like supergiants, but at the most massive and luminous extreme, and with particular additional properties of undergoing high mass-loss due to their extreme luminosities and instability. Generally only the more evolved supergiants show hypergiant properties, since their instability increases after high mass-loss and some increase in luminosity.
Some B[e] stars are supergiants, although other B[e] stars are clearly not. Some researchers distinguish the B[e] objects as separate from supergiants, while others prefer to define massive evolved B[e] stars as a subgroup of supergiants. The latter view has become more common with the understanding that the B[e] phenomenon arises separately in a number of distinct types of stars, including some that are clearly just a phase in the life of supergiants.
Properties
Supergiants have masses from 8 to 12 times the Sun (M☉) upwards, and luminosities from about 1,000 to over a million times the Sun (L☉). They vary greatly in radius, usually from 30 to 500 solar radii (R☉), or even in excess of 1,000 R☉. They are massive enough to begin helium-core burning gently before the core becomes degenerate, without a flash and without the strong dredge-ups that lower-mass stars experience. They go on to successively ignite heavier elements, usually all the way to iron. Also because of their high masses, they are destined to explode as supernovae.
The Stefan–Boltzmann law dictates that the relatively cool surfaces of red supergiants radiate much less energy per unit area than those of blue supergiants; thus, for a given luminosity, red supergiants are larger than their blue counterparts. Radiation pressure limits the largest cool supergiants to around 1,500 R☉ and the most massive hot supergiants to around a million L☉ (Mbol around −10). Stars near and occasionally beyond these limits become unstable, pulsate, and experience rapid mass loss.
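A minimal sketch of that relation, using L = 4πR²σT⁴ to compare the radii implied by one and the same luminosity at two effective temperatures. The luminosity (100,000 L☉) and the two temperatures are assumed illustrative values, not figures from the article.

```python
import math

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26            # solar luminosity, W
R_SUN = 6.957e8             # solar radius, m

def radius_in_solar_radii(luminosity_solar, t_eff_kelvin):
    """Radius implied by L = 4 * pi * R^2 * sigma * T^4, returned in solar radii."""
    luminosity_watts = luminosity_solar * L_SUN
    radius_metres = math.sqrt(luminosity_watts / (4.0 * math.pi * SIGMA_SB * t_eff_kelvin**4))
    return radius_metres / R_SUN

# Same assumed luminosity (100,000 L_sun) at two assumed effective temperatures:
print(f"red supergiant  (3,600 K):  {radius_in_solar_radii(1e5, 3600):6.0f} R_sun")
print(f"blue supergiant (20,000 K): {radius_in_solar_radii(1e5, 20000):6.0f} R_sun")
```

For these assumed inputs the cool star comes out roughly 800 R☉ and the hot one only a few tens of R☉, which is the sense of the statement above.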
Surface gravity
The supergiant luminosity class is assigned on the basis of spectral features that are largely a measure of surface gravity, although such stars are also affected by other properties such as microturbulence. Supergiants typically have surface gravities of around log(g) 2.0 cgs and lower, although bright giants (luminosity class II) have statistically very similar surface gravities to normal Ib supergiants. Cool luminous supergiants have lower surface gravities, with the most luminous (and unstable) stars having log(g) around zero. Hotter supergiants, even the most luminous, have surface gravities around one, due to their higher masses and smaller radii.
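To make the log(g) scale concrete, here is a small sketch computing the base-10 logarithm of the surface gravity in cgs units from an assumed mass and radius. The two mass and radius pairs below (a 15 M☉, 500 R☉ red supergiant and a Deneb-like 19 M☉, 200 R☉ blue supergiant) are rough illustrative assumptions, not values from the article; the Sun is included as a sanity check.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m

def log_g_cgs(mass_solar, radius_solar):
    """log10 of the surface gravity in cgs units (cm s^-2), the usual log(g) scale."""
    g_si = G * mass_solar * M_SUN / (radius_solar * R_SUN) ** 2   # m s^-2
    return math.log10(g_si * 100.0)                               # convert to cm s^-2

# Assumed illustrative values, not figures from the article.
print(f"red supergiant  (15 M_sun, 500 R_sun): log g ~ {log_g_cgs(15, 500):.1f}")
print(f"blue supergiant (19 M_sun, 200 R_sun): log g ~ {log_g_cgs(19, 200):.1f}")
print(f"the Sun         ( 1 M_sun,   1 R_sun): log g ~ {log_g_cgs(1, 1):.2f}")
```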
Temperature
There are supergiant stars in all of the main spectral classes and across the whole range of temperatures, from mid-M class stars at around 3,400 K to the hottest O class stars at over 40,000 K. Supergiants are generally not found cooler than mid-M class. This is expected theoretically, since cooler supergiants would be catastrophically unstable; however, there are potential exceptions among extreme stars such as VX Sagittarii.
Although supergiants exist in every class from O to M, the majority are spectral type B (blue supergiants), more than at all other spectral classes combined. A much smaller grouping consists of very low-luminosity G-type supergiants, intermediate mass stars burning helium in their cores before reaching the asymptotic giant branch. A distinct grouping is made up of high-luminosity supergiants at early B (B0-2) and very late O (O9.5), more common even than main sequence stars of those spectral types. The number of post-main sequence blue supergiants is greater than those expected from theoretical models, leading to the "blue supergiant problem".
The relative numbers of blue, yellow, and red supergiants are an indicator of the speed of stellar evolution and are used as a powerful test of models of the evolution of massive stars.
Luminosity
The supergiants lie more or less on a horizontal band occupying the entire upper portion of the HR diagram, but there are some variations at different spectral types. These variations are due partly to different methods for assigning luminosity classes at different spectral types, and partly to actual physical differences in the stars.
The bolometric luminosity of a star reflects its total output of electromagnetic radiation at all wavelengths. For very hot and very cool stars, the bolometric luminosity is dramatically higher than the visual luminosity, sometimes several magnitudes or a factor of five or more. This bolometric correction is approximately one magnitude for mid B, late K, and early M stars, increasing to three magnitudes (a factor of 15) for O and mid M stars.
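The magnitude arithmetic behind those statements is simple to check: a difference of Δm magnitudes corresponds to a flux ratio of 10^(0.4·Δm), so a three-magnitude bolometric correction is a factor of roughly 16, consistent with the "factor of 15" quoted above. A minimal sketch:

```python
def magnitude_difference_to_flux_ratio(delta_mag):
    """Flux (luminosity) ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

for delta in (1.0, 3.0):
    ratio = magnitude_difference_to_flux_ratio(delta)
    print(f"{delta:.0f} magnitude(s) -> factor {ratio:.1f}")
# 1 mag -> factor ~2.5; 3 mag -> factor ~15.8, the "factor of 15" quoted above.
```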
All supergiants are larger and more luminous than main sequence stars of the same temperature. This means that hot supergiants lie on a relatively narrow band above bright main sequence stars. A B0 main sequence star has an absolute magnitude of about −5, meaning that all B0 supergiants are significantly brighter than absolute magnitude −5. Bolometric luminosities for even the faintest blue supergiants are tens of thousands of times the sun (L☉). The brightest can be and are often unstable, such as α Cygni variables and luminous blue variables.
The very hottest supergiants with early O spectral types occur in an extremely narrow range of luminosities above the highly luminous early O main sequence and giant stars. They are not classified separately into normal (Ib) and luminous (Ia) supergiants, although they commonly have other spectral type modifiers such as "f" for nitrogen and helium emission (e.g. O2 If for HD 93129A).
Yellow supergiants can be considerably fainter than absolute magnitude −5, with some examples around −2 (e.g. 14 Persei). With bolometric corrections around zero, they may only be a few hundred times the luminosity of the sun. These are not massive stars, though; instead, they are stars of intermediate mass that have particularly low surface gravities, often due to instability such as Cepheid pulsations. The classification of these intermediate-mass stars as supergiants during a relatively long-lasting phase of their evolution accounts for the large number of low-luminosity yellow supergiants. The most luminous yellow stars, the yellow hypergiants, are amongst the visually brightest stars, with absolute magnitudes around −9, although still less than .
There is a strong upper limit to the luminosity of red supergiants at around . Stars that would be brighter than this shed their outer layers so rapidly that they remain hot supergiants after they leave the main sequence. The majority of red supergiants were main sequence stars and now have luminosities below , and there are very few bright supergiant (Ia) M class stars. The least luminous stars classified as red supergiants are some of the brightest AGB and post-AGB stars, highly expanded and unstable low mass stars such as the RV Tauri variables. The majority of AGB stars are given giant or bright giant luminosity classes, but particularly unstable stars such as W Virginis variables may be given a supergiant classification (e.g. W Virginis itself). The faintest red supergiants are around absolute magnitude −3.
Variability
While most supergiants such as Alpha Cygni variables, semiregular variables, and irregular variables show some degree of photometric variability, certain types of variables amongst the supergiants are well defined. The instability strip crosses the region of supergiants, and specifically many yellow supergiants are Classical Cepheid variables. The same region of instability extends to include the even more luminous yellow hypergiants, an extremely rare and short-lived class of luminous supergiant. Many R Coronae Borealis variables, although not all, are yellow supergiants, but this variability is due to their unusual chemical composition rather than a physical instability.
Further types of variable stars such as RV Tauri variables and PV Telescopii variables are often described as supergiants. RV Tau stars are frequently assigned spectral types with a supergiant luminosity class on account of their low surface gravity, and they are amongst the most luminous of the AGB and post-AGB stars, having masses similar to the sun; likewise, the even rarer PV Tel variables are often classified as supergiants, but have lower luminosities than supergiants and peculiar B[e] spectra extremely deficient in hydrogen. Possibly they are also post-AGB objects or "born-again" AGB stars.
The LBVs are variable with multiple semi-regular periods and less predictable eruptions and giant outbursts. They are usually supergiants or hypergiants, occasionally with Wolf-Rayet spectra—extremely luminous, massive, evolved stars with expanded outer layers, but they are so distinctive and unusual that they are often treated as a separate category without being referred to as supergiants or given a supergiant spectral type. Often their spectral type will be given just as "LBV" because they have peculiar and highly variable spectral features, with temperatures varying from about 8,000 K in outburst up to 20,000 K or more when "quiescent."
Chemical abundances
The abundance of various elements at the surface of supergiants is different from less luminous stars. Supergiants are evolved stars and may have undergone convection of fusion products to the surface.
Cool supergiants show enhanced helium and nitrogen at the surface due to convection of these fusion products to the surface during the main sequence of very massive stars, to dredge-ups during shell burning, and to the loss of the outer layers of the star. Helium is formed in the core and shell by fusion of hydrogen, and nitrogen accumulates relative to carbon and oxygen during CNO cycle fusion. At the same time, carbon and oxygen abundances are reduced. Red supergiants can be distinguished from luminous but less massive AGB stars by the unusual surface chemistry of the latter: enhancement of carbon from deep third dredge-ups, as well as carbon-13, lithium, and s-process elements. Late-phase AGB stars can become highly oxygen-enriched, producing OH masers.
Hotter supergiants show differing levels of nitrogen enrichment. This may be due to different levels of mixing on the main sequence due to rotation or because some blue supergiants are newly evolved from the main sequence while others have previously been through a red supergiant phase. Post-red supergiant stars have a generally higher level of nitrogen relative to carbon due to convection of CNO-processed material to the surface and the complete loss of the outer layers. Surface enhancement of helium is also stronger in post-red supergiants, representing more than a third of the atmosphere.
Evolution
O type main-sequence stars and the most massive of the B type blue-white stars become supergiants. Due to their extreme masses, they have short lifespans, from a few hundred thousand years up to about 30 million years. They are mainly observed in young galactic structures such as open clusters, the arms of spiral galaxies, and irregular galaxies. They are less abundant in the bulges of spiral galaxies and are rarely observed in elliptical galaxies or globular clusters, which are composed mainly of old stars.
Supergiants develop when massive main-sequence stars run out of hydrogen in their cores, at which point they start to expand, just like lower-mass stars. Unlike lower-mass stars, however, they begin to fuse helium in the core smoothly and not long after exhausting their hydrogen. This means that they do not increase their luminosity as dramatically as lower-mass stars, and they progress nearly horizontally across the HR diagram to become red supergiants. Also unlike lower-mass stars, red supergiants are massive enough to fuse elements heavier than helium, so they do not puff off their atmospheres as planetary nebulae after a period of hydrogen and helium shell burning; instead, they continue to burn heavier elements in their cores until they collapse. They cannot lose enough mass to form a white dwarf, so they will leave behind a neutron star or black hole remnant, usually after a core collapse supernova explosion.
Stars more massive than about cannot expand into a red supergiant. Because they burn too quickly and lose their outer layers too quickly, they reach the blue supergiant stage, or perhaps yellow hypergiant, before returning to become hotter stars. The most massive stars, above about , hardly move at all from their position as O main-sequence stars. These convect so efficiently that they mix hydrogen from the surface right down to the core. They continue to fuse hydrogen until it is almost entirely depleted throughout the star, then rapidly evolve through a series of stages of similarly hot and luminous stars: supergiants, slash stars, WNh-, WN-, and possibly WC- or WO-type stars. They are expected to explode as supernovae, but it is not clear how far they evolve before this happens. The existence of these supergiants still burning hydrogen in their cores may necessitate a slightly more complex definition of supergiant: a massive star with increased size and luminosity due to fusion products building up, but still with some hydrogen remaining.
The first stars in the universe are thought to have been considerably brighter and more massive than the stars in the modern universe. Part of the theorized population III of stars, their existence is necessary to explain observations of elements other than hydrogen and helium in quasars. Possibly larger and more luminous than any supergiant known today, their structure was quite different, with reduced convection and less mass loss. Their very short lives are likely to have ended in violent photodisintegration or pair instability supernovae.
Supernova progenitors
Most type II supernova progenitors are thought to be red supergiants, while the less common type Ib/c supernovae are produced by hotter Wolf–Rayet stars that have completely lost more of their hydrogen atmosphere. Almost by definition, supergiants are destined to end their lives violently. Stars large enough to start fusing elements heavier than helium do not seem to have any way to lose enough mass to avoid catastrophic core collapse, although some may collapse, almost without trace, into their own central black holes.
The simple "onion" models showing red supergiants inevitably developing to an iron core and then exploding have been shown, however, to be too simplistic. The progenitor for the unusual type II Supernova 1987A was a blue supergiant, thought to have already passed through the red supergiant phase of its life, and this is now known to be far from an exceptional situation. Much research is now focused on how blue supergiants can explode as a supernova and when red supergiants can survive to become hotter supergiants again.
Well known examples
Supergiants are rare and short-lived stars, but their high luminosity means that there are many naked-eye examples, including some of the brightest stars in the sky. Rigel, the brightest star in the constellation Orion, is a typical blue-white supergiant; the three stars of Orion's Belt are all blue supergiants; Deneb, the brightest star in Cygnus, is another blue supergiant; and Delta Cephei (itself the prototype) and Polaris are Cepheid variables and yellow supergiants. Antares and VV Cephei A are red supergiants. μ Cephei is considered a red hypergiant due to its large luminosity; it is one of the reddest stars visible to the naked eye and one of the largest in the galaxy. Rho Cassiopeiae, a variable yellow hypergiant, is one of the most luminous naked-eye stars. Betelgeuse, the second-brightest star in the constellation Orion, is a red supergiant that may have been a yellow supergiant in antiquity.
See also
List of stars with resolved images
Planetary nebula
List of largest stars
References
External links
http://alobel.freeshell.org/rcas.html
http://www.solstation.com/x-objects/rho-cas.htm
Star types
Stellar phenomena | Supergiant | [
"Physics",
"Astronomy"
] | 5,178 | [
"Physical phenomena",
"Stellar phenomena",
"Star types",
"Astronomical classification systems"
] |
170,898 | https://en.wikipedia.org/wiki/Persuasive%20technology | Persuasive technology is broadly defined as technology that is designed to change attitudes or behaviors of the users through persuasion and social influence, but not necessarily through coercion. Such technologies are regularly used in sales, diplomacy, politics, religion, military training, public health, and management, and may potentially be used in any area of human-human or human-computer interaction. Most self-identified persuasive technology research focuses on interactive, computational technologies, including desktop computers, Internet services, video games, and mobile devices, but this incorporates and builds on the results, theories, and methods of experimental psychology, rhetoric, and human-computer interaction. The design of persuasive technologies can be seen as a particular case of design with intent.
Taxonomies
Functional triad
Persuasive technologies can be categorized by their functional roles. B. J. Fogg proposes the functional triad as a classification of three "basic ways that people view or respond to computing technologies": persuasive technologies can function as tools, media, or social actors – or as more than one at once.
As tools, technologies can increase people's ability to perform a target behavior by making it easier or restructuring it. For example, an installation wizard can influence task completion – including completing tasks not planned by users (such as installation of additional software).
As media, interactive technologies can use both interactivity and narrative to create persuasive experiences that support rehearsing a behavior, empathizing, or exploring causal relationships. For example, simulations and games instantiate rules and procedures that express a point of view and can shape behavior and persuade; these use procedural rhetoric.
Technologies can also function as social actors. This "opens the door for computers to apply ... social influence". Interactive technologies can cue social responses, e.g., through their use of language, assumption of established social roles, or physical presence. For example, computers can use embodied conversational agents as part of their interface. Or a helpful or disclosive computer can cause users to mindlessly reciprocate. Fogg notes that "users seem to respond to computers as social actors when computer technologies adopt animate characteristics (physical features, emotions, voice communication), play animate roles (coach, pet, assistant, opponent), or follow social rules or dynamics (greetings, apologies, turn taking)."
Direct interaction v. mediation
Persuasive technologies can also be categorized by whether they change attitude and behaviors through direct interaction or through a mediating role: do they persuade, for example, through human-computer interaction (HCI) or computer-mediated communication (CMC)? The examples already mentioned are the former, but there are many of the latter. Communication technologies can persuade or amplify the persuasion of others by transforming the social interaction, providing shared feedback on interaction, or restructuring communication processes.
Persuasion design
Persuasion design is the design of messages by analyzing and evaluating their content, using established psychological research theories and methods. Andrew Chak argues that the most persuasive web sites focus on making users feel comfortable about making decisions and helping them act on those decisions. During the clinical encounter, clinical decision support tools (CDSTs) are widely applied to improve patients' satisfaction with medical decision-making shared with their physicians. The comfort that a user feels is generally registered subconsciously.
Persuasion by social motivators
Previous research has also utilized social motivators such as competition for persuasion. By connecting a user with other users, such as coworkers, friends, and family, a persuasive application can apply social motivators to the user to promote behavior change. Social media such as Facebook and Twitter also facilitate the development of such systems. It has been demonstrated that social influence can result in greater behavior change than when the user is isolated.
Persuasive strategies
Halko and Kientz made an extensive search in the literature for persuasive strategies and methods used in the field of psychology to modify health-related behaviors. Their search concluded that there are eight main types of persuasive strategies, which can be grouped into the following four categories, where each category has two complementary approaches.
Instruction style
Authoritative
This persuades the technology user through an authoritative agent, for example, a strict personal trainer who instructs the user to perform the task that will meet their goal.
Non-authoritative
This persuades the user through a neutral agent, for example, a friend who encourages the user to meet their goals. Another example of instruction style is customer reviews; a mix of positive and negative reviews together give a neutral perspective on a product or service.
Social feedback
Cooperative
This persuades the user through the notion of cooperation and teamwork, such as allowing the user to team up with friends to complete their goals.
Competitive
This persuades the user through the notion of competing. For example, users can play against friends or peers and be motivated to achieve their goal by winning the competition.
Motivation type
Extrinsic
This persuades the user through external motivators, for example, winning a trophy as a reward for completing a task.
Intrinsic
This persuades the user through internal motivators, such as the good feeling a user would have for being healthy or for achieving a goal.
It is worth noting that intrinsic motivators can be subject to the overjustification effect, which states that if intrinsic motivators become associated with a reward and the reward is later removed, the intrinsic motivation tends to diminish. This is because, depending on how the reward is perceived, it can become linked to extrinsic rather than intrinsic motivation. Badges, prizes, and other award systems will increase intrinsic motivation if they are seen as reflecting competence and merit.
In 1973, Lepper et al. conducted a foundational study that underscored the overjustification effect. Their team brought magic markers to a preschool and created three test groups of children who were intrinsically motivated. The first group was informed that if they used the markers they could receive a “Good Player Award.” The second group was not incentivized to use the markers with a reward, but was given a reward after playing. The third group was given no expectations about awards and received no awards. A week later, all students played with the markers without a reward. The students who had originally received the “Good Player Award” showed half as much interest as when they began the study. Later, other psychologists repeated this experiment and concluded that rewards create short-term motivation but undermine intrinsic motivation.
Reinforcement type
Negative reinforcement
This persuades the user by removing an unpleasant stimulus. For example, a brown and dying nature scene might turn green and healthy as the user practises more healthy behaviors.
Positive reinforcement
This persuades the user by adding a positive stimulus. For example, adding flowers, butterflies, and other nice-looking elements to an empty nature scene as a user practises more healthy behaviors.
Logical Fallacies
More recently, Lieto and Vernero have also shown that arguments reducible to logical fallacies are a class of widely adopted persuasive techniques in both web and mobile technologies. These techniques have also shown their efficacy in large-scale studies about persuasive news recommendations as well as in the field of human-robot interaction. A 2021 report by the RAND Corporation shows how the use of logical fallacies is one of the rhetorical strategies used by Russia and its agents to influence online discourse and spread subversive information in Europe.
Reciprocal equality
One feature that distinguishes persuasion technology from familiar forms of persuasion is that the individual being persuaded often cannot respond in kind. This is a lack of reciprocal equality. For example, when a conversational agent persuades a user using social influence strategies, the user cannot also use similar strategies on the agent.
Health behavior change
While persuasive technologies are found in many domains, considerable recent attention has focused on behavior change in health domains. Digital health coaching is the utilization of computers as persuasive technology to augment the personal care delivered to patients, and is used in numerous medical settings.
Numerous scientific studies show that online health behaviour change interventions can influence users' behaviours. Moreover, the most effective interventions are modelled on health coaching, where users are asked to set goals, educated about the consequences of their behaviour, then encouraged to track their progress toward their goals. Sophisticated systems even adapt to users who relapse by helping them get back on track.
Maintaining behavior change long term is one of the challenges of behavior change interventions. For instance, reported non-adherence rates for chronic illness treatment regimens can be as high as 50% to 80%. Common strategies that have been shown by previous research to increase long-term adherence to treatment include extended care, skills training, social support, treatment tailoring, self-monitoring, and multicomponent approaches. However, even though these strategies have been demonstrated to be effective, there are also barriers to the implementation of such programs: limited time and resources, as well as patient factors such as embarrassment about disclosing their health habits.
To make behavior change strategies more effective, researchers have also been adapting well-known and empirically tested behavior change theories into such practice. The most prominent behavior change theories implemented in health-related behavior change research are self-determination theory, the theory of planned behavior, social cognitive theory, the transtheoretical model, and the social ecological model. Each behavior change theory analyses behavior change in different ways and considers different factors to be more or less important. Research has suggested that interventions based on behavior change theories tend to yield better results than interventions that do not employ such theories. Their effectiveness varies: social cognitive theory, proposed by Bandura and incorporating the well-known construct of self-efficacy, has been the most widely used in behavior change interventions as well as the most effective in maintaining long-term behavior change.
Even though the healthcare discipline has produced a plethora of empirical behavior change research, other scientific disciplines are also adapting such theories to induce behavior change. For instance, behavior change theories have also been applied to sustainability, such as saving electricity, and to lifestyle, such as helping people drink more water. This research has shown that theories already proven useful in healthcare are equally powerful in other fields for promoting behavior change.
Some studies have offered more nuanced insights, showing that behavior change is a complex chain of events: a study by Chudzynski et al. showed that the reinforcement schedule has little effect on maintaining behavior change. A point made in a study by Wemyss et al. is that even though people who have maintained a behavior change in the short term might revert to baseline, their perception of the change can differ: they may still believe they have maintained the behavior change even when they factually have not. Therefore, self-report measures may not always be the most effective way of evaluating the effectiveness of an intervention.
Promote sustainable lifestyles
Previous work has also shown that people are receptive to changing their behaviors toward sustainable lifestyles. This result has encouraged researchers to develop persuasive technologies that promote, for example, green travel and waste reduction.
One common technique is to raise people's awareness of the benefits of performing eco-friendly behaviors. For example, a review of over twenty studies exploring the effects of feedback on electricity consumption in the home showed that feedback on consumption patterns typically results in a 5–12% saving. Besides environmental benefits, other benefits such as cost savings and health are also often used to promote eco-friendly behaviors.
Research challenges
Despite the promising results of existing persuasive technologies, three main challenges remain.
Technical challenges
Persuasive technologies rely on self-report or on automated systems that monitor human behavior using sensors and pattern recognition algorithms. Several studies in the medical field have noted that self-report is subject to bias, recall errors, and low adherence rates. The physical world and human behavior are both highly complex and ambiguous. Utilizing sensors and machine learning algorithms to monitor and predict human behavior remains a challenging problem, especially since most persuasive technologies require just-in-time intervention.
Difficulty in studying behavior change
In general, understanding behavioral change requires long-term studies, as multiple internal and external factors can influence these changes (such as personality type, age, income, and willingness to change). As a result, it is difficult to understand and measure the effect of persuasive technologies.
Furthermore, meta-analyses of the effectiveness of persuasive technologies have shown that the behavior change evidence collected so far is at least controversial, since it is rarely obtained by Randomized Controlled Trials (RCTs), the “gold standard” in causal inference analysis. In particular, due to relevant practical challenges to perform strict RCTs, most of the above-mentioned empirical trials on lifestyles rely on voluntary, self-selected participants. If such participants were systematically adopting the desired behaviors already before entering the trial, then self-selection biases would occur. Presence of such biases would weaken the behavior change effects found in the trials. Analyses aimed at identifying the presence and extent of self-selection biases in persuasive technology trials are not widespread yet. A study by Cellina et al. on an app-based behavior change trial in the mobility field found evidence of no self-selection biases. However, further evidence needs to be collected in different contexts and under different persuasive technologies in order to generalize (or confute) their findings.
Ethical challenges
The question of manipulating feelings and desires through persuasive technology remains an open ethical debate. User-centered design guidelines should be developed that encourage ethically and morally responsible designs and provide a reasonable balance between the pros and cons of persuasive technologies.
In addition to encouraging ethically and morally responsible designs, Fogg believes education, such as through the journal articles he writes, is a panacea for concerns about the ethical challenges of persuasive computers. Fogg notes two fundamental distinctions regarding the importance of education in engaging with ethics and technology: "First, increased knowledge about persuasive computers allows people more opportunity to adopt such technologies to enhance their own lives, if they choose. Second, knowledge about persuasive computers helps people recognize when technologies are using tactics to persuade them."
Another ethical challenge for persuasive technology designers is the risk of triggering persuasive backfires, where the technology triggers the bad behavior that it was designed to reduce.
See also
Other subjects which have some overlap or features in common with persuasive technology include:
Advertising
Artificial intelligence
Brainwashing
Captology
Coercion
Collaboration tools (including wikis)
Design for behaviour change
Personal coaching
Personal grooming
Propaganda
Psychology
Rhetoric and oratory skills
Technological rationality
T3: Trends, Tips & Tools for Everyday Living
References
Sources
External links
Human communication
Human–computer interaction
Persuasion | Persuasive technology | [
"Engineering",
"Biology"
] | 3,051 | [
"Human communication",
"Behavior",
"Human–machine interaction",
"Human behavior",
"Human–computer interaction"
] |
170,927 | https://en.wikipedia.org/wiki/Whooping%20cough | Whooping cough ( or ), also known as pertussis or the 100-day cough, is a highly contagious, vaccine-preventable bacterial disease. Initial symptoms are usually similar to those of the common cold with a runny nose, fever, and mild cough, but these are followed by two or three months of severe coughing fits. Following a fit of coughing, a high-pitched whoop sound or gasp may occur as the person breathes in. The violent coughing may last for 10 or more weeks, hence the phrase "100-day cough". The cough may be so hard that it causes vomiting, rib fractures, and fatigue. Children less than one year old may have little or no cough and instead have periods when they cannot breathe. The incubation period is usually seven to ten days. Disease may occur in those who have been vaccinated, but symptoms are typically milder.
The bacterium Bordetella pertussis causes pertussis, which is spread easily through the coughs and sneezes of an infected person. People are infectious from the start of symptoms until about three weeks into the coughing fits. Diagnosis is by collecting a sample from the back of the nose and throat. This sample can then be tested either by culture or by polymerase chain reaction.
Prevention is mainly by vaccination with the pertussis vaccine. Initial immunization is recommended between six and eight weeks of age, with four doses to be given in the first two years of life. Protection from pertussis decreases over time, so additional doses of vaccine are often recommended for older children and adults. Vaccination during pregnancy is highly effective at protecting the infant from pertussis during their vulnerable early months of life, and is recommended in many countries. Antibiotics may be used to prevent the disease in those who have been exposed and are at risk of severe disease. In those with the disease, antibiotics are useful if started within three weeks of the initial symptoms, but otherwise have little effect in most people. In pregnant women and children less than one year old, antibiotics are recommended within six weeks of symptom onset. Antibiotics used include erythromycin, azithromycin, clarithromycin, or trimethoprim/sulfamethoxazole. Evidence to support interventions for the cough, other than antibiotics, is poor. About 50% of infected children less than a year old require hospitalization and nearly 0.5% (1 in 200) die.
An estimated 16.3 million people worldwide were infected in 2015. Most cases occur in the developing world, and people of all ages may be affected. In 2015, pertussis resulted in 58,700 deaths – down from 138,000 deaths in 1990. Outbreaks of the disease were first described in the 16th century. The bacterium that causes the infection was discovered in 1906. The pertussis vaccine became available in the 1940s.
Signs and symptoms
The classic symptoms of pertussis are a paroxysmal cough, an inspiratory whoop, and fainting or vomiting after coughing. The cough from pertussis has been documented to cause subconjunctival hemorrhages, rib fractures, urinary incontinence, hernias, and vertebral artery dissection. Violent coughing can cause the pleura to rupture, leading to a pneumothorax. Vomiting after a coughing spell or an inspiratory whooping sound on coughing almost doubles the likelihood that the illness is pertussis. The absence of a paroxysmal cough or posttussive emesis makes it almost half as likely.
The illness usually starts with mild respiratory symptoms including mild coughing, sneezing, or a runny nose (known as the catarrhal stage). After one or two weeks, the coughing classically develops into uncontrollable fits, sometimes followed by a high-pitched "whoop" sound, as the person tries to inhale. About 50% of children and adults "whoop" at some point in diagnosed pertussis cases during the paroxysmal stage. This stage usually lasts up to 3 months, or sometimes longer. A gradual transition then occurs to the convalescent stage, which usually lasts one to four weeks. A decrease in paroxysms of coughing marks this stage, although paroxysms may occur with subsequent respiratory infection for many months after the onset of pertussis.
Symptoms of pertussis can be variable, especially between immunized and non-immunized people. Immunized people can present with a milder infection; they may only have the paroxysmal cough for a couple of weeks and may lack the "whooping" characteristic. Although immunized people have a milder form of the infection, they can still spread the disease to others who are not immune.
Incubation period
The time between exposure and the development of symptoms is on average 7–14 days (ranging 6–20 days), and rarely as long as 42 days.
Cause
Pertussis is caused by the bacterium Bordetella pertussis. It is an airborne disease (through droplets) that spreads easily through the coughs and sneezes of an infected person.
Host species
Humans are the only host species of B. pertussis. Outbreaks of whooping cough have been observed among chimpanzees in a zoo and wild gorillas; in both cases, it is considered likely that the infection was acquired as a result of close contact with humans. Several zoos have a long-standing custom of vaccinating their primates against whooping cough.
Mechanism
After the bacteria are inhaled, they initially adhere to the ciliated epithelium in the nasopharynx. Surface proteins of B. pertussis, including filamentous hemagglutinin and pertactin, mediate attachment to the epithelium. The bacteria then multiply. In infants, who experience more severe disease, the bacteria spread down to the lungs.
The bacteria secrete several toxins. Tracheal cytotoxin (TCT), a fragment of peptidoglycan, kills ciliated epithelial cells in the airway and thereby inhibits the mechanism which clears the airways of mucus and debris. TCT may contribute to the cough characteristic of pertussis. Pertussis toxin causes lymphocytosis by an unknown mechanism. The elevated number of white blood cells leads to pulmonary hypertension, a major cause of death by pertussis. In infants who develop encephalopathy, cerebral hemorrhage and cortical atrophy occur, likely due to hypoxia.
Diagnosis
Based on symptoms
A physician's overall impression is most effective in initially making the diagnosis. Single factors are much less useful. In adults with a cough of less than 8 weeks, vomiting after coughing or a "whoop" is supportive. If there are no bouts of coughing or there is a fever, the diagnosis is unlikely. In children who have a cough of less than 4 weeks, vomiting after coughing is somewhat supportive but not definitive.
Lab tests
Methods used in laboratory diagnosis include culturing of nasopharyngeal swabs on a nutrient medium (Bordet–Gengou medium), polymerase chain reaction (PCR), direct fluorescent antibody (DFA), and serological methods (e.g. complement fixation test). The bacteria can be recovered from the person only during the first three weeks of illness, rendering culturing and DFA useless after this period. However, PCR may have some limited usefulness for an additional three weeks.
Serology may be used for adults and adolescents who have already been infected for several weeks to determine whether antibodies against pertussis toxin or another virulence factor of B. pertussis are present at high levels in the person's blood.
Differential diagnosis
A similar, milder disease is caused by B. parapertussis.
Prevention
The primary method of prevention for pertussis is vaccination. Evidence is insufficient to determine the effectiveness of antibiotics in those who have been exposed, but are without symptoms. Preventive antibiotics, however, are still frequently used in those who have been exposed and are at high risk of severe disease (such as infants).
Vaccine
Pertussis vaccines are effective at preventing illness and are recommended for routine use by the World Health Organization and the United States Centers for Disease Control and Prevention. The vaccine saved an estimated half a million lives in 2002.
The multi-component acellular pertussis vaccine is 71–85% effective, with greater effectiveness against more severe strains. Despite widespread vaccination, pertussis has persisted in vaccinated populations. It remains "one of the most common vaccine-preventable diseases in Western countries". The 21st-century resurgence in pertussis infections is attributed to a combination of waning immunity and bacterial mutations that elude vaccines.
Immunization does not confer lifelong immunity; a 2011 CDC study indicated that protection may only last three to six years. This covers childhood, which is the time of greatest exposure and greatest risk of death from pertussis.
An effect of widespread immunization on society has been the shift of reported infections from children aged 1–9 years to infants, adolescents, and adults, with adolescents and adults acting as reservoirs for B. pertussis and infecting infants who have had fewer than three doses of vaccine.
Infection induces incomplete natural immunity that wanes over time. A 2005 study said estimates of the duration of infection-acquired immunity range from 7 to 20 years and the different results could be the result of differences in levels of circulating B. pertussis, surveillance systems, and case definitions used. The study said protective immunity after vaccination wanes after 4–12 years. One study suggested that the availability of vaccine exemptions increases the number of pertussis cases.
Some studies have suggested that while acellular pertussis vaccines effectively prevent disease, they have a limited impact on infection and transmission, meaning that vaccinated people could spread pertussis even though they may have only mild symptoms or none at all. Pertussis infection in these persons may be asymptomatic, or present as illness ranging from a mild cough to classic pertussis with persistent cough (i.e., lasting more than 7 days). Even though the disease may be milder in older persons, those who are infected may transmit the disease to other susceptible persons, including unimmunized or incompletely immunized infants. Older persons are often found to have the first case in a household with multiple pertussis cases and are often the source of infection for children.
Treatment
The antibiotics erythromycin, clarithromycin, or azithromycin are typically the recommended treatment. Newer macrolides are frequently recommended due to lower rates of side effects. Trimethoprim-sulfamethoxazole (TMP/SMX) may be used in those with allergies to first-line agents or in infants who have a risk of pyloric stenosis from macrolides.
A reasonable guideline is to treat people aged more than a year within three weeks of cough onset, infants aged less than one year, and pregnant women within six weeks of cough onset. If the person is diagnosed late, antibiotics will not alter the course of the illness, and even without antibiotics, they should no longer be spreading pertussis. When used early, antibiotics decrease the duration of infectiousness, and thus prevent spread. Short-term antibiotics (azithromycin for 3–5 days) are as effective as long-term treatment (erythromycin 10–14 days) in eliminating B. pertussis with fewer and less severe side effects.
People with pertussis are most infectious during the first two weeks following the onset of symptoms.
Effective treatments of the cough associated with this condition have not been developed. The use of over-the-counter cough medications is discouraged and has not been found helpful.
Prognosis
While most healthy older children and adults fully recover, infection in newborns is particularly severe. Pertussis is fatal in an estimated 0.5% of US infants under one year of age. First-year infants are also more likely to develop complications, such as apneas (31%), pneumonia (12%), seizures (0.6%) and encephalopathy (0.15%). This may be due to the ability of the bacterium to suppress the immune system.
Epidemiology
Pertussis is endemic worldwide. More than 151,000 cases were reported globally in 2018. However not all cases are reported or correctly diagnosed, especially in developing countries. Pertussis is one of the leading causes of vaccine-preventable deaths worldwide. A study in 2017 estimated the global burden of the disease to be 24 million cases per year with 160,000 deaths among young children, with about 90% of all cases occurring in developing countries.
Epidemics of the disease occur cyclically, every three to five years, both in areas with vaccination programs and in those without. Over time, immunity declines in those who have either been vaccinated or have recovered from infection. In addition, infants are born who are susceptible to infection. An epidemic can occur once herd immunity decreases below a certain level. It is also possible that the bacterium is evolving to evade vaccine-induced immunity.
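As a hedged illustration of that threshold idea (not from the article): in the simplest SIR-type model, sustained spread stops once the immune fraction exceeds 1 − 1/R0, where R0 is the basic reproduction number. The R0 values below (12–17, figures often quoted for pertussis) are assumptions used only to show the arithmetic.

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune, in the simplest SIR-type
    model, for each case to infect fewer than one other person on average."""
    return 1.0 - 1.0 / r0

# Assumed basic reproduction numbers; values of roughly 12-17 are often quoted
# for pertussis, but they are used here only to illustrate the arithmetic.
for r0 in (12, 15, 17):
    print(f"R0 = {r0}: immune fraction needed ~ {herd_immunity_threshold(r0):.0%}")
```

For these assumed values the required immune fraction is above 90%, which is consistent with the point above that waning immunity and susceptible newborns can push a population back below the epidemic threshold.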
Before vaccines, an average of 178,171 cases was reported annually in the U.S., with peaks every two to five years; more than 93% of reported cases occurred in children under 10 years of age. With the widespread introduction of the combined DTP vaccine (diphtheria, tetanus, and pertussis) in the 1940s, pertussis incidence fell dramatically, to approximately 1,000 cases by 1976; reported cases have since fluctuated between 1,000 and 30,000 annually.
Case numbers outside the U.S. were also high relative to population. Before the vaccine was developed, Sweden averaged nearly 3,000 child deaths per year, a very high figure given a population of only 1.8 million in the years 1749–64. London recorded over 3,000 deaths during the same period, and its rates continued to grow through the 18th century. These numbers show how the disease affected not only the U.S. but also countries around the world.
According to the CDC, cases of whooping cough have reached their highest levels since 2014. This year, there have been over 16,000 cases, marking a fourfold increase compared to last year's total of more than 3,700 cases. The CDC has also confirmed two deaths related to the illness. The United States is seeing a return to pre-pandemic trends, in which annual cases typically exceed 10,000.
History
Discovery
B. pertussis was discovered in 1906 by Jules Bordet and Octave Gengou (the bacterium was subsequently named Bordetella pertussis in honour of Jules Bordet). They were able to successfully culture B. pertussis and went on to develop the first inactivated whole-cell vaccine in 1912, followed by other researchers in 1913 and 1914. These early vaccines had limited effectiveness. In the 1920s, Louis W. Sauer developed a vaccine for whooping cough at Evanston Hospital. In 1925, the Danish physician Thorvald Madsen was the first to test a whole-cell vaccine on a wide scale. Madsen used the vaccine to control outbreaks in the Faroe Islands in the North Atlantic; however, two children died shortly after receiving the vaccine.
Vaccine
In 1932, an outbreak of whooping cough hit Atlanta, Georgia, prompting pediatrician Leila Denmark to begin her study of the disease. Over the next six years, her work was published in the Journal of the American Medical Association, and in partnership with Emory University and Eli Lilly & Company, she developed the first safe and effective pertussis vaccine. In 1942, American scientists Grace Eldering, Loney Gordon, and Pearl Kendrick combined the whole-cell pertussis vaccine with diphtheria and tetanus toxoids to generate the first DTP combination vaccine. To minimize the frequent side effects caused by the pertussis component, Japanese scientist Yuji Sato developed an acellular vaccine consisting of purified haemagglutinins (HAs: filamentous HA and leukocytosis-promoting-factor HA), which are secreted by B. pertussis. Sato's acellular pertussis vaccine was used in Japan starting in 1981. Later versions of the acellular vaccine in other countries consisted of additional defined components of B. pertussis and were often part of the DTaP combination vaccine.
References
External links
Pertussis at Todar's Online Textbook of Bacteriology
PBS NOVA – Vaccines: Calling The Shots
Bacterial diseases
Pediatrics
Articles containing video clips
Wikipedia medicine articles ready to translate
Disorders causing seizures
Wikipedia emergency medicine articles ready to translate
Vaccine-preventable diseases | Whooping cough | [
"Biology"
] | 3,535 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
170,939 | https://en.wikipedia.org/wiki/Step%20function | In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
Definition and first consequences
A function $f\colon \mathbb{R} \to \mathbb{R}$ is called a step function if it can be written as
$f(x) = \sum_{i=0}^{n} \alpha_i \chi_{A_i}(x)$ for all real numbers $x$,
where $n \ge 0$, the $\alpha_i$ are real numbers, the $A_i$ are intervals, and $\chi_{A_i}$ is the indicator function of $A_i$:
$\chi_{A_i}(x) = \begin{cases} 1 & \text{if } x \in A_i, \\ 0 & \text{if } x \notin A_i. \end{cases}$
In this definition, the intervals can be assumed to have the following two properties:
The intervals are pairwise disjoint: $A_i \cap A_j = \emptyset$ for $i \neq j$.
The union of the intervals is the entire real line: $A_0 \cup A_1 \cup \cdots \cup A_n = \mathbb{R}$.
Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function
$f = 4\chi_{[-5,1)} + 3\chi_{(0,6)}$
can be written as
$f = 0\chi_{(-\infty,-5)} + 4\chi_{[-5,0]} + 7\chi_{(0,1)} + 3\chi_{[1,6)} + 0\chi_{[6,\infty)}.$
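To make the definition concrete, here is a minimal sketch (an illustration, not part of the article) that represents a step function as a list of (coefficient, interval) terms and evaluates the finite linear combination of indicator functions; the interval encoding and the sample coefficients are choices made for this example only.

```python
from typing import List, Tuple

# An interval is (lo, hi, lo_closed, hi_closed); a step function is a list of
# (coefficient, interval) terms, f(x) = sum of alpha_i * chi_{A_i}(x).
Interval = Tuple[float, float, bool, bool]
Term = Tuple[float, Interval]

def indicator(a: Interval, x: float) -> int:
    """Indicator function chi_A(x): 1 if x lies in the interval A, else 0."""
    lo, hi, lo_closed, hi_closed = a
    left = x >= lo if lo_closed else x > lo
    right = x <= hi if hi_closed else x < hi
    return 1 if left and right else 0

def step(terms: List[Term], x: float) -> float:
    """Evaluate the finite linear combination of indicator functions at x."""
    return sum(alpha * indicator(a, x) for alpha, a in terms)

# f = 4*chi_[-5,1) + 3*chi_(0,6); overlapping intervals are allowed by the definition.
f = [(4.0, (-5.0, 1.0, True, False)), (3.0, (0.0, 6.0, False, False))]
print(step(f, -2.0), step(f, 0.5), step(f, 3.0))  # 4.0 7.0 3.0
```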
Variations in the definition
Sometimes, the intervals are required to be right-open or allowed to be singleton. The condition that the collection of intervals must be finite is often dropped, especially in school mathematics, though it must still be locally finite, resulting in the definition of piecewise constant functions.
Examples
A constant function is a trivial example of a step function. Then there is only one interval, $A_0 = \mathbb{R}$.
The sign function $\operatorname{sgn}(x)$, which is −1 for negative numbers and +1 for positive numbers, is the simplest non-constant step function.
The Heaviside function $H(x)$, which is 0 for negative numbers and 1 for positive numbers, is equivalent to the sign function up to a shift and scale of range ($H = (\operatorname{sgn} + 1)/2$). It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system.
The rectangular function, the normalized boxcar function, is used to model a unit pulse.
Non-examples
The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors also define step functions with an infinite number of intervals.
Properties
The sum and product of two step functions is again a step function. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.
A step function takes only a finite number of values. If the intervals $A_i$, for $i = 0, 1, \dots, n$, in the above definition of the step function are disjoint and their union is the real line, then $f(x) = \alpha_i$ for all $x \in A_i$.
The definite integral of a step function is a piecewise linear function.
The Lebesgue integral of a step function $f = \sum_{i=0}^{n} \alpha_i \chi_{A_i}$ is $\int f \, dx = \sum_{i=0}^{n} \alpha_i \ell(A_i)$, where $\ell(A)$ is the length of the interval $A$, and it is assumed here that all intervals $A_i$ have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral.
A discrete random variable is sometimes defined as a random variable whose cumulative distribution function is piecewise constant. In this case, it is locally a step function (globally, it may have an infinite number of steps). Usually, however, any random variable with only countably many possible values is called a discrete random variable; in this case, its cumulative distribution function is not necessarily locally a step function, as infinitely many intervals can accumulate in a finite region.
See also
Crenel function
Piecewise
Sigmoid function
Simple function
Step detection
Heaviside step function
Piecewise-constant valuation
References
Special functions | Step function | [
"Mathematics"
] | 662 | [
"Special functions",
"Combinatorics"
] |
171,087 | https://en.wikipedia.org/wiki/Normal%20science | Normal science, identified and elaborated on by Thomas Samuel Kuhn in The Structure of Scientific Revolutions, is the regular work of scientists theorizing, observing, and experimenting within a settled paradigm or explanatory framework. Regarding science as puzzle-solving, Kuhn explained normal science as slowly accumulating detail in accord with established broad theory, without questioning or challenging the underlying assumptions of that theory.
The route to normal science
Kuhn stressed that historically, the route to normal science could be a difficult one. Prior to the formation of a shared paradigm or research consensus, would-be scientists were reduced to the accumulation of random facts and unverified observations, in the manner recorded by Pliny the Elder or Francis Bacon, while simultaneously beginning the foundations of their field from scratch through a plethora of competing theories.
Arguably at least the social sciences remain at such a pre-paradigmatic level today.
Normal science at work
Kuhn considered that the bulk of scientific work was that done by the 'normal' scientist, as they engaged with the threefold task of articulating the paradigm, precisely evaluating key paradigmatic facts, and testing those new points at which the theoretical paradigm is open to empirical appraisal.
Paradigms are central to Kuhn's conception of normal science. Scientists derive rules from paradigms, which also guide research by providing a framework for action that encompasses all the values, techniques, and theories shared by the members of a scientific community. Paradigms gain recognition from more successfully solving acute problems than their competitors. Normal science aims to improve the match between a paradigm's predictions and the facts of interest to a paradigm. It does not aim to discover new phenomena.
According to Kuhn, normal science encompasses three classes of scientific problems. The first class of scientific problems is the determination of significant fact, such as the position and magnitude of stars in different galaxies. When astronomers use special telescopes to verify Copernican predictions, they engage the second class: the matching of facts with theory, an attempt to demonstrate agreement between the two. Improving the value of the gravitational constant is an example of articulating a paradigm theory, which is the third class of scientific problems.
The breakdown of consensus
The normal scientist presumes that all values, techniques, and theories falling within the expectations of the prevailing paradigm are accurate. Anomalies represent challenges to be puzzled out and solved within the prevailing paradigm. Only if an anomaly or series of anomalies resists successful deciphering long enough and for enough members of the scientific community will the paradigm itself gradually come under challenge during what Kuhn deems a crisis of normal science. If the paradigm is unsalvageable, it will be subjected to a paradigm shift.
Kuhn lays out the progression of normal science that culminates in scientific discovery at the time of a paradigm shift: first, one must become aware of an anomaly in nature that the prevailing paradigm cannot explain. Then, one must conduct an extended exploration of this anomaly. The crisis only ends when one discards the old paradigm and successfully maps the original anomaly onto a new paradigm. The scientific community embraces a new set of expectations and theories that govern the work of normal science. Kuhn calls such discoveries scientific revolutions. Successive paradigms replace each other and are necessarily incompatible with each other.
In this way however, according to Kuhn, normal science possesses a built-in mechanism that ensures the relaxation of the restrictions that previously bound research, whenever the paradigm from which they derive ceases to function effectively. Kuhn's framework restricts the permissibility of paradigm falsification to moments of scientific discovery.
Criticism
Kuhn's normal science is characterized by upheaval over cycles of puzzle-solving and scientific revolution, as opposed to cumulative improvement. In Kuhn's historicism, moving from one paradigm to the next completely changes the universe of scientific assumptions. Imre Lakatos has accused Kuhn of falling back on irrationalism to explain scientific progress. Lakatos relates Kuhnian scientific change to a mystical or religious conversion ungoverned by reason.
With the aim of presenting scientific revolutions as rational progress, Lakatos provided an alternative framework of scientific inquiry in his paper Falsification and the Methodology of Scientific Research Programmes. His model of the research programme preserves cumulative progress in science where Kuhn's model of successive irreconcilable paradigms in normal science does not. Lakatos' basic unit of analysis is not a singular theory or paradigm, but rather the entire research programme that contains the relevant series of testable theories. Each theory within a research programme shares the same common assumptions and is supported by a belt of more modest auxiliary hypotheses that serve to explain away potential threats to the theory's core assumptions. Lakatos evaluates problem shifts, changes to auxiliary hypotheses, by their ability to produce new facts, better predictions, or additional explanations. Lakatos' conception of a scientific revolution involves the replacement of degenerative research programmes by progressive research programmes. Rival programmes persist as minority views.
Lakatos is also concerned that Kuhn's position may result in the controversial position of relativism, for Kuhn accepts multiple conceptions of the world under different paradigms. Although the developmental process he describes in science is characterized by an increasingly detailed and refined understanding of nature, Kuhn does not conceive of science as a process of evolution towards any goal or telos. He has noted his own sparing use of the word truth in his writing.
An additional consequence of Kuhn's relativism, which poses a problem for the philosophy of science, is his blurred demarcation between science and non-science. Unlike Karl Popper's deductive method of falsification, under Kuhn, scientific discoveries that do not fit the established paradigm do not immediately falsify the paradigm. They are treated as anomalies within the paradigm that warrant further research, until a scientific revolution refutes the entire paradigm.
See also
References
Further reading
W. O. Hagstrom, The Scientific Community (1965)
External links
Paradigms and normal science
Philosophy of science
Science and technology studies | Normal science | [
"Technology"
] | 1,256 | [
"Science and technology studies"
] |
171,104 | https://en.wikipedia.org/wiki/Chitin | Chitin (C8H13O5N)n ( ) is a long-chain polymer of N-acetylglucosamine, an amide derivative of glucose. Chitin is the second most abundant polysaccharide in nature (behind only cellulose); an estimated 1 billion tons of chitin are produced each year in the biosphere. It is a primary component of cell walls in fungi (especially filamentous and mushroom-forming fungi), the exoskeletons of arthropods such as crustaceans and insects, the radulae, cephalopod beaks and gladii of molluscs and in some nematodes and diatoms.
It is also synthesised by at least some fish and lissamphibians. Commercially, chitin is extracted from the shells of crabs, shrimps, shellfish and lobsters, which are major by-products of the seafood industry. The structure of chitin is comparable to cellulose, forming crystalline nanofibrils or whiskers. It is functionally comparable to the protein keratin. Chitin has proved useful for several medicinal, industrial and biotechnological purposes.
Etymology
The English word "chitin" comes from the French word chitine, which was derived in 1821 from the Greek word χιτών (khitōn) meaning covering.
A similar word, "chiton", refers to a marine animal with a protective shell.
Chemistry, physical properties and biological function
The structure of chitin was determined by Albert Hofmann in 1929. Hofmann hydrolyzed chitin using a crude preparation of the enzyme chitinase, which he obtained from the snail Helix pomatia.
Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength.
In its pure, unmodified form, chitin is translucent, pliable, resilient, and quite tough. In most arthropods, however, it is often modified, occurring largely as a component of composite materials, such as in sclerotin, a tanned proteinaceous matrix, which forms much of the exoskeleton of insects. Combined with calcium carbonate, as in the shells of crustaceans and molluscs, chitin produces a much stronger composite. This composite material is much harder and stiffer than pure chitin, and is tougher and less brittle than pure calcium carbonate. Another difference between pure and composite forms can be seen by comparing the flexible body wall of a caterpillar (mainly chitin) to the stiff, light elytron of a beetle (containing a large proportion of sclerotin).
In butterfly wing scales, chitin is organized into stacks of gyroids constructed of chitin photonic crystals that produce various iridescent colors serving phenotypic signaling and communication for mating and foraging. The elaborate chitin gyroid construction in butterfly wings creates a model of optical devices having potential for innovations in biomimicry. Scarab beetles in the genus Cyphochilus also utilize chitin to form extremely thin scales (five to fifteen micrometres thick) that diffusely reflect white light. These scales are networks of randomly ordered filaments of chitin with diameters on the scale of hundreds of nanometres, which serve to scatter light. The multiple scattering of light is thought to play a role in the unusual whiteness of the scales. In addition, some social wasps, such as Protopolybia chartergoides, orally secrete material containing predominantly chitin to reinforce the outer nest envelopes, composed of paper.
Chitosan is produced commercially by deacetylation of chitin by treatment with sodium hydroxide. Chitosan has a wide range of biomedical applications including wound healing, drug delivery and tissue engineering. Due to its specific intermolecular hydrogen bonding network, dissolving chitin in water is very difficult. Chitosan (with a degree of deacetylation of more than ~28%), on the other hand, can be dissolved in dilute acidic aqueous solutions below a pH of 6.0, such as acetic, formic and lactic acids. Chitosan with a degree of deacetylation greater than ~49% is soluble in water.
Humans and other mammals
Humans and other mammals have chitinase and chitinase-like proteins that can degrade chitin; they also possess several immune receptors that can recognize chitin and its degradation products, initiating an immune response.
Chitin is sensed mostly in the lungs or gastrointestinal tract where it can activate the innate immune system through eosinophils or macrophages, as well as an adaptive immune response through T helper cells. Keratinocytes in skin can also react to chitin or chitin fragments.
Plants
Plants also have receptors that can cause a response to chitin, namely chitin elicitor receptor kinase 1 and chitin elicitor-binding protein. The first chitin receptor was cloned in 2006. When the receptors are activated by chitin, genes related to plant defense are expressed, and jasmonate hormones are activated, which in turn activate systemic defenses. Commensal fungi have ways to interact with the host immune response that are not yet well understood.
Some pathogens produce chitin-binding proteins that mask the chitin they shed from these receptors. Zymoseptoria tritici is an example of a fungal pathogen that has such blocking proteins; it is a major pest in wheat crops.
Fossil record
Chitin was probably present in the exoskeletons of Cambrian arthropods such as trilobites. The oldest preserved (intact) chitin samples thus far reported are dated to the Oligocene, from specimens encased in amber where the chitin has not completely degraded.
Uses
Agriculture
Chitin is a good inducer of plant defense mechanisms for controlling diseases. It has potential for use as a soil fertilizer or conditioner to improve fertility and plant resilience that may enhance crop yields.
Industrial
Chitin is used in many industrial processes. Examples of the potential uses of chemically modified chitin in food processing include the formation of edible films and as an additive to thicken and stabilize foods and food emulsions. Processes to size and strengthen paper employ chitin and chitosan.
Research
How chitin interacts with the immune system of plants and animals has been an active area of research, including the identity of key receptors with which chitin interacts, whether the size of chitin particles is relevant to the kind of immune response triggered, and mechanisms by which immune systems respond. Chitin is deacetylated chemically or enzymatically to produce chitosan, a highly biocompatible polymer which has found a wide range of applications in the biomedical industry. Chitin and chitosan have been explored as vaccine adjuvants due to their ability to stimulate an immune response.
Chitin and chitosan are under development as scaffolds in studies of how tissue grows and how wounds heal, and in efforts to invent better bandages, surgical thread, and materials for allotransplantation. Sutures made of chitin have been experimentally developed, but their lack of elasticity and problems making thread have prevented commercial success so far.
Chitosan has been demonstrated and proposed to make a reproducible form of biodegradable plastic. Chitin nanofibers are extracted from crustacean waste and mushrooms for possible development of products in tissue engineering, drug delivery and medicine.
Chitin has been proposed for use in building structures, tools, and other solid objects from a composite material, combining chitin with Martian regolith. To build this, the biopolymers in the chitin are suggested as the binder for the regolith aggregate to form a concrete-like composite material. The authors believe that waste materials from food production (e.g. scales from fish, exoskeletons from crustaceans and insects, etc.) could be put to use as feedstock for manufacturing processes.
See also
Chitobiose
Lorica
Sporopollenin
Tectin
References
External links
Acetamides
Biomolecules
Biopesticides
Polysaccharides | Chitin | [
"Chemistry",
"Biology"
] | 1,830 | [
"Carbohydrates",
"Natural products",
"Biochemistry",
"Organic compounds",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Polysaccharides"
] |
171,136 | https://en.wikipedia.org/wiki/Public%20utility | A public utility company (usually just utility) is an organization that maintains the infrastructure for a public service (often also providing a service using that infrastructure). Public utilities are subject to forms of public control and regulation ranging from local community-based groups to statewide government monopolies.
Public utilities are meant to supply goods and services that are considered essential; water, gas, electricity, telephone, waste disposal, and other communication systems represent much of the public utility market. The transmission lines used in the transportation of electricity, or natural gas pipelines, have natural monopoly characteristics. A monopoly can arise when a firm minimizes its costs through economies of scale to the point where other companies cannot compete with it. For example, if many companies are already offering electricity, installing an additional power plant would only disadvantage the consumer, as prices could increase. If the infrastructure already exists in a given area, minimal benefit is gained through competing. In other words, these industries are characterized by economies of scale in production. These natural monopolies are typically overseen by a public utilities commission or another institution that represents the government.
There are many different types of public utilities. Some, especially large companies, offer multiple products, such as electricity and natural gas. Other companies specialize in one specific product, such as water. Modern public utilities may also be partially (or completely) sourced from clean and renewable energy in order to produce sustainable electricity. Of these, wind turbines and solar panels are those used most frequently.
Whether broadband internet access should be a public utility became a question with the rise of internet usage, largely because telephone service was already considered a public utility and broadband internet access has arguably taken over its role. In 2015, the Federal Communications Commission (FCC) in the United States made its stance on the issue clear: on the reasoning that telephone service had been treated as a public utility, the FCC made broadband internet access a public utility in the United States.
Management
Public utilities have historically been considered to be a natural monopoly. This school of thought holds that the most cost-efficient way of doing business is through a single firm because these are capital-intensive businesses with unusually large economies of scale and high fixed costs associated with building and operating the infrastructure, e.g. power plants, telephone lines and water treatment facilities. However, over the past several decades, traditional public utilities' monopoly position has eroded. For instance, wholesale electricity generation markets, electric transmission networks, electricity retailing and customer choice, telecommunications, some types of public transit and postal services have become competitive in some countries and the trend towards liberalization, deregulation and privatization of public utilities is growing. However, the infrastructure used to distribute most utility products and services has remained largely monopolistic.
Key players in the public utility sector include:
Generators produce or collect the specific product to be used by customers: for example, electricity or water.
Network operators (grid operators, regional network operators, and distribution network operators) sell access to their networks to retail service providers, who deliver the product to the end user.
Traders and marketers buy and sell the actual product and create further complex structured products, combined services and derivatives products. Depending on the product structure, these companies may provide utilities and businesses with a reliable supply of a product like electricity at a stable, predictable price, or a shorter term supply at a more volatile price.
Service providers and retailers are the last segment in the supply chain, selling directly to the final consumer. In some markets, final consumers can choose their own retail service provider.
Public utilities must pursue the following objectives, given the social responsibility attached to their services:
Ensuring services are of the highest quality and responsive to the needs and wishes of patients;
Ensuring that health services are effectively targeted so as to improve the health of local populations;
Improving the efficiency of the services so the volume of well-targeted effective services is the widest, given the available resources.
The management of public utilities continues to be important for local and general governments. By creating, expanding, and improving upon public utilities, a governmental body may attempt to improve its image or attract investment. Traditionally, public services have been provided by public legal entities, which operate much like corporations but differ in that profit is not necessary for a functional business. A significant factor in government ownership has been to reduce the risk that an activity, if left to private initiative, may be considered not sufficiently profitable and neglected. Many utilities are essential for human life, national defense, or commerce, and the risk of public harm from mismanagement is considerably greater than with other goods. The principle of universality of utilities maintains that these services are best owned by, and operated for, the public. The government and society would like these services to be economically accessible to all or most of the population. Other economic reasons also support the idea: public services require huge investments in infrastructure, crucial for competitiveness but with a slow return on capital, and technical difficulties can arise in managing a plurality of networks, for example in the city subsoil.
Public pressure for renewable energy as a replacement for legacy fossil fuel power has steadily increased since the 1980s. As the technology needed to source the necessary amount of energy from renewable sources is still under study, public energy policy has been focused on short term alternatives such as natural gas (which still produces substantial carbon dioxide) or nuclear power. In 2021 a power and utilities industry outlook report by Deloitte identified a number of trends for the utilities industry:
Enhanced competition, sparked by regulations such as FERC's Order 2222 that open up the market to smaller, innovative firms using renewable energy sources, like wind or solar power
Expansions in infrastructure, to manage new renewable energy sources
Greater electrification of transportation, and longer-range batteries for cars and trucks
Oil companies and other traditional-energy players entering the renewable-energy field
A greater emphasis on disaster readiness
Finance
Issues faced by public utilities include:
Service area: regulators need to balance the economic needs of the companies and the social equity needed to guarantee to everyone the access to primary services.
Autonomy: Economic efficiency requires that markets be left to work by themselves with little intervention. Such arrangements are often not equitable for consumers who might be priced out of the market.
Pricing: Equity requires that all citizens get the service at a fair price.
Alternative pricing methods include:
Average production costs: the utility calculates the break-even point and then set the prices equal to average costs. The equity issue is basically overcome since most of the market is being served. As a defect regulated firms do not have incentives to minimize costs.
Rate of return regulation: regulators let the firms set and charge any price, as long as the rate of return on invested capital does not exceed a certain rate. This method is flexible and allows for pricing freedom, forcing regulators to monitor prices. The drawback is that this method could lead to overcapitalization. For example, if the rate of return is set at five percent, then the firm can charge a higher price simply by investing more in capital than is actually needed (i.e., 5% of $10 million is greater than 5% of $6 million; see the sketch following this list).
Price cap regulation: regulators directly set a limit on the maximum price. This method can result in a loss of service area. One benefit of this method is that it gives firms an incentive to seek cost-reducing technologies as a strategy to increase utility profits.
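As a rough illustration of the incentive difference described above, the following sketch uses entirely hypothetical numbers to compare allowed revenue under rate-of-return regulation, where returns scale with invested capital, with revenue under a simple price cap; it is not a model of any actual regulator's method.

```python
def rate_of_return_revenue(operating_cost: float, capital_base: float,
                           allowed_return: float = 0.05) -> float:
    # Revenue requirement: recover operating costs plus a return on invested capital.
    return operating_cost + allowed_return * capital_base

def price_cap_revenue(price_cap: float, quantity_sold: float) -> float:
    # Revenue is bounded by the regulated maximum price, regardless of capital invested.
    return price_cap * quantity_sold

# Hypothetical utility: identical operating costs, two possible capital programmes.
lean = rate_of_return_revenue(operating_cost=2_000_000, capital_base=6_000_000)
gold_plated = rate_of_return_revenue(operating_cost=2_000_000, capital_base=10_000_000)
print(lean, gold_plated)  # 2300000.0 2500000.0 -> over-capitalising raises allowed revenue

print(price_cap_revenue(price_cap=0.12, quantity_sold=20_000_000))  # 2400000.0, fixed by the cap
```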
Utility stocks are considered stable investments because they typically provide regular dividends to shareholders and have more stable demand. Even in periods of economic downturns characterized by low interest rates, such stocks are attractive because dividend yields are usually greater than those of other stocks, so the utility sector is often part of a long-term buy-and-hold strategy.
Utilities require expensive critical infrastructure which needs regular maintenance and replacement. Consequently, the industry is capital intensive, requiring regular access to the capital markets for external financing. A utility's capital structure may have a significant debt component, which exposes the company to interest rate risk. Should rates rise, the company must offer higher yields to attract bond investors, driving up the utility's interest expenses. If the company's debt load and interest expense becomes too large, its credit rating will deteriorate, further increasing the cost of capital and potentially limiting access to the capital markets.
By country
Kazakhstan
Public utilities in Kazakhstan include heating, water supply, sewerage, electricity and communications systems.
Heating systems
They are mainly represented by centralized networks, with the exception of some rural areas.
Various types of fuels are used, including coal, natural gas and fuel oil.
Many systems need to be upgraded to increase their efficiency and reduce their environmental impact.
Water supply systems
They provide the population with drinking and industrial water.
The sources of water are rivers, lakes and groundwater.
The level of water quality in some regions is of concern.
It is necessary to increase the efficiency of water resources use and improve water quality.
Sewerage systems
Wastewater is diverted from residential and industrial facilities.
The level of wastewater treatment in some regions does not meet modern standards.
Sewerage systems need to be expanded and upgraded to protect the environment.
Power supply
It is provided by power plants running on various types of fuels, including coal, natural gas, hydropower and nuclear energy.
There are problems with power outages, especially in rural areas.
It is necessary to modernize the power grid and increase their efficiency.
The heating, water supply and sewerage systems of Kazakhstan, although functioning, require urgent modernization. The technical capabilities of these networks are becoming outdated, which leads to an increase in operating costs and a decrease in their reliability.
A report by the European Bank for Reconstruction and Development (EBRD) notes that additional investments are needed to improve the efficiency and reliability of these systems.
The analysis conducted by the EBRD revealed a number of problems faced by heating, water supply and sewerage systems in Kazakhstan.
Outdated technologies: In many cases, the infrastructure has exhausted its resource and needs to be replaced.
Low energy efficiency: Existing systems consume a lot of energy, which leads to unjustified costs.
Unreliability: Worn-out networks often fail, which leads to interruptions in the supply of water and heat, as well as leaks.
The report also provides examples of cities where networks are being upgraded with the support of the EBRD. These projects demonstrate how the introduction of modern technologies can improve the efficiency, reliability and environmental friendliness of heating, water supply and sewerage systems.
Upgrading infrastructure is not just a matter of convenience. It is of vital importance for public health, environmental protection and ensuring the sustainable development of the economy of Kazakhstan.
In most cases, public utilities in Kazakhstan are state-owned, which means that their activities are directly regulated by akimats. This creates a system with an administrative nature of relations, where the authorities have the authority to issue mandatory instructions for these companies.
The influence of the state on the activity
Proponents of such a system emphasize that it allows the authorities to directly influence the commercial activities of public utilities, ensuring their compliance with state interests. This can be expressed in:
Tariff control: Akimats can set tariffs for housing and communal services, making them accessible to the public.
Ensuring the quality of services: The State can influence the standards of service by ensuring the provision of public services of appropriate quality.
Implementation of social programs: Public utilities can participate in social programs aimed at supporting vulnerable segments of the population.
Limitations of State control
However, such a system has its drawbacks. Excessive government intervention can lead to:
Reduced efficiency: Bureaucratic procedures and restrictions in decision-making can slow down the work of enterprises and hinder the introduction of innovations.
Unreasonable expenses: Administrative barriers and inefficient management can lead to an increase in inappropriate expenses.
Limiting investments: The uncertainty of government policy and the risks of interference from akimats may deter potential investors.
Resource efficiency:
Despite these limitations, utilities within the framework of this system can demonstrate high efficiency in the use of labor resources and management costs.
Residents of Kazakhstan receive water, sewerage and heating from companies recognized by the state as natural monopolies. This means that there is no competition in these areas, and tariffs are set by a special state body – the Committee for Regulation of Natural Monopolies, Competition and Consumer Protection (CRNM and CP).
In order to ensure the smooth operation of public utilities, the state also controls the investment programs of monopolistic companies. This is handled by the Committee on Construction and Housing and Communal Services. Such a system allows the state to regulate prices for utilities and to direct investment toward infrastructure development. However, this system also has its disadvantages. For example, the lack of competition can lead to a decrease in the efficiency of monopolistic companies.
To protect the interests of consumers from unjustified overpricing and substandard service, there are special regulatory bodies whose powers are regulated by the Law "On Natural Monopolies" and other regulatory acts.
Main functions:
Investment promotion: Development of tariff calculation methods that are attractive to both consumers and private investors interested in investing in the modernization of public infrastructure.
Control over the use of funds from IFIs: Determining the specifics of regulating the activities of natural monopolies that attract financing from international financial institutions (IFIs). This makes it possible to track the intended use of borrowed funds.
Formation of a transparent tariff policy: Establishment of rules obliging monopolistic companies to publicly disclose information about tariffs, as well as infrastructure development plans.
Analysis of investment programs: Evaluation of investment programs of natural monopolies, approval of development plans and control over their implementation.
Interaction at different levels:
It is important to note that the powers to regulate the activities of natural monopolies are distributed between federal and local authorities. Effective coordination of their actions is necessary to ensure coordinated work and achieve common goals.
As a result, the activities of the regulatory authorities of natural monopolies are aimed at ensuring a balance between the interests of consumers, utility companies and the state.
The EBRD
2017 was marked by a new round of cooperation between Kazakhstan and the European Bank for Reconstruction and Development (EBRD). The parties signed a three-year agreement with the aim of working together to modernize the country's infrastructure.
As part of this agreement, the EBRD will allocate funds for the implementation of a number of important projects aimed at:
Improving urban infrastructure: Upgrading water supply, sewerage, heating and other vital facilities will be a priority.
Optimization of customs procedures: Joint efforts will be made to simplify customs processes, which should lead to stimulating trade and accelerating economic growth.
In addition to these two key areas, the EBRD will continue to support other initiatives aimed at improving the well-being of citizens of Kazakhstan.
Azerbaijan
Chad
Colombia
Turkey
United Kingdom and Ireland
In the United Kingdom and Ireland, the state, private firms, and charities ran the traditional public utilities. For instance, the Sanitary Districts were established in England and Wales in 1875 and in Ireland in 1878.
The term can refer to the set of services provided by various organizations that are used in everyday life by the public, such as: electricity generation, electricity retailing, electricity supplies, natural gas supplies, water supplies, sewage works, sewage systems and broadband internet services. They are regulated by Ofgem, Ofwat, Ofcom, the Water Industry Commission for Scotland and the Utility Regulator in the United Kingdom, and the Commission for Regulation of Utilities and the Commission for Communications Regulation in the Republic of Ireland. Disabled community transport services may occasionally be included within the definition. They were mostly privatised in the UK during the 1980s.
United States
The first public utility in the United States was a grist mill erected on Mother Brook in Dedham, Massachusetts, in 1640.
In the U.S., public utilities provide services at the consumer level, be it residential, commercial, or industrial consumer. Utilities, merchant power producers and very large consumers buy and sell bulk electricity at the wholesale level through a network of regional transmission organizations (RTO) and independent system operators (ISO) within one of three grids, the Eastern Interconnection, the Texas Interconnection, which is a single ISO, and the Western Interconnection.
U.S. utilities historically operated with a high degree of financial leverage and low interest coverage ratios compared to industrial companies. Investors accepted these credit characteristics because of the regulation of the industry and the belief that there was minimal bankruptcy risk because of the essential services they provide. In recent decades several high-profile utility company bankruptcies have challenged this perception.
Monopoly vs. competition
Public utilities were historically regarded as natural monopolies because the infrastructure required to produce and deliver a product such as electricity or water is very expensive to build and maintain. Once assets such as power plants or transmission lines are in place, the cost of adding another customer is small, and duplication of facilities would be wasteful. As a result, utilities were either government monopolies, or if investor-owned, regulated by a public utilities commission.
In the electric utility industry, the monopoly approach began to change in the 1990s. In 1996, the Federal Energy Regulatory Commission (FERC) issued its Order No. 888, which mandated that electric utilities open access to their transmission systems to enhance competition and "functionally unbundle" their transmission service from their other operations. The order also promoted the role of an independent system operator to manage power flow on the electric grid. Later, FERC Order No. 889 established an electronic information system called OASIS (open access same-time information system) which would give new users of transmission lines access to the same information available to the owner of the network. The result of these and other regulatory rulings was the eventual restructuring of the traditional monopoly-regulated regime to one in which all bulk power sellers could compete. A further step in industry restructuring, "customer choice", followed in some 19 states, giving retail electric customers the option to be served by non-utility retail power marketers.
Ownership structure
Public utilities can be privately owned or publicly owned. Publicly owned utilities include cooperative and municipal utilities. Municipal utilities may actually include territories outside of city limits or may not even serve the entire city. Cooperative utilities are owned by the customers they serve. They are usually found in rural areas. Publicly owned utilities are non-profit. Private utilities, also called investor-owned utilities, are owned by investors, and operate for profit, often referred to as a rate of return.
Regulation
A public utilities commission is a governmental agency in a particular jurisdiction that regulates the commercial activities related to associated electric, natural gas, telecommunications, water, railroad, rail transit, and/or passenger transportation companies. For example, the California Public Utilities Commission (CPUC) and the Public Utility Commission of Texas regulate the utility companies in California and Texas, respectively, on behalf of their citizens and ratepayers (customers). These public utility commissions (PUCs) are typically composed of commissioners, who are appointed by their respective governors, and dedicated staff that implement and enforce rules and regulations, approve or deny rate increases, and monitor/report on relevant activities.
Ratemaking practice in the U.S. holds that rates paid by a utility's customers should be set at a level which assures that the utility can provide reliable service at reasonable cost.
Over the years, various changes have dramatically re-shaped the mission and focus of many public utility commissions. Their focus has typically shifted from the up-front regulation of rates and services to the oversight of competitive marketplaces and enforcement of regulatory compliance.
See also
Building block model, form of public utility regulation common in Australia
Public utility building
References
External links
World Bank report on Water, Electricity and Utility subsidies
Latest News in Utilities and Information Technology
Latest in UK business utility news
Economics of transport and utility industries
Flow meters
Monopoly (economics)
Public services | Public utility | [
"Chemistry",
"Technology",
"Engineering"
] | 4,063 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
171,209 | https://en.wikipedia.org/wiki/Hayes%20AT%20command%20set | The Hayes command set (also known as the AT command set) is a specific command language originally developed by Dale Heatherington and Dennis Hayes for the Hayes Smartmodem in 1981.
The command set consists of a series of short text strings which can be combined to produce commands for operations such as dialing, hanging up, and changing the parameters of the connection. The vast majority of dial-up modems use the Hayes command set in numerous variations.
The command set covered only those operations supported by the earliest modems. When new commands were required to control additional functionality in higher speed modems, a variety of one-off standards emerged from each of the major vendors. These continued to share the basic command structure and syntax, but added any number of new commands behind some sort of prefix character, which differed among vendors such as Hayes, USRobotics and Microcom. Many of these were re-standardized on the Hayes extensions after the introduction of the SupraFAXModem 14400 and the market consolidation that followed.
The term "Hayes compatible" was and still is important within the industry.
History
Background
Before the introduction of the bulletin board system (BBS), modems typically operated on direct-dial telephone lines that began and ended with a known modem at each end. The modems operated in either "originate" or "answer" modes, manually switching between two sets of frequencies for data transfer. Generally, the user placing the call would switch their modem to "originate" and then dial the number by hand. When the remote modem answered, already set to "answer" mode, the telephone handset was switched off and communications continued until the caller manually disconnected.
When automation was required, it was commonly only needed on the answer side; for instance, a bank might need to take calls from a number of branch offices for end-of-day processing. To fill this role, some modems included the ability to pick up the phone automatically when it was in answer mode, and to clear the line when the other user manually disconnected. The need for automated outbound dialling was considerably less common, and was handled through a separate peripheral device: a "dialler". This was normally plugged into a separate input/output port on the computer (typically an RS-232 port) and programmed separately from the modem itself.
This method of operation worked satisfactorily in the 1960s and early 1970s, when modems were generally used to connect dumb devices like computer terminals (dialling out) with smart mainframe computers (answering). However, the microcomputer revolution of the 1970s led to the introduction of low-cost modems and the idea of a semi-dedicated point-to-point link was no longer appropriate. There were potentially thousands of users who might want to dial any of the other thousands of users, and the only solution at the time was to make the user dial manually.
The computer industry needed a way to tell the modem what number to dial through software. The earlier separate dialers had this capability, but only at the cost of a separate port, which a microcomputer might not have available. Another solution would have been to use a separate set of "command pins" dedicated to sending and receiving commands; another could have used a signal pin indicating that the modem should interpret incoming data as a command. Both of these had hardware support in the RS-232 standard. However, many implementations of the RS-232 port on microcomputers were extremely basic, and some eliminated many of these pins to reduce cost.
Hayes' solution
Hayes Communications introduced a solution in its 1981 Smartmodem by using the existing data pins with no modification. Instead, the modem itself could be switched between one of two modes:
Data mode in which the modem sends the data to the remote modem. (A modem in data mode treats everything it receives from the computer as data and sends it across the phone line).
Command mode in which data is interpreted as commands to the local modem (commands the local modem should execute).
To switch from data mode to command mode, sessions sent an escape sequence: a string of three plus signs ("+++") followed by a pause of about a second. The pause at the end of the escape sequence was required to reduce the problem caused by in-band signaling: if any other data was received within one second of the three plus signs, it was not the escape sequence and would be sent as data. To switch back, they sent the online command, "ATO". In actual use, many of the commands automatically switched to online mode after completion, and it is rare for a user to use the online command explicitly.
In order to avoid licensing Hayes's patent, some manufacturers implemented the escape sequence without the time guard interval (Time Independent Escape Sequence, or TIES). This had a major denial-of-service security implication: it would lead to the modem hanging up the connection should the computer ever transmit the escape sequence followed by a hang-up command in data mode. For any computer connected to the Internet through such a modem, this could be easily exploited by sending it a ping (ICMP echo request) containing that character sequence in the payload. The computer's operating system would automatically reply to the sender with the same payload, immediately disconnecting itself from the Internet, as the modem would interpret the ICMP data payload as a Hayes command. The same error would also be triggered if, for example, the user of the computer ever tried to send an e-mail containing the aforementioned string.
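A minimal sketch of issuing the guard-time escape programmatically with the pyserial library; the serial device name, speed, and guard interval shown here are assumptions for illustration, and real values depend on the modem and operating system.

```python
import time
import serial  # pyserial

# Open the serial port the modem is attached to (the port name is hypothetical).
with serial.Serial("/dev/ttyS0", 2400, timeout=2) as port:
    time.sleep(1.2)          # leading guard time: no data for at least one second
    port.write(b"+++")       # escape sequence, with no pauses between the plus signs
    time.sleep(1.2)          # trailing guard time before any further data
    port.write(b"ATH\r")     # now in command mode: hang up the call
    print(port.read(64))     # e.g. b"\r\nOK\r\n" from a Hayes-compatible modem
```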
Commands
The Hayes command set includes commands for various phone-line operations such as dialing and hanging up. It also includes various controls to set up the modem, including a set of register commands which allowed the user to directly set the various memory locations in the original Hayes modem. The command set was copied largely verbatim, including the meaning of the registers, by almost all early 300 baud modem manufacturers, of which there were quite a few.
The expansion to 1200 and 2400 baud required the addition of new commands, some of them prefixed with an ampersand ("&") to denote those dedicated to new functionality. Hayes itself was forced to quickly introduce a 2400 baud model shortly after its 1200, and the command sets were identical as a time-saving method. Essentially by accident, this allowed users of existing 1200 baud modems to use the new Hayes 2400 models without changing their software. This reinforced the use of the Hayes versions of these commands. Years later, the Telecommunications Industry Association (TIA)/Electronic Industries Alliance (EIA) formally standardized the 2400-baud command set as Data Transmission Systems and Equipment – Serial Asynchronous Automatic Dialing and Control, TIA/EIA-602.
However, Hayes Communications was slow to release modems supporting higher speeds or compression, and three other companies took the lead: Microcom, U.S. Robotics, and Telebit. Each of these three used its own additional command set. By the early 1990s, there were four major command sets in use, and a number of versions based on one of these. Things became simpler again with the widespread introduction of 14.4 kbit/s and faster modems in the early 1990s. Slowly, a set of commands based heavily on the original Hayes extended command set became popular, and then universal. Only one other command set has remained popular, the U.S. Robotics set from their popular line of modems.
Description
The following text lists part of the Hayes command set, also called the AT commands: "AT" meaning 'attention'. Each command string is prefixed with "AT", and a number of discrete commands can be concatenated after the "AT".
The Hayes command set can subdivide into four groups:
basic command set – A capital character followed by a digit. For example, M1.
extended command set – An "&" (ampersand) and a capital character followed by a digit. This extends the basic command set. For example, &M1. Note that M1 is different from &M1.
proprietary command set – Usually starting either with a backslash (“\”) or with a percent sign (“%”); these commands vary widely among modem manufacturers.
register commands – Sr=n where r is the number of the register to be changed, and n is the new value that is assigned. A register represents a specific physical location in memory. Modems have small amounts of memory on board. The fourth set of commands serves for entering values into a particular register (memory location). For example, S7=60 instructs the modem to "Set register #7 to the value 60". Registers usually control aspects of the modem operation (e.g. transmission strength, modulation parameters) and are usually specific to a particular model.
Although the command-set syntax defines most commands by a letter-number combination (L0, L1 etc.), the use of a zero is optional. In this example, "L0" equates to a plain "L". Keep this in mind when reading the table below.
When in data mode, an escape sequence can return the modem to command mode. The normal escape sequence is three plus signs ("+++"), and to disambiguate it from possible real data, a guard timer is used: it must be preceded by a pause, not have any pauses between the plus signs, and be followed by a pause; by default, a "pause" is one second and "no pause" is anything less.
Syntactical definitions
The following syntactical definitions apply:
Carriage return character is the command line and result code terminator character, whose value, in decimal ASCII between 0 and 255, is specified in register S3. The default value is 13.
Linefeed character is the character recognised as the line feed. Its value, in decimal ASCII between 0 and 255, is specified in register S4. The default value is 10. The line feed character is output after the carriage return character if verbose result codes are used (the V1 option); otherwise, if numeric result codes are used (the V0 option), it will not appear in the result codes.
A name enclosed in angle brackets is a syntactical element; the brackets themselves do not appear in the command line.
An optional subparameter of a command, or an optional part of an AT information response, is enclosed in square brackets; the brackets themselves do not appear in the command line. When the subparameter is not given in an AT command that has a Read command, the new value equals its previous value. In AT commands that do not store the values of any of their subparameters, and so do not have a Read command (these are called action type commands), the action should be performed on the basis of the recommended default setting of the subparameter.
Modem initialization
A string can contain many Hayes commands placed together, so as to optimally prepare the modem to dial out or answer, e.g. AT&F&D2&C1S0=0X4. Most modem software supported a user-supplied initialization string, typically a long concatenated AT command that was sent to the modem upon launch. The V.250 specification requires all DCEs to accept a body (after "AT") of at least 40 characters of concatenated commands.
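A sketch of sending such an initialization string and checking the modem's result code, again with pyserial and hypothetical port settings; the helper name send_at is an invention for this example.

```python
import serial  # pyserial

def send_at(port: serial.Serial, body: str) -> str:
    """Send one AT command line and return the modem's response text."""
    port.write(b"AT" + body.encode("ascii") + b"\r")
    return port.read(256).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyS0", 2400, timeout=3) as modem:
    # Concatenated initialization mirroring the example string above:
    # factory defaults, DTR/DCD behaviour, auto-answer off, extended result codes.
    reply = send_at(modem, "&F&D2&C1S0=0X4")
    if "OK" not in reply:
        raise RuntimeError("modem rejected initialization: " + reply.strip())
```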
Example session
The following represents two computers, computer A and computer B, both with modems attached, and the user controlling the modems with terminal emulator software. Terminal-emulator software typically allows the user to send Hayes commands directly to the modem, and to see the responses. In this example, the user of computer A makes the modem dial the phone number of modem B at phone number (212) 555-0100 (long distance). After every command and response, there is a carriage return sent to complete the command.
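A minimal illustrative exchange of this kind, assuming verbose result codes and a successful connection; the responses shown are typical of Hayes-compatible modems rather than a transcript of any particular session.

```
ATDT12125550100     computer A dials modem B using touch tones
CONNECT             the modems have handshaked; both ends are now in data mode
...                 data flows between the two computers
+++                 computer A pauses, sends the escape sequence, pauses again
OK                  modem A is back in command mode
ATH                 hang up, ending the call
OK
```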
Compatibility
While the original Hayes command set represented a huge leap forward in modem-based communications, with time many problems set in, almost none of them due to Hayes per se:
Due to the lack of a written standard, other modem manufacturers just copied the external visible commands and (roughly) the basic actions. This led to a wide variety of subtle differences in how modems changed from state to state, and how they handled error conditions, hangups, and timeouts.
Each manufacturer tended to add new commands to handle emerging needs, often incompatible with other modems. For example, setting up hardware or software handshaking often required many different commands for different modems. This undermined the handy universality of the basic Hayes command set.
Many Hayes compatible modems had serious quirks that made them effectively incompatible. For example, many modems required a pause of several seconds after receiving the "AT Z" reset command. Some modems required spaces between commands, while others did not. Some would unhelpfully change baud-rate of their own volition, which would leave the computer with no clue how to handle the incoming data.
As a result of all this, eventually many communications programs had to give up any sense of being able to talk to all "Hayes-compatible" modems, and instead the programs had to try to determine the modem type from its responses, or provide the user with some option whereby they could enter whatever special commands it took to coerce their particular modem into acting properly.
Autobaud
The Hayes command set facilitated automatic baud rate detection, as "A" and "T" happen to have bit patterns that are very regular; "A" is "100 0001" and so has a 1 bit at the start and end, and "T" is "101 0100", which has a pattern with (nearly) every other bit set. Since the RS-232 interface transmits the least significant bit first, the corresponding line pattern with 8-N-1 (eight data bits, no parity bit, one stop bit) is 01000001010001010101 (each character framed by a start bit of 0 and a stop bit of 1), which is used as a syncword.
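The framing described above can be verified with a short script that prints each character of "AT" as an 8-N-1 frame (start bit 0, eight data bits least-significant-bit first, stop bit 1) and reproduces the line pattern quoted in the text.

```python
def frame_8n1(ch: str) -> str:
    # 8-N-1 framing: start bit (0), 8 data bits LSB first, stop bit (1).
    data = format(ord(ch), "08b")[::-1]  # reverse to transmit least significant bit first
    return "0" + data + "1"

pattern = "".join(frame_8n1(c) for c in "AT")
print(pattern)  # 01000001010001010101
assert pattern == "01000001010001010101"
```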
The basic Hayes command set
The following commands are understood by virtually all modems supporting an AT command set, whether old or new.
{| summary="Basic Hayes Command Set" class="wikitable"
! Command
! Description
! Comments
|-
| A0 or A
| Answer incoming call
|
|-
| A/
| Repeat last command
| Do not preface with AT, do not follow with carriage return. Enter usually aborts.
|-
| D
| Dial
| Dial the following number and then handshake
P – Pulse Dial
T – Touch Tone Dial
W – Wait for the second dial tone
R – Reverse to answer-mode after dialing
@ - Wait for up to 30 seconds for one or more ringbacks
, - Pause for the time specified in register S8 (usually 2 seconds)
; – Remain in command mode after dialing.
! – Flash switch-hook (Hang up for a half second, as in transferring a call.)
L – Dial last number
|-
| E0 or E
| No Echo
| Will not echo commands to the computer
|-
| E1
| Echo
| Will echo commands to the computer (so one can see what one types if the computer software does not support echo)
|-
| H0 or H
| Hook Status
| On hook. Hangs up the phone, ending any call in progress.
|-
| H1
| Hook status
| Off hook. Picks up the phone line (a dial tone is typically heard)
|-
| I0 to I9
| Inquiry, Information, or Interrogation
| This command returns information about the modem, such as its model, firmware version, or brand name. Each number (0 to 9, and sometimes 10 and above) returns one line of modem-specific information, or the word ERROR if the line is not defined. Today, Windows uses this for Plug and Play detection of specific modem types.
|-
| L0 or Ln (n=1 to 3)
| Speaker Loudness. Supported only by some modems with speakers. Modems lacking speakers, or with physical volume controls, or ones whose sound output is piped through the sound card will not support this command.
| 0 turns off speaker, 1 to 3 are for increasing volumes.
|-
| M0 or M
| Speaker Mute, completely silent during dialing
| M3 is also common, but different on many brands
|-
| M1
|
| Speaker on until remote carrier detected (user will hear dialing and the modem handshake, but once a full connection is established the speaker is muted)
|-
| M2
|
| Speaker always on (data sounds are heard after CONNECT)
|-
| O
| Return Online
| Returns the modem back to the normal connected state after being interrupted by the "+++" escape code.
|-
| Q0 or Q
| Quiet Mode
| Off – Displays result codes, user sees command responses (e.g. OK)
|-
| Q1
| Quiet Mode
| On – Result codes are suppressed, user does not see responses.
|-
| Sn
| rowspan="3" | Select current register
Note that Sn, ? and =r are actually three separate commands, and can be given in separate AT commands.
| Select register n as the current register
|-
| Sn?
| Select register n as the current register, and query its value. Using ? on its own will query whichever register was most recently selected.
|-
| Sn=r
| Select register n as the current register, and store r in it. Using =r on its own will store into whichever register was most recently selected.
|-
| V0 or V
| Verbose
| Numeric result codes
|-
| V1
|
| English result codes (e.g. CONNECT, BUSY, NO CARRIER etc.)
|-
| X0 or X
| Smartmodem
| Hayes Smartmodem 300 compatible result codes
|-
| X1
|
| Usually adds connection speed to basic result codes (e.g. CONNECT 1200)
|-
| X2
|
| Usually adds dial tone detection (preventing blind dial, and sometimes preventing ATO)
|-
| X3
|
| Usually adds busy signal detection.
|-
| X4
|
| Usually adds both busy signal and dial tone detection
|-
| Z0 or Z
| Reset
| Reset modem to stored configuration, and usually also physically power-cycles the modem (during which it is unresponsive). Z0, Z1 etc. are for multiple stored profiles. &F is similar in that it returns to factory default settings on modems without NVRAM (non volatile memory), but it does not reset the modem
|}
Note: a command string is terminated with a CR (\r) character
Although not part of the command set, a tilde character ~ is commonly used in modem command sequences. The ~ causes many applications to pause sending the command stream to the device (usually for half a second), e.g. after a Reset. The ~ itself is not sent to the modem.
Modem S register definitions
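The article's table of S-register definitions is not reproduced in this text. As a purely illustrative sketch of how an S register is written and read from a terminal (register numbering, default values, and the exact formatting of the reply vary between modems):

    ATS0=2      set S0 (rings before auto-answer) to 2
    OK
    ATS0?       query the current value of S0
    002
    OK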
V.250
The ITU-T established a standard in its V-Series Recommendations, V.25 ter, in 1995 in an attempt to standardize the command set again. It was renamed V.250 in 1998, and an annex that does not concern the Hayes command set was renamed V.251. A V.250-compliant modem implements the A, D, E, H, I, L, M, N, O, P, Q, T, V, X, Z, &C, &D, and &F commands in the way specified by the standard. It must also implement S registers and must use registers S0, S3, S4, S5, S6, S7, S8, and S10 for the purposes given in the standard. It also must implement any command beginning with the plus sign "+" followed by any letter A to Z only in accordance with ITU recommendations. Modem manufacturers are free to implement other commands and S-registers as they see fit, and may add options to standard commands.
GSM
The ETSI GSM 07.07 (3GPP TS 27.007) specifies AT style commands for controlling a GSM phone or modem. The ETSI GSM 07.05 (3GPP TS 27.005) specifies AT style commands for managing the Short Message Service (SMS) feature of GSM.
Examples of GSM commands:
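The original list of examples is not reproduced in this text; the following are representative commands from the cited specifications (the telephone number is a placeholder):

    AT+CPIN?                 query whether a SIM PIN is required (TS 27.007)
    AT+CSQ                   query received signal quality (TS 27.007)
    AT+CREG?                 query network registration status (TS 27.007)
    AT+COPS?                 query the currently selected network operator (TS 27.007)
    AT+CMGF=1                select SMS text mode (TS 27.005)
    AT+CMGS="+15551230100"   send an SMS to the given number (TS 27.005)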
GSM/3G modems typically support the ETSI GSM 07.07/3GPP TS 27.007 AT command set extensions, although how many commands are implemented varies.
Most USB modem vendors, such as Huawei, Sierra Wireless, and Option, have also defined proprietary extensions for radio mode selection (GSM/3G preference) or similar features. Some recent high-speed modems provide a virtual Ethernet interface instead of using the Point-to-Point Protocol (PPP) for the data connection, for performance reasons (the PPP connection exists only between the computer and the modem, not over the network). Setting this up requires vendor-specific AT command extensions. Sometimes the specifications for these extensions are openly available; in other cases the vendor requires a non-disclosure agreement (NDA) for access.
Voice command set
Modems with voice or answering-machine capabilities support a superset of these commands to enable digital audio playback and recording.
See also
Access Point Name (APN)
Command and Data modes (modem)
ITU-T Recommendations:
H.324 (video)
T.31 (fax)
Notes and references
External links
List of AT commands: Basic (Hayes), Extended, Proprietary
Hayes AT Command Reference Manual
A list of Hayes AT commands
3gpp.org, 3GPP AT command set for User Equipment
Modem initialisation string
Extended Hayes AT command parameters for SMS (dead)
Determining your Class of Fax / Modem
Openmoko: AT Commands
Cell modem commands
ITU Standard V.250
AT Commands Reference Guide from Telit (dead)
Telecommunications-related introductions in 1981
Networking standards
Modems
AT command set | Hayes AT command set | [
"Technology",
"Engineering"
] | 4,588 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
171,216 | https://en.wikipedia.org/wiki/Lanolin | Lanolin (from Latin 'wool', and 'oil'), also called wool fat, wool yolk, wool wax, sheep grease, sheep yolk, or wool grease, is a wax secreted by the sebaceous glands of wool-bearing animals. Lanolin used by humans comes from domestic sheep breeds that are raised specifically for their wool. Historically, many pharmacopoeias have referred to lanolin as wool fat (adeps lanae); however, as lanolin lacks glycerides (glycerol esters), it is not a true fat. Lanolin primarily consists of sterol esters instead. Lanolin's waterproofing property aids sheep in shedding water from their coats. Certain breeds of sheep produce large amounts of lanolin.
Lanolin's role in nature is to protect wool and skin from climate and the environment; it also plays a role in skin (integumental) hygiene. Lanolin and its derivatives are used in the protection, treatment, and beautification of human skin.
Composition
A typical high-purity grade of lanolin is composed predominantly of long chain waxy esters (approximately 97% by weight) with the remainder being lanolin alcohols, lanolin acids and lanolin hydrocarbons.
An estimated 8,000 to 20,000 different types of lanolin esters are present in lanolin, resulting from combinations between the 200 or so different lanolin acids and the 100 or so different lanolin alcohols identified so far.
Lanolin’s complex composition of long-chain esters, hydroxyesters, diesters, lanolin alcohols, and lanolin acids means that, in addition to being a valuable product in its own right, it is also the starting point for the production of a whole spectrum of lanolin derivatives, which possess wide-ranging chemical and physical properties. The main derivatisation routes include hydrolysis, fractional solvent crystallisation, esterification, hydrogenation, alkoxylation and quaternisation. Lanolin derivatives obtained from these processes are used widely in both high-value cosmetics and skin treatment products.
Hydrolysis of lanolin yields lanolin alcohols and lanolin acids. Lanolin alcohols are a rich source of cholesterol (an important skin lipid) and are powerful water-in-oil emulsifiers; they have been used extensively in skincare products for over 100 years. Approximately 40% of the acids derived from lanolin are alpha-hydroxy acids (AHAs). The use of AHAs in skin care products has attracted a great deal of attention in recent years. Details of the AHAs isolated from lanolin can be seen in the table below.
Production
Crude lanolin constitutes about 5–25% of the weight of freshly shorn wool. The wool from one Merino sheep will produce about 250–300 ml of recoverable wool grease. Lanolin is extracted by washing the wool in hot water with a special wool scouring detergent to remove dirt, wool grease (crude lanolin), suint (sweat salts), and anything else stuck to the wool. The wool grease is continuously removed during this washing process by centrifuge separators, which concentrate it into a waxlike substance melting at approximately .
Applications
Lanolin and its many derivatives are used extensively in both the personal care (e.g., high value cosmetics, facial cosmetics, lip products) and health care sectors such as topical liniments. Lanolin is also found in lubricants, rust-preventive coatings, shoe polish, and other commercial products.
Lanolin is a relatively common allergen and is often misunderstood as a wool allergy. However, allergy to a lanolin-containing product is difficult to pinpoint and often other products containing lanolin may be fine for use. Patch testing can be done if a lanolin allergy is suspected.
It is frequently used in protective baby skin treatments and for nipples made sore by breastfeeding, although health authorities recommend other measures first, including nipple cleaning, improved positioning of the baby, and expressing milk by hand. Lanolin is reported to have soothing properties, but because research is limited those measures remain the primary recommendation.
Lanolin is used commercially in many industrial products ranging from rustproof coatings to lubricants. Some sailors use lanolin to create slippery surfaces on their propellers and stern gear to which barnacles cannot adhere. Commercial products (e.g. Lanocote) containing up to 85% lanolin are used to prevent corrosion in marine fasteners, especially when two different metals are in contact with each other and saltwater. The water-repellent properties make it valuable in many applications as a lubricant grease where corrosion would otherwise be a problem.
7-Dehydrocholesterol from lanolin is used as a raw material for producing vitamin D3 by irradiation with ultraviolet light.
Baseball players often use it to soften and break in their baseball gloves (shaving cream with lanolin is popularly used for this).
Anhydrous liquid lanolin, combined with parabens, has been used in trials as artificial tears to treat dry eye. Anhydrous lanolin is also used as a lubricant for brass instrument tuning slides.
Lanolin can also be restored to woollen garments to make them water and dirt repellent, such as for cloth diaper covers.
Lanolin is also used in lip balm products such as Carmex. For some people, it can irritate the lips.
Lanolin is sometimes used by people on continuous positive airway pressure therapy to reduce irritation from masks, particularly nasal pillow masks, which can often create sore spots in the nostrils.
Lanolin is a popular additive to moustache wax, particularly 'extra-firm' varieties.
Lanolin is used as a primary lubricating component in aerosol-based brass lubricants in the ammunition reloading process. Mixed warm 1:12 with highly concentrated ethanol (usually 99%), the ethanol acts as a carrier which evaporates quickly after application, leaving a fine film of lanolin behind to prevent brass seizing in resizing dies.
Lanolin, when mixed with ingredients such as neatsfoot oil, beeswax and glycerol, is used in various leather treatments, for example in some saddle soaps and in leather care products.
Standards and legislation
In addition to general purity requirements, lanolin must meet official requirements for the permissible levels of pesticide residues. The Fifth Supplement of the United States Pharmacopoeia XXII published in 1992 was the first to specify limits for 34 named pesticides. A total limit of 40 ppm (i.e. 40 mg/kg) total pesticides was stipulated for lanolin of general use, with no individual limit greater than 10 ppm.
A second monograph also introduced into the US Pharmacopoeia XXII in 1992 was entitled 'Modified Lanolin'. Lanolin conforming to this monograph is intended for use in more exacting applications, for example on open wounds. In this monograph, the limit of total pesticides was reduced to 3 ppm total pesticides, with no individual limit greater than 1 ppm.
In 2000, the European Pharmacopoeia introduced pesticide residue limits into its lanolin monograph. This requirement, which is generally regarded as the new quality standard, extends the list of pesticides to 40 and imposes even lower concentration limits.
Some very high-purity grades of lanolin surpass monograph requirements. New products obtained using complex purification techniques produce lanolin esters in their natural state, removing oxidative and environmental impurities resulting in white, odourless, hypoallergenic lanolin. These ultra-high-purity grades of lanolin are ideally suited to the treatment of dermatological disorders such as eczema and on open wounds.
Lanolin attracted attention owing to a misunderstanding concerning its sensitising potential. A study carried out at New York University Hospital in the early 1950s had shown about 1% of patients with dermatological disorders were allergic to the lanolin being used at that time. By one estimate, this simple misunderstanding of failing to differentiate between the general healthy population and patients with dermatological disorders exaggerates the sensitising potential of lanolin by 5,000–6,000 times.
The European Cosmetics Directive, introduced in July 1976, contained a stipulation that cosmetics which contained lanolin should be labelled to that effect. This ruling was challenged immediately, and in the early 1980s, it was overturned and removed from the directive. Despite only being in force for a short period of time, this ruling did harm both to the lanolin industry and to the reputation of lanolin in general. The Cosmetics Directive ruling only applied to the presence of lanolin in cosmetic products; it did not apply to the many hundreds of its different uses in dermatological products designed for the treatment of compromised skin conditions.
Modern analytical methods have revealed lanolin possesses a number of important chemical and physical similarities to human stratum corneum lipids; the lipids which help regulate the rate of water loss across the epidermis and govern the hydration state of the skin.
Cryogenic scanning electron microscopy has shown that lanolin, like human stratum corneum lipids, consists of a mass of liquid crystalline material. Cross-polarised light microscopy has shown the multilamellar vesicles formed by lanolin are identical to those formed by human stratum corneum lipids. The incorporation of bound water into the stratum corneum involves the formation of multilamellar vesicles.
Skin bioengineering studies have shown the durational effect of the emollient (skin smoothing) action produced by lanolin is very significant and lasts for many hours. Lanolin applied to the skin at 2 mg/cm2 has been shown to reduce roughness by about 35% after one hour and 50% after two hours, with the overall effect lasting for considerably more than eight hours. Lanolin is also known to form semiocclusive (breathable) films on the skin. When applied daily at around 4 mg/cm2 for five consecutive days, the positive moisturising effects of lanolin were detectable until 72 hours after final application. Lanolin may achieve some of its moisturising effects by forming a secondary moisture reservoir within the skin.
The barrier repair properties of lanolin have been reported to be superior to those produced by both petrolatum and glycerol. In a small clinical study conducted on volunteer subjects with severely dry (xerotic) hands, lanolin was shown to be superior to petrolatum in reducing the signs and symptoms of dryness and scaling, cracks and abrasions, and pain and itch. In another study, a high purity grade of lanolin was found to be significantly superior to petrolatum in assisting the healing of superficial wounds.
References
External links
Animal glandular products
Waxes
Non-petroleum based lubricants
By-products | Lanolin | [
"Physics"
] | 2,271 | [
"Materials",
"Matter",
"Waxes"
] |
171,317 | https://en.wikipedia.org/wiki/Countercurrent%20exchange | Countercurrent exchange is a mechanism in which two bodies flowing in opposite directions to each other transfer some property, usually heat or some chemical. The flowing bodies can be liquids, gases, or even solid powders, or any combination of those. For example, in a distillation column, the vapors bubble up through the downward-flowing liquid while exchanging both heat and mass. It occurs in nature and is mimicked in industry and engineering. It is a kind of exchange that uses a counter-flow arrangement.
The maximum amount of heat or mass transfer that can be obtained is higher with countercurrent than co-current (parallel) exchange because countercurrent maintains a slowly declining difference or gradient (usually temperature or concentration difference). In cocurrent exchange the initial gradient is higher but falls off quickly, leading to wasted potential. For example, in the adjacent diagram, the fluid being heated (exiting top) has a higher exiting temperature than the cooled fluid (exiting bottom) that was used for heating. With cocurrent or parallel exchange the heated and cooled fluids can only approach one another. The result is that countercurrent exchange can achieve a greater amount of heat or mass transfer than parallel under otherwise similar conditions.
Countercurrent exchange when set up in a circuit or loop can be used for building up concentrations, heat, or other properties of flowing liquids. Specifically when set up in a loop with a buffering liquid between the incoming and outgoing fluid running in a circuit, and with active transport pumps on the outgoing fluid's tubes, the system is called a countercurrent multiplier, enabling a multiplied effect of many small pumps to gradually build up a large concentration in the buffer liquid.
Other countercurrent exchange circuits where the incoming and outgoing fluids touch each other are used for retaining a high concentration of a dissolved substance or for retaining heat, or for allowing the external buildup of the heat or concentration at one point in the system.
Countercurrent exchange circuits or loops are found extensively in nature, specifically in biologic systems. In vertebrates, they are called a rete mirabile, originally the name of an organ in fish gills for absorbing oxygen from the water. It is mimicked in industrial systems. Countercurrent exchange is a key concept in chemical engineering thermodynamics and manufacturing processes, for example in extracting sucrose from sugar beet roots.
Countercurrent multiplication is a similar but distinct concept in which liquid moves in a loop, followed by a long length of movement in opposite directions with an intermediate zone. The tube leading to the loop passively builds up a gradient of heat (or cooling) or solvent concentration, while the returning tube has a constant small pumping action all along it, so that a gradual intensification of the heat or concentration is created towards the loop. Countercurrent multiplication has been found in the kidneys as well as in many other biological organs.
Three current exchange systems
Countercurrent exchange and cocurrent exchange are two mechanisms used to transfer some property of a fluid from one flowing current of fluid to another across a barrier allowing one way flow of the property between them. The property transferred could be heat, concentration of a chemical substance, or other properties of the flow.
When heat is transferred, a thermally-conductive membrane is used between the two tubes, and when the concentration of a chemical substance is transferred a semipermeable membrane is used.
Cocurrent flow—half transfer
In the cocurrent flow exchange mechanism, the two fluids flow in the same direction.
As the cocurrent and countercurrent exchange mechanisms diagram showed, a cocurrent exchange system has a variable gradient over the length of the exchanger. With equal flows in the two tubes, this method of exchange is only capable of moving half of the property from one flow to the other, no matter how long the exchanger is.
If each stream changes its property to be 50% closer to that of the opposite stream's inlet condition, exchange will stop when the point of equilibrium is reached, and the gradient has declined to zero. In the case of unequal flows, the equilibrium condition will occur somewhat closer to the conditions of the stream with the higher flow.
Cocurrent flow examples
A cocurrent heat exchanger is an example of a cocurrent flow exchange mechanism. Two tubes have a liquid flowing in the same direction. One starts off hot at , the second cold at . A thermoconductive membrane or an open section allows heat transfer between the two flows.
The hot fluid heats the cold one, and the cold fluid cools down the warm one. The result is thermal equilibrium: Both fluids end up at around the same temperature: , almost exactly between the two original temperatures ( and ). At the input end, there is a large temperature difference of and much heat transfer; at the output end, there is a very small temperature difference (both are at the same temperature of or close to it), and very little heat transfer if any at all. If the equilibrium—where both tubes are at the same temperature—is reached before the exit of the liquid from the tubes, no further heat transfer will be achieved along the remaining length of the tubes.
A similar example is the cocurrent concentration exchange. The system consists of two tubes, one with brine (concentrated saltwater), the other with freshwater (which has a low concentration of salt in it), and a semi permeable membrane which allows only water to pass between the two, in an osmotic process. Many of the water molecules pass from the freshwater flow in order to dilute the brine, while the concentration of salt in the freshwater constantly grows (since the salt is not leaving this flow, while water is). This will continue, until both flows reach a similar dilution, with a concentration somewhere close to midway between the two original dilutions. Once that happens, there will be no more flow between the two tubes, since both are at a similar dilution and there is no more osmotic pressure.
Countercurrent flow—almost full transfer
In countercurrent flow, the two flows move in opposite directions.
Two tubes have a liquid flowing in opposite directions, transferring a property from one tube to the other. For example, this could be transferring heat from a hot flow of liquid to a cold one, or transferring the concentration of a dissolved solute from a high concentration flow of liquid to a low concentration flow.
The counter-current exchange system can maintain a nearly constant gradient between the two flows over their entire length of contact. With a sufficiently long length and a sufficiently low flow rate this can result in almost all of the property transferred. So, for example, in the case of heat exchange, the exiting liquid will be almost as hot as the original incoming liquid's heat.
Countercurrent flow examples
In a countercurrent heat exchanger, the hot fluid becomes cold, and the cold fluid becomes hot.
In this example, hot water at enters the top pipe. It warms water in the bottom pipe which has been warmed up along the way, to almost . A minute but existing heat difference still exists, and a small amount of heat is transferred, so that the water leaving the bottom pipe is at close to . Because the hot input is at its maximum temperature of , and the exiting water at the bottom pipe is nearly at that temperature but not quite, the water in the top pipe can warm the one in the bottom pipe to nearly its own temperature. At the cold end—the water exit from the top pipe, because the cold water entering the bottom pipe is still cold at , it can extract the last of the heat from the now-cooled hot water in the top pipe, bringing its temperature down nearly to the level of the cold input fluid ().
The result is that the top pipe which received hot water, now has cold water leaving it at , while the bottom pipe which received cold water, is now emitting hot water at close to . In effect, most of the heat was transferred.
Conditions for higher transfer results
Nearly complete transfer in systems implementing countercurrent exchange, is only possible if the two flows are, in some sense, "equal".
For a maximum transfer of substance concentration, an equal flowrate of solvents and solutions is required. For maximum heat transfer, the average specific heat capacity and the mass flow rate must be the same for each stream. If the two flows are not equal, for example if heat is being transferred from water to air or vice versa, then, as in cocurrent exchange systems, the gradient varies along the exchanger because the property builds up on one side rather than being fully transferred.
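To make the contrast between the two arrangements concrete, the following minimal sketch (not from the original article) uses the standard effectiveness-NTU relations for a heat exchanger with equal heat-capacity rates in the two streams; the inlet temperatures are assumed illustrative values.

    import math

    def effectiveness_counterflow(ntu, cr=1.0):
        # Standard effectiveness-NTU relation for a counterflow exchanger; cr = Cmin/Cmax.
        if abs(cr - 1.0) < 1e-12:
            return ntu / (1.0 + ntu)
        e = math.exp(-ntu * (1.0 - cr))
        return (1.0 - e) / (1.0 - cr * e)

    def effectiveness_parallel(ntu, cr=1.0):
        # Standard effectiveness-NTU relation for a parallel-flow (cocurrent) exchanger.
        return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)

    t_hot_in, t_cold_in = 80.0, 20.0          # assumed inlet temperatures, deg C
    for ntu in (1.0, 5.0, 20.0):              # NTU grows with exchanger length
        for name, eps in (("counterflow", effectiveness_counterflow(ntu)),
                          ("parallel", effectiveness_parallel(ntu))):
            t_cold_out = t_cold_in + eps * (t_hot_in - t_cold_in)
            print(f"NTU={ntu:4.1f}  {name:11s} eps={eps:.3f}  cold outlet={t_cold_out:.1f} C")

    # For a long exchanger (large NTU) counterflow effectiveness approaches 1,
    # while parallel flow levels off at 0.5 -- the "half transfer" limit described above.

With equal flows, the cold stream in the counterflow case leaves close to the hot inlet temperature, while in the cocurrent case both streams only converge towards the midpoint, as described in the sections above.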
Countercurrent exchange in biological systems
Countercurrent exchange is used extensively in biological systems for a wide variety of purposes. For example, fish use it in their gills to transfer oxygen from the surrounding water into their blood, and birds use a countercurrent heat exchanger between blood vessels in their legs to keep heat concentrated within their bodies. In vertebrates, this type of organ is referred to as a rete mirabile (originally the name of the organ in the fish gills). Mammalian kidneys use countercurrent exchange to remove water from urine so the body can retain water used to move the nitrogenous waste products (see countercurrent multiplier).
Countercurrent multiplication loop
A countercurrent multiplication loop is a system where fluid flows in a loop so that the entrance and exit are at similar low concentration of a dissolved substance but at the far end of the loop there is a high concentration of that substance. A buffer liquid between the incoming and outgoing tubes receives the concentrated substance. The incoming and outgoing tubes do not touch each other.
The system allows the buildup of a high concentration gradually, by allowing a natural buildup of concentration towards the tip inside the in-going tube, (for example using osmosis of water out of the input pipe and into the buffer fluid), and the use of many active transport pumps each pumping only against a very small gradient, during the exit from the loop, returning the concentration inside the output pipe to its original concentration.
The incoming flow starting at a low concentration has a semipermeable membrane with water passing to the buffer liquid via osmosis at a small gradient. There is a gradual buildup of concentration inside the loop until the loop tip where it reaches its maximum.
Theoretically a similar system could exist or be constructed for heat exchange.
In the example shown in the image, water enters at 299 mg/L (NaCl / H2O). Water passes because of a small osmotic pressure to the buffer liquid in this example at 300 mg/L (NaCl / H2O). Further up the loop there is a continued flow of water out of the tube and into the buffer, gradually raising the concentration of NaCl in the tube until it reaches 1199 mg/L at the tip. The buffer liquid between the two tubes is at a gradually rising concentration, always a bit over the incoming fluid, in this example reaching 1200 mg/L. This is regulated by the pumping action on the returning tube as will be explained immediately.
The tip of the loop has the highest concentration of salt (NaCl): in the example, 1199 mg/L in the incoming tube and 1200 mg/L in the buffer. The returning tube has active transport pumps, pumping salt out to the buffer liquid against a small concentration difference of at most 200 mg/L. Thus, where the buffer liquid is at 1000 mg/L, the concentration in the tube is 800 mg/L, and only a 200 mg/L difference has to be pumped against. The same is true anywhere along the line, so at the exit of the loop, too, only a 200 mg/L difference needs to be pumped.
In effect, this can be seen as a gradually multiplying effect, hence the name of the phenomenon, a 'countercurrent multiplier', and of the mechanism, countercurrent multiplication. In current engineering terms, countercurrent multiplication is any process in which only slight pumping is needed, due to the constant small difference of concentration or heat along the process, gradually rising to its maximum. There is no need for a buffer liquid if the desired effect is to obtain a high concentration at the output pipe.
In the kidney
A circuit of fluid in the loop of Henle—an important part of the kidneys—allows for gradual buildup of the concentration of urine in the kidneys, by using active transport on the exiting nephrons (tubules carrying liquid in the process of gradually concentrating the urea). The active transport pumps need only to overcome a constant and low gradient of concentration, because of the countercurrent multiplier mechanism.
Various substances are passed from the liquid entering the nephrons until exiting the loop (See the nephron flow diagram). The sequence of flow is as follows:
Renal corpuscle: Liquid enters the nephron system at the Bowman's capsule.
Proximal convoluted tubule: It then may reabsorb urea in the thick descending limb. Water is removed from the nephrons by osmosis (and glucose and other ions are pumped out with active transport), gradually raising the concentration in the nephrons.
Loop of Henle Descending: The liquid passes from the thin descending limb to the thick ascending limb. Water is constantly released via osmosis. Gradually there is a buildup of osmotic concentration, until 1200 mOsm is reached at the loop tip, but the difference across the membrane is kept small and constant.
For example, the liquid at one section inside the thin descending limb is at 400 mOsm while outside it is 401. Further down the descending limb, the inside concentration is 500 while outside it is 501, so a constant difference of 1 mOsm is kept all across the membrane, although the concentration inside and outside are gradually increasing.
Loop of Henle Ascending: after the tip (or 'bend') of the loop, the liquid flows in the thin ascending limb. Salt–sodium Na+ and chloride Cl− ions are pumped out of the liquid gradually lowering the concentration in the exiting liquid, but, using the countercurrent multiplier mechanism, always pumping against a constant and small osmotic difference.
For example, the pumps at a section close to the bend, pump out from 1000 mOsm inside the ascending limb to 1200 mOsm outside it, with a 200 mOsm across. Pumps further up the thin ascending limb, pump out from 400 mOsm into liquid at 600 mOsm, so again the difference is retained at 200 mOsm from the inside to the outside, while the concentration both inside and outside are gradually decreasing as the liquid flow advances.
The liquid finally reaches a low concentration of 100 mOsm when leaving the thin ascending limb and passing through the thick one.
Distal convoluted tubule: Once the liquid has left the loop of Henle, the distal convoluted tubule can optionally reabsorb and re-increase the concentration in the nephrons.
Collecting duct: The collecting duct receives liquid between 100 mOsm if no re-absorption is done, to 300 or above if re-absorption was used. The collecting duct may continue raising the concentration if required, by gradually pumping out the same ions as the Distal convoluted tubule, using the same gradient as the ascending limbs in the loop of Henle, and reaching the same concentration.
Ureter: The liquid urine leaves to the ureter.
Same principle is used in hemodialysis within artificial kidney machines.
History
The countercurrent exchange mechanism and its properties were first proposed in 1951 by Professor Werner Kuhn and two of his former students, who called the mechanism found in the loop of Henle in mammalian kidneys a countercurrent multiplier; it was confirmed by laboratory findings by Professor Carl W. Gottschalk in 1958. The theory was acknowledged a year later after a meticulous study showed that there is almost no osmotic difference between liquids on both sides of nephrons. Homer Smith, a considerable contemporary authority on renal physiology, opposed the countercurrent concentration model for eight years, until conceding ground in 1959. Since then, many similar mechanisms have been found in biologic systems, the most notable of these being the rete mirabile in fish.
Countercurrent exchange of heat in organisms
In cold weather the blood flow to the limbs of birds and mammals is reduced on exposure to cold environmental conditions, and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system which short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, thereby reducing heat loss via this route, and forcing the blood returning from the extremities into the counter-current blood flow systems in the centers of the limbs. Birds and mammals that regularly immerse their limbs in cold or icy water have particularly well developed counter-current blood flow systems to their limbs, allowing prolonged exposure of the extremities to the cold without significant loss of body heat, even when the limbs are as thin as the lower legs, or tarsi, of a bird, for instance.
When animals like the leatherback turtle and dolphins are in colder water to which they are not acclimatized, they use this CCHE mechanism to prevent heat loss from their flippers, tail flukes, and dorsal fins. Such CCHE systems are made up of a complex network of peri-arterial venous plexuses, or venae comitantes, that run through the blubber from their minimally insulated limbs and thin streamlined protuberances. Each plexus consists of a central artery containing warm blood from the heart surrounded by a bundle of veins containing cool blood from the body surface. As these fluids flow past each other, they create a heat gradient in which heat is transferred and retained inside the body. The warm arterial blood transfers most of its heat to the cool venous blood now coming in from the outside. This conserves heat by recirculating it back to the body core. Since the arteries give up a good deal of their heat in this exchange, there is less heat lost through convection at the periphery surface.
Another example is found in the legs of an Arctic fox treading on snow. The paws are necessarily cold, but blood can circulate to bring nutrients to the paws without losing much heat from the body. Proximity of arteries and veins in the leg results in heat exchange, so that as the blood flows down it becomes cooler, and does not lose much heat to the snow. As the (cold) blood flows back up from the paws through the veins, it picks up heat from the blood flowing in the opposite direction, so that it returns to the torso in a warm state, allowing the fox to maintain a comfortable temperature, without losing it to the snow. This system is so efficient that the Arctic fox does not begin to shiver until the temperature drops to .
Countercurrent exchange in sea and desert birds to conserve water
Sea and desert birds have been found to have a salt gland near the nostrils which concentrates brine, later to be "sneezed" out to the sea, in effect allowing these birds to drink seawater without the need to find freshwater resources. It also enables the seabirds to remove the excess salt entering the body when eating, swimming or diving in the sea for food. The kidney cannot remove these quantities and concentrations of salt.
The salt secreting gland has been found in seabirds like pelicans, petrels, albatrosses, gulls, and terns. It has also been found in Namibian ostriches and other desert birds, where a buildup of salt concentration is due to dehydration and scarcity of drinking water.
In seabirds the salt gland is above the beak, leading to a main canal above the beak, and water is blown from two small nostrils on the beak, to empty it. The salt gland has two countercurrent mechanisms working in it:
a. A salt extraction system with a countercurrent multiplication mechanism, where salt is actively pumped from the blood 'venules' (small veins) into the gland tubules. Although the fluid in the tubules has a higher concentration of salt than the blood, the flow is arranged in a countercurrent exchange, so that the blood with a high concentration of salt enters the system close to where the gland tubules exit and connect to the main canal. Thus, all along the gland, there is only a small gradient to climb in order to push the salt from the blood into the salty fluid by active transport powered by ATP.
b. The blood supply system to the gland is set up in a countercurrent exchange loop mechanism to keep the high concentration of salt in the gland's blood, so that it does not leak back into the blood system.
The glands remove the salt efficiently and thus allow the birds to drink the salty water from their environment while they are hundreds of miles away from land.
Countercurrent exchange in industry and scientific research
Countercurrent chromatography is a method of separation based on the differential partitioning of analytes between two immiscible liquids using countercurrent or cocurrent flow. Evolving from Craig's Countercurrent Distribution (CCD), the most widely used term and abbreviation is CounterCurrent Chromatography (CCC), in particular when using hydrodynamic CCC instruments. The term partition chromatography is largely synonymous and is predominantly used for hydrostatic CCC instruments.
Distillation of chemicals such as in petroleum refining is done in towers or columns with perforated trays. Vapor from the low boiling fractions bubbles upward through the holes in the trays in contact with the down flowing high boiling fractions. The concentration of low boiling fraction increases in each tray up the tower as it is "stripped". The low boiling fraction is drawn off the top of the tower and the high boiling fraction drawn from the bottom. The process in the trays is a combination of heat transfer and mass transfer. Heat is supplied at the bottom, known as a "reboiler" and cooling is done with a condenser at the top.
Liquid–liquid extraction (also called 'solvent extraction' or 'partitioning') is a common method for extracting a substance from one liquid into another liquid at a different 'phase' (such as "slurry"). This method, which implements a countercurrent mechanism, is used in nuclear reprocessing, ore processing, the production of fine organic compounds, the processing of perfumes, the production of vegetable oils and biodiesel, and other industries.
Gold can be separated from a cyanide solution with the Merrill–Crowe process using Counter Current Decantation (CCD). In some mines, nickel and cobalt are treated with CCD, after the original ore was treated with concentrated sulfuric acid and steam in titanium covered autoclaves, producing nickel cobalt slurry. The nickel and cobalt in the slurry are removed from it almost completely using a CCD system exchanging the cobalt and nickel with flash steam heated water.
Lime can be manufactured in countercurrent furnaces allowing the heat to reach high temperatures using low cost, low temperature burning fuel. Historically this was developed by the Japanese in certain types of the Anagama kiln. The kiln is built in stages, where fresh air coming to the fuel is passed downwards while the smoke and heat is pushed up and out. The heat does not leave the kiln, but is transferred back to the incoming air, and thus slowly builds up to and more.
Cement may be created using a countercurrent kiln where the heat is passed in the cement and the exhaust combined, while the incoming air draft is passed along the two, absorbing the heat and retaining it inside the furnace, finally reaching high temperatures.
Gasification: the process of creating methane and carbon monoxide from organic or fossil matter, can be done using a counter-current fixed bed ("up draft") gasifier which is built in a similar way to the Anagama kiln, and must therefore withstand more harsh conditions, but reaches better efficiency.
In nuclear power plants, water leaving the plant must not contain even trace amounts of uranium. Counter Current Decantation (CCD) is used in some facilities to extract water that is essentially free of uranium.
Zippe-type centrifuges use countercurrent multiplication between rising and falling convection currents to reduce the number of stages needed in a cascade.
Some Centrifugal extractors use counter current exchange mechanisms for extracting high rates of the desired material.
Some protein skimmers (devices used to clean saltwater pools and fish ponds of organic matter) use counter current technologies.
Countercurrent processes have also been used to study the behavior of small animals and isolate individuals with altered behaviors due to genetic mutations.
See also
Anagama kiln
Bidirectional traffic
Economizer
Heat recovery ventilation
Regenerative heat exchanger
Countercurrent multiplier
References
External links
Countercurrent multiplier animation from Colorado University.
Research about elephant seals using countercurrent heat exchange to keep heat from leaving their body while breathing out, during hibernation.
Patent for a snow mask with a removable countercurrent exchange module which keeps the warmth from leaving the mask when breathing out.
Chemical process engineering
Industrial processes
Animal anatomy
Renal physiology
Heat transfer | Countercurrent exchange | [
"Physics",
"Chemistry",
"Engineering"
] | 5,329 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical engineering",
"Thermodynamics",
"Chemical process engineering"
] |