β-Carotene
https://en.wikipedia.org/wiki/%CE%92-Carotene

β-Carotene (beta-carotene) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. It is a member of the carotenes, which are terpenoids (isoprenoids), synthesized biochemically from eight isoprene units and thus having 40 carbons.
Dietary β-carotene is a provitamin A compound, converting in the body to retinol (vitamin A). In foods, it has rich content in carrots, pumpkin, spinach, and sweet potato. It is used as a dietary supplement and may be prescribed to treat erythropoietic protoporphyria, an inherited condition of sunlight sensitivity.
β-carotene is the most common carotenoid in plants. When used as a food coloring, it has the E number E160a. The structure was deduced in 1930.
Isolation of β-carotene from fruits abundant in carotenoids is commonly done using column chromatography. It is industrially extracted from richer sources such as the algae Dunaliella salina. The separation of β-carotene from the mixture of other carotenoids is based on the polarity of a compound. β-Carotene is a non-polar compound, so it is separated with a non-polar solvent such as hexane. Being highly conjugated, it is deeply colored, and as a hydrocarbon lacking functional groups, it is lipophilic.
Provitamin A activity
Plant carotenoids are the primary dietary source of provitamin A worldwide, with β-carotene as the best-known provitamin A carotenoid. Others include α-carotene and β-cryptoxanthin. Carotenoid absorption is restricted to the duodenum of the small intestine. One molecule of β-carotene can be cleaved by the intestinal enzyme β,β-carotene 15,15'-monooxygenase into two molecules of vitamin A.
Absorption, metabolism and excretion
As part of the digestive process, food-sourced carotenoids must be separated from plant cells and incorporated into lipid-containing micelles to be bioaccessible to intestinal enterocytes. If already extracted (or synthetic) and then presented in an oil-filled dietary supplement capsule, there is greater bioavailability compared to that from foods.
At the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is then either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to retinol binding protein 2, before being incorporated into chylomicrons. The conversion process consists of one molecule of β-carotene cleaved by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene, into two molecules of retinal. When plasma retinol is in the normal range the gene expression for SCARB1 and BCO1 are suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion.
The majority of chylomicrons are taken up by the liver, then secreted into the blood repackaged into low density lipoproteins (LDLs). From these circulating lipoproteins and the chylomicrons that bypassed the liver, β-carotene is taken into cells via receptor SCARB1. Human tissues differ in expression of SCARB1, and hence β-carotene content. Examples expressed as ng/g, wet weight: liver=479, lung=226, prostate=163 and skin=26.
Once taken up by peripheral tissue cells, the major usage of absorbed β-carotene is as a precursor to retinal via symmetric cleavage by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene. A lesser amount is metabolized by the mitochondrial enzyme beta-carotene 9',10'-dioxygenase, which is encoded by the BCO2 gene. The products of this asymmetric cleavage are two beta-ionone molecules and rosafluene. BCO2 appears to be involved in preventing excessive accumulation of carotenoids; a BCO2 defect in chickens results in yellow skin color due to accumulation in subcutaneous fat.
Conversion factors
To count toward dietary vitamin A intake, β-carotene may be converted using either the newer retinol activity equivalents (RAE) or the older international unit (IU).
Retinol activity equivalents (RAEs)
Since 2001, the US Institute of Medicine has used retinol activity equivalents (RAE) for its Dietary Reference Intakes, defined as follows:
1 μg RAE = 1 μg retinol from food or supplements
1 μg RAE = 2 μg all-trans-β-carotene from supplements
1 μg RAE = 12 μg of all-trans-β-carotene from food
1 μg RAE = 24 μg α-carotene or β-cryptoxanthin from food
RAE accounts for carotenoids' variable absorption and conversion to vitamin A by humans better than the older retinol equivalent (RE), which it replaces (1 μg RE = 1 μg retinol, 6 μg β-carotene, or 12 μg α-carotene or β-cryptoxanthin). RE was developed in 1967 by the Food and Agriculture Organization of the United Nations and the World Health Organization (FAO/WHO).
International Units
Another, older unit of vitamin A activity is the international unit (IU). Like the retinol equivalent, the international unit does not account for carotenoids' variable absorption and conversion to vitamin A by humans as well as the more modern retinol activity equivalent does. Food and supplement labels still generally use IU, but IU can be converted to the more useful retinol activity equivalent as follows (a small conversion sketch follows the list below):
1 μg RAE = 3.33 IU retinol
1 IU retinol = 0.3 μg RAE
1 IU β-carotene from supplements = 0.3 μg RAE
1 IU β-carotene from food = 0.05 μg RAE
1 IU α-carotene or β-cryptoxanthin from food = 0.025 μg RAE
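To make the arithmetic above concrete, here is a minimal Python sketch of the two conversion tables; the function and dictionary names are our own illustrative choices, not part of any official standard or library.

```python
# Conversion to micrograms of retinol activity equivalents (ug RAE),
# following the RAE and IU tables above.

# ug of source substance per 1 ug RAE (from the RAE list).
UG_PER_UG_RAE = {
    "retinol": 1,                   # 1 ug RAE = 1 ug retinol
    "beta_carotene_supplement": 2,  # 1 ug RAE = 2 ug all-trans-beta-carotene (supplement)
    "beta_carotene_food": 12,       # 1 ug RAE = 12 ug all-trans-beta-carotene (food)
    "alpha_carotene_food": 24,      # also beta-cryptoxanthin from food
}

# ug RAE per 1 IU of source substance (from the IU list).
UG_RAE_PER_IU = {
    "retinol": 0.3,
    "beta_carotene_supplement": 0.3,
    "beta_carotene_food": 0.05,
    "alpha_carotene_food": 0.025,   # also beta-cryptoxanthin from food
}

def ug_to_rae(micrograms: float, source: str) -> float:
    """Convert micrograms of a vitamin A source to ug RAE."""
    return micrograms / UG_PER_UG_RAE[source]

def iu_to_rae(iu: float, source: str) -> float:
    """Convert international units of a vitamin A source to ug RAE."""
    return iu * UG_RAE_PER_IU[source]

# 1200 ug of beta-carotene eaten in food counts as 100 ug RAE ...
print(ug_to_rae(1200, "beta_carotene_food"))  # 100.0
# ... while a supplement label claiming 5000 IU of retinol is 1500 ug RAE.
print(iu_to_rae(5000, "retinol"))             # 1500.0
```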
Dietary sources
The average daily intake of β-carotene is in the range 2–7 mg, as estimated from a pooled analysis of 500,000 women living in the US, Canada, and some European countries. Beta-carotene is found in many foods and is sold as a dietary supplement. β-Carotene contributes to the orange color of many different fruits and vegetables. Vietnamese gac (Momordica cochinchinensis Spreng.) and crude palm oil are particularly rich sources, as are yellow and orange fruits, such as cantaloupe, mangoes, pumpkin, and papayas, and orange root vegetables such as carrots and sweet potatoes.
The color of β-carotene is masked by chlorophyll in green leaf vegetables such as spinach, kale, sweet potato leaves, and sweet gourd leaves.
The U.S. Department of Agriculture lists foods high in β-carotene content, such as carrots, pumpkin, spinach, and sweet potato.
No dietary requirement
Government and non-government organizations have not set a dietary requirement for β-carotene.
Side effects
Excess β-carotene is predominantly stored in the fat tissues of the body. The most common side effect of excessive β-carotene consumption is carotenodermia, a physically harmless condition that presents as a conspicuous orange skin tint arising from deposition of the carotenoid in the outermost layer of the epidermis.
Carotenosis
Carotenodermia, also referred to as carotenemia, is a benign and reversible medical condition in which an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of beta-carotene-rich foods, such as carrots, carrot juice, tangerine juice, mangoes, or, in Africa, red palm oil. β-Carotene dietary supplements can have the same effect. The discoloration extends to the palms of the hands and the soles of the feet, but not to the whites of the eyes, which helps distinguish the condition from jaundice. Carotenodermia is reversible upon cessation of excessive intake. Consumption of more than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia.
No risk for hypervitaminosis A
At the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is then either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to retinol binding protein 2, before being incorporated into chylomicrons. The conversion process consists of one molecule of β-carotene cleaved by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene, into two molecules of retinal. When plasma retinol is in the normal range the gene expression for SCARB1 and BCO1 are suppressed, creating a feedback loop that suppresses absorption and conversion. Because of these two mechanisms, high intake will not lead to hypervitaminosis A.
Drug interactions
β-Carotene can interact with medications used for lowering cholesterol; taking them together can lower the effectiveness of these medications and is considered only a moderate interaction. Bile acid sequestrants and proton-pump inhibitors can decrease absorption of β-carotene. Consuming alcohol with β-carotene can decrease its conversion to retinol and could possibly result in hepatotoxicity.
β-Carotene and lung cancer in smokers
Chronic high doses of β-carotene supplementation increase the probability of lung cancer in smokers, while its natural vitamer, retinol, increases lung cancer risk in smokers and nonsmokers. The effect is specific to supplementation dose, as no lung damage has been detected in those who are exposed to cigarette smoke and who ingest a physiological dose of β-carotene (6 mg), in contrast to a high pharmacological dose (30 mg).
Increases in lung cancer have been attributed to the tendency of β-carotene to oxidize, yet based on the pharmacokinetics of β-carotene absorption and transport through the intestine, and the lack of specific β-carotene transporters, it is unlikely that β-carotene reaches the lungs of smokers in sufficient quantities. Additional research is required to understand the link between the increased risk of cancer and all-cause mortality following β-carotene supplementation.
Additionally, supplemental, high-dose β-carotene may increase the risk of prostate cancer, intracerebral hemorrhage, and cardiovascular and total mortality irrespective of smoking status.
Industrial sources
β-Carotene is produced industrially either by total synthesis or by extraction from biological sources such as vegetables, microalgae (especially Dunaliella salina), and genetically engineered microbes. The synthetic route is low-cost and high-yield.
Research
Medical authorities generally recommend obtaining beta-carotene from food rather than dietary supplements. A 2013 meta-analysis of randomized controlled trials concluded that high-dosage (≥9.6 mg/day) beta-carotene supplementation is associated with a 6% increase in the risk of all-cause mortality, while low-dosage (<9.6 mg/day) supplementation does not have a significant effect on mortality. Research is insufficient to determine whether a minimum level of beta-carotene consumption is necessary for human health and to identify what problems might arise from insufficient beta-carotene intake. However, a 2018 meta-analysis mostly of prospective cohort studies found that both dietary and circulating beta-carotene are associated with a lower risk of all-cause mortality. The highest circulating beta-carotene category, compared to the lowest, correlated with a 37% reduction in the risk of all-cause mortality, while the highest dietary beta-carotene intake category, compared to the lowest, was linked to an 18% decrease in the risk of all-cause mortality.
Macular degeneration
Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in elderly people. AMD is an oxidative stress-related retinal disease that affects the macula, causing progressive loss of central vision. β-Carotene has been confirmed to be present in the human retinal pigment epithelium. Reviews reported mixed results for observational studies, with some reporting that diets higher in β-carotene correlated with a decreased risk of AMD and others reporting no benefit. Reviews of intervention trials using only β-carotene reported no change in the risk of developing AMD.
Cancer
A meta-analysis concluded that supplementation with β-carotene does not appear to decrease the risk of cancer overall, nor specific cancers including: pancreatic, colorectal, prostate, breast, melanoma, or skin cancer generally. High levels of β-carotene may increase the risk of lung cancer in current and former smokers. Results are not clear for thyroid cancer.
Cataract
A Cochrane review looked at supplementation of β-carotene, vitamin C, and vitamin E, independently and combined, on people to examine differences in risk of cataract, cataract extraction, progression of cataract, and slowing the loss of visual acuity. These studies found no evidence of any protective effects afforded by β-carotene supplementation on preventing and slowing age-related cataract. A second meta-analysis compiled data from studies that measured diet-derived serum beta-carotene and reported a not statistically significant 10% decrease in cataract risk.
Erythropoietic protoporphyria
High doses of β-carotene (up to 180 mg per day) may be used as a treatment for erythropoietic protoporphyria, a rare inherited disorder of sunlight sensitivity, without toxic effects.
Food drying
Foods rich in carotenoid dyes show discoloration upon drying. This is due to thermal degradation of carotenoids, possibly via isomerization and oxidation reactions.
See also
Sunless tanning with beta-carotene
Vitamin A
Retinol
Carotenoids
Land reclamation
https://en.wikipedia.org/wiki/Land%20reclamation

Land reclamation, often known as reclamation, and also known as land fill (not to be confused with a waste landfill), is the process of creating new land from oceans, seas, riverbeds or lake beds. The land reclaimed is known as reclamation ground, reclaimed land, or land fill.
History
In Ancient Egypt, the rulers of the Twelfth Dynasty (c. 2000–1800 BC) undertook a far-sighted land reclamation scheme to increase agricultural output. They constructed levees and canals to connect the Faiyum with the Bahr Yussef waterway, diverting water that would have flowed into Lake Moeris and causing gradual evaporation around the lake's edges, creating new farmland from the reclaimed land. A similar land reclamation system using dams and drainage canals was used in the Greek Copaic Basin during the Middle Helladic Period (c. 1900–1600 BC). Another early large-scale project was the Beemster Polder in the Netherlands, realized in 1612, which added a large tract of new land. In Hong Kong, the Praya Reclamation Scheme added new land along the waterfront in 1890 during the second phase of construction; it was one of the most ambitious projects ever undertaken during the Colonial Hong Kong era. Some 20% of land in the Tokyo Bay area has been reclaimed, most notably Odaiba artificial island. The city of Rio de Janeiro was largely built on reclaimed land, as was Wellington, New Zealand.
Methods
Land reclamation can be achieved by a number of different methods. The simplest method involves filling the area with large amounts of heavy rock and/or cement, then filling with clay and dirt until the desired height is reached. This process is called "infilling", and the material used to fill the space is generally called "infill". Draining of submerged wetlands is often used to reclaim land for agricultural use. Deep cement mixing is typically used in situations in which the material displaced by either dredging or draining may be contaminated and hence needs to be contained. Dredging, the removal of sediments and debris from the bottom of a body of water, is another method of land reclamation. It is commonly used for maintaining reclaimed land masses, as sedimentation, a natural process, fills channels and harbors.
Notable instances
Africa
The Hassan II Mosque is built on reclaimed land.
The Eko Atlantic in Lagos.
Gracefield Island in Lekki, Lagos.
The Foreshore in Cape Town.
Stone Town in Zanzibar.
Asia
Parts of the coastlines of Mainland China, Hong Kong, North Korea and South Korea. It is estimated that nearly 65% of tidal flats around the Yellow Sea have been reclaimed.
The north of Bahrain.
Inland lowlands in the Yangtze valley, China, including the areas of important cities like Wuhan.
Nanhui New City in Shanghai
Haikou Bay, Hainan Province, China, where the west side of Haidian Island is being extended, and off the coast of Haikou, where new land for a marina is being created.
The Cotai area of Macau, where many casinos are located.
Parts of Shekou in Shenzhen, Guangdong province.
Much of the coastline of Mumbai, India. It took over 150 years to join the original Seven Islands of Bombay. These seven islands were lush, green, thickly wooded, and dotted with 22 hills, with the Arabian Sea washing through them at high tide. The original Isle of Bombay stretched from Dongri to Malabar Hill (at its broadest point); the other six were Colaba, Old Woman's Island, Mahim, Parel, Worli and Mazgaon. (See also Hornby Vellard).
The shore of Jakarta Bay. Land is usually reclaimed to create new housing areas and real estate properties, for the rapidly expanding city of Jakarta. So far, the largest reclamation project in the city is the creation of Golf Island, north of Pantai Indah Kapuk.
Giant Sea Wall Jakarta.
Nagoya Centrair Airport.
Kansai International Airport, Osaka.
Beirut Central District.
Hulhumalé Island, it is one of the six divisions of Malé City.
Forest City, an integrated residential and tourism district in Johor, Malaysia, was controversial due to its reclamation of wetlands of international importance under the Ramsar Convention in a designated Environmentally Sensitive Area (ESA) Rank 1 area.
Much of the coastline of Karachi.
The North Reclamation Area in Cebu City.
The whole business district of Cebu South Road Properties in Cebu City.
The shore of Manila Bay, especially along Metro Manila, has attracted major developments such as the Mall of Asia Complex, Entertainment City and the Cultural Center of the Philippines Complex.
A part of Hamad International Airport.
The entire island of The Pearl Island situated in West Bay (Doha).
The city-state of Singapore, where land is in short supply, is also famous for its efforts on land reclamation.
The size of Singapore has increased by 25%, from 581.5 square kilometres in 1960 to 725.7 in 2019, as part of the nation's plans to create more homes and common spaces in the land-scarce city-state. Upcoming projects, such as the Long Island project, involve the reclamation of three tracts of land (expected to span around 800 ha) set at a higher level to protect against rising sea levels. The project will also enclose a body of water that will act as a reservoir, strengthening the nation's water resilience. Detailed technical studies, expected to last five years, are currently underway; the project would take a few decades to plan and implement.
Incheon International Airport.
Colombo International Financial City.
Some of the coastline of Saadiyat Island which is used for commercial purposes.
The Palm Islands, The World and hotel Burj al-Arab off Dubai.
The Yas Island in Abu Dhabi.
Europe
The southwestern residential area in Brest.
The port of Zeebrugge.
Certain areas of Denmark.
Paljassaare, Tallinn is a peninsula consisting of two former islands connected to the mainland during the 20th century
Port of Tallinn is largely built on land reclaimed over centuries.
Helsinki (of which the major part of the city center is built on reclaimed land).
Airport of Nice.
A big part of Kavala.
Lake Copais.
Parts of Dublin, including the North Wall, East Wall, Grand Canal Dock and Bull Island.
The airport peninsula, the industrial area of Cornigliano, the PSA container terminal and other parts of the port in Genoa.
Venice.
Rione Orsini, part of Borgo Santa Lucia, Naples.
Fucine Lake.
Almost half of the microstate of Monaco
Most of Fontvieille, Monaco
Parts surrounding Port Hercules in La Condamine, Monaco
Large parts of the Netherlands.
Parts of Bryggen, Bergen including the Dreggekaien cruise terminal and other ship services.
Parts of Saint Petersburg, such as the Marine Facade.
Barceloneta area, Barcelona.
Airports of Trabzon, Giresun and Rize.
Coastal parks and streets of Istanbul
Yenikapı.
Pier Head in Liverpool.
Samphire Hoe in Kent was created using 4.9 million cubic metres of chalk marl from the nearby Channel Tunnel excavations from 1988 to 1994.
Almost all of the Thames estuary including large parts of London
The Fens in East Anglia.
Waterfront Centre, St. Helier.
Most of Belfast Harbour and areas of Belfast.
The entire waterfront area of Dundee.
Majority of left-bank and some right-bank residential areas of Kyiv were built on a reclaimed fens and floodplains of the Dnieper river.
North America
The Potter's Cay in Nassau, The Bahamas was connected to the island of New Providence.
The shore of Nassau, The Bahamas especially along East Bay street.
Much of Bermuda's St David's Island are reclaimed; the island, the site of Bermuda's international airport, was formerly several smaller islands.
Notre Dame Island in Montreal, an artificial island formed in the Saint Lawrence River in 1965 from 15 million tons of rock excavated for the Montreal Metro underground rail.
Leslie Street Spit, the downtown waterfront south of Front Street, and sections of the Toronto Islands in Toronto.
Part of Nuns' Island in Montreal.
Infilling False Creek, Burrard Inlet and various creekways of Vancouver.
Tsawwassen ferry terminal causeway in Delta.
Wreck Beach, Metro Vancouver Electoral Area A
Mexico City (which is situated at the former site of Lake Texcoco); the chinampas are a famous example.
The Chicago shoreline.
The Northwestern University Lakefill, part of the campus of Northwestern University in Evanston, Illinois.
Several neighborhoods in Boston, Massachusetts are the result of landfill.
Battery Park City, Manhattan.
Several islands in Biscayne Bay in the Miami metropolitan area, including the Venetian Islands, are the result of landfill.
Brooklyn Bridge Park, Brooklyn.
Liberty State Park, Jersey City.
Parts of New Orleans (which is partially built on land that was once swamp).
Much of the urbanized area adjacent to San Francisco Bay, including most of San Francisco's waterfront and Financial District, San Francisco International Airport, the Port of Oakland, and large portions of the city of Alameda has been reclaimed from the bay. The entirety of Treasure Island was also reclaimed to cover over the shallow waters north of Yerba Buena Island that presented a navigational hazard.
Large hills in Seattle were removed and used to create Harbor Island and reclaim land along Elliott Bay. In particular, the neighborhoods of SoDo, Seattle and Interbay are largely built on filled wetlands.
Oceania
Most of Barangaroo, a current commercial and residential suburb in the central business district of Sydney, New South Wales.
Parts of Darling Harbour, a locality west of the Sydney central business district.
A large portion of the southern suburb of Sylvania Waters in Sydney
The southernmost portions of runways at Sydney Airport.
Large portions of Port Botany in metropolitan Sydney.
Large amounts of the Melbourne Docklands.
Portions of the Swan River foreshore adjoining the Perth central business district in Western Australia, including the entirety of Mounts Bay (pictured above).
My Suva park, a recreation park for the Greater Suva area.
Considerable areas of Dunedin, New Zealand, including the "Southern Endowment", stretching from the central city to the southeastern suburbs along the shore of Otago Harbour.
Prior to the Napier earthquake of 1931, significant reclamation of the then-lagoon was undertaken in areas of Napier South and Ahuriri. There were also minor reclamation works undertaken after 1931 on the new low-lying lands brought up by the earthquake.
Areas around Wellington and Auckland's harbours have also been reclaimed.
South America
The entire riverfront of Buenos Aires, including the port and an airport.
Large parts of Rio de Janeiro, most notably several blocks in the new docks area, the entire Flamengo Park and the neighborhood of Urca.
Parts of Florianópolis.
Parts of the Historic District of Porto Alegre, including the docks of Port of Porto Alegre and the Beira-Rio Stadium, were built on reclaimed lands of Lake Guaíba between the end of the 19th century and the 1970s.
Parts of Valparaíso.
Santa Cruz del Islote, in the Caribbean Sea of Colombia, one of the most densely populated islands in the world, was built in an artificial way gaining land from the sea.
Parts of Panama City urban and street development are based on reclaimed land, using material extracted from Panama Canal excavations.
The Cinta Costera, in Panama City.
Parts of Montevideo, Rambla Sur and several projects still going on in Montevideo's Bay.
Parts of the Vargas State in the north of Venezuela, parts of Los Monjes Archipelago, the Isla Paraíso (paradise island) in the Anzoátegui State and the La Salina island in the Zulia State, were built with land reclaimed from the sea.
Agriculture
Agriculture was a driver of land reclamation before industrialisation. In South China, farmers reclaimed paddy fields by enclosing an area with a stone wall on the sea shore near a river mouth or river delta. The species of rice that are grown on these grounds are more salt tolerant. Another use of such enclosed land is the creation of fish ponds. It is commonly seen on the Pearl River Delta and Hong Kong. These reclaimed areas also attract species of migrating birds.
A related practice is the draining of swampy or seasonally submerged wetlands to convert them to farmland. While this does not create new land exactly, it allows commercially productive use of land that would otherwise be restricted to wildlife habitat. It is also an important method of mosquito control.
Even in the post-industrial age, there have been land reclamation projects intended to increase available agricultural land. For example, the village of Ogata in Akita, Japan, was established on land reclaimed from Lake Hachirōgata (Japan's second largest lake at the time) starting in 1957, with reclamation continuing through 1977.
Artificial islands
Artificial islands are an example of land reclamation. Creating an artificial island is an expensive and risky undertaking. It is often considered in places with high population density and a scarcity of flat land. Kansai International Airport (in Osaka) and Hong Kong International Airport are examples where this process was deemed necessary. The Palm Islands, The World and hotel Burj al-Arab off Dubai in the United Arab Emirates are other examples of artificial islands (although there is yet no real "scarcity of land" in Dubai), as well as the Flevopolder in the Netherlands which is the largest artificial island in the world.
Beach restoration
Beach rebuilding is the process of repairing beaches using materials such as sand or mud from inland. This can be used to build up beaches suffering from beach starvation or erosion from longshore drift. It stops the movement of the original beach material through longshore drift and retains a natural look to the beach. Although it is not a long-lasting solution, it is cheap compared to other types of coastal defences. An example of this is the city of Mumbai.
Landfill
As human overcrowding of developed areas intensified during the 20th century, it has become important to develop land re-use strategies for completed landfills. Some of the most common usages are for parks, golf courses and other sports fields. Increasingly, however, office buildings and industrial uses are made on a completed landfill. In these latter uses, methane capture is customarily carried out to minimize explosive hazard within the building.
An example of a Class A office building constructed over a landfill is the Dakin Building at Sierra Point, Brisbane, California. The underlying fill was deposited from 1965 to 1985, mostly consisting of construction debris from San Francisco and some municipal wastes. Aerial photographs prior to 1965 show this area to be tidelands of the San Francisco Bay. A clay cap was constructed over the debris prior to building approval.
A notable example is Sydney Olympic Park, the primary venue for the 2000 Summer Olympic Games, which was built atop an industrial wasteland that included landfills.
Another strategy for landfill is the incineration of landfill trash at high temperature via the plasma-arc gasification process, which is currently used at two facilities in Japan, and was proposed to be used at a facility in St. Lucie County, Florida. The planned facility in Florida was later canceled.
Environmental impact
Draining wetlands for ploughing, for example, is a form of habitat destruction. In some parts of the world, new reclamation projects are restricted or no longer allowed, due to environmental protection laws. Reclamation projects have strong negative impacts on coastal populations, although some species can take advantage of the newly created area. A 2022 global analysis estimated that 39% of losses and 14% of gains of tidal wetlands (mangroves, tidal flats, and tidal marshes) between 1999 and 2019 were due to direct human activities, including conversion to aquaculture, agriculture, plantations, coastal developments and other physical structures.
Environmental legislation
The State of California created a state commission, the San Francisco Bay Conservation and Development Commission, in 1965 to protect San Francisco Bay and regulate development near its shores. The commission was created in response to growing concern over the shrinking size of the bay.
Hong Kong legislators passed the Protection of the Harbour Ordinance, proposed by the Society for Protection of the Harbour, in 1997 in an effort to safeguard the increasingly threatened Victoria Harbour against encroaching land development. Several large reclamation schemes at Green Island, West Kowloon, and Kowloon Bay were subsequently shelved, and others reduced in size.
Dangers
Reclaimed land is highly susceptible to soil liquefaction during earthquakes, which can amplify the amount of damage that occurs to buildings and infrastructure. Subsidence is another issue, both from soil compaction on filled land, and also when wetlands are enclosed by levees and drained to create polders. Drained marshes will eventually sink below the surrounding water level, increasing the danger from flooding.
See also
Artificial island
Great wall of sand
Marine regression – the formation of new land by reductions in sea level
Drainage system (agriculture) – drainage for land reclamation
Land improvement
Land recycling
Hong Kong Society for Protection of the Harbour
Mine reclamation
Polder – low-lying land reclaimed from a lake or sea
Reclamation of Wellington Harbour, New Zealand
River reclamation
Water reclamation
Rainbowing
External links
The Cape Town Foreshore Plan 1947
The Canadian Land Reclamation Association
The case for offshore Mumbai airport
Image (mathematics)
https://en.wikipedia.org/wiki/Image%20%28mathematics%29

In mathematics, for a function $f$, the image of an input value $x$ is the single output value $f(x)$ produced by $f$ when passed $x$. The preimage of an output value $y$ is the set of input values that produce $y$.
More generally, evaluating $f$ at each element of a given subset $A$ of its domain produces a set, called the "image of $A$ under (or through) $f$". Similarly, the inverse image (or preimage) of a given subset $B$ of the codomain is the set of all elements of the domain that map to a member of $B$.
The image of the function $f$ is the set of all output values it may produce, that is, the image of the domain $X$. The preimage of $f$, that is, the preimage of the codomain $Y$ under $f$, always equals $X$ (the domain of $f$); therefore, the former notion is rarely used.
Image and inverse image may also be defined for general binary relations, not just functions.
Definition
The word "image" is used in three related ways. In these definitions, is a function from the set to the set
Image of an element
If $x$ is a member of $X$, then the image of $x$ under $f$, denoted $f(x)$, is the value of $f$ when applied to $x$; $f(x)$ is alternatively known as the output of $f$ for argument $x$.
Given $y \in Y$, the function $f$ is said to take the value $y$, or take $y$ as a value, if there exists some $x$ in the function's domain such that $f(x) = y$.
Similarly, given a set $S$, $f$ is said to take a value in $S$ if there exists some $x$ in the function's domain such that $f(x) \in S$.
However, "$f$ takes all values in $S$" and "$f$ is valued in $S$" mean that $f(x) \in S$ for every point $x$ in the domain of $f$.
Image of a subset
Throughout, let $f : X \to Y$ be a function.
The image under $f$ of a subset $S$ of $X$ is the set of all $f(s)$ for $s \in S$. It is denoted by $f[S]$, or by $f(S)$ when there is no risk of confusion. Using set-builder notation, this definition can be written as $f[S] = \{ f(s) : s \in S \}$.
This induces a function $f[\,\cdot\,] : \wp(X) \to \wp(Y)$, where $\wp(S)$ denotes the power set of a set $S$, that is, the set of all subsets of $S$. See the section on notation below for more.
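The set-builder definition translates directly into code for finite sets. The following minimal Python sketch uses an illustrative helper name (`image` is our own choice, not a standard library function):

```python
def image(f, S):
    """Image of the subset S under f: the set of all f(s) for s in S."""
    return {f(s) for s in S}

# Squaring, restricted to a small finite subset of its domain:
print(sorted(image(lambda x: x ** 2, {-2, -1, 0, 1, 2})))  # [0, 1, 4]
```

Note how the induced map on power sets appears naturally: `image(f, ...)` consumes a set and produces a set, while `f` itself consumes a single element.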
Image of a function
The image of a function is the image of its entire domain, also known as the range of the function. This last usage should be avoided because the word "range" is also commonly used to mean the codomain of $f$.
Generalization to binary relations
If $R$ is an arbitrary binary relation on $X \times Y$, then the set $\{ y \in Y : x\,R\,y \text{ for some } x \in X \}$ is called the image, or the range, of $R$. Dually, the set $\{ x \in X : x\,R\,y \text{ for some } y \in Y \}$ is called the domain of $R$.
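For a finite relation represented explicitly as a set of ordered pairs, the image and domain can be read off componentwise. A small Python sketch, with our own helper names:

```python
def relation_image(R):
    """Image (range) of a binary relation R, given as a set of (x, y) pairs."""
    return {y for (_, y) in R}

def relation_domain(R):
    """Domain of a binary relation R, given as a set of (x, y) pairs."""
    return {x for (x, _) in R}

R = {(1, "a"), (1, "b"), (2, "a")}
print(sorted(relation_image(R)))   # ['a', 'b']
print(sorted(relation_domain(R)))  # [1, 2]
```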
Inverse image
Let $f$ be a function from $X$ to $Y$. The preimage or inverse image of a set $B \subseteq Y$ under $f$, denoted by $f^{-1}[B]$, is the subset of $X$ defined by $f^{-1}[B] = \{ x \in X : f(x) \in B \}$.
Other notations include $f^{-1}(B)$ and $f^{-}(B)$.
The inverse image of a singleton set, denoted by $f^{-1}[\{y\}]$ or by $f^{-1}[y]$, is also called the fiber, or fiber over $y$, or the level set of $y$. The set of all the fibers over the elements of $Y$ is a family of sets indexed by $Y$.
For example, for the function $f(x) = x^2$, the inverse image of $\{4\}$ would be $\{-2, 2\}$. Again, if there is no risk of confusion, $f^{-1}[B]$ can be denoted by $f^{-1}(B)$, and $f^{-1}$ can also be thought of as a function from the power set of $Y$ to the power set of $X$. The notation $f^{-1}$ should not be confused with that for inverse function, although it coincides with the usual one for bijections in that the inverse image of $B$ under $f$ is the image of $B$ under $f^{-1}$.
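Because the preimage quantifies over the domain, a direct computation needs the domain to be finite (or at least enumerable). A minimal Python sketch, with illustrative helper names of our own:

```python
def preimage(f, X, B):
    """Inverse image of B under f, for a finite domain X."""
    return {x for x in X if f(x) in B}

def fiber(f, X, y):
    """Fiber over y: the preimage of the singleton {y}."""
    return preimage(f, X, {y})

X = range(-3, 4)          # finite stand-in for the domain
square = lambda x: x * x

print(sorted(preimage(square, X, {4})))  # [-2, 2], matching the example above
print(sorted(fiber(square, X, 9)))       # [-3, 3]
print(preimage(square, X, {5}))          # set(): 5 is not a value of f on X
```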
Notation for image and inverse image
The traditional notations used in the previous section do not distinguish the original function $f : X \to Y$ from the image-of-sets function $f : \wp(X) \to \wp(Y)$; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the power sets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative is to give explicit names for the image and preimage as functions between power sets:
Arrow notation
$f^{\to} : \wp(X) \to \wp(Y)$ with $f^{\to}(S) = \{ f(s) \mid s \in S \}$
$f^{\gets} : \wp(Y) \to \wp(X)$ with $f^{\gets}(B) = \{ x \in X \mid f(x) \in B \}$
Star notation
$f_{\star} : \wp(X) \to \wp(Y)$ instead of $f^{\to}$
$f^{\star} : \wp(Y) \to \wp(X)$ instead of $f^{\gets}$
Other terminology
An alternative notation for $f[S]$ used in mathematical logic and set theory is $f\,''S$.
Some texts refer to the image of $f$ as the range of $f$, but this usage should be avoided because the word "range" is also commonly used to mean the codomain of $f$.
Examples
$f : \{1, 2, 3\} \to \{a, b, c, d\}$ defined by $f(1) = a$, $f(2) = a$, $f(3) = c$. The image of the set $\{2, 3\}$ under $f$ is $f(\{2, 3\}) = \{a, c\}$. The image of the function $f$ is $\{a, c\}$. The preimage of $a$ is $f^{-1}(\{a\}) = \{1, 2\}$. The preimage of $\{a, b\}$ is also $f^{-1}(\{a, b\}) = \{1, 2\}$. The preimage of $\{b, d\}$ under $f$ is the empty set $\emptyset$. (This example is recomputed in the sketch after this list.)
$f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x^2$. The image of $\{-2, 3\}$ under $f$ is $f(\{-2, 3\}) = \{4, 9\}$, and the image of $f$ is $\mathbb{R}^{+}$ (the set of all positive real numbers and zero). The preimage of $\{4, 9\}$ under $f$ is $f^{-1}(\{4, 9\}) = \{-3, -2, 2, 3\}$. The preimage of the set $N = \{ n \in \mathbb{R} : n < 0 \}$ under $f$ is the empty set, because the negative numbers do not have square roots in the set of reals.
$f : \mathbb{R}^2 \to \mathbb{R}$ defined by $f(x, y) = x^2 + y^2$. The fibers $f^{-1}(\{a\})$ are concentric circles about the origin, the origin itself, and the empty set (respectively), depending on whether $a > 0$, $a = 0$, or $a < 0$ (respectively). (If $a \geq 0$, then the fiber $f^{-1}(\{a\})$ is the set of all $(x, y) \in \mathbb{R}^2$ satisfying the equation $x^2 + y^2 = a$, that is, the origin-centered circle with radius $\sqrt{a}$.)
If $M$ is a manifold and $\pi : TM \to M$ is the canonical projection from the tangent bundle $TM$ to $M$, then the fibers of $\pi$ are the tangent spaces $T_x(M)$ for $x \in M$. This is also an example of a fiber bundle.
A quotient group is a homomorphic image.
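As noted above, the first finite example can be checked mechanically. This sketch encodes $f$ as a Python dictionary (a representation choice of ours) and recomputes each image and preimage:

```python
f = {1: "a", 2: "a", 3: "c"}   # f : {1, 2, 3} -> {a, b, c, d}

def image(S):
    return {f[x] for x in S}

def preimage(B):
    return {x for x in f if f[x] in B}

print(sorted(image({2, 3})))         # ['a', 'c']
print(sorted(image(f.keys())))       # image of the function itself: ['a', 'c']
print(sorted(preimage({"a"})))       # [1, 2]
print(sorted(preimage({"a", "b"})))  # [1, 2]
print(preimage({"b", "d"}))          # set(), the empty set
```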
Properties
General
For every function $f : X \to Y$ and all subsets $A \subseteq X$ and $B \subseteq Y$, the following properties hold:
$f(A) \subseteq B$ if and only if $A \subseteq f^{-1}(B)$
$f(f^{-1}(B)) \subseteq B$, with equality if $f$ is surjective
$A \subseteq f^{-1}(f(A))$, with equality if $f$ is injective
$f(f^{-1}(B)) = B \cap f(X)$
Also:
$f(A) \cap B = \emptyset$ if and only if $A \cap f^{-1}(B) = \emptyset$
Multiple functions
For functions $f : X \to Y$ and $g : Y \to Z$ with subsets $A \subseteq X$ and $C \subseteq Z$, the following properties hold:
$(g \circ f)(A) = g(f(A))$
$(g \circ f)^{-1}(C) = f^{-1}(g^{-1}(C))$
Multiple subsets of domain or codomain
For a function $f : X \to Y$ and subsets $A, B \subseteq X$ and $S, T \subseteq Y$, the following properties hold:
$f(A \cup B) = f(A) \cup f(B)$
$f(A \cap B) \subseteq f(A) \cap f(B)$, with equality if $f$ is injective
$f^{-1}(S \cup T) = f^{-1}(S) \cup f^{-1}(T)$
$f^{-1}(S \cap T) = f^{-1}(S) \cap f^{-1}(T)$
$f^{-1}(Y \setminus S) = X \setminus f^{-1}(S)$
The results relating images and preimages to the (Boolean) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets:
$f\left( \bigcup_{s \in S} A_s \right) = \bigcup_{s \in S} f(A_s)$ and $f\left( \bigcap_{s \in S} A_s \right) \subseteq \bigcap_{s \in S} f(A_s)$
$f^{-1}\left( \bigcup_{s \in S} B_s \right) = \bigcup_{s \in S} f^{-1}(B_s)$ and $f^{-1}\left( \bigcap_{s \in S} B_s \right) = \bigcap_{s \in S} f^{-1}(B_s)$
(Here, $S$ can be infinite, even uncountably infinite.)
With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism, while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections).
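The difference between the lattice and semilattice behavior is easy to exhibit on a finite example: with a non-injective function, the image of an intersection can be strictly smaller than the intersection of the images, while the preimage preserves both unions and intersections. A small Python check (helper names are ours):

```python
def image(f, S):
    return {f(s) for s in S}

def preimage(f, X, B):
    return {x for x in X if f(x) in B}

X = {-2, -1, 0, 1, 2}
f = abs                        # not injective on X
A1, A2 = {-2, -1}, {1, 2}      # disjoint subsets of X

# Image need not preserve intersections:
print(image(f, A1 & A2))                    # set()
print(sorted(image(f, A1) & image(f, A2)))  # [1, 2]

# Preimage preserves unions and intersections:
B1, B2 = {1}, {1, 2}
assert preimage(f, X, B1 & B2) == preimage(f, X, B1) & preimage(f, X, B2)
assert preimage(f, X, B1 | B2) == preimage(f, X, B1) | preimage(f, X, B2)
```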
Fairness doctrine
https://en.wikipedia.org/wiki/Fairness%20doctrine

The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints. In 1987, the FCC abolished the fairness doctrine, prompting some to urge its reintroduction through either Commission policy or congressional legislation. The FCC removed the rule that implemented the policy from the Federal Register in August 2011.
The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters. Stations were given wide latitude as to how to provide contrasting views: It could be done through news segments, public affairs shows, or editorials. The doctrine did not require equal time for opposing views but required that contrasting viewpoints be presented. The demise of this FCC rule has been cited as a contributing factor in the rising level of party polarization in the United States.
While the original purpose of the doctrine was to ensure that viewers were exposed to a diversity of viewpoints, it was used by both the Kennedy and Johnson administrations to combat political opponents operating on talk radio. In 1969 the United States Supreme Court, in Red Lion Broadcasting Co. v. FCC, upheld the FCC's general right to enforce the fairness doctrine where channels were limited. However, the court did not rule that the FCC was obliged to do so. The court reasoned that the scarcity of the broadcast spectrum, which limited the opportunity for access to the airwaves, created a need for the doctrine.
The fairness doctrine is not the same as the equal-time rule, which is still in place. The fairness doctrine deals with discussion of controversial issues, while the equal-time rule deals only with political candidates.
Origins
In 1938, Lawrence J. Flynn, a former Yankee Network employee, challenged the license of John Shepard III's WAAB in Boston, and lodged a complaint about WNAC. Flynn asserted that these stations were being used to air one-sided political viewpoints and broadcast attacks, including editorials, against local and federal politicians that Shepard opposed. The FCC requested that Shepard provide details about these programs. To appease the commission, the Yankee Network agreed to drop the editorials.
Flynn created a company called Mayflower Broadcasting and tried to get the FCC to award him WAAB's license. The FCC refused. In 1941, the commission made a ruling that came to be known as the Mayflower Decision, which declared that radio stations, due to their public interest obligations, must remain neutral in matters of news and politics, and they were not allowed to give editorial support to any particular political position or candidate.
In 1949, the FCC's Editorializing Report repealed the Mayflower doctrine, which since 1941 had forbidden on-air editorializing. This laid the foundation for the fairness doctrine, by reaffirming the FCC's holding that licensees must not use their stations "for the private interest, whims or caprices [of licensees], but in a manner which will serve the community generally."
The FCC Report established two forms of regulation on broadcasters: to provide adequate coverage of public issues, and to ensure that coverage fairly represented opposing views. The second rule required broadcasters to provide reply time to issue-oriented citizens. Broadcasters could therefore trigger fairness doctrine complaints without editorializing. The commission required neither of the fairness doctrine's obligations before 1949. Until then broadcasters had to satisfy only general "public interest" standards of the Communications Act.
The doctrine remained a matter of general policy and was applied on a case-by-case basis until 1967, when certain provisions of the doctrine were incorporated into FCC regulations.
In 1969, the United States courts of appeals, in an opinion written by Warren Burger, directed the FCC to revoke Lamar Broadcasting's license for television station WLBT due to the station's segregationist politics and ongoing censorship of NBC network news coverage of the U.S. civil rights movement.
Application of the doctrine by the FCC
In 1974, the Federal Communications Commission stated that the Congress had delegated the power to mandate a system of "access, either free or paid, for person or groups wishing to express a viewpoint on a controversial public issue" but that it had not yet exercised that power because licensed broadcasters had "voluntarily" complied with the "spirit" of the doctrine. It warned that:
In one landmark case, the FCC argued that teletext was a new technology that created soaring demand for a limited resource, and thus could be exempt from the fairness doctrine. The Telecommunications Research and Action Center (TRAC) and Media Access Project (MAP) argued that teletext transmissions should be regulated like any other airwave technology, hence the fairness doctrine was applicable, and must be enforced by the FCC. In 1986, Judges Robert Bork and Antonin Scalia of the United States Court of Appeals for the District of Columbia Circuit concluded that the fairness doctrine did apply to teletext, but that the FCC was not required to apply it. In a 1987 case, Meredith Corp. v. FCC, two other judges on the same court declared that Congress did not mandate the doctrine and the FCC did not have to continue to enforce it.
Decisions of the United States Supreme Court
In Red Lion Broadcasting Co. v. FCC, the U.S. Supreme Court upheld, by a vote of 8–0, the constitutionality of the fairness doctrine in a case of an on-air personal attack, in response to challenges that the doctrine violated the First Amendment to the U.S. Constitution. The case began when journalist Fred J. Cook, after the publication of his Goldwater: Extremist of the Right, was the topic of discussion by Billy James Hargis on his daily Christian Crusade radio broadcast on WGCB in Red Lion, Pennsylvania. Cook sued, arguing that the fairness doctrine entitled him to free air time to respond to the personal attacks.
Although similar laws are unconstitutional when applied to the press, the court cited a Senate report (S. Rep. No. 562, 86th Cong., 1st Sess., 8-9 [1959]) stating that radio stations could be regulated in this way because of the limited public airwaves at the time. Writing for the court, Justice Byron White declared:
The court did not see how the fairness doctrine went against the First Amendment's goal of creating an informed public. The fairness doctrine required that those who were talked about be given a chance to respond to the statements made by broadcasters. The court believed that this helped create a more informed public. Justice White explained that, without this doctrine, station owners would only have people on the air who agreed with their opinions. Throughout his opinion, Justice White argued that radio frequencies, and by extension, television stations, should be used to educate listeners, or viewers, about controversial issues in a way that is fair and non-biased so that they can form their own opinions.
In 1969, the court "ruled unanimously that the Fairness Doctrine was not only constitutional, but essential to democracy. The public airwaves should not just express the opinions of those who can pay for air time; they must allow the electorate to be informed about all sides of controversial issues." The court also warned that if the doctrine ever restrained speech, then its constitutionality should be reconsidered. Justice William O. Douglas did not participate, but later wrote that he would have dissented because the Constitutional guarantee of Freedom of the press was absolute.
However, in the case of Miami Herald Publishing Co. v. Tornillo, , Chief Justice Warren Burger wrote (for a unanimous court):
This decision differs from Red Lion v. FCC in that it applies to a newspaper, which, unlike a broadcaster, is unlicensed and can theoretically face an unlimited number of competitors.
In 1984, the Supreme Court ruled that Congress could not forbid editorials by non-profit stations that received grants from the Corporation for Public Broadcasting (FCC v. League of Women Voters of California). The court's 5–4 majority decision by William J. Brennan Jr. stated that while many now considered that expanding sources of communication had made the fairness doctrine's limits unnecessary:
After noting that the FCC was considering repealing the fairness doctrine rules on editorials and personal attacks out of fear that those rules might be "chilling speech", the court added:
Use in political leveraging
Various presidential administrations used the fairness doctrine to counter their political opponents. At the FCC, Martin Firestone's memorandum to the Democratic National Committee presented political strategies to combat small, rural radio stations unfriendly to Democratic politicians:
The right-wingers operate on a strictly-cash basis and it is for this reason that they are carried by so many small [radio] stations. Were our efforts to be continued on a year-round basis, we would find that many of these stations would consider the broadcasts of these programs bothersome and burdensome (especially if they are ultimately required to give us free time) and would start dropping the programs from their broadcast schedule.
The National Council for Civic Responsibility (NCCR) used the fairness doctrine to urge right-wing radio stations to air rebuttals to the opinions expressed on their stations.
Revocation
Basic doctrine
In 1985, under FCC Chairman Mark S. Fowler, a communications attorney who had served on Ronald Reagan's presidential campaign staff in 1976 and 1980, the FCC released its report on General Fairness Doctrine Obligations stating that the doctrine hurt the public interest and violated free speech rights guaranteed by the First Amendment. The commission could not, however, come to a determination as to whether the doctrine had been enacted by Congress through its 1959 Amendment to Section 315 of the Communications Act.
In response to the 1986 Telecommunications Research & Action Center v. F.C.C. decision, the 99th Congress directed the FCC to examine alternatives to the fairness doctrine and to submit a report to Congress on the subject. In 1987, in Meredith Corporation v. F.C.C. the case was returned to the FCC with a directive to consider whether the doctrine had been "self-generated pursuant to its general congressional authorization or specifically mandated by Congress."
The FCC opened an inquiry inviting public comment on alternative means for administrating and enforcing the fairness doctrine. In its 1987 report, the alternatives—including abandoning a case-by-case enforcement approach, replacing the doctrine with open access time for all members of the public, doing away with the personal attack rule, and eliminating certain other aspects of the doctrine—were rejected by the FCC for various reasons.
On August 4, 1987, under FCC Chairman Dennis R. Patrick, the FCC abolished the doctrine by a 4–0 vote, in the Syracuse Peace Council decision, which was upheld by a panel of the Appeals Court for the D.C. Circuit in February 1989, though the court stated in their decision that they made "that determination without reaching the constitutional issue." The FCC suggested in Syracuse Peace Council that because of the many media voices in the marketplace, the doctrine be deemed unconstitutional, stating that:
At the 4–0 vote, Chairman Patrick said:
Sitting commissioners at the time of the vote were:
Dennis R. Patrick, chairman, Republican(Named an FCC commissioner by Ronald Reagan in 1983)
Mimi Weyforth Dawson, Republican(Named an FCC commissioner by Ronald Reagan in 1986)
Patricia Diaz Dennis, Democrat(Named an FCC commissioner by Ronald Reagan in 1986)
James Henry Quello, Democrat(Named an FCC commissioner by Richard M. Nixon in 1974)
The FCC vote was opposed by members of Congress who said the FCC had tried to "flout the will of Congress" and the decision was "wrongheaded, misguided and illogical". The decision drew political fire, and cooperation with Congress was one issue. In June 1987, Congress attempted to preempt the FCC decision and codify the fairness doctrine, but the legislation was vetoed by President Ronald Reagan. In 1991, another attempt to revive the doctrine was stopped when President George H. W. Bush threatened another veto.
In February 2009, Fowler said that his work toward revoking the fairness doctrine under the Reagan administration had been a matter of principle, his belief that the doctrine impinged upon the First Amendment, not partisanship. Fowler described the White House staff raising concerns, at a time before the prominence of conservative talk radio and during the preeminence of the Big Three television networks and PBS in political discourse, that repealing the policy would be politically unwise. He described the staff's position as saying to Reagan:
Conservative talk radio
The 1987 repeal of the fairness doctrine enabled the rise of talk radio that has been described as "unfiltered", divisive and/or vicious: "In 1988, a savvy former ABC Radio executive named Ed McLaughlin signed Rush Limbaugh — then working at a little-known Sacramento station — to a nationwide syndication contract. McLaughlin offered Limbaugh to stations at an unbeatable price: free. All they had to do to carry his program was to set aside four minutes per hour for ads that McLaughlin's company sold to national sponsors. The stations got to sell the remaining commercial time to local advertisers."
According to The Washington Post, "From his earliest days on the air, Limbaugh trafficked in conspiracy theories, divisiveness, even viciousness", e.g., "feminazis". Prior to 1987 people using much less controversial verbiage had been taken off the air as obvious violations of the fairness doctrine.
Corollary rules
Two corollary rules of the doctrine, the personal attack rule and the "political editorial" rule, remained in practice until 2000. The "personal attack" rule applied whenever a person, or small group, was subject to a personal attack during a broadcast. Stations had to notify such persons, or groups, within a week of the attack, send them transcripts of what was said and offer the opportunity to respond on-the-air. The "political editorial" rule applied when a station broadcast editorials endorsing or opposing candidates for public office, and stipulated that the unendorsed candidates be notified and allowed a reasonable opportunity to respond.
The U.S. Court of Appeals for the D.C. Circuit ordered the FCC to justify these corollary rules in light of the decision to repeal the fairness doctrine. The FCC did not provide prompt justification, so both corollary rules were repealed in October 2000.
Reinstatement considered
Support
In February 2005, U.S. Representative Louise Slaughter (D-NY) and 23 co-sponsors introduced the Fairness and Accountability in Broadcasting Act (H.R. 501) in the 1st session of the 109th Congress of 2005-2007, when Republicans held a majority of both Houses. The bill would have shortened a station's license term from eight years to four, with the requirement that a license-holder cover important issues fairly, hold local public hearings about its coverage twice a year, and document to the FCC how it was meeting its obligations. The bill was referred to committee, but progressed no further.
In the same Congress, Representative Maurice Hinchey (D-NY) introduced legislation "to restore the Fairness Doctrine". H.R. 3302, also known as the "Media Ownership Reform Act of 2005" or MORA, had 16 co-sponsors in Congress.
In June 2007, Senator Richard Durbin (D-Ill.) said, "It's time to reinstitute the Fairness Doctrine", an opinion shared by his Democratic colleague, Senator John Kerry (D-Mass.). However, according to Marin Cogan of The New Republic in late 2008:
On June 24, 2008, U.S. Representative Nancy Pelosi (D-Calif.), the Speaker of the House at the time, told reporters that her fellow Democratic representatives did not want to forbid reintroduction of the fairness doctrine, adding "the interest in my caucus is the reverse." When asked by John Gizzi of Human Events, "Do you personally support revival of the 'Fairness Doctrine'?", the Speaker replied "Yes".
On December 15, 2008, U.S. Representative Anna Eshoo (D-Calif.) told The Daily Post in Palo Alto, California that she thought it should also apply to cable and satellite broadcasters, stating:
On February 11, 2009, Senator Tom Harkin (D-Iowa) told radio host Bill Press, "we gotta get the Fairness Doctrine back in law again." Later, in response to Press's assertion that "they are just shutting down progressive talk from one city after another", Senator Harkin responded, "Exactly, and that's why we need the fair... that's why we need the Fairness Doctrine back."
Former President Bill Clinton has also shown support for the fairness doctrine. During a February 13, 2009, appearance on the Mario Solis Marich radio show, Clinton said:
Clinton cited the "blatant drumbeat" against the stimulus program from conservative talk radio, suggesting that it does not reflect economic reality.
On September 19, 2019, Representative Tulsi Gabbard (D-HI) introduced H.R. 4401 Restore the Fairness Doctrine Act of 2019 in the House of Representatives, 116th Congress. Rep. Gabbard was the only sponsor. H.R. 4401 was immediately referred to the House Committee on Energy and Commerce on the same day. It was then referred to the Subcommittee on Communications and Technology on September 20, 2019.
H.R. 4401 would mandate equal media discussion of key political and social topics, requiring television and radio broadcasters to give airtime to opposing sides of issues of civic interest. The summary reads: "Restore the Fairness Doctrine Act of 2019. This bill requires a broadcast radio or television licensee to provide reasonable opportunity for discussion of conflicting views on matters of public importance. The Restore the Fairness Doctrine Act would once again mandate television and radio broadcasters present both sides when discussing political or social issues, reinstituting the rule in place from 1949 to 1987 ... . Supporters argue that the doctrine allowed for a more robust public debate and affected positive political change as a result, rather than allowing only the loudest voices or deepest pockets to win."
Opposition
The fairness doctrine has been strongly opposed by prominent conservatives and libertarians who view it as an attack on First Amendment rights and property rights. Editorials in The Wall Street Journal and The Washington Times in 2005 and 2008 said that Democratic attempts to bring back the fairness doctrine have been made largely in response to conservative talk radio.
In 1987, Edward O. Fritts, president of the National Association of Broadcasters, in applauding President Reagan's veto of a bill intended to turn the doctrine into law, said that the doctrine is an infringement on free speech and intrudes on broadcasters' journalistic judgment.
In 2007, Senator Norm Coleman (R-MN) proposed an amendment to a defense appropriations bill that forbade the FCC from "using any funds to adopt a fairness rule." It was blocked, in part on grounds that "the amendment belonged in the Commerce Committee's jurisdiction."
In 2007, the Broadcaster Freedom Act of 2007 was proposed in the Senate by Senators Coleman with 35 co-sponsors (S.1748) and John Thune (R-SD), with 8 co-sponsors (S.1742), and in the House by Republican Representative Mike Pence (R-IN) with 208 co-sponsors (H.R. 2905). It provided:
Neither of these measures came to the floor of either house.
On August 12, 2008, FCC Commissioner Robert M. McDowell stated that the reinstitution of the fairness doctrine could be intertwined with the debate over network neutrality (a proposal to classify network operators as common carriers required to admit all Internet services, applications and devices on equal terms), presenting a potential danger that net neutrality and fairness doctrine advocates could try to expand content controls to the Internet. It could also include "government dictating content policy". The conservative Media Research Center's Culture & Media Institute argued that the three main points supporting the fairness doctrine — media scarcity, liberal viewpoints being censored at a corporate level, and public interest — are all myths.
In June 2008, Barack Obama's press secretary wrote that Obama, then a Democratic U.S. senator from Illinois and candidate for president, did not support it, stating:
On February 16, 2009, Mark Fowler said:
In February 2009, a White House spokesperson said that President Obama continued to oppose the revival of the doctrine.
In the 111th Congress, January 2009 to January 2011, the Broadcaster Freedom Act of 2009 (S.34, S.62, H.R.226) was introduced to block reinstatement of the doctrine. On February 26, 2009, by a vote of 87–11, the Senate added that act as an amendment to the District of Columbia House Voting Rights Act of 2009 (S.160), a bill which later passed the Senate 61–37 but not the House of Representatives.
The Associated Press reported that the vote on the fairness doctrine rider was "in part a response to conservative radio talk show hosts who feared that Democrats would try to revive the policy to ensure liberal opinions got equal time." The AP report went on to say that President Obama had no intention of reimposing the doctrine, but Republicans, led by Sen. Jim DeMint, R-SC, wanted more in the way of a guarantee that the doctrine would not be reimposed.
Suggested alternatives
Media reform organizations such as Free Press feel that a return to the fairness doctrine is not as important as setting stronger station ownership caps and stronger "public interest" standards enforcement, with funding from fines given to public broadcasting.
Public opinion
In an August 2008 telephone poll, released by Rasmussen Reports, 47% of 1,000 likely voters supported a government requirement that broadcasters offer equal amounts of liberal and conservative commentary. 39% opposed such a requirement. In the same poll, 57% opposed and 31% favored requiring Internet websites and bloggers that offer political commentary to present opposing points of view. By a margin of 71–20%, the respondents agreed that it is "possible for just about any political view to be heard in today's media", including the Internet, newspapers, cable TV and satellite radio, but only half the sample said they had followed recent news stories about the fairness doctrine closely. The margin of error was 3%, with a 95% confidence interval.
Formal revocation
In June 2011, the chairman and a subcommittee chairman of the House Energy and Commerce Committee, both Republicans, said that the FCC, in response to their requests, had set a target date of August 2011 for removing the fairness doctrine and other "outdated" regulations from the FCC's rulebook.
On August 22, 2011, the FCC voted to remove the rule that implemented the fairness doctrine, along with more than 80 other rules and regulations, from the Federal Register following an executive order by President Obama directing a "government-wide review of regulations already on the books" to eliminate unnecessary regulations.
See also
Right of reply
False balance
Free speech
Nakdi Report
Prior restraint
Mayflower doctrine
Zapple doctrine
Accurate News and Information Act
Notes
References
Pickard, Victor (2014). America's Battle for Media Democracy: The Triumph of Corporate Libertarianism and the Future of Media Reform. Cambridge University Press.
Primary sources
Further reading
External links
A primer on the Fairness Doctrine and how its absence now affects politics and culture in the media.
Fairness Doctrine by Val E. Limburg, from the Museum of Broadcast Communications
Fairness Doctrine from NOW on PBS
The Media Cornucopia from City Journal
Important legislation for and against the Fairness Doctrine from Ceasespin.org
Speech to the Media Institute by FCC Commissioner Robert M. McDowell on January 28, 2009, outlining the likely practical and constitutional challenges of reviving a fairness or neutrality doctrine.
Censorship of broadcasting in the United States
Federal Communications Commission
Legal doctrines and principles
Legal history of the United States
Net neutrality
United States communications regulation
1949 establishments in the United States
1987 disestablishments in the United States | Fairness doctrine | [
"Engineering"
] | 4,948 | [
"Net neutrality",
"Computer networks engineering"
] |
579,390 | https://en.wikipedia.org/wiki/Gene%20prediction | In computational biology, gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include prediction of other functional elements such as regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced.
In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome, and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequences and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem.
Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. Predicting the function of a gene and confirming that the gene prediction is accurate still demands in vivo experimentation through gene knockout and other assays, although frontiers of bioinformatics research are making it increasingly possible to predict the function of a gene based on its sequence alone.
Gene prediction is one of the key steps in genome annotation, following sequence assembly, the filtering of non-coding regions and repeat masking.
Gene prediction is closely related to the so-called 'target search problem' investigating how DNA-binding proteins (transcription factors) locate specific binding sites within the genome. Many aspects of structural gene prediction are based on current understanding of underlying biochemical processes in the cell such as gene transcription, translation, protein–protein interactions and regulation processes, which are subject of active research in the various omics fields such as transcriptomics, proteomics, metabolomics, and more generally structural and functional genomics.
Empirical methods
In empirical (similarity, homology or evidence-based) gene finding systems, the target genome is searched for sequences that are similar to extrinsic evidence in the form of known expressed sequence tags, messenger RNA (mRNA), protein products, and homologous or orthologous sequences. Given an mRNA sequence, it is trivial to derive a unique genomic DNA sequence from which it had to have been transcribed. Given a protein sequence, a family of possible coding DNA sequences can be derived by reverse translation of the genetic code. Once candidate DNA sequences have been determined, it is a relatively straightforward algorithmic problem to efficiently search a target genome for matches, complete or partial, and exact or inexact. Given a sequence, local alignment algorithms such as BLAST, FASTA and Smith-Waterman look for regions of similarity between the target sequence and possible candidate matches. The success of this approach is limited by the contents and accuracy of the sequence database.
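As a minimal illustration of the reverse-translation step, the Python sketch below expands a short peptide into every DNA sequence that could encode it; the codon table is deliberately abbreviated and the `reverse_translate` helper is an illustrative name, not part of any named tool.

```python
# Minimal reverse-translation sketch: expand a peptide into every DNA
# sequence that could encode it. The codon table is abbreviated for
# illustration; a real tool would cover all 20 amino acids.
CODONS = {
    "M": ["ATG"],
    "W": ["TGG"],
    "F": ["TTT", "TTC"],
    "K": ["AAA", "AAG"],
}

def reverse_translate(peptide):
    """Return all DNA sequences that back-translate to the peptide."""
    candidates = [""]
    for residue in peptide:
        candidates = [seq + codon for seq in candidates
                      for codon in CODONS[residue]]
    return candidates

print(reverse_translate("MFK"))  # 1 * 2 * 2 = 4 candidate sequences
```

Because the candidate family grows multiplicatively with degenerate residues, practical searches represent degenerate positions compactly rather than enumerating every sequence as this toy does.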
A high degree of similarity to a known messenger RNA or protein product is strong evidence that a region of a target genome is a protein-coding gene. However, to apply this approach systemically requires extensive sequencing of mRNA and protein products. Not only is this expensive, but in complex organisms, only a subset of all genes in the organism's genome are expressed at any given time, meaning that extrinsic evidence for many genes is not readily accessible in any single cell culture. Thus, to collect extrinsic evidence for most or all of the genes in a complex organism requires the study of many hundreds or thousands of cell types, which presents further difficulties. For example, some human genes may be expressed only during development as an embryo or fetus, which might be difficult to study for ethical reasons.
Despite these difficulties, extensive transcript and protein sequence databases have been generated for human as well as other important model organisms in biology, such as mice and yeast. For example, the RefSeq database contains transcript and protein sequence from many different species, and the Ensembl system comprehensively maps this evidence to human and several other genomes. It is, however, likely that these databases are both incomplete and contain small but significant amounts of erroneous data.
New high-throughput transcriptome sequencing technologies such as RNA-Seq and ChIP-sequencing open opportunities for incorporating additional extrinsic evidence into gene prediction and validation, and provide a structurally rich and more accurate alternative to previous methods of measuring gene expression, such as expressed sequence tags or DNA microarrays.
Major challenges in gene prediction include dealing with sequencing errors in raw DNA data, dependence on the quality of the sequence assembly, handling short reads, frameshift mutations, overlapping genes and incomplete genes.
In prokaryotes, it is essential to consider horizontal gene transfer when searching for gene sequence homology. An additional important factor, underused in current gene detection tools, is the existence of gene clusters, or operons (functioning units of DNA containing a cluster of genes under the control of a single promoter), in both prokaryotes and eukaryotes. Most popular gene detectors treat each gene in isolation, independent of others, which is not biologically accurate.
Ab initio methods
Ab initio gene prediction is an intrinsic method based on gene content and signal detection. Because of the inherent expense and difficulty in obtaining extrinsic evidence for many genes, it is also necessary to resort to ab initio gene finding, in which the genomic DNA sequence alone is systematically searched for certain tell-tale signs of protein-coding genes. These signs can be broadly categorized as either signals, specific sequences that indicate the presence of a gene nearby, or content, statistical properties of the protein-coding sequence itself. Ab initio gene finding might be more accurately characterized as gene prediction, since extrinsic evidence is generally required to conclusively establish that a putative gene is functional.
In the genomes of prokaryotes, genes have specific and relatively well-understood promoter sequences (signals), such as the Pribnow box and transcription factor binding sites, which are easy to systematically identify. Also, the sequence coding for a protein occurs as one contiguous open reading frame (ORF), which is typically many hundreds or thousands of base pairs long. The statistics of stop codons are such that even finding an open reading frame of this length is a fairly informative sign. (Since 3 of the 64 possible codons in the genetic code are stop codons, one would expect a stop codon approximately every 20–25 codons, or 60–75 base pairs, in a random sequence.) Furthermore, protein-coding DNA has certain periodicities and other statistical properties that are easy to detect in a sequence of this length. These characteristics make prokaryotic gene finding relatively straightforward, and well-designed systems are able to achieve high levels of accuracy.
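The stop-codon statistics translate directly into a simple ORF scan. The sketch below finds ATG-to-stop stretches above a length threshold in the three forward reading frames; the 300 bp cutoff and the single-strand scan are simplifying assumptions.

```python
# Naive ORF scan over the three forward reading frames.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len=300):
    """Return (start, end, frame) tuples for ATG..stop ORFs >= min_len bp."""
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                       # open a candidate ORF
            elif codon in STOP_CODONS and start is not None:
                if i + 3 - start >= min_len:
                    orfs.append((start, i + 3, frame))
                start = None                    # close it at the stop codon
    return orfs
```

Since a random sequence hits a stop roughly every 21 codons, any ORF surviving a 100-codon threshold is already suggestive in prokaryotes; a real scanner would also examine the reverse complement.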
Ab initio gene finding in eukaryotes, especially complex organisms like humans, is considerably more challenging for several reasons. First, the promoter and other regulatory signals in these genomes are more complex and less well-understood than in prokaryotes, making them more difficult to reliably recognize. Two classic examples of signals identified by eukaryotic gene finders are CpG islands and binding sites for a poly(A) tail.
Second, splicing mechanisms employed by eukaryotic cells mean that a particular protein-coding sequence in the genome is divided into several parts (exons), separated by non-coding sequences (introns). (Splice sites are themselves another signal that eukaryotic gene finders are often designed to identify.) A typical protein-coding gene in humans might be divided into a dozen exons, each less than two hundred base pairs in length, and some as short as twenty to thirty. It is therefore much more difficult to detect periodicities and other known content properties of protein-coding DNA in eukaryotes.
Advanced gene finders for both prokaryotic and eukaryotic genomes typically use complex probabilistic models, such as hidden Markov models (HMMs) to combine information from a variety of different signal and content measurements. The GLIMMER system is a widely used and highly accurate gene finder for prokaryotes. GeneMark is another popular approach. Eukaryotic ab initio gene finders, by comparison, have achieved only limited success; notable examples are the GENSCAN and geneid programs. The GeneMark-ES and SNAP gene finders are GHMM-based like GENSCAN. They attempt to address problems related to using a gene finder on a genome sequence that it was not trained against. A few recent approaches like mSplicer, CONTRAST, or mGene also use machine learning techniques like support vector machines for successful gene prediction. They build a discriminative model using hidden Markov support vector machines or conditional random fields to learn an accurate gene prediction scoring function.
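To make the HMM idea concrete, here is a toy two-state Viterbi decoder in which the coding state is modeled as GC-rich; all probabilities are invented for illustration, and real gene finders such as GENSCAN use far richer state structures and trained parameters.

```python
import math

# Toy two-state gene-finding HMM: "coding" emits GC-rich sequence.
STATES = ("coding", "noncoding")
TRANS = {"coding":    {"coding": 0.9, "noncoding": 0.1},
         "noncoding": {"coding": 0.1, "noncoding": 0.9}}
EMIT  = {"coding":    {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
         "noncoding": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}}

def viterbi(seq):
    """Most probable coding/noncoding labeling of seq (log space)."""
    scores = [{s: math.log(0.5) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for base in seq[1:]:
        row, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: scores[-1][p] + math.log(TRANS[p][s]))
            row[s] = (scores[-1][best] + math.log(TRANS[best][s])
                      + math.log(EMIT[s][base]))
            ptr[s] = best
        scores.append(row)
        back.append(ptr)
    path = [max(STATES, key=lambda s: scores[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi("ATATTAGCGCGCGGCATAT"))
```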
Ab initio methods have been benchmarked, with some approaching 100% sensitivity; however, as sensitivity increases, accuracy suffers as a result of increased false positives.
Other signals
Among the derived signals used for prediction are statistics of sub-sequences such as k-mer statistics, isochore or compositional-domain GC composition/uniformity/entropy, sequence and frame length, intron/exon/donor/acceptor/promoter and ribosomal binding site vocabulary, fractal dimension, the Fourier transform of pseudo-number-coded DNA, Z-curve parameters and certain run features.
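Two of the simplest content sensors above, GC composition and k-mer statistics, reduce to a few lines; this is a sketch, and the example window and the choice k=3 are arbitrary.

```python
from collections import Counter

def gc_content(seq):
    """Fraction of G/C bases: a crude compositional content sensor."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def kmer_counts(seq, k=3):
    """Overlapping k-mer counts; k=3 captures codon-usage bias."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

window = "ATGGCGGCGCTGAAGGGC"
print(round(gc_content(window), 2), kmer_counts(window).most_common(2))
```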
It has been suggested that signals other than those directly detectable in sequences may improve gene prediction. For example, the role of secondary structure in the identification of regulatory motifs has been reported. In addition, it has been suggested that RNA secondary structure prediction helps splice site prediction.
Neural networks
Artificial neural networks are computational models that excel at machine learning and pattern recognition. Neural networks must be trained with example data before being able to generalise for experimental data, and tested against benchmark data. Neural networks are able to come up with approximate solutions to problems that are hard to solve algorithmically, provided there is sufficient training data. When applied to gene prediction, neural networks can be used alongside other ab initio methods to predict or identify biological features such as splice sites. One approach involves using a sliding window, which traverses the sequence data in an overlapping manner. The output at each position is a score based on whether the network thinks the window contains a donor splice site or an acceptor splice site. Larger windows offer more accuracy but also require more computational power. A neural network is an example of a signal sensor as its goal is to identify a functional site in the genome.
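A sketch of the sliding-window step described above, producing one-hot feature vectors that a trained splice-site network could score; the 15 bp window is an arbitrary illustrative choice, and no particular network is assumed.

```python
import numpy as np

BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a (len(seq), 4) one-hot array."""
    arr = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        arr[i, BASE_INDEX[base]] = 1.0
    return arr

def sliding_windows(seq, size=15):
    """Yield (centre, flat feature vector) for every full window in seq."""
    half = size // 2
    for centre in range(half, len(seq) - half):
        window = seq[centre - half: centre + half + 1]
        yield centre, one_hot(window).ravel()

# Each 60-dimensional vector would be fed to the network, whose output
# scores the window as donor site, acceptor site, or neither.
```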
Combined approaches
Programs such as Maker combine extrinsic and ab initio approaches by mapping protein and EST data to the genome to validate ab initio predictions. Augustus, which may be used as part of the Maker pipeline, can also incorporate hints in the form of EST alignments or protein profiles to increase the accuracy of the gene prediction.
Comparative genomics approaches
As the entire genomes of many different species are sequenced, a promising direction in current research on gene finding is a comparative genomics approach.
This is based on the principle that the forces of natural selection cause genes and other functional elements to undergo mutation at a slower rate than the rest of the genome, since mutations in functional elements are more likely to negatively impact the organism than mutations elsewhere. Genes can thus be detected by comparing the genomes of related species to detect this evolutionary pressure for conservation. This approach was first applied to the mouse and human genomes, using programs such as SLAM, SGP, TWINSCAN/N-SCAN, and CONTRAST.
Multiple informants
TWINSCAN examined only human-mouse synteny to look for orthologous genes. Programs such as N-SCAN and CONTRAST allowed the incorporation of alignments from multiple organisms, or in the case of N-SCAN, a single alternate organism from the target. The use of multiple informants can lead to significant improvements in accuracy.
CONTRAST is composed of two elements. The first is a set of smaller classifiers that identify donor splice sites, acceptor splice sites, and start and stop codons. The second element constructs a full model using machine learning. Breaking the problem in two means that smaller, targeted data sets can be used to train the classifiers, and that each classifier can operate independently and be trained on smaller windows. The full model can then use the independent classifiers and need not waste computational time or model complexity re-classifying intron-exon boundaries. The paper in which CONTRAST is introduced proposes that their method (and those of TWINSCAN, etc.) be classified as de novo gene assembly, which uses alternate 'informant' genomes, and identifies it as distinct from ab initio methods, which use the target genome alone.
Comparative gene finding can also be used to project high quality annotations from one genome to another. Notable examples include Projector, GeneWise, GeneMapper and GeMoMa. Such techniques now play a central role in the annotation of all genomes.
Pseudogene prediction
Pseudogenes are close relatives of genes, sharing very high sequence homology, but being unable to code for the same protein product. Once dismissed as byproducts of gene sequencing, they are increasingly becoming predictive targets in their own right as their regulatory roles are uncovered. Pseudogene prediction utilises existing sequence similarity and ab initio methods, while adding additional filtering and methods of identifying pseudogene characteristics.
Sequence similarity methods can be customised for pseudogene prediction using additional filtering to find candidate pseudogenes. This could use disablement detection, which looks for nonsense or frameshift mutations that would truncate or collapse an otherwise functional coding sequence. Additionally, translating DNA into protein sequences can be more effective than straight DNA homology.
Content sensors can be filtered according to the differences in statistical properties between pseudogenes and genes, such as a reduced count of CpG islands in pseudogenes, or the differences in G-C content between pseudogenes and their neighbours. Signal sensors also can be honed to pseudogenes, looking for the absence of introns or polyadenine tails.
Metagenomic gene prediction
Metagenomics is the study of genetic material recovered from the environment, resulting in sequence information from a pool of organisms. Predicting genes is useful for comparative metagenomics.
Metagenomics tools also fall into the basic categories of using either sequence similarity approaches (MEGAN4) or ab initio techniques (GLIMMER-MG).
Glimmer-MG is an extension to GLIMMER that relies mostly on an ab initio approach for gene finding, using training sets from related organisms. The prediction strategy is augmented by classification and clustering of gene data sets prior to applying ab initio gene prediction methods. The data is clustered by species. This classification method leverages techniques from metagenomic phylogenetic classification. Examples of software for this purpose are Phymm, which uses interpolated Markov models, and PhymmBL, which integrates BLAST into the classification routines.
MEGAN4 uses a sequence similarity approach, using local alignment against databases of known sequences, but also attempts to classify using additional information on functional roles, biological pathways and enzymes. As in single organism gene prediction, sequence similarity approaches are limited by the size of the database.
FragGeneScan and MetaGeneAnnotator are popular gene prediction programs based on hidden Markov models. These predictors account for sequencing errors and partial genes, and work with short reads.
Another fast and accurate tool for gene prediction in metagenomes is MetaGeneMark. This tool is used by the DOE Joint Genome Institute to annotate IMG/M, the largest metagenome collection to date.
See also
List of gene prediction software
Phylogenetic footprinting
Protein function prediction
Protein structure prediction
Protein–protein interaction prediction
Pseudogene (database)
Sequence mining
Sequence similarity (homology)
References
External links
Augustus
FGENESH
GeMoMa - Homology-based gene prediction based on amino acid and intron position conservation as well as RNA-Seq data
geneid, SGP2
Glimmer , GlimmerHMM
GenomeThreader
ChemGenome
GeneMark
Gismo
mGene
StarORF — A multi-platform and web tool for predicting ORFs and obtaining reverse complement sequence
Maker - A portable and easily configurable genome annotation pipeline
Bioinformatics
Mathematical and theoretical biology
Markov models | Gene prediction | [
"Mathematics",
"Engineering",
"Biology"
] | 3,397 | [
"Bioinformatics",
"Applied mathematics",
"Biological engineering",
"Mathematical and theoretical biology"
] |
579,400 | https://en.wikipedia.org/wiki/Levinthal%27s%20paradox | Levinthal's paradox is a thought experiment in the field of computational protein structure prediction; protein folding seeks a stable energy configuration. An algorithmic search through all possible conformations to identify the minimum energy configuration (the native state) would take an immense duration; however in reality protein folding happens very quickly, even in the case of the most complex structures, suggesting that the transitions are somehow guided into a stable state through an uneven energy landscape.
History
In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 10^300 was made in one of his papers (often incorrectly cited as the 1968 paper). For example, a polypeptide of 100 residues will have 200 different phi and psi bond angles, two within each residue. If each of these bond angles can be in one of three stable conformations, the protein may misfold into a maximum of 3^200 different conformations (including any possible folding redundancy), not even considering the peptide linkages between each residue or the conformations of the side-chains. Therefore, if a protein were to attain its correctly folded configuration by sequentially sampling all the possible conformations, it would require a time longer than the age of the universe to arrive at its correct native conformation. This is true even if conformations are sampled at rapid (nanosecond or picosecond) rates. The "paradox" is that most small proteins fold spontaneously on a millisecond or even microsecond time scale. The solution to this paradox has been established by computational approaches to protein structure prediction.
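The arithmetic behind the paradox fits in a few lines; the numbers below are the standard illustrative ones (3^200 conformations, one conformation sampled per picosecond), not a precise model.

```python
# Back-of-envelope Levinthal estimate.
conformations = 3 ** 200            # ~2.7e95 possible backbone conformations
per_second = 1e12                   # sampling one conformation per picosecond
years = conformations / per_second / 3.15e7
print(f"{years:.1e} years")         # ~8.4e75 years; the universe is ~1.4e10
```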
Levinthal himself was aware that proteins fold spontaneously and on short timescales. He suggested that the paradox can be resolved if "protein folding is sped up and guided by the rapid formation of local interactions which then determine the further folding of the peptide; this suggests local amino acid sequences which form stable interactions and serve as nucleation points in the folding process". Indeed, the protein folding intermediates and the partially folded transition states were experimentally detected, which explains the fast protein folding. This is also described as protein folding directed within funnel-like energy landscapes. Some computational approaches to protein structure prediction have sought to identify and simulate the mechanism of protein folding.
Levinthal also suggested that the native structure might have a higher energy, if the lowest energy was not kinetically accessible. An analogy is a rock tumbling down a hillside that lodges in a gully rather than reaching the base.
Levinthal's paradox was cited on the first page of the Scientific Background to the 2024 Nobel Prize in Chemistry (awarded to David Baker, Demis Hassabis, and John M. Jumper for computational protein design and protein structure prediction) by way of demonstrating the sheer scale of the problem given the astronomical number of permutations.
Suggested explanations
According to Edward Trifonov and Igor Berezovsky, proteins fold by subunits (modules) of the size of 25–30 amino acids.
See also
Chaperone proteins that assist other proteins in folding or unfolding
Folding funnel
Anfinsen's dogma
References
External links
http://www-wales.ch.cam.ac.uk/~mark/levinthal/levinthal.html
https://www.wired.com/wired/archive/9.07/blue_pr.html
https://web.archive.org/web/20041011182039/http://www.sdsc.edu/~nair/levinthal.html
Eponymous paradoxes
Protein structure
Physical paradoxes
Thought experiments | Levinthal's paradox | [
"Chemistry"
] | 749 | [
"Protein structure",
"Structural biology"
] |
579,414 | https://en.wikipedia.org/wiki/Drug%20design | Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals including peptides and especially therapeutic antibodies are an increasingly important class of drugs and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed.
Definition
The phrase "drug design" is similar to ligand design (i.e., design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that first must be optimized before a ligand can become a safe and effictive drug. These other characteristics are often difficult to predict with rational design techniques.
Due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computation methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles.
Drug targets
A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology, or with the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of the target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets) since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross reactivity and hence highest side effect potential.
Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly more common. In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease.
Drug discovery
Phenotypic drug discovery
Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses the process of phenotypic screening on collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. With this method, the in vivo or in vitro functional activity of compounds (such as extracts or natural products) is discovered first, and target identification is performed afterwards. Phenotypic discovery uses a practical and target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention.
Rational drug discovery
Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule.
Once a suitable target has been identified, the target is normally cloned and produced and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined.
The search for small molecules that bind to the target is begun by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen may be performed of candidate drugs. Ideally, the candidate drug compounds should be "drug-like", that is, they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness, such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature.
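As a concrete illustration, a rule-of-five filter can be written in a few lines with the RDKit cheminformatics toolkit (assumed here to be installed; the at-most-one-violation convention is the usual reading of the rule, and the example molecule is aspirin).

```python
# Druglikeness screen via Lipinski's Rule of Five, using RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    """True if the molecule breaks at most one of Lipinski's four criteria."""
    mol = Chem.MolFromSmiles(smiles)
    violations = sum([
        Descriptors.MolWt(mol) > 500,       # molecular weight <= 500 Da
        Descriptors.MolLogP(mol) > 5,       # calculated logP <= 5
        Lipinski.NumHDonors(mol) > 5,       # <= 5 H-bond donors
        Lipinski.NumHAcceptors(mol) > 10,   # <= 10 H-bond acceptors
    ])
    return violations <= 1

print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```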
Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally because of the limitations in the current methods for prediction of activity, drug design is still very much reliant on serendipity and bounded rationality.
Computer-aided drug design
The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and if so how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target. These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity.
Molecular mechanics methods may also be used to provide semi-quantitative prediction of the binding affinity. Also, knowledge-based scoring function may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural nets or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target.
Ideally, the computational method will be able to predict affinity before a compound is synthesized and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures.
Computer-aided drug design may be used at any of the following stages of drug discovery:
hit identification using virtual screening (structure- or ligand-based design)
hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.)
lead optimization of other pharmaceutical properties while maintaining affinity
To overcome the limited accuracy of binding affinities predicted by current scoring functions, protein-ligand interaction and compound 3D structure information are used for analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interactions have been developed to improve enrichment and effectively mine potential candidates:
Consensus scoring
Selecting candidates by voting of multiple scoring functions
May lose the relationship between protein-ligand structural information and scoring criterion
Cluster analysis
Represent and cluster candidates according to protein-ligand 3D information
Needs meaningful representation of protein-ligand interactions.
Types
There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.
Ligand-based
Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. A model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which a correlation between calculated properties of molecules and their experimentally determined biological activity, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
Structure-based
Structure-based drug design (or direct drug design) relies on knowledge of the three dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates.
Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening.
A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity.
Binding site identification
Binding site identification is the first step in structure based design. If the structure of the target or a sufficiently similar homolog is determined in the presence of a bound ligand, then the ligand should be observable in the structure in which case location of the binding site is trivial. However, there may be unoccupied allosteric binding sites that may be of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on identification of concave surfaces on the protein that can accommodate drug sized molecules that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding.
Scoring functions
Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition. Selective high affinity binding to the target is generally desirable since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection.
One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form:

ΔGbind = ΔG0 + ΔGhb Σh-bonds f(ΔR, Δα) + ΔGionic Σionic f(ΔR, Δα) + ΔGlip |Alipo| + ΔGrot NROT

where:
ΔG0 – empirically derived offset that in part corresponds to the overall loss of translational and rotational entropy of the ligand upon binding.
ΔGhb – contribution from hydrogen bonding
ΔGionic – contribution from ionic interactions
ΔGlip – contribution from lipophilic interactions where |Alipo| is surface area of lipophilic contact between the ligand and receptor
ΔGrot – entropy penalty due to freezing a rotatable bond in the ligand upon binding
A more general thermodynamic "master" equation is as follows:

ΔGbind = ΔGdesolvation + ΔGmotion + ΔGconfiguration + ΔGinteraction

where:
desolvation – enthalpic penalty for removing the ligand from solvent
motion – entropic penalty for reducing the degrees of freedom when a ligand binds to its receptor
configuration – conformational strain energy required to put the ligand in its "active" conformation
interaction – enthalpic gain for "resolvating" the ligand with its receptor
The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The master equation is the linear combination of these components. Through the Gibbs free energy equation, ΔG = RT ln Kd, the dissociation equilibrium constant Kd is related to the components of the binding free energy.
Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy. The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally the interaction energy can be estimated using methods such as the change in non polar surface, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model or a more restricted set of ligands and receptors to produce a more accurate but less general "local" model.
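A minimal sketch of that fitting step, with invented component values; the ΔG = RT·ln(Kd) conversion follows the Gibbs relation mentioned above, and none of the numbers below come from real measurements.

```python
import numpy as np

# Invented training data: rows are complexes, columns are master-equation
# components (desolvation, motion, configuration, interaction), in kcal/mol.
X = np.array([[1.2, 3.0, 0.8, -9.5],
              [0.9, 5.0, 1.1, -7.2],
              [1.5, 2.0, 0.5, -11.0],
              [1.1, 4.0, 0.9, -8.1]])
dG_obs = np.array([-8.4, -5.3, -10.6, -6.9])    # "experimental" free energies

coef, *_ = np.linalg.lstsq(X, dG_obs, rcond=None)  # multiple linear regression

RT = 0.593                                      # kcal/mol at ~298 K
dG_pred = X @ coef
Kd = np.exp(dG_pred / RT)                       # dG = RT * ln(Kd), Kd in mol/L
print(coef, Kd)
```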
Examples
A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995.
Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic for Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues.
Additional examples include:
Many of the atypical antipsychotics
Cimetidine, the prototypical H2-receptor antagonist from which the later members of the class were developed
Selective COX-2 inhibitor NSAIDs
Enfuvirtide, a peptide HIV entry inhibitor
Nonbenzodiazepines like zolpidem and zopiclone
Raltegravir, an HIV integrase inhibitor
SSRIs (selective serotonin reuptake inhibitors), a class of antidepressants
Zanamivir, an antiviral drug
Drug screening
Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is performed by computer, enabling a large number of molecules to be screened quickly and at low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries into more manageable sizes.
Case studies
5-HT3 antagonists
Acetylcholine receptor agonists
Angiotensin receptor antagonists
Bcr-Abl tyrosine-kinase inhibitors
Cannabinoid receptor antagonists
CCR5 receptor antagonists
Cyclooxygenase 2 inhibitors
Dipeptidyl peptidase-4 inhibitors
HIV protease inhibitors
NK1 receptor antagonists
Non-nucleoside reverse transcriptase inhibitors
Nucleoside and nucleotide reverse transcriptase inhibitors
PDE5 inhibitors
Proton pump inhibitors
Renin inhibitors
Triptans
TRPV1 antagonists
c-Met inhibitors
Criticism
It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery.
See also
Bioisostere
Bioinformatics
Cheminformatics
Drug development
Drug discovery
List of pharmaceutical companies
Medicinal chemistry
Molecular design software
Molecular modification
Retrometabolic drug design
References
External links
Drug Design Org: https://www.drugdesign.org/chapters/drug-design/
Design of experiments
Drug discovery
Medicinal chemistry | Drug design | [
"Chemistry",
"Biology"
] | 3,758 | [
"Life sciences industry",
"Drug discovery",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
579,576 | https://en.wikipedia.org/wiki/Bengt%20Str%C3%B6mgren | Bengt Georg Daniel Strömgren (21 January 1908 – 4 July 1987) was a Danish astronomer and astrophysicist.
Life and career
Bengt Strömgren was born in Gothenburg. His parents were Hedvig Strömgren (née Lidforss) and Elis Strömgren, who was professor of astronomy at the University of Copenhagen and director of the University Observatory in Copenhagen. Bengt grew up in the professor's mansion surrounded by scientists, assistants, observers and guests. His father steered and promoted Bengt toward a life in science, and Bengt's first paper was published when he was only 14. He graduated from high school in 1925 and enrolled at the University of Copenhagen. Only two years later, he graduated in astronomy and atomic physics, and over the following two years he completed a doctoral degree, which was awarded the best marks in December 1929, when he was 21 years old.
He gained a great deal of useful experience from his studies in theoretical physics at Niels Bohr's institute close by, and he was in the right place at the right time. He soon resolved to apply the fresh theoretical framework of quantum physics to space and to investigate the applications of quantum mechanics in stars. Questions of nepotism were inevitably in play when he applied for an assistantship in 1925, which he did not get; only one year later it was given to him anyway, as he was the best candidate, even though his employer was also his own father.
After being appointed lecturer at the university in 1932, Strömgren was invited to the University of Chicago in 1936 by Otto Struve. Going abroad for 18 months meant a lot to the young researcher, and after he returned to Denmark, amid the rise of national socialism in Europe, he succeeded to his father's professorship in 1940. During five years of isolation under the German occupation of Denmark, he initiated the building of a new Danish observatory, the Brorfelde Observatory. After the Second World War, however, Bengt Strömgren grew tired of the lack of state funding for the project, and with a stagnant national economy he felt that he had to leave Danish research, which he did in 1951.
He went to the United States and became director of the Yerkes and McDonald Observatories, staying for six years. In 1957, he was appointed the first professor of theoretical astrophysics at the Institute for Advanced Study in Princeton, where he was given Albert Einstein's office. He stayed at Princeton with his family until 1967, when he returned to his homeland of Denmark and became the next-to-last in a series of great Danish scientists to reside in the Carlsberg Honorary Mansion, earlier occupied by Niels Bohr among others. In 1987, he died after a short illness.
Science
Bengt Strömgren made momentous contributions to astrophysics. He found that the chemical composition of stars was very much different than previously assumed. In the late 1930s, he found the relative abundance of hydrogen to be nearly 70%, and helium to be about 27%. Just before the war, he discovered the so-called Strömgren Spheres — huge interstellar shells of ionized hydrogen around stars. And in the 1950s and 1960s, he pioneered photoelectric photometry with a novel four-colour system, now called Strömgren photometric system. Apart from the Danish observatory of Brorfelde, Strömgren was active in the early organisation of the joint European observatory of ESO at La Silla in Chile.
Honors
Awards
Bruce Medal (1959)
Gold Medal of the Royal Astronomical Society (1962)
Henry Norris Russell Lectureship (1965)
Janssen Medal from the French Academy of Sciences (1967)
Memberships
Member of the American Academy of Arts and Sciences (1955)
Member of the United States National Academy of Sciences (1971)
Member of the American Philosophical Society (1973)
Named after him
Asteroid 1846 Bengt
Strömgren age
Strömgren photometry
Strömgren spheres
Strömgren integral
Miscellaneous
Asteroid 1493 Sigrid, named after his wife
References
Sources
Svend Cedergreen Bech (ed.): Dansk Biografisk Leksikon (1979–84), 3rd ed. Put on-line by Den Store Danske in 2011.
Bengt Strömgrens life among the stars, Niels Bohr Institute
Bruce Medalist Bengt Strömgren
Autobiography in a celebrative commemorative published by University of Copenhagen November 1930 (176-78) (Berlingske Tidende id. 17.9.1962).
Knude, Jens, Bengt Stromgren's Work in Photometry, in A.G.D. Philip, A.R. Upgren and K.A. Janes, eds., "Precision Photometry: Astrophysics of the Galaxy", Proceedings of the conference held 3–4 October 1990 at Union College, Schenectady, NY (Davis Press, Schenectady, NY, 1991).
Rebsdorf, Simon Olling (May 2003): Bengt Strömgren: growing up with astronomy, 1908 - 1932, Journal for the History of Astronomy (ISSN 0021-8286), Vol. 34, Part 2, No. 115, pp. 171 – 199 (2003)
Rebsdorf, Simon Olling (August 2004): The Father, the Son, and the Stars: Bengt Strömgren and the History of Twentieth Century Astronomy in Denmark and in the USA. Ph.D. dissertation, University of Aarhus.
Rebsdorf, Simon Olling (February 2007): Bengt Strömgren: Interstellar Glow, Helium Content, and Solar Life Supply, 1932–1940. Centaurus, Vol. 49, Issue 1, pages 56–79.
Gustafsson, Bengt (2009): Bengt Strömgren’s Approach to the Galaxy, in J. Andersen, J. Bland-Hawthorn & B. Nordström, eds., "The Galaxy Disk in Cosmological Context." Proceedings IAU Symposium No. 254 (Cambridge Univ. Press, 2009), pp. 3–16.
External links
Bibliography of Bengt Strömgren
Bruce Medalist Bengt Strömgren
1908 births
1987 deaths
20th-century Danish astronomers
University of Copenhagen alumni
Academic staff of the University of Copenhagen
Danish people of Swedish descent
University of Chicago faculty
Institute for Advanced Study faculty
Science teachers
Recipients of the Gold Medal of the Royal Astronomical Society
Foreign associates of the National Academy of Sciences
Scientists from Gothenburg
Presidents of the International Astronomical Union
Members of the American Philosophical Society | Bengt Strömgren | [
"Astronomy"
] | 1,352 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
579,579 | https://en.wikipedia.org/wiki/Commission%20for%20Dark%20Skies | The Commission for Dark Skies (CfDS) (formerly the Campaign for Dark Skies; the name was changed on March 29, 2015) is the United Kingdom's largest anti-light-pollution campaign group forming part of the international dark-sky movement.
It is run by the British Astronomical Association (BAA) and affiliated with the International Dark-Sky Association (IDA), and composed of a network of local officers (and other members) who try to improve lighting in their areas and advise local people.
The campaign was founded in 1989 by amateur astronomers as a sub-section of the BAA specialising in combatting skyglow. It is now open to non-members of the BAA, includes lighting engineers and environmentalists, and campaigns on the wider effects of light pollution.
In April 2023, the founder and coordinator of CfDS, Robert Mizon MBE died. Following a period of mourning and readjustment, the Commission was relaunched in November 2024 with new officers and committee members, tasked to reinvigorate the campaign to reduce light pollution and preserve and protect the UK's dark skies.
Legislation
CfDS's work with the House of Commons Science and Technology Committee on legislating against light pollution has resulted in the government including provisions in their Clean Neighbourhoods and Environment Bill.
Dark sky park, island and reserve
Members of the CfDS have been involved in the following International Dark-Sky Association designations:
Galloway Forest Park – Dark Sky Park (2009)
Sark – IDA's first international dark-sky island (Silver tier) (2011)
Exmoor – Dark Sky Reserve (2011)
Elan Valley Estate (mid-Wales) Dark Sky Park (2015)
Tomintoul and Glenlivet-Cairngorms Dark Sky Park (2018)
Publications
In 2009, the CfDS published its handbook Blinded by the Light?.
Conferences
CfDS 2006: Dark-Skies Symposium, Portsmouth, UK, September 15–16, 2006.
Exterior lighting, statutory nuisance and light pollution, De Montfort University, April, 2006.
Planning, Exterior Lighting and the Environment, De Montfort University, 20 April 2012.
Notes
References
Campaign for Dark Skies, Blinded by the Light - A Handbook on light pollution, Campaign for Dark Skies, 2009
Mizon, Bob, Light Pollution - Responses and Remedies, Springer, 2002. ()
Mizon, Bob, Light Pollution - Responses and Remedies, 2nd Edition, Springer, 2012. ()
Mizon, Bob, "20 years of fighting for the stars", Astronomy Now, September 2009. pp28–31
Mizon, Bob, 'Finding a Million-Star Hotel' Springer, 2018 ()
Philip's, in association with the BAA Campaign for Dark Skies, Dark Skies Map, Philip's, 2004. ()
Tabb, Michael, "Where are the UK's darkest skies", Astronomy Now, November 2004. pp75–6 (Article on the production of the Philip's Dark Skies Map.)
Various authors, "Focus - Light Pollution", Astronomy Now, April 2001. pp49–59
External links
The CfDS's online image library
CfDS - Facebook
CfDS - Twitter
Light Pollution and Astronomy - House of Commons Report, 2003. (pdf)
Conference Proceedings
CfDS 2006 (pdf)
Exterior lighting, statutory nuisance and light pollution, De Montfort University, April, 2006
Light Pollution: Causes and effects (pdf)
Exterior Lighting as a Statutory Nuisance (pdf)
Is Lighting Needed to reduce Crime? (pdf)
Effective use of lighting as a crime deterrent (pdf)
Organizations established in 1989
Environmental organisations based in the United Kingdom
Astronomy in the United Kingdom
Amateur astronomy organizations | Commission for Dark Skies | [
"Astronomy"
] | 752 | [
"Amateur astronomy organizations",
"Astronomy organizations"
] |
579,645 | https://en.wikipedia.org/wiki/Strontium%20titanate | Strontium titanate is an oxide of strontium and titanium with the chemical formula SrTiO3. At room temperature, it is a centrosymmetric paraelectric material with a perovskite structure. At low temperatures it approaches a ferroelectric phase transition with a very large dielectric constant ~104 but remains paraelectric down to the lowest temperatures measured as a result of quantum fluctuations, making it a quantum paraelectric. It was long thought to be a wholly artificial material, until 1982 when its natural counterpart—discovered in Siberia and named tausonite—was recognised by the IMA. Tausonite remains an extremely rare mineral in nature, occurring as very tiny crystals. Its most important application has been in its synthesized form wherein it is occasionally encountered as a diamond simulant, in precision optics, in varistors, and in advanced ceramics.
The name tausonite was given in honour of Lev Vladimirovich Tauson (1917–1989), a Russian geochemist. Disused trade names for the synthetic product include strontium mesotitanate, Diagem, and Marvelite. This product is currently being marketed for its use in jewelry under the name Fabulite. Other than its type locality of the Murun Massif in the Sakha Republic, natural tausonite is also found in Cerro Sarambi, Concepción department, Paraguay; and along the Kotaki River of Honshū, Japan.
Properties
SrTiO3 has an indirect band gap of 3.25 eV and a direct gap of 3.75 eV in the typical range of semiconductors.
Synthetic strontium titanate has a very large dielectric constant (300) at room temperature and low electric field. It has a specific resistivity of over 10^9 Ω·cm for very pure crystals. It is also used in high-voltage capacitors.
Introducing mobile charge carriers by doping leads to Fermi-liquid metallic behavior already at very low charge carrier densities.
At high electron densities strontium titanate becomes superconducting below 0.35 K and was the first insulator and oxide discovered to be superconductive.
Strontium titanate is both much denser (specific gravity 4.88 for natural, 5.13 for synthetic) and much softer (Mohs hardness 5.5 for synthetic, 6–6.5 for natural) than diamond. Its crystal system is cubic and its refractive index (2.410—as measured by sodium light, 589.3 nm) is nearly identical to that of diamond (at 2.417), but the dispersion (the optical property responsible for the "fire" of the cut gemstones) of strontium titanate is 4.3x that of diamond, at 0.190 (B–G interval). This results in a shocking display of fire compared to diamond and diamond simulants such as YAG, GAG, GGG, Cubic Zirconia, and Moissanite.
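As a quick check of the dispersion comparison above (a minimal sketch; the 0.044 B–G figure for diamond is a standard reference value assumed here, not stated in this article):

```python
dispersion_srtio3 = 0.190    # B-G interval, from the text above
dispersion_diamond = 0.044   # assumed standard reference value for diamond
print(round(dispersion_srtio3 / dispersion_diamond, 1))   # ~4.3
```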
Synthetics are usually transparent and colourless, but can be doped with certain rare earth or transition metals to give reds, yellows, browns, and blues. Natural tausonite is usually translucent to opaque, in shades of reddish brown, dark red, or grey. Both have an adamantine (diamond-like) lustre. Strontium titanate is considered extremely brittle with a conchoidal fracture; natural material is cubic or octahedral in habit and streaks brown. Through a hand-held (direct vision) spectroscope, doped synthetics will exhibit a rich absorption spectrum typical of doped stones. Synthetic material has a melting point of ca. 2080 °C (3776 °F) and is readily attacked by hydrofluoric acid. Under extremely low oxygen partial pressure, strontium titanate decomposes via incongruent sublimation of strontium well below the melting temperature.
At temperatures lower than 105 K, its cubic structure transforms to tetragonal. Its monocrystals can be used as optical windows and high-quality sputter deposition targets.
SrTiO3 is an excellent substrate for epitaxial growth of high-temperature superconductors and many oxide-based thin films. It is particularly well known as the substrate for the growth of the lanthanum aluminate-strontium titanate interface. Doping strontium titanate with niobium makes it electrically conductive, being one of the only conductive commercially available single crystal substrates for the growth of perovskite oxides. Its bulk lattice parameter of 3.905Å makes it suitable as the substrate for the growth of many other oxides, including the rare-earth manganites, titanates, lanthanum aluminate (LaAlO3), strontium ruthenate (SrRuO3) and many others. Oxygen vacancies are fairly common in SrTiO3 crystals and thin films. Oxygen vacancies induce free electrons in the conduction band of the material, making it more conductive and opaque. These vacancies can be caused by exposure to reducing conditions, such as high vacuum at elevated temperatures.
High-quality, epitaxial SrTiO3 layers can also be grown on silicon without forming silicon dioxide, thereby making SrTiO3 an alternative gate dielectric material. This also enables the integration of other thin film perovskite oxides onto silicon.
SrTiO3 can change its properties when it is exposed to light. These changes depend on the temperature and the defects in the material. SrTiO3 has been shown to possess persistent photoconductivity where exposing the crystal to light will increase its electrical conductivity by over 2 orders of magnitude. After the light is turned off, the enhanced conductivity persists for several days, with negligible decay. At low temperatures, the main effects of light are electronic, meaning that they involve the creation, movement, and recombination of electrons and holes (positive charges) in the material. These effects include photoconductivity, photoluminescence, photovoltage, and photochromism. They are influenced by the defect chemistry of SrTiO3, which determines the energy levels, band gap, carrier concentration, and mobility of the material. At high temperatures (>200 °C), the main effects of light are photoionic, meaning that they involve the migration of oxygen vacancies (negative ions) in the material. These vacancies are the main ionic defects in SrTiO3, and they can alter the electronic structure, defect chemistry, and surface properties of the material. These effects include photoinduced phase transitions, photoinduced oxygen exchange, and photoinduced surface reconstruction. They are influenced by the oxygen pressure, the crystal structure, and the doping level of SrTiO3.
Due to its significant ionic and electronic conduction, SrTiO3 has potential for use as a mixed conductor.
Synthesis
Synthetic strontium titanate was one of several titanates patented during the late 1940s and early 1950s; other titanates included barium titanate and calcium titanate. Research was conducted primarily at the National Lead Company (later renamed NL Industries) in the United States, by Leon Merker and Langtry E. Lynd. Merker and Lynd first patented the growth process on February 10, 1953; a number of refinements were subsequently patented over the next four years, such as modifications to the feed powder and additions of colouring dopants.
A modification to the basic Verneuil process (also known as flame-fusion) is the favoured method of growth. An inverted oxy-hydrogen blowpipe is used, with feed powder mixed with oxygen carefully fed through the blowpipe in the typical fashion, but with the addition of a third pipe to deliver oxygen—creating a tricone burner. The extra oxygen is required for successful formation of strontium titanate, which would otherwise fail to oxidize completely due to the titanium component. The ratio is ca. 1.5 volumes of hydrogen for each volume of oxygen. The highly purified feed powder is derived by first producing titanyl double oxalate salt (SrTiO(C2O4)2) by reacting strontium chloride (SrCl2) and oxalic acid ((COOH)2) with titanium tetrachloride (TiCl4). The salt is washed to eliminate chloride, heated to 1000 °C in order to produce a free-flowing granular powder of the required composition, and is then ground and sieved to ensure all particles are between 0.2 and 0.5 micrometres in size.
The feed powder falls through the oxyhydrogen flame, melts, and lands on a rotating and slowly descending pedestal below. The height of the pedestal is constantly adjusted to keep its top at the optimal position below the flame, and over a number of hours the molten powder cools and crystallises to form a single pedunculated pear or boule crystal. This boule is usually no larger than 2.5 centimetres in diameter and 10 centimetres long; it is an opaque black to begin with, requiring further annealing in an oxidizing atmosphere in order to make the crystal colourless and to relieve strain. This is done at over 1000 °C for 12 hours.
Thin films of SrTiO3 can be grown epitaxially by various methods, including pulsed laser deposition, molecular beam epitaxy, RF sputtering and atomic layer deposition. As in most thin films, different growth methods can result in significantly different defect and impurity densities and crystalline quality, resulting in a large variation of the electronic and optical properties.
Use as a diamond simulant
Its cubic structure and high dispersion once made synthetic strontium titanate a prime candidate for simulating diamond. For a time, large quantities of strontium titanate were manufactured for this sole purpose. Strontium titanate was in competition with synthetic rutile ("titania") at the time, and had the advantage of lacking the unfortunate yellow tinge and strong birefringence inherent to the latter material. While it was softer, it was significantly closer to diamond in likeness. Eventually, however, both would fall into disuse, being eclipsed by the creation of "better" simulants: first by yttrium aluminium garnet (YAG) and followed shortly after by gadolinium gallium garnet (GGG); and finally by the (to date) ultimate simulant in terms of diamond-likeness and cost-effectiveness, cubic zirconia.
Despite being outmoded, strontium titanate is still manufactured and periodically encountered in jewellery. It is one of the most costly of diamond simulants, and due to its rarity collectors may pay a premium for large i.e. >2 carat (400 mg) specimens. As a diamond simulant, strontium titanate is most deceptive when mingled with melée i.e. <0.20 carat (40 mg) stones and when it is used as the base material for a composite or doublet stone (with, e.g., synthetic corundum as the crown or top of the stone). Under the microscope, gemmologists distinguish strontium titanate from diamond by the former's softness—manifested by surface abrasions—and excess dispersion (to the trained eye), and occasional gas bubbles which are remnants of synthesis. Doublets can be detected by a join line at the girdle ("waist" of the stone) and flattened air bubbles or glue visible within the stone at the point of bonding.
Use in radioisotope thermoelectric generators
Due to its high melting point and insolubility in water, strontium titanate has been used as a strontium-90-containing material in radioisotope thermoelectric generators (RTGs), such as the US Sentinel and Soviet Beta-M series. As strontium-90 has a high fission product yield and is easily extracted from spent nuclear fuel, Sr-90 based RTGs can in principle be produced more cheaply than those based on plutonium-238 or other radionuclides which have to be produced in dedicated facilities. However, due to the lower power density (~0.45 W of thermal power per gram of strontium-90 titanate) and shorter half-life, space-based applications, which put a particular premium on low weight, high reliability and longevity, prefer plutonium-238. Terrestrial off-grid applications of RTGs have meanwhile been largely phased out due to concern over orphan sources and the decreasing price and increasing availability of solar panels, small wind turbines, chemical battery storage and other off-grid power solutions.
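As an order-of-magnitude check of the ~0.45 W/g figure, the specific thermal power can be estimated from the decay constant, the number of atoms per gram and the mean energy deposited per decay (a rough sketch; the half-life, mean beta energies and the assumption that all strontium present is Sr-90 are approximations supplied here, not values from this article):

```python
import math

# Assumed approximate inputs (not from this article):
HALF_LIFE_S = 28.8 * 365.25 * 24 * 3600   # Sr-90 half-life, ~28.8 years
MEAN_BETA_MEV = 0.196 + 0.933             # mean beta energies, Sr-90 + Y-90 daughter in equilibrium
MEV_TO_J, AVOGADRO = 1.602e-13, 6.022e23

decay_const = math.log(2) / HALF_LIFE_S               # decays per atom per second
atoms_per_gram = AVOGADRO / 90.0                      # atoms in 1 g of Sr-90
watts_per_g_sr90 = decay_const * atoms_per_gram * MEAN_BETA_MEV * MEV_TO_J
sr_fraction = 87.6 / (87.6 + 47.9 + 3 * 16.0)         # Sr mass fraction of SrTiO3, ~0.48

print(round(watts_per_g_sr90, 2))                 # ~0.92 W per gram of Sr-90
print(round(watts_per_g_sr90 * sr_fraction, 2))   # ~0.44 W/g, assuming all Sr is Sr-90
```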
Use in solid oxide fuel cells
Strontium titanate's mixed conductivity has attracted attention for use in solid oxide fuel cells (SOFCs). It demonstrates both electronic and ionic conductivity which is useful for SOFC electrodes because there is an exchange of gas and oxygen ions in the material and electrons on both sides of the cell.
H2 + O²⁻ → H2O + 2e⁻ (anode)
½ O2 + 2e⁻ → O²⁻ (cathode)
Strontium titanate is doped with different materials for use on different sides of a fuel cell. On the fuel side (anode), where the first reaction occurs, it is often doped with lanthanum to form lanthanum-doped strontium titanate (LST). In this case, the A-site, or position in the unit cell where strontium usually sits, is sometimes filled by lanthanum instead; this causes the material to exhibit n-type semiconductor properties, including electronic conductivity. It also shows oxygen ion conduction due to the perovskite structure's tolerance for oxygen vacancies. This material has a thermal coefficient of expansion similar to that of the common electrolyte yttria-stabilized zirconia (YSZ), chemical stability during the reactions which occur at fuel cell electrodes, and electronic conductivity of up to 360 S/cm under SOFC operating conditions. Another key advantage of LST is that it shows resistance to sulfur poisoning, which is an issue with the currently used nickel–ceramic (cermet) anodes.
Another related compound is strontium titanium ferrite (STF), which is used as a cathode (oxygen-side) material in SOFCs. This material also shows mixed ionic and electronic conductivity, which is important because it means the reduction reaction occurring at the cathode can take place over a wider area. Substituting cobalt, in addition to iron, on the B-site (replacing titanium) gives STFC, or cobalt-substituted STF, which shows remarkable stability as a cathode material as well as lower polarization resistance than other common cathode materials such as lanthanum strontium cobalt ferrite. These cathodes also have the advantage of not containing rare earth metals, which makes them cheaper than many of the alternatives.
See also
Calcium copper titanate
References
External links
An electron micrograph of strontium titanate, as artwork entitled "Strontium" at the DeYoung Museum in San Francisco
Titanates
Strontium compounds
Gemstones
Ceramic materials
Transition metal oxides
Diamond simulants
Perovskites | Strontium titanate | [
"Physics",
"Engineering"
] | 3,216 | [
"Materials",
"Ceramic materials",
"Gemstones",
"Ceramic engineering",
"Matter"
] |
579,673 | https://en.wikipedia.org/wiki/Jean-Yves%20Girard | Jean-Yves Girard (born 1947) is a French logician working in proof theory. He is a research director (emeritus) at the mathematical institute of the University of Aix-Marseille, at Luminy.
Biography
Jean-Yves Girard is an alumnus of the École normale supérieure de Saint-Cloud.
He made a name for himself in the 1970s with his proof of strong normalization in a system of second-order logic called System F. This result gave a new proof of Takeuti's conjecture, which was proven a few years earlier by William W. Tait, Motō Takahashi and Dag Prawitz. For this purpose, he introduced the notion of "reducibility candidate" ("candidat de réducibilité"). He is also credited with the discovery of Girard's paradox, linear logic, the geometry of interaction, ludics, and (satirically) the mustard watch.
He obtained the CNRS Silver Medal in 1983 and is a member of the French Academy of Sciences.
Bibliography
Jean-Yves Girard (2011). The Blind Spot: Lectures on Logic
See also
Affine logic
Linear logic
References
External links
Journées Jean-Yves Girard web site of 2007 conference in honour of Girard's 60th birthday
Living people
Proof theorists
French mathematicians
French logicians
ENS Fontenay-Saint-Cloud-Lyon alumni
Members of the French Academy of Sciences
1947 births
French National Centre for Scientific Research scientists
20th-century French philosophers
21st-century French philosophers
French male non-fiction writers | Jean-Yves Girard | [
"Mathematics"
] | 318 | [
"Proof theorists",
"Proof theory"
] |
579,675 | https://en.wikipedia.org/wiki/Linear%20logic | Linear logic is a substructural logic proposed by French logician Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter. Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory), as well as linguistics, particularly because of its emphasis on resource-boundedness, duality, and interaction.
Linear logic lends itself to many different presentations, explanations, and intuitions.
Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of (the structural rules) contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will. In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras.
Connectives, duality, and polarity
Syntax
The language of classical linear logic (CLL) is defined inductively by the BNF notation
A ::= p ∣ p⊥ ∣ A ⊗ A ∣ A ⅋ A ∣ A ⊕ A ∣ A & A ∣ 1 ∣ ⊥ ∣ 0 ∣ ⊤ ∣ !A ∣ ?A
Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials.
We can further employ the following terminology:
Binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋ and ⊤ is the unit for &.
Every proposition A in CLL has a dual A⊥, defined as follows: (p)⊥ = p⊥ and (p⊥)⊥ = p for atoms; (A ⊗ B)⊥ = A⊥ ⅋ B⊥ and (A ⅋ B)⊥ = A⊥ ⊗ B⊥; (A ⊕ B)⊥ = A⊥ & B⊥ and (A & B)⊥ = A⊥ ⊕ B⊥; 1⊥ = ⊥, ⊥⊥ = 1, 0⊥ = ⊤, ⊤⊥ = 0; and (!A)⊥ = ?(A⊥), (?A)⊥ = !(A⊥).
Observe that (·)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A.
These De Morgan dualities suggest another way of classifying the connectives of linear logic, termed their polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative.
Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape.
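To make the duality concrete, the following is a minimal illustrative sketch (Python, with an invented formula representation and function names; it is not part of any standard library and only mirrors the De Morgan dualities and the lollipop definition given above):

```python
# A minimal, invented representation of CLL formulas:
#   ("atom", "p") / ("natom", "p")   atoms p and their duals p⊥
#   ("*", A, B) for ⊗   ("par", A, B) for ⅋   ("+", A, B) for ⊕   ("&", A, B) for &
#   ("!", A) for !A     ("?", A) for ?A       constants: "1", "bot", "0", "top"
DUAL_CONST = {"1": "bot", "bot": "1", "0": "top", "top": "0"}
DUAL_OP = {"*": "par", "par": "*", "+": "&", "&": "+", "!": "?", "?": "!"}

def dual(f):
    """Linear negation, pushed inward using the De Morgan dualities above."""
    if isinstance(f, str):                      # a constant
        return DUAL_CONST[f]
    tag = f[0]
    if tag == "atom":
        return ("natom", f[1])
    if tag == "natom":
        return ("atom", f[1])
    if tag in ("!", "?"):                       # exponentials
        return (DUAL_OP[tag], dual(f[1]))
    return (DUAL_OP[tag], dual(f[1]), dual(f[2]))  # binary connectives

def lollipop(a, b):
    """Linear implication A ⊸ B, defined as A⊥ ⅋ B."""
    return ("par", dual(a), b)

A, B = ("atom", "A"), ("atom", "B")
assert dual(dual(("*", A, B))) == ("*", A, B)   # linear negation is an involution
print(lollipop(A, B))                           # ('par', ('natom', 'A'), ('atom', 'B'))
```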
Sequent calculus presentation
One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A1, ..., An, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow here that more economical presentation. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised.
We now give inference rules describing how to build proofs of sequents.
First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange: from ⊢ Γ one may infer ⊢ Γ′ for any permutation Γ′ of Γ.
Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present.
Next we add initial sequents and cuts: the initial sequents are the axioms ⊢ A, A⊥, and the cut rule allows ⊢ Γ, Δ to be inferred from the two premises ⊢ Γ, A and ⊢ A⊥, Δ.
The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof.
Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.
Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification).
In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗).
So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.
Multiplicatives
The rules for multiplicative conjunction (⊗) and disjunction (⅋): ⊢ Γ, Δ, A ⊗ B is inferred from the two premises ⊢ Γ, A and ⊢ Δ, B, while ⊢ Γ, A ⅋ B is inferred from the single premise ⊢ Γ, A, B.
And for their units: ⊢ 1 is an axiom, and ⊢ Γ, ⊥ is inferred from ⊢ Γ.
Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK).
Additives
The rules for additive conjunction (&) and disjunction (⊕): ⊢ Γ, A & B is inferred from the two premises ⊢ Γ, A and ⊢ Γ, B, while ⊢ Γ, A ⊕ B is inferred from either ⊢ Γ, A or ⊢ Γ, B alone.
And for their units: ⊢ Γ, ⊤ is an axiom for any context Γ, and there is no rule for 0.
Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation.
But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.
Exponentials
The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions: ⊢ Γ, ?A is inferred from ⊢ Γ (weakening), and ⊢ Γ, ?A is inferred from ⊢ Γ, ?A, ?A (contraction).
We also use the following logical rules, in which ?Γ stands for a list of propositions each prefixed with ?: ⊢ Γ, ?A is inferred from ⊢ Γ, A (dereliction), and ⊢ ?Γ, !A is inferred from ⊢ ?Γ, A (promotion).
One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?.
This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).
Remarkable formulas
In addition to the De Morgan dualities described above, some important equivalences in linear logic include:
Distributivity
By definition of A ⊸ B as A⊥ ⅋ B, the last two distributivity laws also give:
(Here A ≡ B abbreviates (A ⊸ B) & (B ⊸ A).)
Exponential isomorphism
Linear distributions
A map that is not an isomorphism yet plays a crucial role in linear logic is the linear distribution A ⊗ (B ⅋ C) ⊸ (A ⊗ B) ⅋ C.
Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated under the name "weak distribution"; in subsequent work it was renamed "linear distribution" to reflect the fundamental connection to linear logic.
Other implications
The following distributivity formulas are not in general an equivalence, only an implication:
Encoding classical/intuitionistic logic in linear logic
Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded in forms such as !A ⊸ ?B (or a variety of alternative possible translations). The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic.
Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic in a way that guarantees that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.
The resource interpretation
Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, so providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classic example of the vending machine can be used to illustrate this idea.
Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from $1 and $1 ⇒ candy one can conclude $1 ∧ candy. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course,
we can avoid this problem by using more sophisticated encodings, although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.
Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)
Multiplicative conjunction (⊗) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.
Additive conjunction (&) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink). We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from $1 ⊸ (candy & chips & drink), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1. The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.).
Additive disjunction (⊕) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0). So unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this.
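As a purely illustrative sketch of this resource reading (Python, with invented names; it models only the consume-and-produce intuition behind A ⊸ B, not linear logic proof search):

```python
from collections import Counter

def spend(resources, consumed, produced):
    """Apply a rule like $1 ⊸ candy: remove 'consumed' from the multiset, add 'produced'."""
    pool, need = Counter(resources), Counter(consumed)
    if any(pool[r] < n for r, n in need.items()):
        raise ValueError("not enough resources")
    pool -= need              # the consumed resources are gone for good
    pool += Counter(produced)
    return pool

wallet = ["dollar"]
after = spend(wallet, consumed=["dollar"], produced=["candy"])
print(after)                  # Counter({'candy': 1})
assert after["dollar"] == 0   # unlike classical implication, the dollar cannot be kept
```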
Other proof systems
Proof nets
Introduced by Jean-Yves Girard, proof nets were created to avoid "bureaucracy", that is, all the things that make two derivations different from a logical point of view, but not from a "moral" point of view.
For instance, these two proofs are "morally" identical:
The goal of proof nets is to make them identical by creating a graphical representation of them.
Semantics
Algebraic semantics
Decidability/complexity of entailment
The entailment relation in full CLL is undecidable. When considering fragments of CLL, the decision problem has varying complexity:
Multiplicative linear logic (MLL): only the multiplicative connectives. MLL entailment is NP-complete, even restricting to Horn clauses in the purely implicative fragment, or to atom-free formulas.
Multiplicative-additive linear logic (MALL): only multiplicatives and additives (i.e., exponential-free). MALL entailment is PSPACE-complete.
Multiplicative-exponential linear logic (MELL): only multiplicatives and exponentials. By reduction from the reachability problem for Petri nets, MELL entailment must be at least EXPSPACE-hard, although decidability itself has had the status of a longstanding open problem. In 2015, a proof of decidability was published in the journal Theoretical Computer Science, but was later shown to be erroneous.
Affine linear logic (that is, linear logic with weakening; an extension rather than a fragment) was shown to be decidable in 1995.
Variants
Many variations of linear logic arise by further tinkering with the structural rules:
Affine logic, which forbids contraction but allows global weakening (a decidable extension).
Strict logic or relevant logic, which forbids weakening but allows global contraction.
Non-commutative logic or ordered logic, which removes the rule of exchange, in addition to barring weakening and contraction. In ordered logic, linear implication divides further into left-implication and right-implication.
Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, like in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent.
There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic).
See also
Chu spaces
Computability logic
Game semantics
Geometry of interaction
Intuitionistic logic
Linear logic programming
Linear type system, a substructural type system
Ludics
Proof nets
Uniqueness type
References
Further reading
Girard, Jean-Yves. Linear logic, Theoretical Computer Science, Vol 50, no 1, pp. 1–102, 1987.
Girard, Jean-Yves, Lafont, Yves, and Taylor, Paul. Proofs and Types. Cambridge Press, 1989.
Hoare, C. A. R., 1985. Communicating Sequential Processes. Prentice-Hall International.
Lafont, Yves, 1993. Introduction to Linear Logic. Lecture notes from TEMPUS Summer School on Algebraic and Categorical Methods in Computer Science, Brno, Czech Republic.
Troelstra, A.S. Lectures on Linear Logic. CSLI (Center for the Study of Language and Information) Lecture Notes No. 29. Stanford, 1992.
A. S. Troelstra, H. Schwichtenberg (1996). Basic Proof Theory. In series Cambridge Tracts in Theoretical Computer Science, Cambridge University Press.
Di Cosmo, Roberto, and Danos, Vincent. The linear logic primer.
Introduction to Linear Logic (Postscript) by Patrick Lincoln
Introduction to Linear Logic by Torben Brauner
A taste of linear logic by Philip Wadler
Linear Logic by Roberto Di Cosmo and Dale Miller. The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.).
Overview of linear logic programming by Dale Miller. In Linear Logic in Computer Science, edited by Ehrhard, Girard, Ruet, and Scott. Cambridge University Press. London Mathematical Society Lecture Notes, Volume 316, 2004.
Linear Logic Wiki
External links
A Linear Logic Prover (llprover) , available for use online, from: Naoyuki Tamura / Dept of CS / Kobe University / Japan
Click And Collect interactive linear logic prover, available online
Non-classical logic
Substructural logic
Logic | Linear logic | [
"Mathematics"
] | 3,392 | [
"Substructural logic",
"Proof theory"
] |
579,687 | https://en.wikipedia.org/wiki/Graflex | Graflex was a manufacturer that gave its brand name to several camera models.
The company was founded as the Folmer and Schwing Manufacturing Company in New York City in 1887 by William F. Folmer and William E. Schwing as a metal working factory, manufacturing gas light fixtures, chandeliers, bicycles and eventually, cameras.
In 1909, it was acquired by George Eastman, and the company was moved to 12 Caledonia Avenue (later renamed Clarissa Street) in Rochester, New York in 1928. It operated as the Folmer & Schwing Division of the Eastman Kodak Company.
In 1926, Kodak was forced to divest itself of the division, which was spun off forming a new company, the Folmer Graflex Corporation, which changed its name to Graflex Inc. in 1946. In 1956, it became a Division of the General Instrument Precision Company, and moved its offices to Pittsford, New York outside Rochester. In 1968, the company was sold to the Singer Corporation.
Graflex was known for the quintessential press camera, the Speed Graphic which was manufactured for over 60 years, and was used by most of the photojournalists in the first half of the 20th century.
History
William F. Folmer, an inventor, co-owned the Folmer and Schwing Manufacturing Company, founded in New York City as a gas lamp company. As the gas lamp market declined, the company expanded into other areas including bicycles and photographic equipment, leading to the release of the first Graflex camera in 1899. As the company's success grew, it chose to focus on photography and dropped its other manufacturing lines, and in 1905 was acquired by George Eastman, in 1907 becoming the Folmer Graflex Division of Eastman Kodak. After a succession of name changes, it finally became simply "Graflex, Inc." in 1945. Eastman Kodak made all of the Graflex cameras in its professional equipment manufacturing plant on Clarissa Street in Rochester, NY. In 1926, as a result of violations of the Sherman Anti-Trust Act (Comp. St. § 8820 et seq.), Kodak was forced to divest itself of its professional equipment division, which became Graflex Inc. This company existed under independent ownership until 1958, when the company was bought by General Precision Equipment, which operated it as an independent division until 1968, when it was sold to the Singer Corporation, which also operated it as a division until 1973, when it was finally wrapped up and its tooling sold to the Toyo Corporation.
From 1912 to 1973 Graflex produced large format and medium format press cameras in a range of film formats. They also produced rangefinder, SLR and TLR cameras in a variety of formats, from 35mm upward.
The Rochester Folmer plant also manufactured the Century Studio Camera, which was marketed under both the Kodak and Graflex nameplates. However, because Graflex printed separate catalogs for its studio and portable offerings, many erroneously believe the Century Studios to have been manufactured elsewhere.
Graflex Reflex cameras
The first of the Graflex-branded cameras, released in 1898, was the Graflex camera, also known as the Graflex Reflex, or Graflex single lens reflex (SLR). This camera used the same swinging-mirror, through-the-lens viewing mechanism as modern single lens reflex cameras, introduced many decades later, and quickly became popular for sports and press photography in the early 20th century due largely to its use of a focal plane shutter. To produce shutter speeds fast enough to appear to freeze rapid motion, early Graflex cameras employed a cloth shutter with a narrow slit that quickly moved across the film plane, exposing only one small strip at any given moment in its travel. To set the shutter speed, the photographer wound the shutter spring to one of a series of calculated tensions using a key, and selected the slit width with another control. A table on the side of the box gave the shutter speed for each combination. The Graflex Reflex was also popular among early 20th Century fine art photographers, leading several lens manufacturers to design special soft-focus lenses, including the famous Wollensak's Verito, to support the camera's creative potential.
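As a rough illustration of the travelling-slit principle described above, the effective exposure at any point on the film is approximately the slit width divided by the curtain speed (a sketch with invented, purely illustrative values; the actual Graflex tension/slit tables differ):

```python
def effective_exposure_s(slit_width_mm, curtain_speed_mm_per_s):
    """Each point on the film is exposed for roughly slit width / curtain speed."""
    return slit_width_mm / curtain_speed_mm_per_s

# Illustrative values only: a 2 mm slit travelling past the film at 2000 mm/s
t = effective_exposure_s(2.0, 2000.0)
print(f"effective exposure of roughly 1/{round(1 / t)} s")   # roughly 1/1000 s
```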
Speed Graphic and Crown Graphic press cameras
Graflex Speed Graphic folding cameras, produced from 1912 to 1973, also employed a focal plane shutter, but omitted the SLR swinging mirror and through-the-lens viewing, replacing it by an external viewfinder, while retaining a view camera's traditional ground glass for static subjects. This allowed the camera to be considerably lighter, and fold into a rugged boxy shape. These cameras could also be used with "between-the-lens" shutters mounted to the front lens board as more typically seen on large format cameras. The Speed Graphic became even more popular than the Graflex Reflex as a press and sports camera, so much so that this type of classic press camera features in the masthead of the New York Daily News. The top-to-bottom motion of the focal plane shutter exposed the upper portion of the film first (i.e., the bottom of the inverted image as seen at the focal plane), so many photographs of automobile racing taken with Speed Graphics depicted the wheels of cars in an oval shape leaning forward. This feature was so ubiquitous in racing photography that it came to be a conventional graphical indication for speed, influencing many cartoonists who drew wheels in this same style to indicate fast motion.
Speed Graphics have also been used with success by many fine art photographers, as they work quite well with special un-shuttered lenses that were manufactured originally for the Graflex Reflex. Speed Graphics are still widely used by modern fine art photographers because of their unique image creation capabilities and simple, easily serviced mechanical design.
The Crown Graphic models of this same period were similar in overall design to the Speed Graphics, but omitted their focal plane shutter, allowing Crown Graphic models to be about one inch (2.5 cm) smaller and 1 pound (0.5 kg) lighter. Furthermore, their lack of a focal plane shutter allowed lenses to be mounted closer to the film plane, enabling the use of wider angle lenses on these models.
Graflex cameras
Sources:
Press cameras
1912–1973 Speed Graphic Models
1912–1927 Top Handle Speed Graphic
1928–1939 Pre-Anniversary Speed Graphic
1939–1946 Miniature Speed Graphic
1940–1946 Anniversary Speed Graphic
1947–1970 Pacemaker Speed Graphic
1947–1973 Pacemaker Crown Graphic
1949–1970 Century Graphic
1958–1973 Super Graphic
1961–1970 Super Speed Graphic
Other large format and SLR cameras
1907–1923 Press Graflex
1909–1941 Auto Graflex
1923–1952 R.B.Graflex Series B (R.B. for Rotating Back)
1938–1942 Crown View
1941–1949 Graphic View
1949–1967 Graphic View II
1912–1940 5x7 Home Portrait Graflex
1923–1932 5x7 Series B Graflex
5x7, 3x5 Compact Graflex
5x7 Stereo Graflex
1928–1947 3x4 and 4x5 Series D
1941–1963 Super D Graflex
3x4 Series C Graflex with Cooke 2.5 Lens
3x4 and 4x5 RB Auto Graflex
4x5 Naturalist Graflex
(Graflex Century Studio portrait Cameras)
Other 120/220 and 70mm film cameras
1933–1941 National Graflex Series I, Series II
1952–1956 Graflex 22
1965–1973 Graflex XL
1953–1957 Combat Graphic
1971–1976 Graflex Norita (a.k.a. Norita 66)
35mm rangefinder and stereo
1949–1953 Graflex Ciro 35
1955–1962 Graflex Stereo Graphic
1955–1957 Graflex Graphic 35
1957–1961 Graflex Century 35
1959–1963 Graphic 35 Electric (a.k.a. Iloca Electric)
Aerial cameras
1941–1945 Folmer Graflex K-20 Aircraft Camera (a.k.a. Fairchild K-20)
Folmer Graflex K-21 Aircraft Camera
Folmer Graflex K-25 Aircraft Camera
Military cameras
Source:
1940–1943 Graflex C-3
1942–1944 Graflex PH-47-F
1942–1944 Graflex PH-47-E
1947–1949 Graflex PH-47-H
1947–1950 Graflex C-6
1949–1952 Graflex PH-47-J
1953–1957 Graflex KE-4
1953–1955 Graflex KE-12
1965–1973 Graflex XLRF KS-98B
The company name changed several times over the years, as it was absorbed and released by the Kodak empire—finally becoming a division of the Singer Corporation. It dissolved in 1973. The Graflex plant in suburban Pittsford, New York still stands at 3750 Monroe Avenue, and was the corporate headquarters of Veramark Technologies from 1997 to 2010.
Popular culture and today's usage
Because the Speed Graphic had its historical origins as the quintessential press camera, before 35mm and digital photography took over press work, some still consider the Graflex obsolete. However, both the Speed Graphic and the Graflex SLR have focal plane shutters that allow the use of large un-shuttered barrel lenses. These cameras are now being used by fine art photographers to make images that excel in depth of field control and image detail. As an example, a Kodak Aero Ektar 178 mm f/2.5 lens can be fitted to Speed Graphic 4x5 cameras and used to take soft/sharp photographs with complete control of the depth of focus.
The lightsaber prop used in the 1977 release of Star Wars was a modified 1940s Graflex 3-cell flashgun, which was designed to hold flash bulbs for vintage "Speed Graphic" cameras. The model of the flashgun used in the movie had the patent #2310165 stamped onto the bottom.
A stylized logo of a Speed Graphic camera appears on the flag of the New York Daily News
Graflex, Inc., a company in Jupiter, Fla., manufactures precision optical, electronic, and mechanical devices, mostly for the military.
See also
Fairchild K-20 (a World War II-era aerial camera made by Folmer Graflex Corp., which became Graflex Inc. in 1945)
Press camera
References
Sources
Kingslake, Rudolf, The Rochester Camera and Lens Companies (Rochester NY, Photographic Historical Society, 1974) OCLC 3335854
External links
Homepage of Graflex.Org: "Dedicated to promoting the use and preservation of Graflex Speed Graphics and other classic and large-format cameras."
The Graflex Speed Graphic FAQ on Graflex.org
Graflex.org: Kingslake historical essay
Information on the Graflex Press Camera (at a website run by a collector named Jo Lommen)
Hendersonville Camera Club page on history of photography.
Graflex Camera Catalog Info Historic Camera
Graflex Collection website dedicated to Graflex cameras
Cameras
Kodak
Photography equipment manufacturers of the United States
Manufacturing companies based in Rochester, New York
History of Rochester, New York | Graflex | [
"Technology"
] | 2,304 | [
"Recording devices",
"Cameras"
] |
579,730 | https://en.wikipedia.org/wiki/Data%20center | A data center is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a medium town. Estimated global data center electricity consumption in 2022 was 240–340 TWh, or roughly 1–1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. The IEA projects that data center electric use could double between 2022 and 2026. High demand for electricity from data centers, including by cryptomining and artificial intelligence, has also increased strain on local electric grids and increased electricity prices in some markets.
Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers.
History
Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the microcomputer industry boom of the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time.
A boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage."
The term cloud data centers (CDCs) has been used. Increasingly, the division of these terms has almost disappeared and they are being integrated into the term data center.
The global data center market saw steady growth in the 2010s, with a notable acceleration in the latter half of the decade. According to Gartner, worldwide data center infrastructure spending reached $200 billion in 2021, representing a 6% increase from 2020 despite the economic challenges posed by the COVID-19 pandemic.
The latter part of the 2010s and early 2020s saw a significant shift towards AI and machine learning applications, generating a global boom for more powerful and efficient data center infrastructure. As of March 2021, global data creation was projected to grow to more than 180 zettabytes by 2025, up from 64.2 zettabytes in 2020.
The United States is currently the foremost leader in data center infrastructure, hosting 5,381 data centers as of March 2024, the highest number of any country worldwide. According to global consultancy McKinsey & Co., U.S. market demand is expected to double to 35 gigawatts (GW) by 2030, up from 17 GW in 2022. As of 2023, the U.S. accounts for roughly 40 percent of the global market.
A study published by the Electric Power Research Institute (EPRI) in May 2024 estimates U.S. data center power consumption could range from 4.6% to 9.1% of the country’s generation by 2030. As of 2023, about 80% of U.S. data center load was concentrated in 15 states, led by Virginia and Texas.
Requirements for modern data centers
Modernization and data center transformation enhances performance and energy efficiency.
Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment.
Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.
Focus on modernization is not new: concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment."
Meeting standards for data centers
The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
Operate and manage a carrier's telecommunication network
Provide data center based applications directly to the carrier's customers
Provide hosted applications for a third party to provide services to their customers
Provide a combination of these and similar data center applications
Data center transformation
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization.
Virtualization: Lowers capital and operational expenses, reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not just when facing fewer skilled IT workers.
Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures.
Raised floor
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.
Although the first raised floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that they became common in computer centers, allowing cool air to circulate more efficiently.
The first purpose of the raised floor was to allow access for wiring.
Lights out
The lights-out data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
Noise levels
Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence."
OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92-96 dB(A).
Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying “It’s like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves.”
External sources of noise include HVAC equipment and energy generators.
Data center design
The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers.
A 65-story data center has already been proposed, and the number of data centers as of 2016 had grown beyond 3 million USA-wide, with more than triple that number worldwide.
Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:
Size - one room of a building, one or more floors, or an entire building,
Capacity - can hold up to or past 1,000 servers
Other considerations - Space, power, cooling, and costs in the data center.
Mechanical engineering infrastructure - heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization.
Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more.
Design criteria and trade-offs
Availability expectations: The costs of avoiding downtime should not exceed the cost of the downtime itself
Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Other considerations should include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs).
Often, power availability is the hardest to change.
High availability
Various metrics exist for measuring the data-availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many nines can be placed after 99%.
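As a simple illustration, the downtime budget implied by each additional nine can be computed directly (a minimal sketch; the figures are plain arithmetic rather than values taken from this article):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime allows ~{downtime:,.1f} minutes of downtime per year")
```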
Modularity and flexibility
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized which facilitates moving if needed.
Environmental control
Temperature and humidity are controlled via:
Air conditioning
indirect cooling, such as using outside air, Indirect Evaporative Cooling (IDEC) units, and also using sea water.
It is important that computers do not become humid or overheat. High humidity can cause dust to clog the fans, which leads to overheating, or can cause components to malfunction, ruining the board and creating a fire hazard. Overheating can cause components, usually the silicon or the copper of the wires or circuits, to melt, loosening connections and creating further fire hazards.
Electrical power
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators.
To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the A-side and B-side power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
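The following is a minimal, illustrative sketch of what N+1 sizing means in practice (invented numbers and function name; real designs also account for battery runtime, growth margins and maintenance bypass):

```python
import math

def ups_modules_n_plus_1(it_load_kw, module_capacity_kw):
    """Modules needed to carry the load (N), plus one redundant module."""
    n = math.ceil(it_load_kw / module_capacity_kw)
    return n + 1

# Illustrative only: a 900 kW IT load served by 250 kW UPS modules
print(ups_modules_n_plus_1(900, 250))   # 5 -> 4 carry the load, 1 is the spare
```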
Low-voltage cable routing
Options include:
Data cabling can be routed through overhead cable trays
Raised floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks.
Smaller/less expensive data centers may use anti-static tiles instead for a flooring surface.
Air flow
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units.
Aisle containment
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products.
Computer cabinets/Server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts don't mix air, which would severely reduce cooling efficiency.
Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained.
Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a Plenum space above a Dropped ceiling and back to the cooling units or to outside vents. With this configuration, traditional hot/cold aisle configuration is not a requirement.
Fire protection
Data centers feature fire protection systems, including passive and Active Design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.
Although the main room usually does not allow wet-pipe systems due to the fragile nature of circuit boards, there still exist systems that can be used in the rest of the facility or in closed cold/hot aisle air circulation systems, such as:
Sprinkler systems
Misting, using high pressure to create extremely small water droplets, which can be used in sensitive rooms because the fine droplets limit damage to equipment.
However, there are also other means to put out fires, especially in sensitive areas, usually gaseous fire suppression, of which halon gas was the most popular until the negative effects of producing and using it were discovered.
Security
Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are becoming commonplace.
Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance.
Energy use
Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.
Greenhouse gas emissions
In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. Although some of this electricity was low carbon, the IEA called for more "government and industry efforts on energy efficiency, renewables procurement and RD&D", as some data centers still use electricity generated by fossil fuels. They also said that lifecycle emissions should be considered, that is, including embodied emissions such as those in buildings. Data centers are estimated to have been responsible for 0.5% of US greenhouse gas emissions in 2018. Some Chinese companies, such as Tencent, have pledged to be carbon neutral by 2030, while others such as Alibaba have been criticized by Greenpeace for not committing to become carbon neutral. Google and Microsoft now each consume more power than some fairly large countries, each surpassing the individual consumption of more than 100 countries.
Energy efficiency and overhead
The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the total power entering the data center divided by the power used by the IT equipment.
PUE expresses how much overhead power (cooling, lighting, etc.) is consumed per unit of IT power. The average US data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. PUEs as low as 1.01 have been achieved with two-phase immersion cooling.
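As a minimal sketch of the calculation just described (the kW figures are hypothetical):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total power entering the data center
    divided by the power used by the IT equipment."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical readings: 1,500 kW entering the facility, 1,250 kW reaching IT.
print(round(pue(1500, 1250), 2))  # 1.2 -- roughly state of the art
```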
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities — including data centers — to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency.
The European Union also has a similar initiative: EU Code of Conduct for Data Centres.
Energy use analysis and projects
The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy.
In 2011, server racks in data centers were designed for more than 25 kW and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware. Research in 2018 has shown that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.
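To see how the two-year claim can come about, a rough sketch; the wattage, PUE, tariff and comparison price are assumptions for illustration, not figures from the studies cited above:

```python
def power_and_cooling_cost(server_kw: float, pue: float,
                           usd_per_kwh: float, years: float) -> float:
    """Electricity cost of one server; multiplying the IT draw by PUE
    folds in cooling and other facility overhead."""
    hours = years * 365 * 24
    return server_kw * pue * hours * usd_per_kwh

# Assumed: a 0.5 kW server in a PUE-1.8 facility at $0.10/kWh for 2 years.
print(f"${power_and_cooling_cost(0.5, 1.8, 0.10, 2):,.0f}")  # ~$1,577
```

Under these assumptions, the two-year energy bill is comparable to the purchase price of a commodity server.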
In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency. In 2017, sales for data center hardware built to OCP designs topped $1.2 billion and are expected to reach $6 billion by 2021.
Power and cooling analysis
Power is the largest recurring cost to the user of a data center. Cooling it at or below 70 °F (21 °C) wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry.
A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. Cooling is the second largest power consumer in a data center after the servers themselves: cooling energy ranges from about 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled ones.
Energy efficiency analysis
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.
Computational Fluid Dynamics (CFD) analysis
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, using numerical modeling to predict the temperature, airflow, and pressure behavior and thereby assess performance and energy consumption. By predicting the effects of these environmental conditions, CFD analysis can be used to predict the impact of mixing high-density racks with low-density racks on cooling resources, as well as the consequences of poor infrastructure management practices and of AC failure or AC shutdown for scheduled maintenance.
Thermal zone mapping
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
Green data centers
Data centers use a lot of power, consumed in two main ways: the power required to run the actual equipment, and the power required to cool that equipment. Power efficiency reduces the first category.
Cooling cost reduction through natural means includes location decisions: when the focus is not being near good fiber connectivity, power grid connections, and concentrations of people to manage the equipment, a data center can be miles away from the users. Mass data centers like those of Google or Facebook don't need to be near population centers. Arctic locations that can use outside air for cooling are becoming more popular.
Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland are trying to attract cloud computing data centers.
Singapore, a major data center hub for the Asia-Pacific region, lifted its three-year moratorium on new data center projects in April 2022, granting four new projects but rejecting more than 16 of the over 20 applications received. New data centers in Singapore must meet strict green technology criteria, including a Water Usage Effectiveness (WUE) of 2.0/MWh, a Power Usage Effectiveness (PUE) of less than 1.3, and Platinum certification under Singapore's BCA-IMDA Green Mark for New Data Centres, criteria that clearly address decarbonization and the use of hydrogen cells or solar panels.
Direct current data centers
Direct current data centers are data centers that produce direct current on site with solar panels and store the electricity on site in a battery storage power station. Computers run on direct current, so the need to convert the AC power from the grid would be eliminated. The data center site could still use AC grid power as a backup. DC data centers could be 10% more efficient, and omitting the power-conversion components would also free floor space.
Energy reuse
It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat with water. Liquid cooling technologies fall into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid, see server immersion cooling). This combination of technologies allows the creation of a thermal cascade as part of temperature chaining scenarios to create high-temperature water outputs from the data center.
Impact on electricity prices
Cryptomining and the artificial intelligence boom of the 2020s have also increased demand for electricity, which the IEA expects could double global data center electricity demand between 2022 and 2026. The US could see the share of its electricity market going to data centers rise from 4% to 6% over those four years. Bitcoin used 2% of US electricity in 2023. This has led to increased electricity prices in some regions, particularly regions with many data centers such as Santa Clara, California and upstate New York. Data centers have also generated concerns in Northern Virginia about whether residents will have to foot the bill for future power lines. The demand has also made it harder to develop housing in London. A Bank of America Institute report in July 2024 found that the increase in electricity demand, due in part to AI, has been pushing electricity prices higher and is a significant contributor to electricity inflation.
Dynamic infrastructure
Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center, anytime and anywhere, for migration, provisioning, performance enhancement, or the building of co-location facilities. It also facilitates routine maintenance on both physical and virtual systems while minimizing interruption. A related concept is composable infrastructure, which allows the available resources to be dynamically reconfigured to suit needs, only when needed.
Side benefits include
reducing cost
facilitating business continuity and high availability
enabling cloud and grid computing.
Network infrastructure
Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic among the servers and to the outside world, connected according to the data center network architecture. Redundancy of the internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
Software/data backup
Non-mutually exclusive options for data backup are:
Onsite
Offsite
Onsite is traditional, and one of its major advantages is immediate availability.
Offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are:
Having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere.
Directly transferring the data to another site during the backup, using appropriate links.
Uploading the data "into the cloud".
Modular data center
For quick deployment or IT disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time.
Micro data center
Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front end applications. They are commonly used in edge computing and other areas where low latency data processing is needed.
See also
Notes
References
External links
Lawrence Berkeley Lab - Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers
DC Power For Data Centers Of The Future - FAQ: 380VDC testing and demonstration at a Sun data center.
White Paper - Property Taxes: The New Challenge for Data Centers
The European Commission H2020 EURECA Data Centre Project - Data centre energy efficiency guidelines, extensive online training material, case studies/lectures (under events page), and tools.
Applications of distributed computing
Cloud storage
Computer networking
Data management
Distributed data storage systems
Distributed data storage
Heating, ventilation, and air conditioning
Servers (computing)
Infrastructure | Data center | [
"Technology",
"Engineering"
] | 6,096 | [
"Computer networking",
"Computer engineering",
"Data centers",
"Data management",
"Construction",
"Computer science",
"Data",
"Computers",
"Infrastructure"
] |
579,750 | https://en.wikipedia.org/wiki/Methanogenesis | Methanogenesis or biomethanation is the formation of methane coupled to energy conservation by microbes known as methanogens. It is the fourth and final stage of anaerobic digestion. Organisms capable of producing methane for energy conservation have been identified only from the domain Archaea, a group phylogenetically distinct from both eukaryotes and bacteria, although many live in close association with anaerobic bacteria. The production of methane is an important and widespread form of microbial metabolism. In anoxic environments, it is the final step in the decomposition of biomass. Methanogenesis is responsible for significant amounts of natural gas accumulations, the remainder being thermogenic.
Biochemistry
Methanogenesis in microbes is a form of anaerobic respiration. Methanogens do not use oxygen to respire; in fact, oxygen inhibits the growth of methanogens. The terminal electron acceptor in methanogenesis is not oxygen, but carbon. The two best described pathways involve the use of acetic acid (acetoclastic) or inorganic carbon dioxide (hydrogenotrophic) as terminal electron acceptors:
CO2 + 4 H2 → CH4 + 2 H2O
CH3COOH → CH4 + CO2
During anaerobic respiration of carbohydrates, H2 and acetate are formed in a ratio of 2:1 or lower. Because the stoichiometry above requires four H2 per CH4 but only one acetate per CH4, H2 contributes only about a third of the methane produced, with acetate contributing the greater proportion. In some circumstances, for instance in the rumen, where acetate is largely absorbed into the bloodstream of the host, the contribution of H2 to methanogenesis is greater.
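Spelling out that arithmetic as a small sketch, using the stoichiometry of the two reactions above and the assumed 2:1 molar ratio:

```python
# Stoichiometry from the reactions above:
# 4 H2 -> 1 CH4 (hydrogenotrophic); 1 acetate -> 1 CH4 (acetoclastic)
h2, acetate = 2.0, 1.0                          # assumed 2:1 molar ratio
ch4_from_h2 = h2 / 4
ch4_from_acetate = acetate / 1
share = ch4_from_h2 / (ch4_from_h2 + ch4_from_acetate)
print(f"H2 pathway share of CH4: {share:.0%}")  # 33%
```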
However, depending on pH and temperature, methanogenesis has been shown to use carbon from other small organic compounds, such as formic acid (formate), methanol, methylamines, tetramethylammonium, dimethyl sulfide, and methanethiol. The catabolism of the methyl compounds is mediated by methyl transferases to give methyl coenzyme M.
Proposed mechanism
The biochemistry of methanogenesis involves the following coenzymes and cofactors: F420, coenzyme B, coenzyme M, methanofuran, and methanopterin.
The mechanism for the conversion of the CH3–S bond of methyl coenzyme M into methane involves a ternary complex of the enzyme, with the substituents forming a structure α2β2γ2. Within the complex, methyl coenzyme M and coenzyme B fit into a channel terminated by the axial site on nickel of the cofactor F430. One proposed mechanism invokes electron transfer from Ni(I) (to give Ni(II)), which initiates formation of CH4. Coupling of the coenzyme M thiyl radical (RS•) with HS–coenzyme B releases a proton and re-reduces Ni(II) by one electron, regenerating Ni(I).
Reverse methanogenesis
Some organisms can oxidize methane, functionally reversing the process of methanogenesis, also referred to as the anaerobic oxidation of methane (AOM). Organisms performing AOM have been found in multiple marine and freshwater environments including methane seeps, hydrothermal vents, coastal sediments and sulfate-methane transition zones. These organisms may accomplish reverse methanogenesis using a nickel-containing protein similar to methyl-coenzyme M reductase used by methanogenic archaea. Reverse methanogenesis occurs according to the reaction:
SO42− + CH4 → HCO3− + HS− + H2O
Importance in carbon cycle
Methanogenesis is the final step in the anaerobic decay of organic matter. During the decay process, electron acceptors (such as oxygen, ferric iron, sulfate, and nitrate) become depleted, while hydrogen (H2) and carbon dioxide accumulate. Light organics produced by fermentation also accumulate. During advanced stages of organic decay, all electron acceptors become depleted except carbon dioxide. Carbon dioxide is a product of most catabolic processes, so it is not depleted like other potential electron acceptors.
Only methanogenesis and fermentation can occur in the absence of electron acceptors other than carbon. Fermentation only allows the breakdown of larger organic compounds, and produces small organic compounds. Methanogenesis effectively removes the semi-final products of decay: hydrogen, small organics, and carbon dioxide. Without methanogenesis, a great deal of carbon (in the form of fermentation products) would accumulate in anaerobic environments.
Natural occurrence
In ruminants
Enteric fermentation occurs in the gut of some animals, especially ruminants. In the rumen, anaerobic organisms, including methanogens, digest cellulose into forms nutritious to the animal. Without these microorganisms, animals such as cattle would not be able to consume grasses. The useful products of methanogenesis are absorbed by the gut, but methane is released from the animal mainly by belching (eructation). The average cow emits around 250 liters of methane per day. In this way, ruminants contribute about 25% of anthropogenic methane emissions. One method of controlling methane production in ruminants is feeding them 3-nitrooxypropanol.
In humans
Some humans produce flatus that contains methane. In one study of the feces of nine adults, five of the samples contained archaea capable of producing methane. Similar results are found in samples of gas obtained from within the rectum.
Even among humans whose flatus does contain methane, the amount is in the range of 10% or less of the total amount of gas.
In plants
Many experiments have suggested that leaf tissues of living plants emit methane. Other research has indicated that the plants are not actually generating methane; they are just absorbing methane from the soil and then emitting it through their leaf tissues.
In soils
Methanogens are observed in anoxic soil environments, contributing to the degradation of organic matter. This organic matter may be placed by humans through landfill, buried as sediment on the bottom of lakes or oceans, or left as residual organic matter in sediments that have formed into sedimentary rocks.
In Earth's crust
Methanogens are a notable part of the microbial communities in the continental and marine deep biosphere.
Industry
Methanogenesis can also be beneficially exploited, to treat organic waste and to produce useful compounds, and the methane can be collected and used as biogas, a fuel. It is the primary pathway by which most organic matter disposed of via landfill is broken down. Some biogas plants use methanogenesis to combine the CO2 with hydrogen to create more methane.
Role in global warming
Atmospheric methane is an important greenhouse gas with a global warming potential 25 times greater than carbon dioxide (averaged over 100 years), and methanogenesis in livestock and the decay of organic material is thus a considerable contributor to global warming. It may not be a net contributor in the sense that it works on organic material which used up atmospheric carbon dioxide when it was created, but its overall effect is to convert the carbon dioxide into methane which is a much more potent greenhouse gas.
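A one-line illustration of what the 100-year global warming potential means in practice (the emission figure is hypothetical):

```python
GWP_100_CH4 = 25  # 100-year global warming potential cited above

def co2_equivalent(tonnes_ch4: float) -> float:
    """Express a methane emission in CO2-equivalent tonnes."""
    return tonnes_ch4 * GWP_100_CH4

print(co2_equivalent(10))  # a 10 t CH4 source counts as 250 t CO2e
```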
Extra-terrestrial life
The presence of atmospheric methane has a role in the scientific search for extra-terrestrial life. The justification is that on an astronomical timescale, methane in the atmosphere of an Earth-like celestial body will quickly dissipate, and that its presence on such a planet or moon therefore indicates that something is replenishing it. If methane is detected (by using a spectrometer for example) this may indicate that life is, or recently was, present.
This was debated when methane was discovered in the Martian atmosphere by M.J. Mumma of NASA's Goddard Space Flight Center, and verified by the Mars Express Orbiter (2004), and in Titan's atmosphere by the Huygens probe (2005). This debate was furthered with the discovery of transient 'spikes of methane' on Mars by the Curiosity Rover.
It is argued that atmospheric methane can come from volcanoes or other fissures in the planet's crust and that without an isotopic signature, the origin or source may be difficult to identify.
On 13 April 2017, NASA confirmed that the dive of the Cassini orbiter spacecraft on 28 October 2015 discovered an Enceladus plume which has all the ingredients for methanogenesis-based life forms to feed on. Previous results, published in March 2015, suggested hot water is interacting with rock beneath the sea of Enceladus; the new finding supported that conclusion and added that the rock appears to be reacting chemically. From these observations, scientists have determined that nearly 98 percent of the gas in the plume is water, about 1 percent is hydrogen, and the rest is a mixture of other molecules including carbon dioxide, methane and ammonia.
See also
Aerobic methane production
Anaerobic digestion
Anaerobic oxidation of methane
Electromethanogenesis
Hydrogen cycle
Methanotroph
Mootral
References
Anaerobic digestion
Biochemical reactions
Biodegradable waste management
Biodegradation
Hydrogen biology
Sewerage
Methanogenesis | Methanogenesis | [
"Chemistry"
] | 1,901 | [
"Methanogenesis",
"Biochemical reactions"
] |
579,755 | https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder%20interferometer | The Mach–Zehnder interferometer is a device used to determine the relative phase shift variations between two collimated beams derived by splitting light from a single source. The interferometer has been used, among other things, to measure phase shifts between the two beams caused by a sample or a change in length of one of the paths. The apparatus is named after the physicists Ludwig Mach (the son of Ernst Mach) and Ludwig Zehnder; Zehnder's proposal in an 1891 article was refined by Mach in an 1892 article. Mach–Zehnder interferometry has been demonstrated with electrons as well as with light. The versatility of the Mach–Zehnder configuration has led to its use in a range of research topics, especially in fundamental quantum mechanics.
Design
The Mach–Zehnder interferometer is a highly configurable instrument. In contrast to the well-known Michelson interferometer, each of the well-separated light paths is traversed only once.
If the source has a low coherence length then great care must be taken to equalize the two optical paths. White light in particular requires the optical paths to be simultaneously equalized over all wavelengths, or no fringes will be visible (unless a monochromatic filter is used to isolate a single wavelength). As seen in Fig. 1, a compensating cell made of the same type of glass as the test cell (so as to have equal optical dispersion) would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light travels through an equal optical path length in both the test and reference beams leading to constructive interference.
Collimated sources result in a nonlocalized fringe pattern. Localized fringes result when an extended source is used. In Fig. 2, we see that the fringes can be adjusted so that they are localized in any desired plane. In most cases, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together.
Operation
The collimated beam is split by a half-silvered mirror. The two resulting beams (the "sample beam" and the "reference beam") are each reflected by a mirror. The two beams then pass a second half-silvered mirror and enter two detectors.
The Fresnel equations for reflection and transmission of a wave at a dielectric imply that there is a phase change for a reflection, when a wave propagating in a lower-refractive index medium reflects from a higher-refractive index medium, but not in the opposite case. A 180° phase shift occurs upon reflection from the front of a mirror, since the medium behind the mirror (glass) has a higher refractive index than the medium the light is traveling in (air). No phase shift accompanies a rear-surface reflection, since the medium behind the mirror (air) has a lower refractive index than the medium the light is traveling in (glass).
The speed of light is lower in media with an index of refraction greater than that of a vacuum, which is 1. Specifically, its speed is: v = c/n, where c is the speed of light in vacuum, and n is the index of refraction. This causes a phase shift increase proportional to (n − 1) × length traveled. If k is the constant phase shift incurred by passing through a glass plate on which a mirror resides, a total of 2k phase shift occurs when reflecting from the rear of a mirror. This is because light traveling toward the rear of a mirror will enter the glass plate, incurring k phase shift, and then reflect from the mirror with no additional phase shift, since only air is now behind the mirror, and travel again back through the glass plate, incurring an additional k phase shift.
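As a numerical sketch of the (n − 1) × length rule (the index, thickness and wavelength are illustrative values):

```python
import math

def extra_phase(n: float, length_m: float, wavelength_m: float) -> float:
    """Phase (radians) gained relative to the same path length in vacuum:
    2*pi*(n - 1)*L / wavelength."""
    return 2 * math.pi * (n - 1) * length_m / wavelength_m

# Illustrative: 1 mm of n = 1.5 glass traversed by 633 nm (HeNe) light.
phi = extra_phase(1.5, 1e-3, 633e-9)
print(f"{phi / (2 * math.pi):.0f} extra wavelengths of phase")  # ~790
```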
The rule about phase shifts applies to beamsplitters constructed with a dielectric coating and must be modified if a metallic coating is used or when different polarizations are taken into account. Also, in real interferometers, the thicknesses of the beamsplitters may differ, and the path lengths are not necessarily equal. Regardless, in the absence of absorption, conservation of energy guarantees that the two paths must differ by a half-wavelength phase shift. Also beamsplitters that are not 50/50 are frequently employed to improve the interferometer's performance in certain types of measurement.
In Fig. 3, in the absence of a sample, both the sample beam (SB) and the reference beam (RB) will arrive in phase at detector 1, yielding constructive interference. Both SB and RB will have undergone a phase shift of (1 × wavelength + k) due to two front-surface reflections and one transmission through a glass plate. At detector 2, in the absence of a sample, the sample beam and reference beam will arrive with a phase difference of half a wavelength, yielding complete destructive interference. The RB arriving at detector 2 will have undergone a phase shift of (0.5 × wavelength + 2k) due to one front-surface reflection and two transmissions. The SB arriving at detector 2 will have undergone a (1 × wavelength + 2k) phase shift due to two front-surface reflections and one rear-surface reflection. Therefore, when there is no sample, only detector 1 receives light. If a sample is placed in the path of the sample beam, the intensities of the beams entering the two detectors will change, allowing the calculation of the phase shift caused by the sample.
Quantum treatment
We can model a photon going through the interferometer by assigning a probability amplitude to each of the two possible paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state describing the photon is therefore a vector ψ that is a superposition of the "lower" path |l⟩ and the "upper" path |u⟩, that is, ψ = α|l⟩ + β|u⟩ for complex α, β such that |α|² + |β|² = 1.
Both beam splitters are modelled as the unitary matrix B = (1/√2) [[1, i], [i, 1]], which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1/√2, or be reflected to the other path with a probability amplitude of i/√2. The phase shifter on the upper arm is modelled as the unitary matrix P = [[1, 0], [0, e^(iΔΦ)]], which means that if the photon is on the "upper" path it will gain a relative phase of ΔΦ, and it will stay unchanged if it is on the lower path.
A photon that enters the interferometer from the left will then end up described by the state
BPB|l⟩ = i e^(iΔΦ/2) (−sin(ΔΦ/2) |l⟩ + cos(ΔΦ/2) |u⟩),
and the probabilities that it will be detected at the right or at the top are given respectively by
p(u) = |⟨u|BPB|l⟩|² = cos²(ΔΦ/2) and p(l) = |⟨l|BPB|l⟩|² = sin²(ΔΦ/2).
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will no longer be interference between the paths, and the probabilities are given by p(u) = p(l) = 1/2, independently of the phase ΔΦ. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it must be described by a genuine quantum superposition of the two paths.
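The whole calculation above fits in a few lines of NumPy. A minimal sketch, using the matrices B and P as defined, with basis order [lower, upper]; setting first_splitter=False reproduces the removed-beam-splitter case:

```python
import numpy as np

def mz_probabilities(delta_phi: float, first_splitter: bool = True) -> np.ndarray:
    """Detection probabilities [p(lower), p(upper)] for a photon
    entering a Mach-Zehnder interferometer on the lower path."""
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter
    P = np.diag([1, np.exp(1j * delta_phi)])       # phase shifter on the upper arm
    psi_in = np.array([1, 0])                      # photon starts on the lower path
    U = B @ P @ (B if first_splitter else np.eye(2))
    return np.abs(U @ psi_in) ** 2

print(mz_probabilities(0.0))        # [0. 1.] -- all photons exit one port
print(mz_probabilities(np.pi / 3))  # [0.25 0.75] = [sin^2, cos^2](delta_phi / 2)
print(mz_probabilities(1.0, first_splitter=False))  # [0.5 0.5], independent of phase
```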
Uses
The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.
Mach–Zehnder interferometers are used in electro-optic modulators, electronic devices used in various fiber-optic communication applications. Mach–Zehnder modulators are incorporated in monolithic integrated circuits and offer well-behaved, high-bandwidth electro-optic amplitude and phase responses over a multiple-gigahertz frequency range.
Mach–Zehnder interferometers are also used to study one of the most counterintuitive predictions of quantum mechanics, the phenomenon known as quantum entanglement.
The possibility to easily control the features of the light in the reference channel without disturbing the light in the object channel popularized the Mach–Zehnder configuration in holographic interferometry. In particular, optical heterodyne detection with an off-axis, frequency-shifted reference beam ensures good experimental conditions for shot-noise limited holography with video-rate cameras, vibrometry, and laser Doppler imaging of blood flow.
In optical telecommunications it is used as an electro-optic modulator for phase and amplitude modulation of light. Optical computing researchers have proposed using Mach-Zehnder interferometer configurations in optical neural chips for greatly accelerating complex-valued neural network algorithms.
The versatility of the Mach–Zehnder configuration has led to its being used in a wide range of fundamental research topics in quantum mechanics, including studies on counterfactual definiteness, quantum entanglement, quantum computation, quantum cryptography, quantum logic, Elitzur–Vaidman bomb tester, the quantum eraser experiment, the quantum Zeno effect, and neutron diffraction.
See also
Interferometry
List of types of interferometers
Schlieren photography
Shadowgraph
References
External links
Mach-Zehnder - Virtual Lab by Quantum Flytrap, an interactive simulation for both classical and quantum interference
Interferometers | Mach–Zehnder interferometer | [
"Technology",
"Engineering"
] | 2,088 | [
"Interferometers",
"Measuring instruments"
] |
579,844 | https://en.wikipedia.org/wiki/South%20Downs%20National%20Park | The South Downs National Park is England's newest national park, designated on 31 March 2010. The park, covering an area of about 1,627 km2 (628 sq mi) of southern England, stretches for 140 km (87 mi) from Winchester in the west to Eastbourne in the east, through the counties of Hampshire, West Sussex and East Sussex. The national park covers the chalk hills of the South Downs (which on the English Channel coast form the white cliffs of the Seven Sisters and Beachy Head) and a substantial part of a separate physiographic region, the western Weald, with its heavily wooded sandstone and clay hills and vales. The South Downs Way spans the entire length of the park and is the only National Trail that lies wholly within a national park.
History
The idea of a South Downs National Park originated in the 1920s, when public concern was mounting about increasing threats to the beauty of the downland environment, particularly the impact of indiscriminate speculative housing development on the eastern Sussex Downs (Peacehaven was an example of this). In 1929, the Council for the Preservation of Rural England, led by campaigners including the geographer Vaughan Cornish, submitted a memorandum to the Prime Minister urging the case for national parks, including a national park on part of the South Downs. When however, towards the end of World War II, John Dower was asked to report on how a system of national parks in England and Wales might be established, his 1945 report, National Parks in England and Wales, did not identify the South Downs for national park status, but rather included it in a list of "other amenity areas". Sir Arthur Hobhouse's 1947 Report of the National Parks Committee took a different view, and he included the South Downs in his list of twelve areas recommended for designation as a national park, defined by John Dower as an "extensive area of beautiful and relatively wild country in which, for the nation's benefit...the characteristic landscape beauty is strictly preserved".
The South Downs was the last of the original twelve recommended national parks to be designated. Extensive damage to the chalk downland from 1940 onwards through arable farming, and a resulting decline in sheep grazing, militated at an early stage against further work on designation. When in 1956 the National Parks Commission came to consider the case for the South Downs as a national park, it found designation no longer appropriate, noting that the value of the South Downs as a potential national park had been reduced by cultivation. It did however recognise the "great natural beauty" of the area, and proposed it be designated as an Area of Outstanding Natural Beauty. In due course two AONBs were designated, split along the county boundary, namely the East Hampshire AONB in 1962 and the Sussex Downs AONB in 1966. These were later to form the basis of the South Downs National Park.
In September 1999, the government, following a review of national parks policy, declared support for a South Downs National Park and announced a consultation on its creation. In January 2003 the then Countryside Agency (now Natural England) made an Order to designate the proposed park in 2003 which was submitted to the Secretary of State for the Environment on 27 January 2003.
As a result of objections and representations received on the proposed Order, a public inquiry was conducted between 10 November 2003 and 23 March 2005, with the aim of recommending to ministers whether a national park should be confirmed and, if so, where its boundaries should be. The results of the inquiry were expected by the end of 2005, but were delayed pending a legal issue arising from a High Court case challenging part of the Order designating the New Forest National Park.
Following an appeal on the High Court case and new legislation included in the Natural Environment and Rural Communities Act 2006, the South Downs Inquiry report was published on 31 March 2006. It recommended a 23% reduction in the size of the originally proposed national park, focussing it more narrowly on the chalk downland and excluding from it a large part of the existing East Hampshire and Sussex Downs AONBs. This proved controversial, leading to calls from the Campaign for the Protection of Rural England and others for the inclusion of the so-called western Weald, a region within the two AONBs possessing a geology, ecology and landscape quite different from the chalk hills of the South Downs, within the park boundary to ensure that it remained protected from development. The Secretary of State invited objections and representations on new issues relating to the proposed national park in a consultation that ran from 2 July to 13 August 2007. In the light of the responses received, the Secretary of State decided that it was appropriate to re-open the 2003–05 public inquiry. The inquiry re-opened on 12 February 2008 and was closed on 4 July 2008 after 27 sitting days. The Inspector's report was submitted on 28 November 2008.
On 31 March 2009, the result of the inquiry was published. The Secretary of State, Hilary Benn, announced that the South Downs would be designated a national park, and on 12 November 2009 he signed the order confirming the designation. He confirmed that a number of disputed areas – including the western Weald, the town of Lewes and the village of Ditchling – would be included within the national park.
The new national park came into full operation on 1 April 2011 when the new South Downs National Park Authority assumed statutory responsibility for it. The occasion was marked by an opening ceremony which took place in the market square of Petersfield, a town in the western Weald just north of the chalk escarpment of the South Downs.
In 2016 the national park was granted International Dark Sky Reserve status, to restrict artificial light pollution above the park. It was the second such area in England and the 11th in the world.
Administration
The national park is administered by the South Downs National Park Authority (SDNPA). The body was established on 1 April 2010, and became fully functioning, including becoming the planning authority for the national park, on 1 April 2011. It is responsible for promoting the statutory purposes of the national park and the interests of the people who live and work within it. The statutory purposes of the SDNPA, as a national park authority, are specified by the Environment Act 1995; these are:
To conserve and enhance the natural beauty, wildlife and cultural heritage of the area
To promote opportunities for the understanding and enjoyment of the Park's special qualities by the public.
It must also fulfil the following duty:
In carrying out its role, the authority has a duty to seek to foster the economic and social well-being of the communities living within the national park.
The SDNPA is a public body, funded by central government, and run by a board of 27 members. The board consists of seven national members, appointed by the environment secretary by means of an open recruitment process; fourteen local authority nominees drawn from the fifteen local authorities covering the park area (with Adur and Worthing opting to share a place); and six parish council representatives, two for each county.
As at June 2024, the chair of SDNPA is Vanessa Rowlands, and the chief executive (interim) is Tim Slaney.
Geography
The South Downs National Park stretches for 140 km (87 mi) across southern England from St Catherine's Hill near Winchester in Hampshire in the west to Beachy Head, near Eastbourne in East Sussex, in the east. In its western half, the southern boundary of the park lies well inland from the south coast; it thus excludes the major coastal towns and cities of Southampton, Portsmouth, Chichester, Bognor Regis and Littlehampton. Further east, where the park's southern boundary lies much closer to the coast, it has been carefully drawn to exclude the urban areas of Worthing, Brighton and Hove, Newhaven, Seaford and Eastbourne, which had all made substantial encroachments onto the Downs during the 19th and 20th centuries. By contrast, the park includes a number of settlements in the western Weald, including Petersfield, Liss, Midhurst and Petworth, and the two historic Sussex towns of Arundel and Lewes.
The population living within the national park is approximately 108,000. Of these 42,000 live in Hampshire, 40,000 in West Sussex and 25,000 in East Sussex. East Hampshire District Council area and Chichester District each have around 30,000 residents in the area, and Lewes District 22,000. Winchester has 11,500 residents in the park, with much smaller numbers for the other districts and boroughs. In 2024, the park authority stated that it received around 18 million visitors each year.
The national park has a total area of some 1,627 km2 (628 sq mi), divided between Hampshire, West Sussex and East Sussex. Among the district council areas, Chichester District has the largest area within the park, followed by East Hampshire District, Winchester, Lewes District and Arun, with smaller areas in Horsham District and Wealden District.
Apart from a number of boundary revisions, the park incorporates two areas previously designated as Areas of Outstanding Natural Beauty, the East Hampshire AONB and Sussex Downs AONB. The park also includes the Queen Elizabeth Country Park near Petersfield.
The South Downs National Park's chalk downland sets it apart from other national parks in Britain. However, almost a quarter (23%) of the national park consists of a quite different and strongly contrasting physiographic region, the western Weald, whose densely wooded hills and vales are based on an older Wealden geology of resistant sandstones and softer clays. The highest point in the national park, Blackdown, at 280 m (919 ft) above sea level, is within the Weald, on the Greensand Ridge, whereas the highest point on the chalk escarpment of the South Downs, Butser Hill, has an elevation of 270 m (886 ft) above sea level.
Within the national park there are two chalk hill figures, the Litlington White Horse and the Long Man of Wilmington.
Geology
Most of the national park consists of chalk downland, although a significant part consists of the sandstones and clays of the western Weald, a strongly contrasting and distinctive landscape of densely wooded hills and vales.
The chalk was formed in the Late Cretaceous epoch, between 100 million and 66 million years ago, when the area was under the sea. During the Cenozoic era the chalk was uplifted as part of the Weald uplift which created the great Weald–Artois Anticline, caused by the same orogenic movements that created the Alps. The relatively resistant chalk rock has, through weathering, resulted in a classic cuesta landform, with a northward-facing chalk escarpment that rises dramatically above the low-lying vales of the Low Weald.
The chalk escarpment reaches the English Channel west of Eastbourne, where it forms the dramatic white cliffs of Beachy Head, the Seven Sisters and Seaford Head. These cliffs were formed after the end of the last ice age, when sea levels rose and the English Channel was formed, resulting in under-cutting of the chalk by the sea.
The South Downs run linearly west-north-westwards from the Eastbourne area through southern Sussex to the Hampshire downs, separating the south coastal plain from the clays and sandstones of the Weald. Behind the escarpment, on the dip slope, are the characteristic high, smooth, rolling downland hills interrupted by dry valleys and wind gaps, and the major river gaps of the Cuckmere, Ouse, Adur and Arun.
The chalk is a white sedimentary rock, notably homogeneous and fine-grained, and very permeable. It consists of minute calcite plates (coccoliths) shed from micro-organisms called coccolithophores. The strata include numerous layers of flint nodules, which have been widely exploited as a material for manufacture of stone tools as well as a building material for dwellings. Similar areas in Britain include the North Downs and the Chilterns.
In its western section, the national park extends north beyond the chalk escarpment of the South Downs into a quite different and strongly contrasting physiographic region, the western Weald, taking in the valley of the western River Rother, incised into Lower Greensand bedrock, and the densely wooded hills and valleys of the Greensand Ridge and Weald Clay south of Haslemere.
See also
Wiston House
Butser Ancient Farm
References
External links
National Park Authority website
Links to detailed Defra maps of the confirmed boundary, archived in 2010
Visit South Downs
South Downs Trust
National parks in England
Environment of Hampshire
Parks and open spaces in Hampshire
Parks and open spaces in West Sussex
Parks and open spaces in East Sussex
Environment of East Sussex
Protected areas established in 2010
Environment of West Sussex
Dark-sky preserves in England
International Dark Sky Reserves | South Downs National Park | [
"Astronomy"
] | 2,582 | [
"International Dark Sky Reserves",
"Dark-sky preserves"
] |
579,902 | https://en.wikipedia.org/wiki/Rest%20%28music%29 | A rest is the absence of a sound for a defined period of time in music, or one of the musical notation signs used to indicate that.
The length of a rest corresponds with that of a particular note value, thus indicating how long the silence should last. Each type of rest is named for the note value it corresponds with (e.g. quarter note and quarter rest, or quaver and quaver rest), and each of them has a distinctive sign.
Description
Rests are intervals of silence in pieces of music, marked by symbols indicating the length of the silence. Each rest symbol and name corresponds with a particular note value, indicating how long the silence should last, generally as a multiplier of a measure or whole note.
The quarter (crotchet) rest (𝄽) may take a different form in older music.
The four-measure rest, or longa rest, is only used in long silent passages which are not divided into bars.
The combination of rests used to mark a silence follows the same rules as for note values.
One-bar rests
When an entire bar is devoid of notes, a whole (semibreve) rest is used, regardless of the actual time signature. Historically, exceptions were made for a 4/2 time signature (four half notes per bar), when a double whole (breve) rest was typically used for a bar's rest, and for very short time signatures, when a rest of the actual measure length would be used. Some published (usually earlier) music places the numeral "1" above the rest to confirm the extent of the rest.
In manuscripts and facsimiles of them, bars of rest are occasionally left completely empty and unmarked, possibly even without the staves.
Multiple measure rests
In instrumental parts, rests of more than one bar in the same meter and key may be indicated with a multimeasure rest (British English: multiple bar rest) showing the number of bars of rest. A multimeasure rest is usually drawn in one of two ways:
As a thick horizontal line placed on the middle line of the staff, with serifs at both ends, or as thick diagonal lines placed between the second and fourth lines of the staff, resembling a large heavy minus sign or equals sign set at a slant (the diagonal style is much less common than the horizontal one; although a small number of publishers use it, it is more often found in modern manuscripts in a casual style). Both variants of thick line rests are drawn in the same shape each time, regardless of how many bars' rest they represent.
The older system of notating multirests (deriving from Baroque notation conventions that were adapted from the old mensural rest system dating from Medieval times) draws each multimeasure rest as a combination of longer rest symbols (longa, breve and semibreve rests) unless it will exceed a certain number of bars; rests longer than that limit are drawn using the thick horizontal line mentioned above. How long a multimeasure rest must be before resorting to a horizontal line is a matter of personal taste or editorial policy; most publishers use ten bars as the changing point, although both larger and smaller changing points are used, especially in earlier music.
The number of bars for which a horizontal line multimeasure rest lasts is indicated by a number printed above the musical staff (usually at the same size as the numerals in a time signature). If a change of meter or key occurs during a multimeasure rest, that rest must be divided into shorter sections for clarity, with the changes of key and/or meter indicated between the rests. Multimeasure rests must also be divided at double barlines, which demarcate musical phrases or sections, and at rehearsal letters.
Dotted rests
A rest may also have a dot after it, increasing its duration by half, but this is less commonly used than with notes, except occasionally in modern music notated in compound meters such as 6/8 or 9/8. In these meters the long-standing convention has been to indicate one beat of rest as a quarter rest followed by an eighth rest (equivalent to three eighths). See: Anacrusis.
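The duration arithmetic can be made exact with fractions; a small sketch of the rest values and the dot rule described above:

```python
from fractions import Fraction

# Rest values as fractions of a whole (semibreve) rest.
REST = {"whole": Fraction(1), "half": Fraction(1, 2),
        "quarter": Fraction(1, 4), "eighth": Fraction(1, 8)}

def dotted(value: Fraction) -> Fraction:
    """A dot increases a rest's duration by half."""
    return value * Fraction(3, 2)

# One beat of rest in 6/8: a dotted quarter equals three eighths,
# conventionally written as a quarter rest followed by an eighth rest.
print(dotted(REST["quarter"]) == REST["quarter"] + REST["eighth"])  # True
```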
General pause
In a score for an ensemble piece, "G.P." (general pause) indicates silence for one bar or more for the entire ensemble. Specifically marking general pauses each time they occur (rather than writing them as ordinary rests) is relevant for performers, as making any kind of noise should be avoided there—for instance, page turns in sheet music are not made during general pauses, as the sound of turning the page becomes noticeable when no one is playing.
See also
Caesura
List of silent musical compositions
List of musical symbols
Tacet
References
Musical notation
Rhythm and meter
Silence | Rest (music) | [
"Physics"
] | 973 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
579,976 | https://en.wikipedia.org/wiki/Antarctic%20realm | The Antarctic realm is one of eight terrestrial biogeographic realms. The ecosystem includes Antarctica and several island groups in the southern Atlantic and Indian oceans. The continent of Antarctica is so cold that it has supported only two vascular plants for millions of years, and its flora presently consists of around 250 lichens, 100 mosses, 25–30 liverworts, and around 700 terrestrial and aquatic algal species, which live on the areas of exposed rock and soil around the shore of the continent. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula. Antarctica is also home to a diversity of animal life, including penguins, seals, and whales.
Several Antarctic and sub-Antarctic island groups are considered part of the Antarctic realm, including Bouvet Island, the Crozet Islands, Heard Island, the Kerguelen Islands, the McDonald Islands, the Prince Edward Islands, the South Georgia Group, the South Orkney Islands, the South Sandwich Islands, and the South Shetland Islands. These islands have a somewhat milder climate than Antarctica proper, and support a greater diversity of tundra plants, although they are all too windy and cold to support trees.
Antarctic krill is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, squid, icefish, penguins, albatrosses and many other birds, as well as for seals, including leopard seals, fur seals and crabeater seals. The Southern Ocean is rich in phytoplankton because water rises from the depths to the light-flooded surface, bringing nutrients from all oceans back to the photic zone.
On August 20, 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica.
History
Millions of years ago, Antarctica was warmer and wetter, and supported the Antarctic flora, including forests of podocarps and southern beech. Antarctica was also part of the ancient supercontinent of Gondwanaland, which gradually broke up by continental drift starting 110 million years ago. The separation of South America from Antarctica 30–35 million years ago allowed the Antarctic Circumpolar Current to form, which isolated Antarctica climatically and caused it to become much colder. The Antarctic flora subsequently died out in Antarctica, but is still an important component of the flora of southern Neotropical (Latin America and the Caribbean) and Australasian realms, which were also former parts of Gondwana.
Some botanists recognize an Antarctic Floristic Kingdom that includes Antarctica, New Zealand, and parts of Temperate South America where the Antarctic Flora is still a major component.
Ecoregions
The Antarctic realm is divided into 17 tundra ecoregions.
References
Bibliography
Life in the Freezer, a BBC television series on life on and around Antarctica
Biodiversity at Ardley Island, South Shetland archipelago, Antarctica
Deep Sea Foraminifera – Deep Sea Foraminifera from 4400m depth, Weddell Sea – an image gallery of hundreds of specimens and description
Aliens in Antarctica; Visitors carry unwelcome species into a once pristine environment May 5, 2012 Science News
Biogeographic realms
Biogeography
Phytogeography | Antarctic realm | [
"Biology"
] | 666 | [
"Biogeography"
] |
580,039 | https://en.wikipedia.org/wiki/Goods | In economics, goods are items that satisfy human wants and provide utility, for example, to a consumer making a purchase of a satisfying product. Economics focuses on the study of economic goods, or goods that are scarce; in other words, producing the good requires expending effort or resources. Economic goods contrast with free goods such as air, for which there is an unlimited supply.
A consumer good or "final good" is any item that is ultimately consumed, rather than used in the production of another good. For example, a microwave oven or a bicycle that is sold to a consumer is a final good or consumer good, but the components that are sold to be used in those goods are intermediate goods. For example, textiles or transistors can be used to make some further goods.
Commercial goods are construed as tangible products that are manufactured and then made available for supply to be used in an industry of commerce. Commercial goods could be tractors, commercial vehicles, mobile structures, airplanes, and even roofing materials. Commercial and personal goods as categories are very broad and cover almost everything a person sees from the time they wake up at home, through their commute to work, to their arrival at the workplace.
Commodities may be used as a synonym for economic goods but often refer to marketable raw materials and primary products.
Although common goods are tangible, certain classes of goods, such as information, only take intangible forms. For example, among other goods an apple is a tangible object, while news belongs to an intangible class of goods and can be perceived only by means of an instrument such as a printer or television.
Utility and characteristics of goods
The change in utility (pleasure or satisfaction) gained by consuming one unit of a good is called its marginal utility. Goods are commonly considered to have diminishing marginal utility, which means that consuming more gives less utility per amount consumed. Some things are useful, but not scarce enough to have monetary value, such as the Earth's atmosphere; these are referred to as free goods.
The opposite of a good is a bad—in other words, a 'bad' is anything with a negative value to the consumer. A bad lowers a consumer's overall welfare.
Types of goods
Goods' diversity allows for their classification into different categories based on distinctive characteristics, such as tangibility and (ordinal) relative elasticity. A tangible good like an apple differs from an intangible good like information due to the impossibility of a person to physically hold the latter, whereas the former occupies physical space. Intangible goods differ from services in that final (intangible) goods are transferable and can be traded, whereas a service cannot.
Price elasticity also differentiates types of goods. An elastic good is one for which there is a relatively large change in quantity due to a relatively small change in price, and therefore is likely to be part of a family of substitute goods; for example, as pen prices rise, consumers might buy more pencils instead. An inelastic good is one for which there are few or no substitutes, such as tickets to major sporting events, original works by famous artists, and prescription medicine such as insulin. Complementary goods are generally more inelastic than goods in a family of substitutes. For example, if a rise in the price of beef results in a decrease in the quantity of beef demanded, it is likely that the quantity of hamburger buns demanded will also drop, despite no change in buns' prices. This is because hamburger buns and beef (in Western culture) are complementary goods. Goods considered complements or substitutes are relative associations and should not be understood in a vacuum. The degree to which a good is a substitute or a complement depends on its relationship to other goods, rather than an intrinsic characteristic, and can be measured as cross elasticity of demand by employing statistical techniques such as covariance and correlation.
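The cross elasticity of demand mentioned above can be estimated from observed price and quantity changes. A minimal sketch in Python, using invented figures for beef prices and hamburger-bun sales (all numbers are hypothetical, for illustration only):

def cross_elasticity(q_a_old, q_a_new, p_b_old, p_b_new):
    # Cross-price elasticity of demand: percent change in quantity of
    # good A divided by percent change in price of good B. Negative
    # values suggest complements; positive values suggest substitutes.
    pct_dq = (q_a_new - q_a_old) / q_a_old
    pct_dp = (p_b_new - p_b_old) / p_b_old
    return pct_dq / pct_dp

# Hypothetical data: beef price rises 10%, hamburger-bun sales fall 4%.
e = cross_elasticity(q_a_old=1000, q_a_new=960, p_b_old=5.00, p_b_new=5.50)
print(f"cross elasticity = {e:.2f}")  # -0.40, consistent with complements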
Bads
A bad is the opposite of a good, because its consumption or presence lowers the customer's utility. With goods, a two-party transaction results in the exchange of money for some object, as when money is exchanged for a car. With a bad, however, both money and the object in question go the same direction, as when a household gives up both money and garbage to a waste collector, meaning the garbage has a negative price: the waste collector is receiving both garbage and money and thus is paying a negative amount for the garbage.
Goods classified by exclusivity and competitiveness
Fourfold model of goods
Goods can be classified based on their degree of excludability and rivalry (competitiveness). Because excludability and rivalry can vary along a continuum rather than being simply present or absent, some goods do not fall neatly into one of the four common categories.
There are four types of goods based on the characteristics of rival in consumption and excludability: Public Goods, Private Goods, Common Resources, and Club Goods. These four types plus examples for anti-rivalry appear in the accompanying table.
Public goods
Goods that are both non-rival and non-excludable are called public goods. In many cases renewable resources, such as land, are common commodities, although some of them are included among public goods. Public goods are non-exclusive and non-competitive, meaning that individuals cannot be stopped from using them and anyone can consume this good without hindering the ability of others to consume them. Examples in addition to the ones in the matrix are national parks and firework displays. It is generally accepted by mainstream economists that the market mechanism will under-provide public goods, so these goods have to be produced by other means, including government provision. Public goods can also suffer from the free-rider problem.
Private goods
Private goods are excludable: other consumers can be prevented from consuming them. They are also rivalrous, because a good in private ownership cannot simultaneously be used by someone else; consuming such a good deprives another consumer of the ability to consume it. Private goods are the most common type of goods and include most items bought in stores, such as food, clothing, cars, and parking spaces. An individual who consumes an apple denies another individual the same apple. A private good is excludable because consumption is only offered to those willing to pay the price.
Common-pool resources
Common-pool resources are rival in consumption and non-excludable. An example is that of fisheries, which harvest fish from a shared common resource pool of fish stock. Fish caught by one group of fishermen are no longer accessible to another group, thus being rivalrous. However, oftentimes, due to an absence of well-defined property rights, it is difficult to restrict access to fishermen who may overfish.
Club goods
Club goods are excludable but not rivalrous in consumption. That is, not everyone can use the good, but when one individual has claim to use it, they do not reduce the amount available or the ability of others to consume it. Club goods can be obtained by joining a specific club or organization; as a result, non-members are excluded. Examples in addition to the ones in the matrix are cable television, golf courses, and any merchandise provided to club members. A large television service provider would already have infrastructure in place which would allow for the addition of new customers without infringing on existing customers' viewing abilities. This would also mean that marginal cost would be close to zero, which satisfies the criterion for a good to be considered non-rival. However, access to cable TV services is only available to consumers willing to pay the price, demonstrating the excludability aspect.
Economists use these categories to analyze goods and their impact on consumers. The government is usually responsible for public goods and common goods, while enterprises are generally responsible for the production of private and club goods, although this is not always the case.
History of the fourfold model of goods
In 1977, Nobel winner Elinor Ostrom and her husband Vincent Ostrom proposed additional modifications to the existing classification of goods, so as to identify fundamental differences that affect the incentives facing individuals. Their definitions are presented in the accompanying matrix, and their proposed modifications were:
Replacing the term "rivalry of consumption" with "subtractability of use".
Conceptualizing subtractability of use and excludability to vary from low to high rather than characterizing them as either present or absent.
Overtly adding a very important fourth type of good—common-pool resources—that shares the attribute of subtractability with private goods and difficulty of exclusion with public goods. Forests, water systems, fisheries, and the global atmosphere are all common-pool resources of immense importance for the survival of humans on this earth.
Changing the name of a "club" good to a "toll" good since goods that share these characteristics are provided by small scale public as well as private associations.
Expansion of Fourfold model: Anti-rivalrous
The fourfold model can be extended to include anti-rivalrous consumption, in which a good becomes more valuable as more people consume it; open-source software is a commonly cited example.
Expansion of Fourfold model: Semi-Excludable
An additional definition matrix shows the four common categories alongside examples of fully excludable goods, semi-excludable goods, and fully non-excludable goods. Semi-excludable goods are goods or services that are mostly successful in excluding non-paying customers but can still be consumed by some non-paying consumers. Examples include movies, books, or video games that can easily be pirated and shared for free.
Trading of goods
Goods are capable of being physically delivered to a consumer. Goods that are economic intangibles can only be stored, delivered, and consumed by means of media.
Goods, both tangible and intangible, may involve the transfer of product ownership to the consumer. Services do not normally involve transfer of ownership of the service itself, but may involve transfer of ownership of goods developed or marketed by a service provider in the course of the service. An example is the sale of storage-related goods, which could consist of storage sheds, storage containers, and storage buildings as tangibles, or storage supplies such as boxes, bubble wrap, tape, and bags as consumables. Likewise, distributing electricity among consumers is a service provided by an electric utility company; this service can only be experienced through the consumption of electrical energy, which is available in a variety of voltages and is, in this case, the economic good produced by the utility. While the service (namely, the distribution of electrical energy) is a process that remains entirely in the ownership of the electric service provider, the good (namely, electric energy) is the object of ownership transfer. The consumer becomes an electric energy owner by purchase and may use it for any lawful purpose, just like any other good.
See also
Bad (economics)
Commodification
Fast-moving consumer goods
Final goods
Goods and services
Intangible asset
Intangible good
List of economics topics
Property
Tangible property
Service (economics)
Notes
References
Bannock, Graham et al. (1997). Dictionary of Economics, Penguin Books.
Milgate, Murray (1987), "goods and commodities," The New Palgrave: A Dictionary of Economics, v. 2, pp. 546–48. Includes historical and contemporary uses of the terms in economics.
Vuaridel, R. (1968). Une définition des biens économiques. (A definition of economic goods). L'Année sociologique (1940/1948-), 19, 133–170.
External links
Utility
Supply chain management
Microeconomics | Goods | [
"Physics"
] | 2,390 | [
"Materials",
"Goods (economics)",
"Matter"
] |
580,067 | https://en.wikipedia.org/wiki/Ecoinformatics | Ecoinformatics, or ecological informatics, is the science of information in ecology and environmental science. It integrates environmental and information sciences to define entities and natural processes with language common to both humans and computers. However, this is a rapidly developing area in ecology and there are alternative perspectives on what constitutes ecoinformatics.
A few definitions have been circulating, mostly centered on the creation of tools to access and analyze natural system data. However, the scope and aims of ecoinformatics are certainly broader than the development of metadata standards to be used in documenting datasets. Ecoinformatics aims to facilitate environmental research and management by developing ways to access, integrate databases of environmental information, and develop new algorithms enabling different environmental datasets to be combined to test ecological hypotheses. Ecoinformatics is related to the concept of ecosystem services.
Ecoinformatics characterizes the semantics of natural system knowledge. For this reason, much of today's ecoinformatics research relates to the branch of computer science known as knowledge representation, and active ecoinformatics projects are developing links to activities such as the Semantic Web.
Current initiatives to effectively manage, share, and reuse ecological data are indicative of the increasing importance of fields like ecoinformatics to develop the foundations for effectively managing ecological information. Examples of these initiatives are National Science Foundation Datanet projects, DataONE, Data Conservancy, and Artificial Intelligence for Environment & Sustainability.
Software Development Lifecycle
Central to the concept of ecoinformatics is the Software Development Lifecycle (SDLC), a systematic framework for writing, implementing, and maintaining software products. In ecoinformatics projects, the development pipeline typically includes data collection, usually from several different environmental data sources, followed by integrating these data sources together and then analyzing the data. Here, each step of the SDLC is described in the context of ecoinformatics, per Michener et al. It is important to note that the plan, collect, assure, describe, and preserve steps refer to the data collection entity, which can be individual researchers or large data-collection networks, while the discover, integrate, and analyze steps typically refer to the individual researcher.
Plan: Ecoinformatics projects require data from several databases. Each database holds different data, and therefore researchers should identify what types of environmental or ecological data they will need to answer their research question.
Collect: Data is collected in several different ways. In ecoinformatics, this usually means manually entering data into a spreadsheet or parsing data from an existing database. The growth of relational databases has made it easier for ecologists to download relevant data and integrate datasets together.
Assure: Data entries should be checked thoroughly to validate their accuracy and usability, such as to check for outliers and erroneous points. The same principle applies to data downloaded from datasets. This responsibility falls on both the ecologist downloading the data, and the entity that sets up the data collection system.
Describe: An accurate description of the metadata of a dataset that is used in a study should include enough information to deduce the data collection and processing methodology, when the data were collected, why the data were collected, and how the data were stored. This is important for reproducibility, especially for projects that build on each other and may recycle data.
Preserve: After data is collected by an institutional entity, it should be archived such that it is easily accessible, ideally in databases that are maintained and not at risk of deprecation.
Discover: While there are good practices for discovering data to start a research project, this process is often marred by a lack of usable, published data, as researchers may collect data specific to their study, but may not publish this data for wider use. On the data collection end, this can be addressed by better data-sharing practices, such as by linking datasets when publishing papers or studies. On the data procurement end, this can be addressed by more precise data searching, such as using key words to find relevant datasets.
Integrate: Synthesizing datasets together can be difficult and labor-intensive, largely due to methodological differences in data collection. There are several approaches to this, but best practices typically involve computational approaches, namely using R or Python, to automate the process and prevent errors (a minimal sketch follows this list).
Analyze: Data analysis can take several forms, and should be tailored to the specific ecological project. However, all data analysis methods should be well-documented, including the procedure for analysis, justification for analysis methods, and any shortcomings in a specific approach.
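As a sketch of the integration step above, the following Python fragment merges two hypothetical environmental datasets on shared site and date keys using pandas; the inline data, column names, and values are assumptions chosen for illustration, not drawn from any real database:

import pandas as pd

# Hypothetical datasets: one with temperature observations, one with
# species counts, both keyed by site identifier and observation date.
temps = pd.DataFrame({
    "site": ["A", "A", "B"],
    "date": ["2021-06-01", "2021-06-02", "2021-06-01"],
    "temp_c": [14.2, 15.1, 12.8],
})
counts = pd.DataFrame({
    "site": ["A", "B", "B"],
    "date": ["2021-06-01", "2021-06-01", "2021-06-02"],
    "species_count": [34, 21, 25],
})

# An inner join keeps only site/date pairs present in both datasets,
# a common (if lossy) choice when collection methodologies differ.
merged = pd.merge(temps, counts, on=["site", "date"], how="inner")
print(merged)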
Applications of Ecoinformatics Across Ecology
Ecosystem Ecology
Ecosystem studies, by definition, encompass interactions across the entire life sciences spectrum, from microscopic biochemical reactions to large-scale geological phenomena. As a result, big databases may not be designed specifically for any particular research question, but should be inclusive enough to support most studies. Since ecosystem-level questions require a broad perspective, data-related ecosystem projects would likely incorporate data from several databases.
A common framework for incorporating data into ecosystem-level studies is the network science model, in which data collection mechanisms and resources are treated like a large, interconnected network instead of individual entities. The network may include several data collection stations within one database, or may span multiple databases. Currently there are several large-scale networks, but they do not yet generate data on a scale that would make ecology a big-data science.
A current challenge for ecoinformatics in ecosystem ecology is that most funding is prioritized for generating new data rather than maintaining existing data infrastructures. Integrating data across the different spatial scales can also be difficult, since each dataset may hold different types of data.
Urban Ecology
The current push for smart cities, and the integration of sensor networks into infrastructure, has positioned cities as a major source of data for ecological studies. Typical urban ecology questions address the effects of urbanization on the local ecosystem, and how to drive future development to promote urban biodiversity.
While sensor networks in cities typically collect environmental data to optimize city processes, they may also be used for ecological initiatives, especially with respect to understanding the complex, multi-layered relationship between cities and their local ecosystems. They can also be used to better understand the current landscape of cities and to identify avenues for the rewilding of cities. For example, analyzing mobility patterns can identify areas that may lend themselves well to building parks and green spaces, and bird-watching data can be used to identify the bird species present in a local area.
Infectious Disease
Like other disciplines of ecology, the study of emerging infectious disease and epidemiology spans multiple scales, from understanding the genetics that drive disease trends to large-scale spatiotemporal analyses. As a result, infectious disease studies can incorporate bioinformatics data such as genetic and amino acid sequences, as well as environmental observation data.
On the micro-scale, these data can then be used to predict infectivity/transmissibility, drug resistance, drug candidates, and mutation sites. On the macro-scale, it can be used to identify societal trends or environmental factors that lend themselves to spillover, locations of infection, and practices that cause disease transmission.
Databases
USGS National Streamflow sensor network
GBIF
Neotoma
Paleobiology database
European Vegetation Archive
USDA Forest Inventory Analysis
TRY
BIEN
AmeriFlux
TEAM
iNaturalist
NEON
GLEON
LTER
CZO
TERN
SAEON
References
External links
ecoinformatics.org, Online Resource for Managing Ecological Data and Information
Ecoinformatics Collaboratory, Research links and public wiki for discussion.
Ecoinformatics Education, Ecosystem Informatics at Oregon State University
industrial Environmental Informatics, Industrial Environmental Informatics at HTW-Berlin, University of Applied Sciences
International Society for Ecological Informatics
Canadian Facility for Ecoinformatics Research, Ecoinformatics at the University of Ottawa, Canada
Ecoinformatics program at the National Center for Ecological Analysis & Synthesis
Ecological Informatics: An International Journal on Computational Ecology and Ecological Data Science
Ecological Data
NSF DataNet call for proposals
DataONE
Data Conservancy
EcoInformatics Summer Institute, an NSF-funded REU site (Research Experience for Undergraduates)
Ecology
Information science | Ecoinformatics | [
"Biology"
] | 1,680 | [
"Ecology"
] |
580,069 | https://en.wikipedia.org/wiki/Gay-Lussac%27s%20law | Gay-Lussac's law usually refers to Joseph-Louis Gay-Lussac's law of combining volumes of gases, discovered in 1808 and published in 1809. However, it sometimes refers to the proportionality of the volume of a gas to its absolute temperature at constant pressure. The latter law was published by Gay-Lussac in 1802, but in the article in which he described his work, he cited earlier unpublished work from the 1780s by Jacques Charles. Consequently, the volume-temperature proportionality is usually known as Charles's Law.
Law of combining volumes
The law of combining volumes states that when gases chemically react together, they do so in amounts by volume which bear small whole-number ratios (the volumes calculated at the same temperature and pressure).
The ratio between the volumes of the reactant gases and the gaseous products can be expressed in simple whole numbers.
For example, Gay-Lussac found that two volumes of hydrogen react with one volume of oxygen to form two volumes of gaseous water. Expressed concretely, 100 mL of hydrogen combine with 50 mL of oxygen to give 100 mL of water vapor: Hydrogen (100 mL) + Oxygen (50 mL) = Water (100 mL). Thus, the volumes of hydrogen and oxygen which combine (i.e., 100 mL and 50 mL) bear a simple ratio of 2:1, as is also the case for the ratio of product water vapor to reactant oxygen.
Based on Gay-Lussac's results, Amedeo Avogadro hypothesized in 1811 that, at the same temperature and pressure, equal volumes of gases (of whatever kind) contain equal numbers of molecules (Avogadro's law). He pointed out that if this hypothesis is true, then the previously stated result
2 volumes of hydrogen + 1 volume of oxygen = 2 volumes of gaseous water
could also be expressed as
2 molecules of hydrogen + 1 molecule of oxygen = 2 molecules of water.
The law of combining volumes of gases was announced publicly by Joseph Louis Gay-Lussac on the last day of 1808, and published in 1809. Since there was no direct evidence for Avogadro's molecular theory, very few chemists adopted Avogadro's hypothesis as generally valid until the Italian chemist Stanislao Cannizzaro argued convincingly for it during the First International Chemical Congress in 1860.
Pressure-temperature law
In the 17th century Guillaume Amontons discovered a regular relationship between the pressure and temperature of a gas at constant volume. Some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law. Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature. Given the technology available to each man, Amontons could only work with air as a gas, whereas Gay-Lussac was able to experiment with multiple types of common gases, such as oxygen, nitrogen, and hydrogen.
Volume-temperature law
Regarding the volume-temperature relationship, Gay-Lussac attributed his findings to Jacques Charles because he used much of Charles's unpublished data from 1787 – hence, the law became known as Charles's law or the Law of Charles and Gay-Lussac.
Amontons's, Charles's, and Boyle's laws together form the combined gas law. These three gas laws, in combination with Avogadro's law, can be generalized by the ideal gas law.
Gay-Lussac used the relation ΔV/V = αΔT to define the rate of expansion α for gases. For air, he found a relative expansion ΔV/V = 37.50% over a 100 °C interval and obtained a value of α = 37.50%/100 °C = 1/266.66 °C, which indicated that the value of absolute zero was approximately 266.66 °C below 0 °C (the modern value is 273.15 °C below). The value of the rate of expansion α is approximately the same for all gases, and this is also sometimes referred to as Gay-Lussac's Law. See the introduction to this article, and Charles's Law.
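Gay-Lussac's arithmetic can be reproduced directly: an expansion coefficient α implies that the gas volume extrapolates to zero at a temperature of −1/α, giving an estimate of absolute zero. A minimal sketch in Python using the figures quoted above:

# Gay-Lussac's relation at constant pressure: dV/V = alpha * dT.
# A relative expansion of 37.50% over a 100 degC interval gives
# alpha = 0.3750 / 100 per degC, so V extrapolates to zero at -1/alpha.
relative_expansion = 0.3750
interval_c = 100.0

alpha = relative_expansion / interval_c   # 0.00375 per degC (about 1/266.7)
absolute_zero_estimate = -1.0 / alpha     # about -266.7 degC

print(f"alpha = 1/{1 / alpha:.2f} per degC")
print(f"estimated absolute zero: {absolute_zero_estimate:.2f} degC "
      f"(modern value: -273.15 degC)")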
See also
References
Further reading
Gas laws
de:Thermische Zustandsgleichung idealer Gase#Gesetz von Amontons
ga:Dlí Gay-Lussac | Gay-Lussac's law | [
"Chemistry"
] | 887 | [
"Gas laws"
] |
580,076 | https://en.wikipedia.org/wiki/Skyglow | Skyglow (or sky glow) is the diffuse luminance of the night sky, apart from discrete light sources such as the Moon and visible individual stars. It is a commonly noticed aspect of light pollution. While usually referring to luminance arising from artificial lighting, skyglow may also involve any scattered light seen at night, including natural ones like starlight, zodiacal light, and airglow.
In the context of light pollution, skyglow arises from the use of artificial light sources, including electrical (or rarely gas) lighting used for illumination and advertisement and from gas flares. Light propagating into the atmosphere directly from upward-directed or incompletely shielded sources, or after reflection from the ground or other surfaces, is partially scattered back toward the ground, producing a diffuse glow that is visible from great distances. Skyglow from artificial lights is most often noticed as a glowing dome of light over cities and towns, yet is pervasive throughout the developed world.
Causes
Light used for all purposes in outdoor and indoor environments contributes to artificial skyglow. Both intentional and unintentional uses of light, such as lampposts, fixtures, and building illumination, contribute to the scattering of light into the atmosphere and represent one of the most detrimental effects of light pollution at night.
Part of this artificial light at night interacts with air molecules and aerosols and is absorbed and scattered depending on the optical characteristics of the surrounding environment, thus creating skyglow. When clouds are present, this effect is amplified by interaction with water droplets.
Research indicates that when viewed from nearby, about half of skyglow arises from direct upward emissions and half from reflected light, though the ratio varies depending on details of lighting fixtures and usage, and the distance of the observation point from the light source. In most communities, direct upward emission averages about 10–15%. Fully shielded lighting (with no light emitted directly upward) decreases skyglow by about half when viewed nearby, but by much greater factors when viewed from a distance.
Skyglow is significantly amplified by the presence of snow, and within and near urban areas when clouds are present. In remote areas, snow brightens the sky, but clouds make the sky darker.
Mechanism
There are two kinds of light scattering that lead to sky glow: scattering from molecules such as N2 and O2 (called Rayleigh scattering), and that from aerosols, described by Mie theory. Rayleigh scattering is much stronger for short-wavelength (blue) light, while scattering from aerosols is less affected by wavelength. Rayleigh scattering makes the sky appear blue in the daytime; the more aerosols there are, the less blue or whiter the sky appears. In many areas, most particularly in urban areas, aerosol scattering dominates, due to the heavy aerosol loading caused by modern industrial activity, power generation, farming and transportation.
Despite the strong wavelength dependence of Rayleigh scattering, its effect on sky glow for real light sources is small. Though the shorter wavelengths suffer increased scattering, this increased scattering also gives rise to increased extinction: the effects approximately balance when the observation point is near the light source.
For human visual perception of sky glow, generally the assumed context in discussions of sky glow, sources rich in shorter wavelengths produce brighter sky glow, but for a different reason (see below).
Measurement
Professional astronomers and light pollution researchers use various measures of luminous or radiant intensity per unit area, such as magnitudes per square arcsecond, watts per square meter per steradian, (nano-)lamberts, or (micro-)candela per square meter. All-sky maps of skyglow brightness are produced with professional-grade imaging cameras with CCD detectors, using stars as calibration sources. Amateur astronomers have used the Bortle Dark-Sky Scale to approximately quantify skyglow ever since it was published in Sky & Telescope magazine in February 2001. The scale rates the darkness of the night sky inhibited by skyglow with nine classes and provides a detailed description of each position on the scale. Amateurs also increasingly use Sky Quality Meters (SQM) that nominally measure in astronomical photometric units of visual (Johnson V) magnitudes per square arcsecond.
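SQM readings in magnitudes per square arcsecond can be converted to luminance in candela per square meter. A minimal sketch in Python, using the commonly quoted approximate conversion L ≈ 10.8 × 10^4 × 10^(−0.4 S) cd/m^2 (treated here as a rule of thumb rather than an exact photometric standard):

def sqm_to_luminance(mag_per_arcsec2):
    # Approximate conversion from Johnson V magnitudes per square
    # arcsecond to luminance in cd/m^2 (a commonly quoted rule of
    # thumb, not an exact photometric standard).
    return 10.8e4 * 10 ** (-0.4 * mag_per_arcsec2)

for reading in (17.0, 19.0, 21.0, 22.0):  # bright urban sky -> pristine sky
    print(f"{reading:.1f} mag/arcsec^2 -> "
          f"{sqm_to_luminance(reading):.2e} cd/m^2")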
Dependence on distance from source
Sky glow brightness arising from artificial light sources falls steeply with distance from the light source, due to geometric effects characterized by an inverse square law in combination with atmospheric absorption. An approximate relation is I ∝ d^(−2.5), where I is the sky brightness and d the distance from the source; this relation is known as "Walker's Law."
Walker's Law has been verified by observation to describe both the measurements of sky brightness at any given point or direction in the sky caused by a light source (such as a city), as well as to integrated measures such as the brightness of the "light dome" over a city, or the integrated brightness of the entire night sky. At very large distances (over about 50 km) the brightness falls more rapidly, largely due to extinction and geometric effects caused by the curvature of the Earth.
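A quick sketch of this scaling in Python, assuming the d^(−2.5) proportionality holds and ignoring the large-distance curvature effects noted above:

def relative_brightness(d_km, d_ref_km=10.0):
    # Walker's law: skyglow brightness from a city falls off roughly
    # as distance to the -2.5 power (out to about 50 km).
    return (d_km / d_ref_km) ** -2.5

for d in (10, 20, 40):
    print(f"{d} km: {relative_brightness(d):.3f} x the brightness at 10 km")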
Dependence on light source
Different light sources produce differing amounts of visual sky glow. The dominant effect arises from the Purkinje shift, and not, as commonly claimed, from Rayleigh scattering of short wavelengths (see above). When observing the night sky, even from moderately light polluted areas, the eye becomes nearly or completely dark-adapted or scotopic. The scotopic eye is much more sensitive to blue and green light, and much less sensitive to yellow and red light, than the light-adapted or photopic eye. Predominantly because of this effect, white light sources such as metal halide, fluorescent, or white LED can produce as much as 3.3 times the visual sky glow brightness of the currently most common high-pressure sodium lamp, and up to eight times the brightness of low-pressure sodium or amber aluminium gallium indium phosphide LED.
In detail, the effects are complex, depending both on the distance from the source as well as the viewing direction in the night sky. But the basic results of recent research are unambiguous: assuming equal luminous flux (that is, equal amounts of visible light), and matched optical characteristics of the fixtures (particularly the amount of light allowed to radiate directly upward), white sources rich in shorter (blue and green) wavelengths produce dramatically greater sky glow than sources with little blue and green. The effect of Rayleigh scattering on skyglow impacts of differing light source spectra is small.
Much discussion in the lighting industry and even by some dark-sky advocacy organizations (e.g. International Dark-Sky Association) of the sky glow consequences of replacing the currently prevalent high-pressure sodium roadway lighting systems with white LEDs neglects critical issues of human visual spectral sensitivity, or focuses exclusively on white LED light sources, or focuses concerns narrowly on the blue portion (<500 nm) of the spectrum. All of these deficiencies lead to the incorrect conclusion that increases in sky glow brightness arising from the change in light source spectrum are minimal, or that light-pollution regulations that limit the CCT of white LEDs to so-called "warm white" (i.e. CCT <4000K or 3500K) will prevent sky glow increases. Improved efficiency (efficiency in distributing light onto the target area – such as the roadway – with diminished "waste" falling outside of the target area and more uniform distribution patterns) can allow designers to lower lighting amounts. But efficiency improvement sufficient to overcome sky glow doubling or tripling arising from a switch to even warm-white LED from high-pressure sodium (or a 4–8x increase compared to low-pressure sodium) has not been demonstrated.
Negative effects
Skyglow, and more generally light pollution, has various negative effects: from aesthetic diminishment of the beauty of a star-filled sky, through energy and resources wasted in the production of excessive or uncontrolled lighting, to impacts on birds and other biological systems, including humans. Skyglow is a prime problem for astronomers, because it reduces contrast in the night sky to the extent where it may become impossible to see all but the brightest stars.
Many nocturnal organisms are believed to navigate using the polarization signal of scattered moonlight. Because skyglow is mostly unpolarized, it can swamp the weaker signal from the moon, making this type of navigation impossible. Close to global coastal megacities (e.g. Tokyo, Shanghai), the natural illumination cycles provided by the moon in the marine environment are considerably disrupted by light pollution: only nights around the full moon provide greater natural radiances, and over a given month the lunar light dose may be a factor of six smaller than the light-pollution dose.
Due to skyglow, people who live in or near urban areas see thousands fewer stars than in an unpolluted sky, and commonly cannot see the Milky Way. Fainter sights like the zodiacal light and Andromeda Galaxy are nearly impossible to discern even with telescopes.
Effects on the ecosystem
The effects of sky glow on the ecosystem have been observed to be detrimental to a variety of organisms. The lives of plants and animals (especially those which are nocturnal) are affected as their natural environment becomes subjected to unnatural change. Because the pace of human technological development exceeds the pace at which many organisms can adapt to their environment, plants and animals are unable to keep up and can suffer as a consequence.
Although sky glow can be the result of a natural occurrence, the presence of artificial sky glow has become a detrimental problem as urbanization continues to flourish. The effects of urbanization, commercialization, and consumerism are the result of human development; these developments in turn have ecological consequences. For example, lighted fishing fleets, offshore oil platforms, and cruise ships all bring the disruption of artificial night lighting to the world's oceans.
As a whole, these effects derive from changes in orientation, disorientation, or misorientation, and attraction or repulsion from the altered light environment, which in turn may affect foraging, predator-prey dynamics, reproduction, migration, and communication. These changes can result in the death of some species such as certain migratory birds, sea creatures, and nocturnal predators.
Besides the effect on animals, crops and trees are also susceptible to damage. Constant exposure to light affects the photosynthesis of plants, which need a balance of both light and darkness to survive. In turn, sky glow can affect production rates in agriculture, especially in farming areas close to large city centers.
See also
Light pollution
SKYGLOW
Urbanization
Polarized light pollution
Over-illumination
Dark-sky movement
International Dark-Sky Association (IDA)
Campaign for Dark Skies (CfDS)
Notes
References
External links
List of peer reviewed research papers about sky glow
Skyglow: the effect of poor lighting (CfDS) (examples of skyglow in the UK)
Skyglow across the Great Lakes (examples of skyglow in the US)
Filtering Skyglow (from CCD cameras)
Towns and Skyglow (UK skyglow image collection)
Loss of the Night an Android app for estimating skyglow by measuring naked eye limiting magnitude
Dark Sky Meter an iPhone app for measuring skyglow luminance
LED light pollution: Can we save energy and save the night? SPIE Newsroom article on reducing skyglow
Light pollution
Light sources
Night | Skyglow | [
"Astronomy"
] | 2,344 | [
"Time in astronomy",
"Night"
] |
580,145 | https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20relativity | This is a list of mathematical topics in relativity, by Wikipedia page.
Special relativity
Foundational issues
principle of relativity
speed of light
faster-than-light
biquaternion
conjugate diameters
four-vector
four-acceleration
four-force
four-gradient
four-momentum
four-velocity
hyperbolic orthogonality
hyperboloid model
light-like
Lorentz covariance
Lorentz group
Lorentz transformation
Lorentz–FitzGerald contraction hypothesis
Minkowski diagram
Minkowski space
Poincaré group
proper length
proper time
rapidity
relativistic wave equations
relativistic mass
split-complex number
unit hyperbola
world line
General relativity
black holes
no-hair theorem
Hawking radiation
Hawking temperature
Black hole entropy
charged black hole
rotating black hole
micro black hole
Schwarzschild black hole
Schwarzschild metric
Schwarzschild radius
Reissner–Nordström black hole
Immirzi parameter
closed timelike curve
cosmic censorship hypothesis
chronology protection conjecture
Einstein–Cartan theory
Einstein's field equation
geodesic
gravitational redshift
Penrose–Hawking singularity theorems
Pseudo-Riemannian manifold
stress–energy tensor
wormhole
Cosmology
anti-de Sitter space
Ashtekar variables
Batalin–Vilkovisky formalism
Big Bang
Cauchy horizon
cosmic inflation
cosmic microwave background
cosmic variance
cosmological constant
dark energy
dark matter
de Sitter space
Friedmann–Lemaître–Robertson–Walker metric
horizon problem
large-scale structure of the cosmos
Randall–Sundrum model
warped geometry
Weyl curvature hypothesis
Relativity
Mathematics | List of mathematical topics in relativity | [
"Physics"
] | 308 | [
"Theory of relativity"
] |
580,252 | https://en.wikipedia.org/wiki/Reuleaux%20triangle | A Reuleaux triangle is a curved triangle with constant width, the simplest and best known curve of constant width other than the circle. It is formed from the intersection of three circular disks, each having its center on the boundary of the other two. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because its width is constant, the Reuleaux triangle is one answer to the question "Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?"
The shape is named after Franz Reuleaux, a 19th-century German engineer who pioneered the study of machines for translating one type of motion into another, and who used Reuleaux triangles in his designs. However, the shape was known before his time, for instance by the designers of Gothic church windows, by Leonardo da Vinci, who used it for a map projection, and by Leonhard Euler in his study of constant-width shapes. Other applications of the Reuleaux triangle include giving the shape to guitar picks, fire hydrant nuts, pencils, and drill bits for drilling filleted square holes, as well as in graphic design in the shapes of some signs and corporate logos.
Among constant-width shapes with a given width, the Reuleaux triangle has the minimum area and the sharpest (smallest) possible angle (120°) at its corners. By several numerical measures it is the farthest from being centrally symmetric. It provides the largest constant-width shape avoiding the points of an integer lattice, and is closely related to the shape of the quadrilateral maximizing the ratio of perimeter to diameter. It can perform a complete rotation within a square while at all times touching all four sides of the square, and has the smallest possible area of shapes with this property. However, although it covers most of the square in this rotation process, it fails to cover a small fraction of the square's area, near its corners. Because of this property of rotating within a square, the Reuleaux triangle is also sometimes known as the Reuleaux rotor.
The Reuleaux triangle is the first of a sequence of Reuleaux polygons whose boundaries are curves of constant width formed from regular polygons with an odd number of sides. Some of these curves have been used as the shapes of coins. The Reuleaux triangle can also be generalized into three dimensions in multiple ways: the Reuleaux tetrahedron (the intersection of four balls whose centers lie on a regular tetrahedron) does not have constant width, but can be modified by rounding its edges to form the Meissner tetrahedron, which does. Alternatively, the surface of revolution of the Reuleaux triangle also has constant width.
Construction
The Reuleaux triangle may be constructed either directly from three circles, or by rounding the sides of an equilateral triangle.
The three-circle construction may be performed with a compass alone, not even needing a straightedge. By the Mohr–Mascheroni theorem the same is true more generally of any compass-and-straightedge construction, but the construction for the Reuleaux triangle is particularly simple.
The first step is to mark two arbitrary points of the plane (which will eventually become vertices of the triangle), and use the compass to draw a circle centered at one of the marked points, through the other marked point. Next, one draws a second circle, of the same radius, centered at the other marked point and passing through the first marked point.
Finally, one draws a third circle, again of the same radius, with its center at one of the two crossing points of the two previous circles, passing through both marked points. The central region in the resulting arrangement of three circles will be a Reuleaux triangle.
Alternatively, a Reuleaux triangle may be constructed from an equilateral triangle T by drawing three arcs of circles, each centered at one vertex of T and connecting the other two vertices.
Or, equivalently, it may be constructed as the intersection of three disks centered at the vertices of T, with radius equal to the side length of T.
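The arc-based construction translates directly into code. A minimal sketch in Python that samples the boundary of a Reuleaux triangle of width s as three 60° circular arcs, each centered at one vertex of an equilateral triangle and connecting the other two (the function and variable names are illustrative):

import math

def reuleaux_boundary(s=1.0, points_per_arc=100):
    """Sample boundary points of a Reuleaux triangle of width s."""
    # Vertices of an equilateral triangle with side length s.
    verts = [(0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)]
    pts = []
    for i in range(3):
        cx, cy = verts[i]                        # arc center: one vertex
        a, b = verts[(i + 1) % 3], verts[(i + 2) % 3]
        # Arc of radius s from vertex a to vertex b, spanning 60 degrees.
        t0 = math.atan2(a[1] - cy, a[0] - cx)
        t1 = math.atan2(b[1] - cy, b[0] - cx)
        while t1 < t0:                           # sweep counterclockwise
            t1 += 2 * math.pi
        for k in range(points_per_arc):
            t = t0 + (t1 - t0) * k / (points_per_arc - 1)
            pts.append((cx + s * math.cos(t), cy + s * math.sin(t)))
    return pts

boundary = reuleaux_boundary()
print(len(boundary), "boundary points sampled")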
Mathematical properties
The most basic property of the Reuleaux triangle is that it has constant width, meaning that for every pair of parallel supporting lines (two lines of the same slope that both touch the shape without crossing through it) the two lines have the same Euclidean distance from each other, regardless of the orientation of these lines. In any pair of parallel supporting lines, one of the two lines will necessarily touch the triangle at one of its vertices. The other supporting line may touch the triangle at any point on the opposite arc, and their distance (the width of the Reuleaux triangle) equals the radius of this arc.
The first mathematician to discover the existence of curves of constant width, and to observe that the Reuleaux triangle has constant width, may have been Leonhard Euler. In a paper that he presented in 1771 and published in 1781 entitled De curvis triangularibus, Euler studied curvilinear triangles as well as the curves of constant width, which he called orbiforms.
Extremal measures
By many different measures, the Reuleaux triangle is one of the most extreme curves of constant width.
By the Blaschke–Lebesgue theorem, the Reuleaux triangle has the smallest possible area of any curve of given constant width. This area is (π − √3)s²/2 ≈ 0.70477s², where s is the constant width. One method for deriving this area formula is to partition the Reuleaux triangle into an inner equilateral triangle and three curvilinear regions between this inner triangle and the arcs forming the Reuleaux triangle, and then add the areas of these four sets, as worked out below. At the other extreme, the curve of constant width that has the maximum possible area is a circular disk, which has area πs²/4 ≈ 0.78540s².
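Written out in LaTeX, the partition gives the area in one line: each circular sector centered at a vertex spans 60° = π/3 and so has area πs²/6, and each curvilinear region is such a sector minus the inner equilateral triangle of area √3s²/4:

A \;=\; \underbrace{\frac{\sqrt{3}}{4}s^2}_{\text{inner triangle}}
\;+\; 3\left(\frac{\pi}{6}s^2 - \frac{\sqrt{3}}{4}s^2\right)
\;=\; \frac{1}{2}\left(\pi - \sqrt{3}\right)s^2
\;\approx\; 0.70477\,s^2 .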
The angles made by each pair of arcs at the corners of a Reuleaux triangle are all equal to 120°. This is the sharpest possible angle at any vertex of any curve of constant width. Additionally, among the curves of constant width, the Reuleaux triangle is the one with both the largest and the smallest inscribed equilateral triangles. The largest equilateral triangle inscribed in a Reuleaux triangle is the one connecting its three corners, and the smallest one is the one connecting the three midpoints of its sides. The subset of the Reuleaux triangle consisting of points belonging to three or more diameters is the interior of the larger of these two triangles; it has a larger area than the set of three-diameter points of any other curve of constant width.
Although the Reuleaux triangle has sixfold dihedral symmetry, the same as an equilateral triangle, it does not have central symmetry.
The Reuleaux triangle is the least symmetric curve of constant width according to two different measures of central asymmetry, the Kovner–Besicovitch measure (ratio of area to the largest centrally symmetric shape enclosed by the curve) and the Estermann measure (ratio of area to the smallest centrally symmetric shape enclosing the curve). For the Reuleaux triangle, the two centrally symmetric shapes that determine the measures of asymmetry are both hexagonal, although the inner one has curved sides. The Reuleaux triangle has diameters that split its area more unevenly than any other curve of constant width. That is, the maximum ratio of areas on either side of a diameter, another measure of asymmetry, is bigger for the Reuleaux triangle than for other curves of constant width.
Among all shapes of constant width that avoid all points of an integer lattice, the one with the largest width is a Reuleaux triangle. It has one of its axes of symmetry parallel to the coordinate axes on a half-integer line. Its width, approximately 1.54, is the root of a degree-6 polynomial with integer coefficients.
Just as it is possible for a circle to be surrounded by six congruent circles that touch it, it is also possible to arrange seven congruent Reuleaux triangles so that they all make contact with a central Reuleaux triangle of the same size. This is the maximum number possible for any curve of constant width.
Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter is an equidiagonal kite that can be inscribed into a Reuleaux triangle.
Other measures
By Barbier's theorem, all curves of the same constant width, including the Reuleaux triangle, have equal perimeters. In particular this perimeter equals the perimeter of the circle with the same width, which is πs.
The radii of the largest inscribed circle of a Reuleaux triangle with width s, and of the circumscribed circle of the same triangle, are (1 − 1/√3)s ≈ 0.42265s and (1/√3)s ≈ 0.57735s, respectively; the sum of these radii equals the width of the Reuleaux triangle. More generally, for every curve of constant width, the largest inscribed circle and the smallest circumscribed circle are concentric, and their radii sum to the constant width of the curve.
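A quick numerical check of these quantities in Python, using the closed forms given above:

import math

s = 1.0                                    # constant width
area = 0.5 * (math.pi - math.sqrt(3)) * s**2
perimeter = math.pi * s                    # Barbier's theorem
r_in = (1 - 1 / math.sqrt(3)) * s          # largest inscribed circle
r_out = s / math.sqrt(3)                   # smallest circumscribed circle

print(f"area         = {area:.5f} s^2")    # ~0.70477
print(f"perimeter    = {perimeter:.5f} s") # ~3.14159
print(f"r_in + r_out = {r_in + r_out:.5f} s (equals the width)")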
The optimal packing density of the Reuleaux triangle in the plane remains unproven, but is conjectured to be approximately 0.923, the density of one possible double lattice packing for these shapes. The best proven upper bound on the packing density is approximately 0.947. It has also been conjectured, but not proven, that Reuleaux triangles have the highest packing density of any curve of constant width.
Rotation within a square
Any curve of constant width can form a rotor within a square, a shape that can perform a complete rotation while staying within the square and at all times touching all four sides of the square. However, the Reuleaux triangle is the rotor with the minimum possible area. As it rotates, its axis does not stay fixed at a single point, but instead follows a curve formed by the pieces of four ellipses. Because of its 120° angles, the rotating Reuleaux triangle cannot reach some points near the sharper angles at the square's vertices, but rather covers a shape with slightly rounded corners, also formed by elliptical arcs.
At any point during this rotation, two of the corners of the Reuleaux triangle touch two adjacent sides of the square, while the third corner of the triangle traces out a curve near the opposite vertex of the square. The shape traced out by the rotating Reuleaux triangle covers approximately 98.8% of the area of the square.
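The covered fraction has a commonly cited closed form, 2√3 + π/6 − 3, consistent with the 98.8% figure above; the following Python snippet treats it as a quoted result rather than deriving it:

import math

# Fraction of the square covered by a rotating Reuleaux triangle,
# per the commonly cited closed form 2*sqrt(3) + pi/6 - 3.
coverage = 2 * math.sqrt(3) + math.pi / 6 - 3
print(f"covered fraction: {coverage:.4f}")  # ~0.9877, i.e. about 98.8%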
As a counterexample
Reuleaux's original motivation for studying the Reuleaux triangle was as a counterexample, showing that three single-point contacts may not be enough to fix a planar object into a single position. The existence of Reuleaux triangles and other curves of constant width shows that diameter measurements alone cannot verify that an object has a circular cross-section.
In connection with the inscribed square problem, it has been observed that the Reuleaux triangle provides an example of a constant-width shape in which no regular polygon with more than four sides can be inscribed, except the regular hexagon, and a small modification to this shape preserves its constant width while also preventing regular hexagons from being inscribed in it. This result generalizes to three dimensions using a cylinder with the same shape as its cross section.
Applications
Reaching into corners
Several types of machinery take the shape of the Reuleaux triangle, based on its property of being able to rotate within a square.
The Watts Brothers Tool Works square drill bit has the shape of a Reuleaux triangle, modified with concavities to form cutting surfaces. When mounted in a special chuck which allows the bit to lack a fixed center of rotation, it can drill a hole that is nearly square. Although patented by Henry Watts in 1914, similar drills invented by others were used earlier. Other Reuleaux polygons are used to drill pentagonal, hexagonal, and octagonal holes.
Panasonic's RULO robotic vacuum cleaner has its shape based on the Reuleaux triangle in order to ease cleaning up dust in the corners of rooms.
Rolling cylinders
Another class of applications of the Reuleaux triangle involves cylindrical objects with a Reuleaux triangle cross section. Several pencils are manufactured in this shape, rather than the more traditional round or hexagonal barrels. They are usually promoted as being more comfortable or encouraging proper grip, as well as being less likely to roll off tables (since the center of gravity moves up and down more than a rolling hexagon).
A Reuleaux triangle (along with all other curves of constant width) can roll but makes a poor wheel because it does not roll about a fixed center of rotation. An object on top of rollers that have Reuleaux triangle cross-sections would roll smoothly and flatly, but an axle attached to Reuleaux triangle wheels would bounce up and down three times per revolution. This concept was used in a science fiction short story by Poul Anderson titled "The Three-Cornered Wheel". A bicycle with floating axles and a frame supported by the rim of its Reuleaux triangle shaped wheel was built and demonstrated in 2009 by Chinese inventor Guan Baihua, who was inspired by pencils with the same shape.
Mechanism design
Another class of applications of the Reuleaux triangle involves using it as a part of a mechanical linkage that can convert rotation around a fixed axis into reciprocating motion. These mechanisms were studied by Franz Reuleaux. With the assistance of the Gustav Voigt company, Reuleaux built approximately 800 models of mechanisms, several of which involved the Reuleaux triangle. Reuleaux used these models in his pioneering scientific investigations of their motion. Although most of the Reuleaux–Voigt models have been lost, 219 of them have been collected at Cornell University, including nine based on the Reuleaux triangle. However, the use of Reuleaux triangles in mechanism design predates the work of Reuleaux; for instance, some steam engines from as early as 1830 had a cam in the shape of a Reuleaux triangle.
One application of this principle arises in a film projector. In this application, it is necessary to advance the film in a jerky, stepwise motion, in which each frame of film stops for a fraction of a second in front of the projector lens, and then much more quickly the film is moved to the next frame. This can be done using a mechanism in which the rotation of a Reuleaux triangle within a square is used to create a motion pattern for an actuator that pulls the film quickly to each new frame and then pauses the film's motion while the frame is projected.
The rotor of the Wankel engine is shaped as a curvilinear triangle that is often cited as an example of a Reuleaux triangle. However, its curved sides are somewhat flatter than those of a Reuleaux triangle and so it does not have constant width.
Architecture
In Gothic architecture, beginning in the late 13th century or early 14th century, the Reuleaux triangle became one of several curvilinear forms frequently used for windows, window tracery, and other architectural decorations. For instance, in English Gothic architecture, this shape was associated with the decorated period, both in its geometric style of 1250–1290 and continuing into its curvilinear style of 1290–1350. It also appears in some of the windows of the Milan Cathedral. In this context, the shape is sometimes called a spherical triangle, which should not be confused with spherical triangle meaning a triangle on the surface of a sphere. In its use in Gothic church architecture, the three-cornered shape of the Reuleaux triangle may be seen both as a symbol of the Trinity, and as "an act of opposition to the form of the circle".
The Reuleaux triangle has also been used in other styles of architecture. For instance, Leonardo da Vinci sketched this shape as the plan for a fortification. Modern buildings that have been claimed to use a Reuleaux triangle shaped floorplan include the MIT Kresge Auditorium, the Kölntriangle, the Donauturm, the Torre de Collserola, and the Mercedes-Benz Museum. However in many cases these are merely rounded triangles, with different geometry than the Reuleaux triangle.
Mapmaking
Another early application of the Reuleaux triangle was da Vinci's world map from circa 1514, in which the spherical surface of the earth was divided into eight octants, each flattened into the shape of a Reuleaux triangle.
Similar maps also based on the Reuleaux triangle were published by Oronce Finé in 1551 and by John Dee in 1580.
Other objects
Many guitar picks employ the Reuleaux triangle, as its shape combines a sharp point to provide strong articulation, with a wide tip to produce a warm timbre. Because all three points of the shape are usable, it is easier to orient and wears less quickly compared to a pick with a single tip.
The Reuleaux triangle has been used as the shape for the cross section of a fire hydrant valve nut. The constant width of this shape makes it difficult to open the fire hydrant using standard parallel-jawed wrenches; instead, a wrench with a special shape is needed. This property allows the fire hydrants to be opened only by firefighters (who have the special wrench) and not by other people trying to use the hydrant as a source of water for other activities.
The antennae of the Submillimeter Array, a radio-wave astronomical observatory on Mauna Kea in Hawaii, are arranged on four nested Reuleaux triangles. Placing the antennae on a curve of constant width causes the observatory to have the same spatial resolution in all directions, and provides a circular observation beam. As the most asymmetric curve of constant width, the Reuleaux triangle leads to the most uniform coverage of the plane for the Fourier transform of the signal from the array. The antennae may be moved from one Reuleaux triangle to another for different observations, according to the desired angular resolution of each observation. The precise placement of the antennae on these Reuleaux triangles was optimized using a neural network. In some places the constructed observatory departs from the preferred Reuleaux triangle shape because that shape was not possible within the given site.
Signs and logos
The shield shapes used for many signs and corporate logos feature rounded triangles. However, only some of these are Reuleaux triangles.
The corporate logo of Petrofina (Fina), a Belgian oil company with major operations in Europe, North America and Africa, used a Reuleaux triangle with the Fina name from 1950 until Petrofina's merger with Total S.A. (today TotalEnergies) in 2000.
Another corporate logo framed in the Reuleaux triangle, the south-pointing compass of Bavaria Brewery, was part of a makeover by design company Total Identity that won the SAN 2010 Advertiser of the Year award. The Reuleaux triangle is also used in the logo of Colorado School of Mines.
In the United States, the National Trails System and United States Bicycle Route System both mark routes with Reuleaux triangles on signage.
In nature
According to Plateau's laws, the circular arcs in two-dimensional soap bubble clusters meet at 120° angles, the same angle found at the corners of a Reuleaux triangle. Based on this fact, it is possible to construct clusters in which some of the bubbles take the form of a Reuleaux triangle.
The shape was first isolated in crystal form in 2014 as Reuleaux triangle disks. Basic bismuth nitrate disks with the Reuleaux triangle shape were formed from the hydrolysis and precipitation of bismuth nitrate in an ethanol–water system in the presence of 2,3-bis(2-pyridyl)pyrazine.
Generalizations
Triangular curves of constant width with smooth rather than sharp corners may be obtained as the locus of points at a fixed distance from the Reuleaux triangle. Other generalizations of the Reuleaux triangle include surfaces in three dimensions, curves of constant width with more than three sides, and the Yanmouti sets which provide extreme examples of an inequality between width, diameter, and inradius.
Three-dimensional version
The intersection of four balls of radius s centered at the vertices of a regular tetrahedron with side length s is called the Reuleaux tetrahedron, but its surface is not a surface of constant width. It can, however, be made into a surface of constant width, called Meissner's tetrahedron, by replacing three of its edge arcs by curved surfaces, the surfaces of rotation of a circular arc. Alternatively, the surface of revolution of a Reuleaux triangle through one of its symmetry axes forms a surface of constant width, with minimum volume among all known surfaces of revolution of given constant width.
Reuleaux polygons
The Reuleaux triangle can be generalized to regular or irregular polygons with an odd number of sides, yielding a Reuleaux polygon, a curve of constant width formed from circular arcs of constant radius. The constant width of these shapes allows their use as coins in coin-operated machines. Although coins of this type in general circulation usually have more than three sides, a Reuleaux triangle has been used for a commemorative coin from Bermuda.
Similar methods can be used to enclose an arbitrary simple polygon within a curve of constant width, whose width equals the diameter of the given polygon. The resulting shape consists of circular arcs (at most as many as sides of the polygon), can be constructed algorithmically in linear time, and can be drawn with compass and straightedge. Although the Reuleaux polygons all have an odd number of circular-arc sides, it is possible to construct constant-width shapes with an even number of circular-arc sides of varying radii.
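To make the constant-width claim concrete, here is a small numerical sketch (an illustration written for this text, not drawn from the cited sources; the helper name reuleaux_polygon_boundary and the circumradius-1 normalization are this sketch's own choices). It samples the boundary of a regular Reuleaux polygon and measures the width in many directions, which should all agree with the long diagonal of the underlying polygon.

```python
import numpy as np

def reuleaux_polygon_boundary(n_sides: int = 3, pts_per_arc: int = 500):
    """Sample boundary points of the regular Reuleaux polygon built on a
    regular polygon with an odd number of sides (circumradius 1)."""
    assert n_sides % 2 == 1 and n_sides >= 3
    angles = 2 * np.pi * np.arange(n_sides) / n_sides
    verts = np.column_stack([np.cos(angles), np.sin(angles)])
    width = np.linalg.norm(verts[0] - verts[n_sides // 2])   # long diagonal
    boundary = []
    for k in range(n_sides):
        c = verts[k]
        # the arc centred at vertex k joins the two vertices farthest from it
        a = verts[(k + n_sides // 2) % n_sides]
        b = verts[(k + n_sides // 2 + 1) % n_sides]
        t0 = np.arctan2(a[1] - c[1], a[0] - c[0])
        t1 = np.arctan2(b[1] - c[1], b[0] - c[0])
        dt = (t1 - t0 + np.pi) % (2 * np.pi) - np.pi          # short way around
        ts = t0 + dt * np.linspace(0.0, 1.0, pts_per_arc)
        boundary.append(c + width * np.column_stack([np.cos(ts), np.sin(ts)]))
    return np.vstack(boundary), width

pts, w = reuleaux_polygon_boundary(5)
# the width in direction theta is the spread of the projections onto theta
thetas = np.linspace(0.0, np.pi, 720, endpoint=False)
dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
proj = pts @ dirs.T
widths = proj.max(axis=0) - proj.min(axis=0)
print(f"nominal width {w:.6f}, measured {widths.min():.6f}..{widths.max():.6f}")
```

Up to sampling error, the measured widths coincide in every direction, which is the defining property used above.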
Yanmouti sets
The Yanmouti sets are defined as the convex hulls of an equilateral triangle together with three circular arcs, centered at the triangle vertices and spanning the same angle as the triangle, with equal radii that are at most equal to the side length of the triangle. Thus, when the radius is small enough, these sets degenerate to the equilateral triangle itself, but when the radius is as large as possible they equal the corresponding Reuleaux triangle. Every shape with width w, diameter d, and inradius r (the radius of the largest possible circle contained in the shape) obeys the inequality
w − r ≤ d/√3,
and this inequality becomes an equality for the Yanmouti sets, showing that it cannot be improved.
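As a check on the inequality as reconstructed above, the two extreme Yanmouti sets can be verified by hand; the following short derivation is an illustration, not part of the original text.

```latex
% Reuleaux triangle of width w (the Yanmouti set of maximal radius):
% here d = w and r = (1 - 1/\sqrt{3})\, w, so
\[
  w - r \;=\; w - \Bigl(1 - \tfrac{1}{\sqrt{3}}\Bigr) w
        \;=\; \frac{w}{\sqrt{3}}
        \;=\; \frac{d}{\sqrt{3}} .
\]
% Equilateral triangle of side d (the degenerate Yanmouti set):
% here w = \sqrt{3}\, d / 2 and r = d / (2\sqrt{3}), and again
\[
  w - r \;=\; \frac{\sqrt{3}\, d}{2} - \frac{d}{2\sqrt{3}}
        \;=\; \frac{d}{\sqrt{3}} .
\]
```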
Related figures
In the classical presentation of a three-set Venn diagram as three overlapping circles, the central region (representing elements belonging to all three sets) takes the shape of a Reuleaux triangle. The same three circles form one of the standard drawings of the Borromean rings, three mutually linked rings that cannot, however, be realized as geometric circles. Parts of these same circles are used to form the triquetra, a figure of three overlapping semicircles (each two of which form a vesica piscis symbol) that again has a Reuleaux triangle at its center; just as the three circles of the Venn diagram may be interlaced to form the Borromean rings, the three circular arcs of the triquetra may be interlaced to form a trefoil knot.
Relatives of the Reuleaux triangle arise in the problem of finding the minimum perimeter shape that encloses a fixed amount of area and includes three specified points in the plane. For a wide range of choices of the area parameter, the optimal solution to this problem will be a curved triangle whose three sides are circular arcs with equal radii. In particular, when the three points are equidistant from each other and the area is that of the Reuleaux triangle, the Reuleaux triangle is the optimal enclosure.
Circular triangles are triangles with circular-arc edges, including the Reuleaux triangle as well as other shapes.
The deltoid curve is another type of curvilinear triangle, but one in which the curves replacing each side of an equilateral triangle are concave rather than convex. It is not composed of circular arcs, but may be formed by rolling one circle within another of three times the radius. Other planar shapes with three curved sides include the arbelos, which is formed from three semicircles with collinear endpoints, and the Bézier triangle.
The Reuleaux triangle may also be interpreted as the stereographic projection of one triangular face of a spherical tetrahedron, the Schwarz triangle of parameters with spherical angles of measure and sides of spherical length
References
External links
Piecewise-circular curves
Types of triangles
Constant width
Eponymous geometric shapes | Reuleaux triangle | [
"Mathematics"
] | 5,129 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Piecewise-circular curves"
] |
580,264 | https://en.wikipedia.org/wiki/Barbier%27s%20theorem | In geometry, Barbier's theorem states that every curve of constant width has perimeter π times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860.
Examples
The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width w has perimeter πw. A Reuleaux triangle of width w consists of three arcs of circles of radius w. Each of these arcs has central angle π/3, so the perimeter of the Reuleaux triangle of width w is equal to half the perimeter of a circle of radius w and therefore is equal to πw. A similar analysis of other simple examples such as Reuleaux polygons gives the same answer.
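A minimal sketch of the arc-length bookkeeping behind the last sentence, extended to regular Reuleaux polygons (the function name is invented for this illustration): each of the n arcs has radius equal to the width and central angle π/n, so the total perimeter is always π times the width.

```python
import math

def reuleaux_polygon_perimeter(n_sides: int, width: float) -> float:
    """Perimeter of a regular Reuleaux polygon: n circular arcs, each of
    radius `width` and central angle pi/n (pi/3 for the Reuleaux triangle)."""
    assert n_sides % 2 == 1 and n_sides >= 3
    arc_angle = math.pi / n_sides          # central angle of each arc
    return n_sides * width * arc_angle     # n arcs, each of length width * angle

w = 1.0
for n in (3, 5, 7, 9):
    print(n, reuleaux_polygon_perimeter(n, w))   # always pi * w, per Barbier
print(math.pi * w)
```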
Proofs
One proof of the theorem uses the properties of Minkowski sums. If K is a body of constant width w, then the Minkowski sum of K and its 180° rotation is a disk with radius w and perimeter 2πw. The Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of K must be half the perimeter of this disk, which is πw as the theorem states.
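The same argument can be written out with support functions; the following is a standard presentation of the idea (not necessarily the original wording), using Cauchy's formula L(K) = ∫ h_K(θ) dθ for the perimeter of a convex body K with support function h_K.

```latex
% Constant width w means h_K(\theta) + h_K(\theta + \pi) = w for every \theta.
% Support functions add under Minkowski sums, so K \oplus (-K) has support
% function identically equal to w, i.e. it is a disk of radius w.
\[
  2\, L(K) \;=\; L(K) + L(-K) \;=\; L\bigl(K \oplus (-K)\bigr) \;=\; 2\pi w
  \qquad\Longrightarrow\qquad
  L(K) \;=\; \pi w .
\]
```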
Alternatively, the theorem follows immediately from the Crofton formula in integral geometry according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem.
An elementary probabilistic proof of the theorem can be found at Buffon's noodle.
Higher dimensions
The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area 4π ≈ 12.566, while the surface of revolution of a Reuleaux triangle with the same constant width has surface area 8π − 4π²/3 ≈ 11.973.
Instead, Barbier's theorem generalizes to bodies of constant brightness, three-dimensional convex sets for which every two-dimensional projection has the same area. These all have the same surface area as a sphere of the same projected area.
And in general, if K is a convex subset of Rn for which every (n−1)-dimensional projection has the same area as the unit ball in Rn−1, then the surface area of K is equal to that of the unit sphere in Rn. This follows from the general form of the Crofton formula.
See also
Blaschke–Lebesgue theorem and isoperimetric inequality, bounding the areas of curves of constant width
References
Theorems in plane geometry
Pi
Length
Constant width | Barbier's theorem | [
"Physics",
"Mathematics"
] | 548 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Theorems in plane geometry",
"Length",
"Theorems in geometry",
"Wikipedia categories named after physical quantities",
"Pi"
] |
580,345 | https://en.wikipedia.org/wiki/Chlamydomonas%20reinhardtii | Chlamydomonas reinhardtii is a single-cell green alga about 10 micrometres in diameter that swims with two flagella. It has a cell wall made of hydroxyproline-rich glycoproteins, a large cup-shaped chloroplast, a large pyrenoid, and an eyespot apparatus that senses light.
Chlamydomonas species are widely distributed in soil and fresh water worldwide; Chlamydomonas reinhardtii is one of the most common and widespread. C. reinhardtii is an especially well studied biological model organism, partly due to its ease of culturing and the ability to manipulate its genetics. When illuminated, C. reinhardtii can grow photoautotrophically, but it can also grow in the dark if supplied with organic carbon. Commercially, C. reinhardtii is of interest for producing biopharmaceuticals and biofuel, as well as being a valuable research tool in making hydrogen.
History
The C. reinhardtii wild-type laboratory strain c137 (mt+) originates from an isolate collected near Amherst, Massachusetts, in 1945 by Gilbert M. Smith.
The species' name has been spelled several different ways because of different transliterations of the name from Russian: reinhardi, reinhardii, and reinhardtii all refer to the same species, C. reinhardtii Dangeard.
Description
Cells of Chlamydomonas reinhardtii are mostly spherical, but can range from ellipsoidal, ovoid, obovoid, or asymmetrical. They are 10–22 μm long and 8–22 μm wide. The cell wall is thin, lacking a papilla. The flagella are 1.5 to 2 times the length of the cell body. Cells contain a single cup-shaped chloroplast lining the bottom of the cell, with a single basal pyrenoid.
Eye spot
C. reinhardtii has an eyespot apparatus similar to that of dinoflagellates. The eyespot is located near the cell equator. It is composed of a carotenoid-rich granule layer in the chloroplast, which acts as a light reflector. The main function of the eyespot is phototaxis, movement of the cell (by means of its flagella) in response to a light stimulus. Phototaxis is crucial for the alga, allowing it to locate environments with light conditions optimal for photosynthesis. Phototaxis can be positive or negative depending on the light intensity. The phototactic pathway consists of four steps leading to a change in the beating balance between the two flagella (the cis-flagellum, the one closest to the eyespot, and the trans-flagellum, the one farthest from it).
Model organism
Chlamydomonas is used as a model organism for research on fundamental questions in cell and molecular biology such as:
How do cells move?
How do cells respond to light?
How do cells recognize one another?
How do cells generate regular, repeatable flagellar waveforms?
How do cells regulate their proteome to control flagellar length?
How do cells respond to changes in mineral nutrition? (nitrogen, sulfur, etc.)
There are many known mutants of C. reinhardtii. These mutants are useful tools for studying a variety of biological processes, including flagellar motility, photosynthesis, and protein synthesis. Since Chlamydomonas species are normally haploid, the effects of mutations are seen immediately without further crosses.
In 2007, the complete nuclear genome sequence of C. reinhardtii was published.
Channelrhodopsin-1 and Channelrhodopsin-2, proteins that function as light-gated cation channels, were originally isolated from C. reinhardtii. These proteins and others like them are increasingly widely used in the field of optogenetics.
Mitochondrial significance
The genome of C. reinhardtii is significant for mitochondrial study because it is one species in which the genes for 6 of the 13 proteins normally encoded by the mitochondrial genome are found in the nucleus of the cell, leaving 7 in the mitochondria. In species outside of Chlorophyceae, these genes are present only in the mitochondria and are unable to be allotopically expressed. This is significant for the testing and development of therapies for genetic mitochondrial diseases.
Reproduction
Vegetative cells of reinhardtii species are haploid with 17 small chromosomes. Under nitrogen starvation, vegetative cells differentiate into haploid gametes. There are two mating types, identical in appearance, thus isogamous, and known as mt(+) and mt(-), which can fuse to form a diploid zygote. The zygote is not flagellated, and it serves as a dormant form of the species in the soil. In the light, the zygote undergoes meiosis and releases four flagellated haploid cells that resume the vegetative lifecycle.
Under ideal growth conditions, cells may sometimes undergo two or three rounds of mitosis before the daughter cells are released from the old cell wall into the medium. Thus, a single growth step may result in 4 or 8 daughter cells per mother cell.
The cell cycle of this unicellular green algae can be synchronized by alternating periods of light and dark. The growth phase is dependent on light, whereas, after a point designated as the transition or commitment point, processes are light-independent.
Genetics
The attractiveness of the algae as a model organism has recently increased with the release of several genomic resources to the public domain. The Chlre3 draft of the Chlamydomonas nuclear genome sequence prepared by Joint Genome Institute of the U.S. Dept of Energy comprises 1557 scaffolds totaling 120 Mb. Roughly half of the genome is contained in 24 scaffolds all at least 1.6 Mb in length. The current assembly of the nuclear genome is available online.
The ~15.8 Kb mitochondrial genome (database accession: NC_001638) is available online at the NCBI database. The complete ~203.8 Kb chloroplast genome (database accession: NC_005353) is available online.
In addition to genomic sequence data, there is a large supply of expression sequence data available as cDNA libraries and expressed sequence tags (ESTs). Seven cDNA libraries are available online. A BAC library can be purchased from the Clemson University Genomics Institute. There are also two databases of >50 000 and >160 000 ESTs available online.
A genome-wide collection of mutants with mapped insertion sites covering most nuclear genes is available: https://www.chlamylibrary.org/.
The genome of C. reinhardtii has been shown to contain N6-Methyladenosine (m6A), a mark common in prokaryotes but much rarer in eukaryotes. Some research has indicated that 6mA in Chlamydomonas may be involved in nucleosome positioning, as it is present in the linker regions between nucleosomes as well as near the transcription start sites of actively transcribed genes.
C. reinhardtii appears to be capable of several DNA repair processes. These include recombinational repair, strand break repair and excision repair. In particular, C. reinhardtii chloroplasts possess an efficient system for repairing DNA double-strand breaks. In chloroplast DNA homologous recombinational repair is strongly stimulated by double-strand breaks.
Experimental evolution
Chlamydomonas has been used to study different aspects of evolutionary biology and ecology. It is an organism of choice for many selection experiments because:
it has a short generation time,
it is both an autotroph and a facultative heterotroph,
it can reproduce both sexually and asexually, and
there is a wealth of genetic information already available.
Some examples (nonexhaustive) of evolutionary work done with Chlamydomonas include the evolution of sexual reproduction, the fitness effect of mutations, and the effect of adaptation to different levels of CO2.
According to one frequently cited theoretical hypothesis, sexual reproduction (in contrast to asexual reproduction) is adaptively maintained in benign environments because it reduces mutational load by combining deleterious mutations from different lines of descent and increases mean fitness. However, in a long-term experimental study of C. reinhardtii, evidence was obtained that contradicted this hypothesis. In sexual populations, mutation clearance was not found to occur and fitness was not found to increase.
Motion
C. reinhardtii swims using its two flagella, in a movement analogous to the human breaststroke. Repeating this elementary movement 50 times per second, the alga reaches a mean velocity of 70 μm/s; the genetic diversity of the different strains results in a wide range of values for this quantity. After a few seconds of running, an asynchronous beating of the two flagella leads to a random change of direction, a movement pattern called "run and tumble". At larger time and space scales, the random movement of the alga can be described as an active diffusion phenomenon.
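As a rough illustration of how run-and-tumble swimming looks diffusive at long times, here is a toy simulation; the 70 μm/s speed comes from this paragraph, while the 2 s mean run duration, the time step, and the population size are assumptions made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
speed = 70.0          # swimming speed, micrometres per second (from the text)
run_time = 2.0        # mean duration of a straight "run", seconds (assumed)
dt = 0.05             # time step, seconds
n_cells, n_steps = 2000, 4000

angles = rng.uniform(0.0, 2.0 * np.pi, n_cells)
pos = np.zeros((n_cells, 2))
msd = []
for _ in range(n_steps):
    # each cell tumbles (picks a fresh random direction) with probability dt/run_time
    tumble = rng.random(n_cells) < dt / run_time
    angles[tumble] = rng.uniform(0.0, 2.0 * np.pi, tumble.sum())
    pos += speed * dt * np.column_stack([np.cos(angles), np.sin(angles)])
    msd.append(np.mean(np.sum(pos ** 2, axis=1)))

# At long times the mean squared displacement grows like 4*D*t in 2D;
# for run-and-tumble with full reorientation, D is roughly speed^2 * run_time / 2.
t_final = dt * n_steps
D_fit = msd[-1] / (4.0 * t_final)
print(f"effective D ~ {D_fit:.0f} um^2/s; v^2*tau/2 = {speed**2 * run_time / 2:.0f} um^2/s")
```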
DNA transformation techniques
Gene transformation occurs mainly by homologous recombination in the chloroplast and heterologous recombination in the nucleus. The C. reinhardtii chloroplast genome can be transformed using microprojectile particle bombardment or glass bead agitation, however this last method is far less efficient. The nuclear genome has been transformed with both glass bead agitation and electroporation. The biolistic procedure appears to be the most efficient way of introducing DNA into the chloroplast genome. This is probably because the chloroplast occupies over half of the volume of the cell providing the microprojectile with a large target. Electroporation has been shown to be the most efficient way of introducing DNA into the nuclear genome with maximum transformation frequencies two orders of magnitude higher than obtained using glass bead method.
Practical uses
Production of biopharmaceuticals
Genetically engineered C. reinhardtii has been used to produce a mammalian serum amyloid protein (needs citation), a human antibody protein (needs citation), human Vascular endothelial growth factor, a potential therapeutic Human Papillomavirus 16 vaccine, a potential malaria vaccine (an edible algae vaccine), and a complex designer drug that could be used to treat cancer.
Alternative protein source
C. reinhardtii has been suggested as a new algae-based nutritional source. Compared to Chlorella and Spirulina, C. reinhardtii was found to have more alpha-linolenic acid and a lower quantity of heavy metals, while also containing all the essential amino acids and similar protein content. Triton Algae Innovations was developing a commercial alternative protein product made from C. reinhardtii.
Clean source of hydrogen production
In 1939, the German researcher Hans Gaffron (1902–1979), who was at that time attached to the University of Chicago, discovered the hydrogen metabolism of unicellular green algae. C. reinhardtii and some other green algae can, under specified circumstances, stop producing oxygen and switch instead to the production of hydrogen. This reaction, catalyzed by hydrogenase, an enzyme active only in the absence of oxygen, is short-lived. Over the next thirty years, Gaffron and his team worked out the basic mechanics of this photosynthetic hydrogen production by algae.
To increase the production of hydrogen, several tracks are being followed by the researchers.
The first track is decoupling hydrogenase from photosynthesis. This way, oxygen accumulation can no longer inhibit the production of hydrogen. And, if one goes one step further by changing the structure of the enzyme hydrogenase, it becomes possible to render hydrogenase insensitive to oxygen. This makes a continuous production of hydrogen possible. In this case, the flux of electrons needed for this production no longer comes from the production of sugars but is drawn from the breakdown of its own stock of starch.
A second track is to interrupt temporarily, through genetic manipulation of hydrogenase, the photosynthesis process. This inhibits oxygen's reaching a level where it is able to stop the production of hydrogen.
The third track, mainly investigated by researchers in the 1950s, is chemical or mechanical methods of removal of O2 produced by the photosynthetic activity of the algal cells. These have included the addition of O2 scavengers, the use of added reductants, and purging the cultures with inert gases. However, these methods are not inherently scalable, and may not be applicable to applied systems. New research has appeared on the subject of removing oxygen from algae cultures, and may eliminate scaling problems.
The fourth track has been investigated, namely using copper salts to decouple hydrogenase action from oxygen production.
The fifth track that has been suggested is to reroute the photosynthetic electron flow from fixation in the Calvin cycle to hydrogenase by applying short light pulses to anaerobic algae or by depleting the culture of CO2. Under pulse-illumination conditions, algae produce H2 via the most efficient mechanism of direct water biophotolysis, proceeding in two distinct steps:
1. Photosystem II-dependent water oxidation reaction:
2H2O → 4H+ + O2 + 4e⁻;
2. Hydrogenase-dependent reversible reduction of protons to molecular hydrogen:
4H+ + 4e⁻ ⇄ 2H2.
If the water oxidation reaction leading to O2 production is balanced with O2 consumption, either by respiration or by introducing additional O2 absorbents or scavengers, the photosynthetic production of H2 can be sustained for a prolonged period. This balance prevents the inhibition of hydrogenase activity by accumulated O2, ensuring steady hydrogen production under these optimized conditions.
See also
Protist locomotion#Biohybrid microswimmers
D66 strain of Chlamydomonas reinhardtii
References
Further reading
External links
The Chlamydomonas Resource Center — "A central repository to receive, catalog, preserve, and distribute high-quality and reliable wild type and mutant cultures of the green alga Chlamydomonas reinhardtii, as well as useful molecular reagents and kits for education and research."
Plant Comparative Genomics portal — Chlamydomonas reinhardtii resources from the Department of Energy Joint Genome Institute
Chlamydomonas reinhardtii cell, life cycle, strains, mating types — archived database.
Chlamydomonadaceae
Plant models
Hydrogen production
Chlorophyta species | Chlamydomonas reinhardtii | [
"Biology"
] | 3,061 | [
"Model organisms",
"Plant models"
] |
580,384 | https://en.wikipedia.org/wiki/Hopf%20fibration | In differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct point of the 2-sphere is mapped from a distinct great circle of the 3-sphere. Thus the 3-sphere is composed of fibers, where each fiber is a circle — one for each point of the 2-sphere.
This fiber bundle structure is denoted
S1 ↪ S3 → S2,
meaning that the fiber space S1 (a circle) is embedded in the total space S3 (the 3-sphere), and p (Hopf's map) projects S3 onto the base space S2 (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However it is not a trivial fiber bundle, i.e., S3 is not globally a product of S2 and S1 although locally it is indistinguishable from it.
This has many implications: for example the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group.
Stereographic projection of the Hopf fibration induces a remarkable structure on R3, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R3 is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometry). The loops are homeomorphic to circles, although they are not geometric circles.
There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space Cn+1 fibers naturally over the complex projective space CPn with circles as fibers, and there are also real, quaternionic, and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres:
By Adams's theorem such fibrations can occur only in these dimensions.
Definition and construction
For any natural number n, an n-dimensional sphere, or n-sphere, can be defined as the set of points in an (n+1)-dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this convention, the n-sphere, Sn, consists of the points (x1, x2, …, xn+1) in Rn+1 with x1² + x2² + ⋯ + xn+1² = 1. For example, the 3-sphere consists of the points (x1, x2, x3, x4) in R4 with x1² + x2² + x3² + x4² = 1.
The Hopf fibration of the 3-sphere over the 2-sphere can be defined in several ways.
Direct construction
Identify R4 with C2 and R3 with C×R (where C denotes the complex numbers) by writing
(x1, x2, x3, x4) as (z0, z1) = (x1 + ix2, x3 + ix4)
and
(x1, x2, x3) as (z, x) = (x1 + ix2, x3).
Thus S3 is identified with the subset of all (z0, z1) in C2 such that |z0|² + |z1|² = 1, and S2 is identified with the subset of all (z, x) in C×R such that |z|² + x² = 1. (Here, |z|² = zz* for a complex number z, where the star denotes the complex conjugate.) Then the Hopf fibration p is defined by
p(z0, z1) = (2z0z1*, |z0|² − |z1|²).
The first component is a complex number, whereas the second component is real. Any point on the 3-sphere must have the property that |z0|² + |z1|² = 1. If that is so, then p(z0, z1) lies on the unit 2-sphere in C×R, as may be shown by adding the squares of the absolute values of the complex and real components of p:
|2z0z1*|² + (|z0|² − |z1|²)² = (|z0|² + |z1|²)² = 1.
Furthermore, if two points on the 3-sphere map to the same point on the 2-sphere, i.e., if p(z0, z1) = p(w0, w1), then (w0, w1) must equal (λz0, λz1) for some complex number λ with |λ|² = 1. The converse is also true; any two points on the 3-sphere that differ by a common complex factor λ map to the same point on the 2-sphere. These conclusions follow, because the complex factor λ cancels with its complex conjugate λ* in both parts of p: in the complex component 2z0z1* and in the real component |z0|² − |z1|².
Since the set of complex numbers λ with |λ|² = 1 forms the unit circle in the complex plane, it follows that for each point m in S2, the inverse image p−1(m) is a circle, i.e., p−1(m) ≅ S1. Thus the 3-sphere is realized as a disjoint union of these circular fibers.
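A small numerical sketch of this construction, using the standard form of the map p(z0, z1) = (2z0z1*, |z0|² − |z1|²) as reconstructed above: it checks that the image lands on the 2-sphere and that multiplying (z0, z1) by a unit complex number does not move the image.

```python
import numpy as np

rng = np.random.default_rng(1)

def hopf(z0: complex, z1: complex) -> np.ndarray:
    """Hopf map S^3 -> S^2, p(z0, z1) = (2 z0 conj(z1), |z0|^2 - |z1|^2),
    returned as a point (x, y, s) of R^3 with x + iy the complex component."""
    c = 2.0 * z0 * np.conj(z1)
    return np.array([c.real, c.imag, abs(z0) ** 2 - abs(z1) ** 2])

# random point of the 3-sphere: a pair (z0, z1) with |z0|^2 + |z1|^2 = 1
v = rng.normal(size=4)
v /= np.linalg.norm(v)
z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])

point = hopf(z0, z1)
print(np.linalg.norm(point))            # 1.0: the image lies on the 2-sphere

# the whole circle (lambda*z0, lambda*z1) with |lambda| = 1 maps to the same point
for t in rng.uniform(0.0, 2.0 * np.pi, 5):
    lam = np.exp(1j * t)
    print(np.linalg.norm(hopf(lam * z0, lam * z1) - point))   # ~0 each time
```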
A direct parametrization of the 3-sphere employing the Hopf map is as follows.
or in Euclidean
Where runs over the range from to , runs over the range from to , and can take any value from to . Every value of , except and which specify circles, specifies a separate flat torus in the -sphere, and one round trip ( to ) of either or causes you to make one full circle of both limbs of the torus.
A mapping of the above parametrization to the 2-sphere is as follows, with points on the circles parametrized by .
Geometric interpretation using the complex projective line
A geometric interpretation of the fibration may be obtained using the complex projective line, CP1, which is defined to be the set of all complex one-dimensional subspaces of C2. Equivalently, CP1 is the quotient of C2∖{0} by the equivalence relation which identifies (z0, z1) with (λz0, λz1) for any nonzero complex number λ. On any complex line in C2 there is a circle of unit norm, and so the restriction of the quotient map to the points of unit norm is a fibration of S3 over CP1.
CP1 is diffeomorphic to a 2-sphere: indeed it can be identified with the Riemann sphere C∞, which is the one point compactification of C (obtained by adding a point at infinity). The formula given for p above defines an explicit diffeomorphism between the complex projective line and the ordinary 2-sphere in 3-dimensional space. Alternatively, the point (z0, z1) can be mapped to the ratio z1/z0 in the Riemann sphere C∞.
Fiber bundle structure
The Hopf fibration defines a fiber bundle, with bundle projection p: S3 → S2. This means that it has a "local product structure", in the sense that every point of the 2-sphere has some neighborhood U whose inverse image in the 3-sphere can be identified with the product of U and a circle: p−1(U) ≅ U × S1. Such a fibration is said to be locally trivial.
For the Hopf fibration, it is enough to remove a single point m from S2 and the corresponding circle p−1(m) from S3; thus one can take U = S2 ∖ {m}, and any point in S2 has a neighborhood of this form.
Geometric interpretation using rotations
Another geometric interpretation of the Hopf fibration can be obtained by considering rotations of the 2-sphere in ordinary 3-dimensional space. The rotation group SO(3) has a double cover, the spin group Spin(3), diffeomorphic to the 3-sphere. The spin group acts transitively on S2 by rotations. The stabilizer of a point is isomorphic to the circle group, consisting of the rotations about the axis joining that point to the sphere's center. It follows easily that the 3-sphere is a principal circle bundle over the 2-sphere, and this is the Hopf fibration.
To make this more explicit, there are two approaches: the group can either be identified with the group Sp(1) of unit quaternions, or with the special unitary group SU(2).
In the first approach, a vector (x1, x2, x3, x4) in R4 is interpreted as a quaternion by writing
q = x1 + x2i + x3j + x4k.
The 3-sphere is then identified with the versors, the quaternions of unit norm, those for which |q|² = 1, where |q|² = qq*, which is equal to x1² + x2² + x3² + x4² for q as above.
On the other hand, a vector (y1, y2, y3) in R3 can be interpreted as a pure quaternion
p = y1i + y2j + y3k.
Then, as is well known, the mapping
p ↦ qpq*
is a rotation in R3: indeed it is clearly an isometry, since |qpq*| = |q| |p| |q*| = |p|, and it is not hard to check that it preserves orientation.
In fact, this identifies the group of versors with the group of rotations of , modulo the fact that the versors and determine the same rotation. As noted above, the rotations act transitively on , and the set of versors which fix a given right versor have the form , where and are real numbers with . This is a circle subgroup. For concreteness, one can take , and then the Hopf fibration can be defined as the map sending a versor . All the quaternions , where is one of the circle of versors that fix , get mapped to the same thing (which happens to be one of the two rotations rotating to the same place as does).
Another way to look at this fibration is that every versor ω moves the plane spanned by to a new plane spanned by . Any quaternion , where is one of the circle of versors that fix , will have the same effect. We put all these into one fibre, and the fibres can be mapped one-to-one to the -sphere of rotations which is the range of .
This approach is related to the direct construction by identifying a quaternion with the matrix:
This identifies the group of versors with , and the imaginary quaternions with the skew-hermitian matrices (isomorphic to ).
Explicit formulae
The rotation induced by a unit quaternion is given explicitly by the orthogonal matrix
Here we find an explicit real formula for the bundle projection by noting that the fixed unit vector along the axis, , rotates to another unit vector,
which is a continuous function of . That is, the image of is the point on the -sphere where it sends the unit vector along the axis. The fiber for a given point on consists of all those unit quaternions that send the unit vector there.
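The rotation picture can also be checked numerically. The sketch below fixes the unit vector k (the z-axis) as the reference direction, a choice made only for this illustration; it conjugates k by a random unit quaternion and confirms that right-multiplying by any versor of the form cos t + k sin t leaves the image unchanged, so those versors sweep out the fiber.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as arrays (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def hopf(q):
    """Send the versor q to q k q*, a pure quaternion, i.e. a point of S^2."""
    k = np.array([0.0, 0.0, 0.0, 1.0])
    return qmul(qmul(q, k), conj(q))[1:]      # drop the (zero) real part

rng = np.random.default_rng(2)
q = rng.normal(size=4)
q /= np.linalg.norm(q)                        # a random unit quaternion

point = hopf(q)
print(np.linalg.norm(point))                  # 1.0: the image lies on the 2-sphere

# right-multiplying by cos(t) + k sin(t) stays in the same fiber
for t in rng.uniform(0.0, 2.0 * np.pi, 5):
    r = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])
    print(np.linalg.norm(hopf(qmul(q, r)) - point))   # ~0 each time
```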
We can also write an explicit formula for the fiber over a point in . Multiplication of unit quaternions produces composition of rotations, and
is a rotation by around the axis. As varies, this sweeps out a great circle of , our prototypical fiber. So long as the base point, , is not the antipode, , the quaternion
will send to . Thus the fiber of is given by quaternions of the form , which are the points
Since multiplication by acts as a rotation of quaternion space, the fiber is not merely a topological circle, it is a geometric circle.
The final fiber, for , can be given by defining to equal , producing
which completes the bundle. But note that this one-to-one mapping between and is not continuous on this circle, reflecting the fact that is not topologically equivalent to .
Thus, a simple way of visualizing the Hopf fibration is as follows. Any point on the -sphere is equivalent to a quaternion, which in turn is equivalent to a particular rotation of a Cartesian coordinate frame in three dimensions. The set of all possible quaternions produces the set of all possible rotations, which moves the tip of one unit vector of such a coordinate frame (say, the vector) to all possible points on a unit -sphere. However, fixing the tip of the vector does not specify the rotation fully; a further rotation is possible about the axis. Thus, the -sphere is mapped onto the -sphere, plus a single rotation.
The rotation can be represented using the Euler angles θ, φ, and ψ. The Hopf mapping maps the rotation to the point on the 2-sphere given by θ and φ, and the associated circle is parametrized by ψ. Note that when θ = π the Euler angles φ and ψ are not well defined individually, so we do not have a one-to-one mapping (or a one-to-two mapping) between the 3-torus of (θ, φ, ψ) and S3.
Fluid mechanics
If the Hopf fibration is treated as a vector field in 3 dimensional space then there is a solution to the (compressible, non-viscous) Navier–Stokes equations of fluid dynamics in which the fluid flows along the circles of the projection of the Hopf fibration in 3 dimensional space. The size of the velocities, the density and the pressure can be chosen at each point to satisfy the equations. All these quantities fall to zero going away from the centre. If a is the distance to the inner ring, the velocities, pressure and density fields are given by:
for arbitrary constants and . Similar patterns of fields are found as soliton solutions of magnetohydrodynamics:
Generalizations
The Hopf construction, viewed as a fiber bundle p: S3 → CP1, admits several generalizations, which are also often known as Hopf fibrations. First, one can replace the projective line by an n-dimensional projective space. Second, one can replace the complex numbers by any (real) division algebra, including (for n = 1) the octonions.
Real Hopf fibrations
A real version of the Hopf fibration is obtained by regarding the circle S1 as a subset of R2 in the usual way and by
identifying antipodal points. This gives a fiber bundle S1 → RP1 over the real projective line with fiber S0 = {1, −1}. Just as CP1 is diffeomorphic to a sphere, RP1 is diffeomorphic to a circle.
More generally, the n-sphere Sn fibers over real projective space RPn with fiber S0.
Complex Hopf fibrations
The Hopf construction gives circle bundles p : S2n+1 → CPn over complex projective space. This is actually the restriction of the tautological line bundle over CPn to the unit sphere in Cn+1.
Quaternionic Hopf fibrations
Similarly, one can regard S4n+3 as lying in Hn+1 (quaternionic n-space) and factor out by unit quaternion (= S3) multiplication to get the quaternionic projective space HPn. In particular, since S4 = HP1, there is a bundle S7 → S4 with fiber S3.
Octonionic Hopf fibrations
A similar construction with the octonions yields a bundle S15 → S8 with fiber S7. But the sphere S31 does not fiber over S16 with fiber S15. One can regard S8 as the octonionic projective line OP1. Although one can also define an octonionic projective plane OP2, the sphere S23 does not fiber over OP2 with fiber S7.
Fibrations between spheres
Sometimes the term "Hopf fibration" is restricted to the fibrations between spheres obtained above, which are
S1 → S1 with fiber S0
S3 → S2 with fiber S1
S7 → S4 with fiber S3
S15 → S8 with fiber S7
As a consequence of Adams's theorem, fiber bundles with spheres as total space, base space, and fiber can occur only in these dimensions.
Fiber bundles with similar properties, but different from the Hopf fibrations, were used by John Milnor to construct exotic spheres.
Geometry and applications
The Hopf fibration has many implications, some purely attractive, others deeper. For example, stereographic projection S3 → R3 induces a remarkable structure in R3, which in turn illuminates the topology of the bundle . Stereographic projection preserves circles and maps the Hopf fibers to geometrically perfect circles in R3 which fill space. Here there is one exception: the Hopf circle containing the projection point maps to a straight line in R3 — a "circle through infinity".
The fibers over a circle of latitude on S2 form a torus in S3 (topologically, a torus is the product of two circles) and these project to nested toruses in R3 which also fill space. The individual fibers map to linking Villarceau circles on these tori, with the exception of the circle through the projection point and the one through its opposite point: the former maps to a straight line, the latter to a unit circle perpendicular to, and centered on, this line, which may be viewed as a degenerate torus whose minor radius has shrunken to zero. Every other fiber image encircles the line as well, and so, by symmetry, each circle is linked through every circle, both in R3 and in S3. Two such linking circles form a Hopf link in R3
Hopf proved that the Hopf map has Hopf invariant 1, and therefore is not null-homotopic. In fact it generates the homotopy group π3(S2) and has infinite order.
In quantum mechanics, the Riemann sphere is known as the Bloch sphere, and the Hopf fibration describes the topological structure of a quantum mechanical two-level system or qubit. Similarly, the topology of a pair of entangled two-level systems is given by the Hopf fibration S7 → S4. Moreover, the Hopf fibration is equivalent to the fiber bundle structure of the Dirac monopole.
Hopf fibration also found applications in robotics, where it was used to generate uniform samples on SO(3) for the probabilistic roadmap algorithm in motion planning. It also found application in the automatic control of quadrotors.
See also
Villarceau circles
Notes
References
External links
Dimensions Math Chapters 7 and 8 illustrate the Hopf fibration with animated computer graphics.
An Elementary Introduction to the Hopf Fibration by David W. Lyons (PDF)
YouTube animation showing dynamic mapping of points on the 2-sphere to circles in the 3-sphere, by Professor Niles Johnson.
YouTube animation of the construction of the 120-cell By Gian Marco Todesco shows the Hopf fibration of the 120-cell.
Video of one 30-cell ring of the 600-cell from http://page.math.tu-berlin.de/~gunn/.
Interactive visualization of the mapping of points on the 2-sphere to circles in the 3-sphere
Algebraic topology
Geometric topology
Differential geometry
Fiber bundles
Homotopy theory | Hopf fibration | [
"Mathematics"
] | 3,861 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology",
"Geometric topology"
] |
580,448 | https://en.wikipedia.org/wiki/Cape%20Hatteras%20Lighthouse | Cape Hatteras Light is a lighthouse located on Hatteras Island in the Outer Banks, in the town of Buxton, North Carolina, and is part of the Cape Hatteras National Seashore. At 210 feet from base to tip, it is the tallest lighthouse in the U.S. Its distinctive paint pattern makes it easy to recognize, and it is often ranked high on lists of the most beautiful and famous lighthouses in the US.
The Outer Banks are a group of barrier islands on the North Carolina coast that separate the Atlantic Ocean from the coastal sounds and inlets. Atlantic currents in this area made for excellent travel for ships, except in the area of Diamond Shoals, just offshore at Cape Hatteras. Nearby, the warm Gulf Stream ocean current collides with the colder Labrador Current, creating ideal conditions for powerful ocean storms and sea swells. The large number of ships that ran aground because of these shifting sandbars gave this area the nickname "Graveyard of the Atlantic." It also led the U.S. Congress to authorize the construction of the Cape Hatteras Light. Its 198-foot height makes it the tallest brick lighthouse structure in the United States and 2nd in the world. Since its base is almost at sea level, it is only the 15th highest light in the United States, the first 14 being built on higher ground.
Hatteras Island Visitor Center and Museum of the Sea
Adjacent to the Cape Hatteras Light is the Hatteras Island Visitor Center and Museum of the Sea, operated by the National Park Service, which is located in the historic Cape Hatteras Lighthouse Double Keepers' Quarters. Exhibits include the history, maritime heritage and natural history of the Outer Banks and the lighthouse. The visitor center offers information about the Cape Hatteras National Seashore, ranger programs and a bookstore.
History
Original lighthouse
On July 10, 1794, after Secretary of the Treasury Alexander Hamilton requested a lighthouse at this location because his ship had almost wrecked and sunk there on its way to the New World (giving the light the nickname "Hamilton's light"), Congress appropriated $44,000 "for erecting a lighthouse on the headland of Cape Hatteras and a lighted beacon on Shell Castle Island, in the harbor of Ocracoke in the State of North Carolina." The Cape Hatteras Lighthouse was constructed in 1802.
The Cape Hatteras light marked very dangerous shoals that extend from the cape for a distance of . The original tower was built of dark sandstone and retained its natural color. The original light consisted of 18 lamps; with reflectors, and was above sea level. It was visible in clear weather for a distance of .
In July 1851, Lt. David D. Porter, USN, reported as follows:
The improvement in the light referred to had begun in 1845 when the reflectors were changed from 14 to . In 1848 the 18 lamps were changed to 15 lamps with reflectors and the light had become visible in clear weather at a distance of . In 1854 a first-order Fresnel lens with flashing white light was substituted for the old reflecting apparatus, and the tower was raised to .
In 1860 the Lighthouse Board reported that Cape Hatteras Lighthouse required protection due to the outbreak of the Civil War. In 1862 the Board reported "Cape Hatteras, lens and lantern destroyed, light reexhibited."
Second lighthouse
At the behest of mariners and officers of the U.S. Navy, Congress appropriated $80,000 to the United States Lighthouse Board to construct a new beacon at Cape Hatteras in 1868.
Completed in just under two years under the direction of brevet Brigadier General J. H. Simpson of the U.S. Army Corps of Engineers, the new Cape Hatteras lighthouse cost $167,000. The new tower, from which the first-order light was first exhibited on December 16, 1871, was the tallest brick lighthouse tower in the world. It was above ground and the focal height of the light was above water. The old tower was demolished in February 1871, leaving ruins that lasted until finally eroded away in a storm in 1980.
In the spring of 1879 the tower was struck by lightning. Cracks subsequently appeared in the masonry walls, which were remedied by placing a metal rod to connect the iron work of the tower with an iron disk sunk in the ground. In 1912 the candlepower of the light was increased from 27,000 to 80,000.
Ever since the completion of the new tower in 1870, there had begun a very gradual encroachment of the sea upon the beach. This did not become serious, however, until 1919, when the high water line had advanced to about 120 ft (36.5) from the base of the tower. Since that time the surf gnawed steadily toward the base of the tower until 1935, when the site was finally reached by the surf. Several attempts were made to arrest this erosion, but dikes and breakwaters had been of no avail. In 1935, therefore, the tower light was replaced by an Aerobeacon atop a four-legged steel skeleton tower, placed farther back from the sea on a sand dune above the sea, visible for . The abandoned brick tower was then put in the custody of the National Park Service.
The Civilian Conservation Corps and Works Progress Administration erected a series of wooden revetments which checked the wash that was carrying away the beach. In 1942, when German U-boats began attacking ships just offshore, the Coast Guard resumed its control over the brick tower and manned it as a lookout station until 1945. By then, due to accretion of sand on the beach, the brick tower was 500 to inland from the sea and again tenable as a site for the light, which was placed back in commission January 23, 1950.
The new light consisted of an aviation-type rotating beacon of 250,000 candlepower, visible , and flashing white every 7.5 seconds. The steel skeleton tower, known as the Buxton Woods Tower, was retained by the Coast Guard in the event that the brick tower again became endangered by erosion, requiring that the light again be moved.
The light displays a highly visible black and white diagonal daymark paint scheme. It shares similar markings with the St. Augustine Light. Another lighthouse with helical markings, a red and white 'candy cane stripe', is the White Shoal Light (Michigan), which is the only true 'barber pole' lighthouse in the United States. The Cape Hatteras light's distinctive spiral paint job is consistent with other North Carolina black-and-white lighthouses, "each with their own pattern to help sailors identify lighthouses during daylight hours."
Today the Coast Guard owns and operates the navigational equipment, while the National Park Service maintains the tower as a historic structure. The Hatteras Island Visitor Center, formerly the Double Keepers Quarters located next to the lighthouse, elaborates on the Cape Hatteras story and the lifestyle on the Outer Banks. Cape Hatteras Lighthouse, the tallest in the United States, stands from the bottom of the foundation to the peak of the roof. Reaching the light, which shines above the mean high-water mark, requires climbing 268 steps. An order of 1,250,000 bricks was used in the construction of the lighthouse and the principal keeper's quarters. The light is still active. Up until 1970 the light was still using its First Order Fresnel lens. In 1970 it was replaced by an Aerobeacon. The light is used and maintained by the U.S. Coast Guard as an Aid to Navigation, protecting mariners from the icy depths of the Graveyard of the Atlantic. The active light can be seen from anywhere on Hatteras Island.
Relocation
In 1999, with the sea again encroaching, the Cape Hatteras lighthouse had to be moved from its original location at the edge of the ocean to safer ground. Due to erosion of the shore, the lighthouse was just from the water's edge and was in imminent danger. The move was a total distance of to the southwest, placing the lighthouse from the current shoreline. All other support buildings at the site were also moved at the same time. All support buildings were placed back in positions that maintained their original compass orientations and distance/height relationship to the lighthouse. International Chimney Corp. of Buffalo, New York was awarded the contract to move the lighthouse, assisted by, among other contractors, Expert House Movers. The move was controversial at the time with speculation that the structure would not survive the move, resulting in lawsuits that were later dismissed. Despite some opposition, work progressed and the move was completed on September 14, 1999.
The Cape Hatteras Light House Station Relocation Project became known as "The Move of the Millennium." General contractor International Chimney and Expert House Movers won the 40th Annual Outstanding Civil Engineering Achievement Award from the American Society of Civil Engineers in 1999. The Cape Hatteras Lighthouse is one of the tallest masonry structures ever moved (200 feet tall and weighing 5,000 tons).
Specifications
Construction material: Approximately 1,250,000 bricks
Height above sea level:
Height of the structure:
Daymark: black double helix spiral stripes on white background
Number of steps: 257 steps to reach the light
Brightness: 800,000 candle power from each of two 1,000-watt lamps
Flash pattern: 1 second flash, 6.5 second eclipse
Visibility: From 20 nautical miles (37 km) in clear conditions. In exceptional conditions, it has been seen from 51 miles (94 km) out.
See also
List of tallest lighthouses
References
Further reading
External links
Moving the Cape Hatteras Lighthouse - Cape Hatteras National Seashore (U.S. National Park Service)
Video showing the Cape Hatteras Lighthouse from the ground and the view from the top from 2016
Lighthouses completed in 1802
Lighthouses completed in 1870
Historic American Buildings Survey in North Carolina
Lighthouses on the National Register of Historic Places in North Carolina
National Historic Landmarks in North Carolina
Relocated buildings and structures in North Carolina
Hatteras Island
Museums in Dare County, North Carolina
Lighthouse museums in North Carolina
Historic Civil Engineering Landmarks
Civilian Conservation Corps in North Carolina
National Historic Landmark lighthouses
Works Progress Administration in North Carolina
Cape Hatteras National Seashore
National Register of Historic Places in Dare County, North Carolina
Historic districts on the National Register of Historic Places in North Carolina
1802 establishments in North Carolina
Brick buildings and structures in North Carolina | Cape Hatteras Lighthouse | [
"Engineering"
] | 2,122 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
580,494 | https://en.wikipedia.org/wiki/Backlink | From the point of view of a given web resource (referent), a backlink is a regular hyperlink on another web resource (the referrer) that points to the referent. A web resource may be (for example) a website, web page, or web directory.
A backlink is a reference comparable to a citation. The quantity, quality, and relevance of backlinks for a web page are among the factors that search engines like Google evaluate in order to estimate how important the page is. PageRank calculates the score for each web page based on how all the web pages are connected among themselves, and is one of the variables that Google Search uses to determine how high a web page should go in search results. This weighting of backlinks is analogous to citation analysis of books, scholarly papers, and academic journals. A Topical PageRank has been researched and implemented as well, which gives more weight to backlinks coming from pages on the same topic as the target page.
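As a rough illustration of the idea that a page's score depends on the scores of the pages linking to it, here is a minimal power-iteration sketch of PageRank on a toy link graph; the damping factor 0.85 and the four-page graph are assumptions of the sketch, not Google's actual implementation.

```python
def pagerank(links, damping=0.85, iterations=100):
    """Toy PageRank: `links` maps each page to the pages it links out to.
    A backlink to page p is any occurrence of p in another page's outgoing list."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                 # dangling page: spread evenly
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:      # each outgoing link is a "vote"
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # C collects the most backlinks and ranks first
```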
Some other words for backlink are incoming link, inbound link, inlink, inward link, and citation.
Wikis
Backlinks are offered in Wikis, but usually only within the bounds of the Wiki itself and enabled by the database backend. MediaWiki specifically offers the "What links here" tool; some older Wikis, especially the first WikiWikiWeb, exposed the backlink functionality in the page title.
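A "What links here" style lookup is essentially the inverse of the outgoing-link map stored in the database backend; a minimal sketch (the page names are made up):

```python
from collections import defaultdict

outgoing = {
    "Main Page": ["Help", "Sandbox"],
    "Help": ["Sandbox"],
    "Sandbox": [],
}

# invert the outgoing-link map to get, for each page, the pages linking to it
backlinks = defaultdict(list)
for page, targets in outgoing.items():
    for target in targets:
        backlinks[target].append(page)

print(backlinks["Sandbox"])   # ['Main Page', 'Help']: the "What links here" list
```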
Backlinks and search engines
Search engines often use the number of backlinks that a website has as one of the most important factors for determining that website's search engine ranking, popularity and importance. Google's description of its PageRank system (January 1998), for instance, noted that "Google interprets a link from page A to page B as a vote, by page A, for page B." Knowledge of this form of search engine ranking has fueled a portion of the search engine optimization (SEO) industry commonly termed linkspam, where a company attempts to place as many inbound links as possible to their site regardless of the context of the originating site. In January 2017, Google launched the Penguin 4 update, which devalued such link spam practices.
Search engine rankings are regarded as a crucial parameter for online business and for the conversion rate of visitors to any website, particularly when it comes to online shopping. Blog commenting, guest blogging, article submission, press release distribution, social media engagements, and forum posting can be used to increase backlinks.
Websites often employ SEO techniques to increase the number of backlinks pointing to their website. Some methods are free for use by everyone whereas some methods, like linkbaiting, require quite a bit of planning and marketing to work. There are also paid techniques to increase the number of backlinks to a target site. For example, private blog networks can be used to purchase backlinks. It has been estimated that the average cost of buying a link in 2019 was $291.55 and $391.55, when marketing blogs were excluded from the calculation.
There are several factors that determine the value of a backlink. Backlinks from authoritative sites on a given topic are highly valuable. If both sites and pages have content geared toward the topic, the backlink is considered relevant and believed to have strong influence on the search engine rankings of the web page granted the backlink. A backlink represents a favorable 'editorial vote' for the receiving webpage from another granting webpage. Another important factor is the anchor text of the backlink. Anchor text is the descriptive labeling of the hyperlink as it appears on a web page. Search engine bots (i.e., spiders, crawlers, etc.) examine the anchor text to evaluate how relevant it is to the content on a webpage. Backlinks can be generated by submissions, such as directory submissions, forum submission, social bookmarking, business listing, blog submissions, etc. Anchor text and webpage content congruency are highly weighted in search engine results page (SERP) rankings of a webpage with respect to any given keyword query by a search engine user.
Changes to the algorithms that produce search engine rankings can place a heightened focus on relevance to a particular topic. While some backlinks might be from sources containing highly valuable metrics, they could also be unrelated to the consumer's query or interest. An example of this would be a link from a popular shoe blog (with valuable metrics) to a site selling vintage pencil sharpeners. While the link appears valuable, it provides little to the consumer in terms of relevance.
See also
Link farms
Linkback
Methods of website linking
Internal links
PageRank
Search engine optimization
Search engine results page
Trackback
Search engine optimization metrics
Website audit
References
Internet terminology
URL
Hypertext | Backlink | [
"Technology"
] | 991 | [
"Computing terminology",
"Internet terminology"
] |
580,665 | https://en.wikipedia.org/wiki/Flame%20holder | A flame holder is a component of a jet engine designed to help maintain continual combustion. In a scramjet engine the residence time of the fuel is very low, and complete penetration of the fuel into the flow will not occur. To compensate for these conditions, flame holders are used.
All continuous-combustion jet engines require a flame holder. A flame holder creates a low-speed eddy in the engine to prevent the flame from being blown out. The design of the flame holder is an issue of balance between a stable eddy and drag.
The simplest design, often used in amateur projects, is the can-type flame holder, which consists of a can covered in small holes. Much more effective is the H-gutter flame holder, which is shaped like a letter H with a curve facing and opposing the flow of air. Even more effective, however, is the V-gutter flame holder, which is shaped like a V with the point in the direction facing the flow of air. Some studies have suggested that adding a small amount of base bleed to a V-gutter helps reduce drag without reducing effectiveness. The most effective of the flame holders are the step type flame holder and the strut type flame holder.
The first mathematical model of a flame holder was proposed in 1953.
See also
Index of aviation articles
AVPIN - A monofuel used to power turbojet starter motors.
Components of jet engines
References
Jet engines | Flame holder | [
"Technology"
] | 284 | [
"Jet engines",
"Engines"
] |
580,713 | https://en.wikipedia.org/wiki/Virtual%20sex | Virtual sex is sexual activity where two or more people (or one person and a virtual character) gather together via some form of communications equipment to arouse each other, often by the means of transmitting sexually explicit messages. Virtual sex describes the phenomenon, no matter the communications equipment used.
Digital remote stimulation involves the use of electronic sex toys to stimulate a person in the genital area from a distance.
Camming is virtual sex that is over video chat from services that provide it.
Cybersex is virtual sex typed over the Internet, including IRC, e-mail, instant messaging, chat rooms, webcam, role-playing games, etc.
Phone sex is virtual sex spoken over the telephone.
Sexting is virtual sex sent via mobile phone network text messaging. The advent of cell phones with built-in digital cameras has undoubtedly added new dimensions to these activities.
Modern consumer virtual reality headsets allow users to engage in virtual sex through simulated environments, either with other humans or with virtual characters.
These terms and practices continuously evolve as technologies and methods of communication change.
Increases in Internet connectivity, bandwidth availability, and the proliferation of webcams have also had implications for virtual sex enthusiasts. It is increasingly common for these activities to include the exchange of pictures or motion video. There are companies which allow paying customers to watch people have live sex or masturbate and at the same time allow themselves to be watched as well. Recently, devices have been introduced and marketed to allow remote-controlled stimulation.
Consent
An important part of taking part in virtual sex, or sexual acts, would be consent. The ethics of sexting are already being established by young people, for whom consent figures as a critical concept. Distinctions between positive and negative experiences of sexting are mostly dependent on whether consent was given to make and share the images. It is illegal for any person under the age of 18 to consent to any form of virtual sex involving nude pictures, because images of minors are considered child pornography.
Addiction
There are approximately one-half million to two million sex addicts in the world who have access to the Internet, and the prospect of virtual sex on the Internet is appealing to them. The Internet opens up a world where people can reinvent themselves and try on a completely different online persona; they can freely experiment with and explore a variety of new, hidden or repressed sexual behaviors, fetishes and sexual fantasies. This can feel liberating, but can also be extremely dangerous, as it has the potential to become addicting and have adverse effects on other aspects of cybernauts' lives. What attracts people to sex via the Internet can be explained by the "Triple A" engine of Affordability, Accessibility, and Anonymity. The "Triple A" engine represents the risk factors for people who are already susceptible to sexual compulsivity or psychological vulnerability related to sexual compulsivity.
Affordability is about the cheap price of virtual sex. Pornography magazines and videos used to have a price of $20 or more per individual piece, while today anyone can have access to an unlimited amount of pornographic content for the price of a $20 monthly subscription to the internet. Accessibility is a person's capacity to have access to the Internet - a service that is virtually accessible to anyone in the world. Finally, Anonymity references the ability to have access to sexual content without disclosing one's true identity; this can feel empowering and make it that much easier to have sex, as one would not have to risk being seen by someone they know and feel ashamed or worried about possible gossip and rumors about them.
When does healthy virtual sex become a pathology? Addiction is defined by 3 main characteristics: compulsivity (not being able to freely choose when to stop or continue a behavior), continuation of the behavior despite adverse consequences, and obsession with the activity. When one loses control and lets virtual sex negatively impact at least one aspect of their life, this is when it stops being healthy. According to clinical studies, the main adverse consequences of virtual sex addiction concern the damage it causes in marital and other romantic relationships, disrupted due to online affairs and online sexual compulsivity. In one research study, online affairs and sexual compulsivity were reported by 53% of the virtual sex addicts interviewed to be the cause of disruption of their romantic relationships.
Virtual sex can become a coping mechanism to temporarily escape real-life problems. However, it is not an effective one and is even potentially harmful, as the underlying issues will go on unaddressed and only become more complex with time. Generally, there are a couple of patterns explaining why one can become addicted to virtual sex and the ways one can use it as a coping mechanism. Often, it is used to cope with emotional problems. Virtual sex can serve as a distraction from painful emotions, such as loneliness, stress, and anxiety, as consuming online pornographic content makes the addict feel more confident, desirable, and excited, creating a numbing effect. Another pattern involves young, insecure, socially awkward or emotionally troubled people who use the Internet to interact with others online rather than in person in order to avoid rejection from a real person. On the Internet they can find a virtually unlimited number of people who seem interesting and interested in them. They find the online world more comforting and safe, as it is harder to pick up on social cues of disapproval or judgement. Gradually online friends can become more "real" than offline friends, and an online friend can become an opportunity for an online affair and cybersex. Partners who are cheated on through online affairs feel that online affairs are just as painful as offline ones - they are a significant source of stress, make them feel betrayed as they were lied to, and make them feel insecure as they will negatively compare themselves with the online women or men. Virtual sex can become an escape and a new addiction for recovering sex addicts who are going through a stressful period in their life. Feeling triggered by life problems, prior sex addicts can find themselves using online pornographic content as a quick and easy, but temporary, fix to help them soothe themselves, forget about life's problems, and feel better about themselves. Another pattern is when an individual takes advantage of online sexual content to explore forbidden, hidden, and repressed sexual fantasies, which can become addicting and completely absorb the person into this virtual space.
Long-distance relationships
Approximately 14 million people in the United States are in a long distance relationship. Among young adults, 40% to 50% are in a long distance relationship at any given time, as are 75% of college students at least at one moment during their studies. It is expected that the number of long distance relationships will increase due to the globalized nature of today's world. Hence, the internet might be a useful tool to make long distance relationships work. One way couples in long distance relationships engage in sexual activity online is through sexting. Self-expression through sexting between partners can create a feeling of intimacy and closeness even at a distance. Long distance relationships may be more susceptible to sexual boredom, hence sexting can be an effective way of keeping partners sexually engaged at a distance. In one study, the associations between sexting and feelings of closeness were examined. It was found that more frequent sexting in a long distance relationship was not predictive of higher interpersonal closeness between the partners. However, a correlation was found between sexting and sexual satisfaction, as well as relationship satisfaction.
See also
Red Light Center
Teledildonics
Virtual reality sex
Deuel, Nancy R. 1996. Our passionate response to virtual reality. Computer-mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives, p. 129-146. Ed. by Susan C. Herring. John Benjamins Publishing Company, Philadelphia.
Lunceford, Brett. “Virtual Sex.” In Encyclopedia of Gender in Media, edited by Mary Kosut. Thousand Oaks, CA: Sage, 2012.
References
External links
"Cyberwatch: Online Dating and Cybersex"
"Teledildonics Products and Teledildonic Devices" at Tinynibbles.com
"Cyborgasms: Cybersex Amongst Multiple-selves and Cyborgs in the Narrow-bandwidth Space of American Online Chat Rooms", 1996 MA Dissertation by Robin B. Hamman
Sexuality and computing | Virtual sex | [
"Technology"
] | 1,696 | [
"Computing and society",
"Sexuality and computing"
] |
580,715 | https://en.wikipedia.org/wiki/Mammary%20intercourse | Mammary intercourse is a sex act, performed as either foreplay or as non-penetrative sex, that involves the stimulation of a man's penis by a woman's breasts and vice versa. It involves placing the penis between a woman's breasts and moving the penis up and down to simulate sexual penetration and to create sexual pleasure.
Practice
Mammary intercourse involves a man kneeling or sitting on a woman's stomach or chest, placing his erect penis on her cleavage, and rubbing or thrusting while the breasts are squeezed around the penile shaft, by either the woman or the man, creating tightness similar to masturbation, and in simulation of penetrative sex. A lubricant or saliva may be used between the breasts or on the penis. Alternatively, the woman can tighten her breasts around the penis and move them back and forth. Other positions involve either the man standing while the woman kneels, or the man laying back with the woman on top.
In some cases, mammary intercourse can be combined with oral sex. Mammary intercourse may be carried out face to face or head to tail.
Mammary intercourse is mostly suited for women with naturally larger breasts, while it is recommended that women with smaller breasts be on top. Smaller female breasts, however, tend to be more sensitive than larger ones.
The woman normally does not receive direct sexual stimulation during mammary intercourse, other than the erotic stimulation of bringing her partner to orgasm, without sexual penetration. However, Alex Comfort has said that mammary intercourse can produce orgasm in women with sensitive breasts (what Margot Anand terms local orgasms of the breast), and it was one of the nine substitute exercises for penetrative sexual activities, as detailed in the Paradis Charnels of 1903. It is possible for the man to perform a fingering on the woman during mammary intercourse.
Since mammary intercourse is a non-penetrative sex act, the risk of passing a sexually transmitted infection (STI) that requires direct contact between the mucous membranes and pre-ejaculate or semen is greatly reduced. HIV is among the infections that require such direct contact and is therefore very unlikely to be transmitted via mammary intercourse. A study of the condom usage habits of New Zealand's sex workers said that they offered various safe sex alternatives to vaginal sex to clients who refused to wear a condom. One sex worker said that mammary intercourse was one alternative used; mammary intercourse performed by a woman with large breasts felt to the client like penetrative vaginal sex.
Depictions of the practice, at least in advertising, have been described as pornographic or erotic. Mammary intercourse has sometimes been considered a perversion. Sigmund Freud, however, considered such extensions of sexual interest to fall within the range of the normal, unless marked out by exclusivity (i.e. the repudiation of all other forms of sexual contact).
Slang terms
Slang terms for mammary intercourse include:
Titty-fuck, titjob or titfuck in the United States
Tit wank or French fuck in the United Kingdom – the latter term dating back to the 1930s; while a more jocular equivalent is a trip down mammary lane.
See also
Breast fetishism
Kama Sutra
Non-penetrative sex
Pearl necklace (sexual act)
References
Further reading
External links
Inter-Mammary Intercourse
Breast
Sexual acts
Sexual slang
Non-penetrative sex | Mammary intercourse | [
"Biology"
] | 717 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
580,852 | https://en.wikipedia.org/wiki/Mononuclear%20phagocyte%20system | In immunology, the mononuclear phagocyte system or mononuclear phagocytic system (MPS) also known as the macrophage system is a part of the immune system that consists of the phagocytic cells located in reticular connective tissue. The cells are primarily monocytes and macrophages, and they accumulate in lymph nodes and the spleen. The Kupffer cells of the liver and tissue histiocytes are also part of the MPS. The mononuclear phagocyte system and the monocyte macrophage system refer to two different entities, often mistakenly understood as one.
"Reticuloendothelial system" is an older term for the mononuclear phagocyte system, but it is used less commonly now, as it is understood that most endothelial cells are not macrophages.
The mononuclear phagocyte system is also a somewhat dated concept trying to combine a broad range of cells, and should be used with caution.
Cell types and locations
The spleen is the second largest unit of the mononuclear phagocyte system. The monocyte is formed in the bone marrow and transported by the blood; it migrates into the tissues, where it transforms into a histiocyte or a macrophage.
Macrophages are diffusely scattered in the connective tissue and in liver (Kupffer cells), spleen and lymph nodes (sinus histiocytes), lungs (alveolar macrophages), and central nervous system (microglia). The half-life of blood monocytes is about 1 day, whereas the life span of tissue macrophages is several months or years. The mononuclear phagocyte system is part of both humoral and cell-mediated immunity. The mononuclear phagocyte system has an important role in defense against microorganisms, including mycobacteria, fungi, bacteria, protozoa, and viruses. Macrophages remove senescent erythrocytes, leukocytes, and megakaryocytes by phagocytosis and digestion.
Functions
Formation of new red blood cells (RBCs) and white blood cells (WBCs).
Destruction of senescent RBCs.
Formation of plasma proteins.
Formation of bile pigments.
Storage of iron. In the liver, Kupffer cells store excess iron from catabolism of heme from the breakdown of red blood cells. In bone marrow and spleen, iron is stored in MPS cells mostly as ferritin; in iron overload states, most of the iron is stored as hemosiderin.
Clearance of heparin via heparinases
Hematopoiesis
The various cell types of the mononuclear phagocyte system are all part of the myeloid lineage from the CFU-GEMM (precursor of granulocytes, erythrocytes, monocytes and megakaryocytes)
References
External links
Immune system | Mononuclear phagocyte system | [
"Biology"
] | 635 | [
"Immune system",
"Organ systems"
] |
580,863 | https://en.wikipedia.org/wiki/Trowel | A trowel is a small hand tool used for digging, applying, smoothing, or moving small amounts of viscous or particulate material. Common varieties include the masonry trowel, garden trowel, and float trowel.
A power trowel is a much larger gasoline or electrically powered walk-behind device with rotating paddles used to finish concrete floors.
Hand trowel
Numerous forms of trowel are used in masonry, concrete, and drywall construction, as well as applying adhesives such as those used in tiling and laying synthetic flooring. Masonry trowels are traditionally made of forged carbon steel, but some newer versions are made of cast stainless steel, which has longer wear and is rust-free. These include:
Bricklayer's trowel has an elongated triangular-shaped flat metal blade, used by masons for leveling, spreading, and shaping cement, plaster, and mortar.
Pointing trowel, a scaled-down version of a bricklayer's trowel, for small jobs and repair work.
Tuck pointing trowel is long and thin, designed for packing mortar between bricks.
Float trowel or finishing trowel is usually rectangular, used to smooth, level, or texture the top layer of hardening concrete. A flooring trowel has one rectangular end and one pointed end, made to fit corners. A grout float is used for applying and working grout into gaps in floor and wall tile.
Gauging trowel has a rounded tip, used to mix measured proportions of the different ingredients for quick set plaster.
Pool trowel is a flat-bladed tool with rounded ends used to apply coatings to concrete, especially on swimming pool decks.
Margin trowel is a small rectangular bladed tool used to move, apply, and smooth small amounts of masonry or adhesive material.
Notched trowel is a rectangular shaped tool with regularly spaced notches along one or more sides used to apply adhesive when adhering tile, or laying synthetic floor surfaces.
Other forms of trowel include:
Garden trowel, a hand tool with a pointed, scoop-shaped metal blade and wooden, metal, or plastic handle. It is comparable to a spade or shovel, but is generally much smaller, being designed for use with one hand. It is used for breaking up earth, digging small holes, especially for planting and weeding, mixing in fertilizer or other additives, and transferring plants to pots.
Camping trowel, a hand tool used in the outdoors to securely stake and prop up a tent, channel a small stream of water, level a sleeping surface, dig a cathole to leave no trace of waste, and do many more outdoor survival chores. Camping trowels can sometimes be made of lighter weight materials than gardening trowels to make them easier to carry in a backpack, or they can be made of heavier materials for chopping kindling or shoveling soil without having to awkwardly reach or bend over. Camping trowels may incorporate a secondary side ruler to measure ground surface depth; however, the ruler might prematurely become defaced from coarse soil particulates. Camping trowels sometimes have a front tip and side features, such as a pointed tip and a serrated side edge to easily cut through tree roots or frozen soil. These serrated camping trowels may include a cover guard to protect the user from cut wounds as well as save backpacks from puncture holes and tears. They may also fold up for added protection and easy storage. A few others allow for items such as toilet paper to be stored upon or inside the handle.
In archaeology brick or pointing trowels (usually 4" or 5" steel trowels) are used to scratch the strata in an excavation and allow the colors of the soil to be clear, so that the different strata can be identified, processed and excavated. In the United States, there are several preferred brands of pointing trowels, including the Marshalltown trowel; while in the British Isles the WHS 4" pointing trowel is the traditional tool.
See also
Taping knife, a drywall tool with a wide blade for spreading joint compound
Putty knife, a generally smaller and variously shaped tool used in a variety of applications
Palette knife, a smaller but similarly shaped tool used in oil painting
Power trowel
Kunai
Hori hori
References
Mechanical hand tools
Gardening tools
Plastering | Trowel | [
"Physics",
"Chemistry",
"Engineering"
] | 913 | [
"Building engineering",
"Coatings",
"Mechanics",
"Mechanical hand tools",
"Plastering"
] |
580,879 | https://en.wikipedia.org/wiki/Cheek%20kissing | Cheek kissing is a ritual or social kissing gesture to indicate friendship, family relationship, perform a greeting, to confer congratulations, to comfort someone, or to show respect.
Cheek kissing is very common in the Middle East, the Mediterranean, Southern, Central and Eastern Europe, the Low Countries, the Horn of Africa, Central America and South America. In other countries, including the U.S. and Japan, cheek kissing is also common at international meetings between heads of state and First Ladies, or between members of royal and imperial families.
Depending on the local culture, cheek kissing may be considered appropriate among family members as well as friends and acquaintances: a man and a woman, two women, or two men. The last has different degrees of familiarity.
In Eastern Europe, male–female and female–female cheek kissing is a standard greeting among friends, while male–male cheek kisses are less common. Eastern European communist leaders often greeted each other with a socialist fraternal kiss on public and state occasions.
In a cheek kiss, both persons lean forward and either lightly touch cheek with cheek or lip with cheek. Generally the gesture is repeated with the other cheek, or more, alternating cheeks. Depending on country and situation, the number of kisses range from one to four. Hand-shaking or hugging may also take place.
Cheek kissing is used in many cultures with slightly varying meaning and gesture. For example, cheek kissing may or may not be associated with a hug. The appropriate social context for use can vary greatly from one country to the other, though the gesture might look similar.
In cultures and situations where cheek kissing is the social norm, the failure or refusal to give or accept a kiss is commonly taken as an indicator of antipathy between the people, and to dispel such an implication and avoid giving offense may require an explanation, such as the person has a contagious disease such as a cold.
Europe
Southern Europe
Cheek kissing is a standard greeting throughout Southern Europe between friends or acquaintances, but less common in professional settings. In general, men and women will kiss the opposite sex, and women will kiss women. Men kissing men varies depending on the country and even on the family, in some countries (like Italy) men will kiss men; in others only men of the same family would consider kissing.
Greece is an example of a country where cheek kissing highly depends on the region and the type of event. For example, in most parts of Crete, it is common between a man and a woman who are friends, but is very uncommon between men unless they are very close relatives. In Athens it is commonplace for men to kiss women and women to kiss other women on the cheek when meeting or departing. It is uncommon between strangers of any sex, and it may be considered offensive otherwise. It is standard for children and parents, children and grandparents etc., and in its "formal" form it will be two kisses, one on each cheek. It may be a standard formal form of greeting in special events such as weddings.
However, in Spain, usually, the gender of the kisser doesn't matter as long as they are family or very close friends. In Portuguese families men rarely kiss men (except between brothers or father and son); the handshake is the most common salutation between them. However, men kissing is common in Spain as well particularly when congratulating close friends or relatives. Cheek to cheek and the kiss in the air are also very popular. Hugging is common between men and men and women and women; when the other is from the opposite sex, a kiss may be added.
In Italy (especially southern and central Italy) it is common for men to kiss men, especially relatives or friends.
In most Southern European countries, kissing is initiated by leaning to the left side and joining the right cheeks and, if there is a second kiss, changing to the left cheeks. In some cases (e.g. some parts of Italy) the process is the opposite: one first leans to the right, joining the left cheeks, and then switches to the right cheeks.
Southeastern Europe
In the former Yugoslavia, cheek kissing is also very commonplace, with ethnicity being ascertainable by the number of kisses on each cheek. Typically, Croats and Bosniaks will kiss once on each cheek, for two total kisses, whereas Serbs will kiss three times as a traditional greeting, typically starting at the right cheek. In Serbia and Montenegro, it is also common for men to kiss each other on the cheek three times as a form of greeting, usually for people they have not encountered in a while, or during celebrations (wedding, birthday, New Year, religious celebrations, etc.).
In Bulgaria cheek kissing is practiced to a far lesser extent compared to ex-Yugoslavia and is usually seen only between very close relatives or sometimes between close female friends. Kissing is usually performed by people of the opposite sex and between two women. Men kissing is rare even between close friends and is usually considered unnatural and awkward. Male relatives are more likely to initiate kissing if there is a significant age gap, such as between uncles and nephews, or if both men are elderly.
In Romania, cheek kissing is commonly used as a greeting between a man and a woman or two women, once on each cheek. Men usually prefer handshakes among themselves, though sometimes close male relatives may also practice cheek kissing.
In Albania, cheek kissing is used as a greeting between the opposite sex and also the same sex. The cheek is kissed from left to right on each cheek. Males usually slightly bump their heads or just touch their cheeks (no kissing) so to masculinize the act. Females practice the usual left to right cheek kissing. Albanian old women often kiss four times, so two times on each cheek.
Western Europe
In France, cheek kissing is called "faire la bise". A popular French joke states that you may recognize the city you are in by counting the number of cheek kisses, as it varies across the country. It is very common, in the southern parts of France, even between males, be they relatives or friends, whereas in the north (Langue-d'oïl France), it is less usual for two unrelated males to perform ′la bise′. (See Kissing traditions#Greetings.) The custom came under scrutiny during the H1N1 epidemic of 2009.
In the Netherlands and Belgium, cheek kissing is a common greeting between relatives and friends (in the Netherlands slightly more so in the south). Generally speaking, women will kiss both women and men, while men will kiss women but refrain from kissing other men, instead preferring to shake hands with strangers. In the Netherlands usually three kisses are exchanged, mostly for birthdays. The same number of kisses is found in Switzerland and Luxembourg. In Flanders, one kiss is exchanged as a greeting, and three to celebrate (e.g., a birthday). In Wallonia, the custom is usually one or three kisses, and is also common between men who are friends.
In northern European countries such as Sweden and Germany, hugs are preferred to kisses, though even these are relatively rare. In many regions it is customary for kisses to be exchanged only between women, while men and women tend to shake hands.
Although cheek kissing is not as widely practiced in the United Kingdom or Ireland as in other parts of Europe, it is still common and increasing. Generally, a kiss on one cheek is common, while a kiss on each cheek is also practiced by some depending on relation or reason. It is mostly used as a greeting and/or a farewell, but can also be offered as a congratulation or as a general declaration of friendship or love. Cheek kissing is acceptable between parents and children, family members (though not often two adult males), couples, two female friends or a male friend and a female friend. Cheek kissing between two men who are not a couple is unusual but socially acceptable if both men are happy to take part. Cheek kissing is associated with the middle and upper classes, as they are more influenced by French culture. This behaviour was traditionally seen as a French practice.
Asia
Southeast Asia
In the Philippines, cheek kissing or beso (also beso-beso, from the Spanish for "kiss") is a common greeting. The Philippine cheek kiss is a cheek-to-cheek kiss, not a lips-to-cheek kiss. The cheek kiss is usually made once (right cheek to right cheek), either between two women, or between a woman and a man. Amongst the upper classes, it is a common greeting among adults who are friends, while for the rest of the population, however, the gesture is generally reserved for relatives. Filipinos who are introduced to each other for the first time do not cheek kiss unless they are related.
In certain communities in Indonesia, notably the Manado or Minahasa people, kissing on the cheeks (twice) is normal among relatives, including males. Cheek kissing in Indonesia is commonly known as cipika-cipiki which is an acronym for cium pipi kanan, cium pipi kiri (kissing right cheek, kissing left cheek)
In parts of Central, South, and East Asia with predominantly Buddhist or Hindu cultures, or in cultures heavily influenced by these two religions, cheek kissing is largely uncommon and may be considered offensive, although its instances are now growing.
Middle East
Cheek kissing in the Arab world is relatively common, between friends and relatives. Cheek kissing between males is very common. However, cheek kissing between a male and female is usually considered inappropriate, unless within the same family; e.g. brother and sister, or if they are a married couple. Some exceptions to this are liberal areas within cities in some of the more liberal Arab countries such as Lebanon, Syria and Jordan, where cheek kissing is a common greeting between unrelated males and females in most communities. The Lebanese custom has become the norm for non-Lebanese in Lebanese-dominated communities of the Arab diaspora. Normally in Lebanon, the typical number of kisses is three: one on the left cheek, then right, and then left between relatives. In other countries, it is typically two kisses with one on each cheek.
Cheek kissing in Turkey is also widely accepted in greetings. Male to male cheek kissing is considered normal in almost every occasion, but very rarely for men who are introduced for the first time. Some men hit each other's head on the side instead of cheek kissing, possibly as an attempt to masculinize the action. Cheek kissing between women is also very common, but it is also very rare for women who are introduced for the first time. A man and a woman may cheek kiss each other as a greeting without sexual connotations only if they are good friends, or depending on the social circle, the setting, and the location, such as in big cities.
Cheek kissing in Iran is relatively common between friends and family. Cheek kissing between individuals of the same sex is considered normal. However, cheek kissing between male and female in public is considered to be inappropriate, but it may occur among some youth Iranians.
In 2014, Leila Hatami, a famous Iranian actress, kissed the president of Cannes Gilles Jacob on the red carpet. Responses ranged from criticism by the Iranian government to support from Iranian opposition parties. Former president of Iran Mahmoud Ahmadinejad kissed the mother of former President of Venezuela Hugo Chávez at his funeral.
Americas
United States and Canada
In the United States and Canada, the cheek kiss may involve one or both cheeks. According to the March 8, 2004 edition of Time magazine, "a single [kiss] is [an] acceptable [greeting] in the United States, but it's mostly a big-city phenomenon." Occasionally, cheek kissing is a romantic gesture.
Cheek kissing of young children by adults of both sexes is perhaps the most common cheek kiss in North America. Typically, it is a short, perfunctory greeting, and is most often done by relatives.
Giving someone a kiss on the cheek is also a common occurrence between loving couples.
Cheek kissing between adults, when it occurs at all, is most often done between two people who know each other well, such as between relatives or close friends. In this case, a short hug (generally only upper-body contact) or handshake may accompany the kiss. Likewise, hugs are common but not required. A hug alone may also suffice in both of these situations, and is much more common. Particularly in the southeastern United States (Southern), elderly women may be cheek kissed by younger men as a gesture of affection and respect.
In Quebec, cheek kissing is referred to in the vernacular (Québécois) as un bec ("donner un bec") or la bise ("faire la bise"). Whether francophone or other, people of the opposite sex often kiss once on each cheek. Cheek kissing between women is also very common, although men will often refrain. Two people introduced by a mutual friend may also give each other un bec.
Immigrant groups tend to have their own norms for cheek kissing, usually carried over from their native country. In Miami, Florida, an area heavily influenced by Latin American and European immigrants, kissing hello on the cheek is the social norm.
Latin America
In Latin America, cheek kissing is a universal form of greeting between a man and a woman or two women. In some countries, such as Argentina, Chile and Uruguay, men will kiss on the cheek as a greeting.
It is not necessary to know a person well or be intimate with them to kiss them on the cheek. When introduced to someone new by a mutual acquaintance in social settings, it is customary to greet him or her with a cheek kiss if the person being introduced to them is a member of the opposite sex or if a woman is introduced to another woman. If the person is a complete stranger, i.e. self-introductions, no kissing is done. A cheek kiss may be accompanied by a hug or another sign of physical affection. In business settings, the cheek kiss is not always standard upon introduction, but once a relationship is established, it is common practice.
As with other regions, cheek kissing may be lips-to-cheek or cheek-to-cheek with a kiss in the air, the latter being more common.
As in Southern Europe, in Argentina, Chile and Uruguay men kissing men is common but it varies depending on the region, occasion and even on the family. In Argentina and Uruguay it is common (almost standard) between male friends to kiss "a la italiana", e.g. football players kiss each other to congratulate or to greet. In Chile, one cheek kiss is given between male friends (especially young men) and male relatives (regardless of age and relationship), although sometimes it can be between acquaintances.
In Ecuador it is normal that two male family members greet with a kiss, especially between father and son or grandfather and grandson.
Africa
Cheek kissing is common in the Horn of Africa, which includes Djibouti, Eritrea, Ethiopia, and Somalia, and is also present in countries within the Arab world such as Tunisia and Morocco, but is largely uncommon in most areas south of the Sahara. In South Africa, cheek kisses are usually found among male and female friends, with handshakes or hugs usually being preferable among other people.
Oceania
In Australia and New Zealand, cheek kissing is usually present among close friends, with handshakes or hugs usually being preferable. In New Zealand, Māori people may also traditionally use the hongi for greetings.
See also
Greeting habits
Hug
Hand-kissing
Kiss of peace
Paschal greeting
Public display of affection
Salute
Socialist fraternal kiss
References
Gestures
Greetings
Parting traditions
Kissing | Cheek kissing | [
"Biology"
] | 3,195 | [
"Behavior",
"Gestures",
"Human behavior"
] |
580,905 | https://en.wikipedia.org/wiki/Presence%20information | In computer and telecommunications networks, presence information is a status indicator that conveys ability and willingness of a potential communication partner—for example a user—to communicate. A user's client provides presence information (presence state) via a network connection to a presence service, which is stored in what constitutes his personal availability record (called a presentity) and can be made available for distribution to other users (called watchers) to convey their availability for communication. Presence information has wide application in many communication services and is one of the innovations driving the popularity of instant messaging or recent implementations of voice over IP clients.
Presence state
A user client may publish a presence state to indicate its current communication status. This published state informs others that wish to contact the user of his availability and willingness to communicate. The most common use of presence today is to display an indicator icon on instant messaging clients, typically from a choice of graphic symbols with easy-to-convey meanings, and a list of corresponding text descriptions of each of the states. Even when technically not the same, the "on-hook" or "off-hook" state of a called telephone is an analogy, as long as the caller receives a distinctive tone indicating unavailability or availability.
Common states on the user's availability are "free for chat", "busy", "away", "do not disturb", "out to lunch". Such states exist in many variations across different modern instant messaging clients. Current standards support a rich choice of additional presence attributes that can be used for presence information, such as user mood, location, or free text status.
The analogy with free/busy tone on PSTN is inexact, as the "on-hook" telephone status reflects the ability of the network to reach the recipient after the requester has initiated the conversation. The requester must commit to the connection method before discovering the recipient's availability state. Conversely, Presence shows the availability state before a conversation is initiated. A similar comparison might be the requester needing to know if the recipient is at work. The most straightforward way of checking if the recipient is available is to walk to the desk, which requires the commitment of the walk regardless of the outcome and usually requires some interaction if the recipient is at the desk. The requester can call first to save the walk, but now must commit to an interaction via phone. Presence gives the state of the recipient to the requester and the requester has the choice to interact with the recipient or use that information for non-interactive purposes (such as taking roll).
MPOP and presence by observation
Presence becomes interesting for communication systems when it spans a number of different communication channels. The idea that multiple communication devices can combine state, to provide an aggregated view of a user's presence has been termed Multiple Points of Presence (MPOP). MPOP becomes even more powerful when it is automatically inferred from passive observation of a user's actions. This idea is already familiar to instant messaging users who have their status set to "Away" (or equivalent) if their computer keyboard is inactive for some time. Extension to other devices could include whether the user's cell phone is on, whether they are logged into their computer, or perhaps checking their electronic calendar to see if they are in a meeting or on vacation. For example, if a user's calendar was marked as out of office and their cell phone was on, they might be considered in a "Roaming" state.
MPOP status can then be used to automatically direct incoming messages across all contributing devices. For example, "Out of office" might translate to a system directing all messages and calls to the user's cell phone. The status "Do not disturb" might automatically save all messages for later and send all phone calls to voicemail.
XMPP, discussed below, allows for MPOP by assigning each client a "resource" (a specific identifier) and a priority number for each resource. A message directed to the user's ID would go to the resource with highest priority, although messaging a specific resource is possible by using the form user@domain/resource.
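The short Python sketch below is illustrative only: the state names, priority values, and the rule of skipping "do not disturb" devices are assumptions for the example, not part of any presence standard. It shows the basic MPOP idea of each device publishing a (state, priority) pair under a resource name, aggregating them into one presence state, and routing an incoming message to the highest-priority resource.

```python
from dataclasses import dataclass

# Hypothetical availability states, ordered from least to most available
STATES = ["do not disturb", "away", "busy", "available"]

@dataclass
class Resource:
    name: str       # e.g. "laptop", "phone" (XMPP calls this a resource)
    state: str      # one of STATES
    priority: int   # higher numbers win when routing a message

def aggregate_presence(resources):
    """MPOP: a single presence state inferred from all connected devices."""
    if not resources:
        return "offline"
    # Report the most available state any device has published
    return max(resources, key=lambda r: STATES.index(r.state)).state

def route_message(resources):
    """Deliver to the highest-priority resource that is not blocking messages."""
    candidates = [r for r in resources if r.state != "do not disturb"]
    if not candidates:
        return None  # e.g. save for later / send to voicemail instead
    return max(candidates, key=lambda r: r.priority).name

devices = [
    Resource("desk-pc", "away", priority=10),
    Resource("phone", "available", priority=5),
]
print(aggregate_presence(devices))  # "available" (most available device)
print(route_message(devices))       # "desk-pc" (highest priority, not blocking)
```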
Privacy concerns
Presence is highly sensitive information and in non-trivial systems a presentity may define limits to which its presence information may be revealed to different watchers. For example, a worker may only want colleagues to see detailed presence information during office hours. Basic versions of this idea are already common in instant messaging clients as a "Blocking" facility, where users can appear as unavailable to selected watchers.
Commercial products
Presence, particularly MPOP, requires collaboration between a number of electronic devices (for example IM client, home phone, cell phone, and electronic calendar) and the presence services each of them are connected with. To date, the most common and wide-scale implementations use closed systems, with a SPOP (Single Point of Presence, where a single device publishes state). Some vendors have upgraded their services to automatically log out connected clients when a new login request reaches the server from a newly connecting different device. For presence to universally work with MPOP support, multiple devices must be able to not only intercommunicate among each other, the status information must also be appropriately handled by all other interoperable, connected presence services and the MPOP scheme for their clients.
2.5G and, even more so, 3G cell phone networks can support management and access of presence information services for mobile users' cell phone handsets.
In the workplace, private messaging servers offer the possibility of MPOP within a company or work team.
In the business community
Presence information is a growing tool towards more effective and efficient communication within a business setting. Presence information allows users to instantly see who is available in the corporate network, giving more flexibility to set up short-term meetings and conference calls. The result is precise communication that all but eliminates the inefficiency of phone tag or email messaging. An example of the time-saving aspect of presence information is a driver with a GPS; he or she can be tracked and sent messages on upcoming traffic patterns that, in turn, save time and money.
According to IDC surveys, employees "often feel that IM gives their workdays the kind of 'flow' that they feel when sitting directly among their colleagues, being able to ask questions of them, and getting the kind of quick responses that allow them to drive on to the next task". This phenomenon has been called the "Presence Effect" in contrast to its predecessor the "water cooler" effect, whereby this level of flow was only thought to be achieved in person.
With presence information, privacy of the users can become an issue. For example, when an employee is on his/her day off they are still connected to the network and have greater ability to be tracked down. Therefore, a concern of presence information is to determine how far the companies want to go with staying connected.
Presence standardization efforts
There was, and still is, significant work done in several working groups on achieving a standardization for presence-related protocols.
In 1999, a group called the Instant Message and Presence Protocol (IMPP) working group (WG), was formed within the Internet Engineering Task Force organization (IETF) in order to develop protocols and data formats for simple presence and instant messaging services. Unfortunately, IMPP WG was not able to come to consensus on a single protocol for presence. Instead it issued a common profile for presence and instant messaging (CPP) which defined semantics for common services of presence to facilitate the creation of gateways between presence services. Thus any two CPP-compatible presence protocol suites are automatically interoperable.
In 2001, the SIMPLE working group was formed within IETF to develop a suite of CPP-compliant standards for presence and instant messaging applications over the Session Initiation Protocol (SIP). The SIMPLE activity specifies extensions to the SIP protocol which deal with a publish and subscribe mechanism for presence information and sending instant messages. These extensions include rich presence document formats, privacy control, "partial publications" and notifications, past and future presence, watcher information and more. Despite its name, SIMPLE is far from simple. It is described in about 30 documents on more than 1,000 pages. This is in addition to the complexity of the SIP protocol stack on which SIMPLE is based.
At the end of 2001, Nokia, Motorola, and Ericsson formed the Wireless Village (WV) initiative to define a set of universal specifications for mobile Instant Messaging and Presence Services (IMPS) and presence services for wireless networks. In October 2002, Wireless Village was consolidated into the Open Mobile Alliance (OMA) and a month later released the first version of the XML-based OMA Instant Messaging and Presence Service (IMPS). IMPS defines a system architecture, syntax, and semantics for representation of presence information and a set of protocols for the four primary features: presence, IM, groups, and shared content. Presence is the key enabling technology for IMPS.
The XML-based XMPP or Extensible Messaging and Presence Protocol was designed and is currently maintained by the XMPP Standards Foundation. This IM protocol, which is a robust and widely extended protocol, is also the protocol used in the commercial implementation of Google Talk and Facebook Chat. In October 2004, the XMPP working group at IETF published the documents RFC 3920, RFC 3921, RFC 3922 and RFC 3923, to standardize the core XMPP protocol.
See also
Comparison of cross-platform instant messaging clients
Comparison of instant messaging protocols
Comparison of Internet Relay Chat clients
Comparison of LAN messengers
Comparison of VoIP software
List of SIP software
List of video telecommunication services and product brands
References
Day, M., J. Rosenberg, and H. Sugano. "A Model for Presence and Instant Messaging." RFC 2778 February 2000.
3GPP TR 23.841 (Technical Report) Presence service; Architecture and Functional Description
3GPP TS 23.141 (Technical Specification) Presence service; Architecture and functional description; Stage 2
3GPP TS 24.141 (Technical Specification) Presence service using the IP Multimedia (IM) Core Network (CN) subsystem; Stage 3
Robin Good. "Presence Awareness Indicators - Where Are You Now?" September 23, 2004.
Haag, Stephen; Cummings, Maeve; McCubbrey, Donald J.; Pinsonneault, Alain; Donovan, Richard. Management Information Systems for the Information Age. Third Canadian Edition. Canada: McGraw-Hill, 2006.
External links
XMPP Standards Foundation
SIP/SIMPLE Charter
Instant messaging | Presence information | [
"Technology"
] | 2,154 | [
"Instant messaging"
] |
580,936 | https://en.wikipedia.org/wiki/Sulfur%20trioxide | Sulfur trioxide (alternative spelling sulphur trioxide) is the chemical compound with the formula SO3. It has been described as "unquestionably the most [economically] important sulfur oxide". It is prepared on an industrial scale as a precursor to sulfuric acid.
Sulfur trioxide exists in several forms: gaseous monomer, crystalline trimer, and solid polymer. Sulfur trioxide is a solid at just below room temperature with a relatively narrow liquid range. Gaseous SO3 is the primary precursor to acid rain.
Molecular structure and bonding
Monomer
The molecule SO3 is trigonal planar. As predicted by VSEPR theory, its structure belongs to the D3h point group. The sulfur atom has an oxidation state of +6 and may be assigned a formal charge value as low as 0 (if all three sulfur-oxygen bonds are assumed to be double bonds) or as high as +2 (if the Octet Rule is assumed). When the formal charge is non-zero, the S-O bonding is assumed to be delocalized. In any case the three S-O bond lengths are equal to one another, at 1.42 Å. The electrical dipole moment of gaseous sulfur trioxide is zero.
Trimer
Both liquid and gaseous SO3 exist in an equilibrium between the monomer and the cyclic trimer. The nature of solid SO3 is complex and at least 3 polymorphs are known, with conversion between them being dependent on traces of water.
Absolutely pure SO3 freezes at 16.8 °C to give the γ-SO3 form, which adopts the cyclic trimer configuration [S(=O)2(μ-O)]3.
Polymer
If SO3 is condensed above 27 °C, then α-SO3 forms, which has a melting point of 62.3 °C. α-SO3 is fibrous in appearance. Structurally, it is the polymer [S(=O)2(μ-O)]n. Each end of the polymer is terminated with OH groups. β-SO3, like the alpha form, is fibrous and consists of a hydroxyl-capped polymer, but of different molecular weight; it melts at 32.5 °C. Both the gamma and the beta forms are metastable, eventually converting to the stable alpha form if left standing for sufficient time. This conversion is caused by traces of water.
Relative vapor pressures of solid SO3 are alpha < beta < gamma at identical temperatures, indicative of their relative molecular weights. Liquid sulfur trioxide has a vapor pressure consistent with the gamma form. Thus heating a crystal of α-SO3 to its melting point results in a sudden increase in vapor pressure, which can be forceful enough to shatter a glass vessel in which it is heated. This effect is known as the "alpha explosion".
Chemical reactions
Sulfur trioxide undergoes many reactions.
Hydration and hydrofluorination
SO3 is the anhydride of H2SO4. Thus, it is susceptible to hydration:
SO3 + H2O → H2SO4 (ΔfH = −200 kJ/mol)
Gaseous sulfur trioxide fumes profusely even in a relatively dry atmosphere owing to formation of a sulfuric acid mist.
SO3 is aggressively hygroscopic. The heat of hydration is sufficient that mixtures of SO3 and wood or cotton can ignite. In such cases, SO3 dehydrates these carbohydrates.
Akin to the behavior of H2O, hydrogen fluoride adds to give fluorosulfuric acid:
SO3 + HF → FSO3H
Deoxygenation
SO3 reacts with dinitrogen pentoxide to give the nitronium salt of pyrosulfate:
2 SO3 + N2O5 → [NO2]2S2O7
Oxidant
Sulfur trioxide is an oxidant. It oxidizes sulfur dichloride to thionyl chloride.
SO3 + SCl2 → SOCl2 + SO2
Lewis acid
SO3 is a strong Lewis acid readily forming adducts with Lewis bases. With pyridine, it gives the sulfur trioxide pyridine complex. Related adducts form from dioxane and trimethylamine.
Sulfonating agent
Sulfur trioxide is a potent sulfonating agent, i.e. it adds SO3 groups to substrates. Often the substrates are organic, as in aromatic sulfonation. For activated substrates, Lewis base adducts of sulfur trioxide are effective sulfonating agents.
Preparation
The direct oxidation of sulfur dioxide to sulfur trioxide in air proceeds very slowly:
2 SO2 + O2 → 2 SO3 (ΔH = −198.4 kJ/mol)
Industrial
Industrially SO3 is made by the contact process. Sulfur dioxide is produced by the burning of sulfur or iron pyrite (a sulfide ore of iron). After being purified by electrostatic precipitation, the SO2 is then oxidised by atmospheric oxygen at between 400 and 600 °C over a catalyst. A typical catalyst consists of vanadium pentoxide (V2O5) activated with potassium oxide K2O on kieselguhr or silica support. Platinum also works very well but is too expensive and is poisoned (rendered ineffective) much more easily by impurities.
The majority of sulfur trioxide made in this way is converted into sulfuric acid.
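As an illustrative back-of-the-envelope figure (assuming, unrealistically, complete conversion at both stages), the mass of SO3 obtainable per tonne of sulfur burned in the contact process follows from the molar masses of S (32.06 g/mol) and SO3 (80.06 g/mol):

```latex
m_{\mathrm{SO_3}} \approx m_{\mathrm{S}} \cdot \frac{M_{\mathrm{SO_3}}}{M_{\mathrm{S}}}
  = 1000~\mathrm{kg} \times \frac{80.06}{32.06} \approx 2.5~\mathrm{t}
```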
Laboratory
Sulfur trioxide can be prepared in the laboratory by the two-stage pyrolysis of sodium bisulfate. Sodium pyrosulfate is an intermediate product:
Dehydration at 315 °C:
2 NaHSO4 → Na2S2O7 + H2O
Cracking at 460 °C:
Na2S2O7 → Na2SO4 + SO3
The latter occurs at much lower temperatures (45–60 °C) in the presence of catalytic H2SO4. In contrast, KHSO4 undergoes the same reactions at a higher temperature.
Another two step method involving a salt pyrolysis starts with concentrated sulfuric acid and anhydrous tin tetrachloride:
Reaction between tin tetrachloride and sulfuric acid in a 1:2 molar mixture at near reflux (114 °C):
SnCl4 + 2 H2SO4 → Sn(SO4)2 + 4 HCl
Pyrolysis of anhydrous tin(IV) sulfate at 150–200 °C:
Sn(SO4)2 → SnO2 + 2 SO3
The advantage of this method over the sodium bisulfate one is that it requires much lower temperatures and can be done using normal borosilicate laboratory glassware without the risk of shattering. A disadvantage is that it generates significant quantities of hydrogen chloride gas which needs to be captured as well.
SO3 may also be prepared by dehydrating sulfuric acid with phosphorus pentoxide.
Applications
Sulfur trioxide is a reagent in sulfonation reactions. Dimethyl sulfate is produced commercially by the reaction of dimethyl ether with sulfur trioxide:
(CH3)2O + SO3 → (CH3)2SO4
Sulfate esters are used as detergents, dyes, and pharmaceuticals. Sulfur trioxide is generated in situ from sulfuric acid or is used as a solution in the acid.
B2O3 stabilized sulfur trioxide was traded by Baker & Adamson under the tradename "Sulfan" in the 20th century.
Safety
Along with being an oxidizing agent, sulfur trioxide is highly corrosive. It reacts violently with water to produce highly corrosive sulfuric acid.
See also
Hypervalent molecule
Sulfur trioxide pyridine complex
References
Sources
NIST Standard Reference Database
ChemSub Online
Sulfur oxides
Acid anhydrides
Acidic oxides
Hazardous air pollutants
Interchalcogens
Hypervalent molecules
Sulfur(VI) compounds | Sulfur trioxide | [
"Physics",
"Chemistry"
] | 1,635 | [
"Molecules",
"Hypervalent molecules",
"Matter"
] |
580,972 | https://en.wikipedia.org/wiki/%CE%91-Carotene | α-Carotene (alpha-carotene) is a form of carotene with a β-ionone ring at one end and an α-ionone ring at the opposite end. It is the second most common form of carotene.
Human physiology
In American and Chinese adults, the mean concentration of serum α-carotene was 4.71 μg/dL, including 4.22 μg/dL among men and 5.31 μg/dL among women.
Dietary sources
The following vegetables are rich in alpha-carotene:
Yellow-orange vegetables: Carrots (the main source for U.S. adults), sweet potatoes, pumpkin, winter squash
Dark-green vegetables: Broccoli, green beans, green peas, spinach, turnip greens, collards, leaf lettuce, avocado
Research
A 2018 meta-analysis found that both dietary and circulating α-carotene are associated with a lower risk of all-cause mortality. The highest circulating α-carotene category, compared to the lowest, correlated with a 32% reduction in the risk of all-cause mortality, while increased dietary α-carotene intake was linked to a 21% decrease in the risk of all-cause mortality.
References
Carotenoids
Tetraterpenes
Cyclohexenes
Vitamin A | Α-Carotene | [
"Chemistry",
"Biology"
] | 277 | [
"Biomarkers",
"Vitamin A",
"Biomolecules",
"Carotenoids"
] |
581,005 | https://en.wikipedia.org/wiki/Implicit%20function%20theorem | In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function.
More precisely, given a system of m equations f_i(x_1, ..., x_n, y_1, ..., y_m) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each y_i) at a point, the m variables y_i are differentiable functions of the x_j in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem.
In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function.
History
Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables.
First example
If we define the function f(x, y) = x^2 + y^2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) | f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ±√(1 − x^2).
However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g_1(x) = √(1 − x^2) for −1 < x < 1, then the graph of y = g_1(x) provides the upper half of the circle. Similarly, if g_2(x) = −√(1 − x^2), then the graph of y = g_2(x) gives the lower half of the circle.
The purpose of the implicit function theorem is to tell us that functions like g_1(x) and g_2(x) almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that g_1(x) and g_2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y).
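A small numerical check of this example is given below as a plain Python sketch; the sample point 0.6 and the tolerances are arbitrary choices. It confirms that the branch g_1 really does parametrize the upper half of the circle and that its derivative matches the implicit formula dy/dx = −x/y for points with y ≠ 0.

```python
import math

def g1(x):
    """Upper branch of the unit circle, y = sqrt(1 - x^2)."""
    return math.sqrt(1.0 - x * x)

x = 0.6                      # arbitrary sample point in (-1, 1)
y = g1(x)

# The point (x, g1(x)) lies on the level set f(x, y) = x^2 + y^2 = 1
assert abs(x * x + y * y - 1.0) < 1e-12

# Finite-difference derivative of g1 vs. the implicit derivative -x/y
h = 1e-6
numeric = (g1(x + h) - g1(x - h)) / (2 * h)
implicit = -x / y
print(numeric, implicit)     # both approximately -0.75
assert abs(numeric - implicit) < 1e-6
```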
Definitions
Let be a continuously differentiable function. We think of as the Cartesian product and we write a point of this product as Starting from the given function , our goal is to construct a function whose graph is precisely the set of all such that .
As noted above, this may not always be possible. We will therefore fix a point which satisfies , and we will ask for a that works near the point . In other words, we want an open set containing , an open set containing , and a function such that the graph of satisfies the relation on , and that no other points within do so. In symbols,
To state the implicit function theorem, we need the Jacobian matrix of , which is the matrix of the partial derivatives of . Abbreviating to , the Jacobian matrix is
where is the matrix of partial derivatives in the variables and is the matrix of partial derivatives in the variables . The implicit function theorem says that if is an invertible matrix, then there are , , and as desired. Writing all the hypotheses together gives the following statement.
Statement of the theorem
Let be a continuously differentiable function, and let have coordinates . Fix a point with , where is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section):
is invertible, then there exists an open set containing such that there exists a unique function such that and Moreover, is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as:
the Jacobian matrix of partial derivatives of in is given by the matrix product:
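The display that should follow this sentence was lost in extraction. A standard way of writing the conclusion, using J_{f,x} and J_{f,y} as shorthand (our notation) for the two panels of the Jacobian described above, is:

```latex
% Conclusion of the implicit function theorem (notation J_{f,x}, J_{f,y} is ours):
\frac{\partial g}{\partial x}(x)
  \;=\; -\bigl[\,J_{f,y}\bigl(x,\,g(x)\bigr)\,\bigr]^{-1}\,
        J_{f,x}\bigl(x,\,g(x)\bigr),
\qquad x \in U .
```

Here the invertibility of J_{f,y} at the chosen point is exactly the hypothesis of the theorem.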
Higher derivatives
If, moreover, is analytic or continuously differentiable times in a neighborhood of , then one may choose in order that the same holds true for inside . In the analytic case, this is called the analytic implicit function theorem.
Proof for 2D case
Suppose is a continuously differentiable function defining a curve . Let be a point on the curve. The statement of the theorem above can be rewritten for this simple case as follows:
Proof. Since is differentiable we write the differential of through partial derivatives:
Since we are restricted to movement on the curve and by assumption around the point (since is continuous at and ). Therefore we have a first-order ordinary differential equation:
Now we are looking for a solution to this ODE in an open interval around the point for which, at every point in it, . Since is continuously differentiable and from the assumption we have
From this we know that it is continuous and bounded on both ends. From here we know that it is Lipschitz continuous in both x and y. Therefore, by the Cauchy–Lipschitz theorem, there exists a unique solution to the given ODE with the given initial conditions. Q.E.D.
The circle example
Let us go back to the example of the unit circle. In this case n = m = 1 and f(x, y) = x^2 + y^2 − 1. The matrix of partial derivatives is just the 1 × 2 matrix (∂f/∂x, ∂f/∂y) = (2x, 2y).
Thus, here, the Y in the statement of the theorem is just the number 2b; the linear map defined by it is invertible if and only if b ≠ 0. By the implicit function theorem we see that we can locally write the circle in the form y = g(x) for all points where y ≠ 0. For the points (±1, 0) we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing x as a function of y, that is, x = h(y); now the graph of the function will be (h(y), y), since where y = 0 we have x = ±1, and the conditions to locally express the function in this form are satisfied.
The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function x^2 + y^2 − 1 and equating to 0, giving 2x dx + 2y dy = 0, so that dy/dx = −x/y and dx/dy = −y/x.
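As a quick symbolic check of these derivatives (a sketch, not part of the original article), the same expressions can be obtained with SymPy by applying dy/dx = −f_x/f_y directly:

```python
# Implicit derivatives of the unit circle f(x, y) = x**2 + y**2 - 1,
# using dy/dx = -f_x/f_y (valid where f_y != 0) and dx/dy = -f_y/f_x.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + y**2 - 1

dy_dx = sp.simplify(-sp.diff(f, x) / sp.diff(f, y))   # -x/y
dx_dy = sp.simplify(-sp.diff(f, y) / sp.diff(f, x))   # -y/x
print(dy_dx, dx_dy)
```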
Application: change of coordinates
Suppose we have an -dimensional space, parametrised by a set of coordinates . We can introduce a new coordinate system by supplying m functions each being continuously differentiable. These functions allow us to calculate the new coordinates of a point, given the point's old coordinates using . One might want to verify if the opposite is possible: given coordinates , can we 'go back' and calculate the same point's original coordinates ? The implicit function theorem will provide an answer to this question. The (new and old) coordinates are related by f = 0, with
Now the Jacobian matrix of f at a certain point (a, b) [ where ] is given by
where Im denotes the m × m identity matrix, and is the matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express as a function of if J is invertible. Demanding J is invertible is equivalent to det J ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. This statement is also known as the inverse function theorem.
Example: polar coordinates
As a simple application of the above, consider the plane, parametrised by polar coordinates (R, θ). We can go to a new coordinate system (Cartesian coordinates) by defining the functions x(R, θ) = R cos(θ) and y(R, θ) = R sin(θ). This makes it possible, given any point (R, θ), to find corresponding Cartesian coordinates (x, y). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det J ≠ 0, with J the Jacobian of the map (R, θ) ↦ (x, y).
Since det J = R, conversion back to polar coordinates is possible if R ≠ 0. So it remains to check the case R = 0. It is easy to see that in the case R = 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined.
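A short symbolic check of this Jacobian and its determinant (a sketch using SymPy; the variable names follow the text):

```python
# Jacobian of the map (R, theta) -> (x, y) = (R cos(theta), R sin(theta)).
# Its determinant is R, so the change of coordinates is locally invertible
# everywhere except at the origin, as discussed above.
import sympy as sp

R, theta = sp.symbols("R theta", real=True)
x = R * sp.cos(theta)
y = R * sp.sin(theta)

J = sp.Matrix([x, y]).jacobian([R, theta])
print(J)                     # [[cos(theta), -R*sin(theta)], [sin(theta), R*cos(theta)]]
print(sp.simplify(J.det()))  # R
```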
Generalizations
Banach space version
Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings.
Let X, Y, Z be Banach spaces. Let the mapping be continuously Fréchet differentiable. If , , and is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all .
Implicit functions from non-differentiable functions
Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum.
Consider a continuous function such that . If there exist open neighbourhoods and of x0 and y0, respectively, such that, for all y in B, is locally one-to-one, then there exist open neighbourhoods and of x0 and y0, such that, for all , the equation
f(x, y) = 0 has a unique solution
where g is a continuous function from B0 into A0.
Collapsing manifolds
Perelman’s collapsing theorem for 3-manifolds, the capstone of his proof of Thurston's geometrization conjecture, can be understood as an extension of the implicit function theorem.
See also
Inverse function theorem
Constant rank theorem: Both the implicit function theorem and the inverse function theorem can be seen as special cases of the constant rank theorem.
Notes
References
Further reading
Articles containing proofs
Mathematical identities
Theorems in calculus
Theorems in real analysis | Implicit function theorem | [
"Mathematics"
] | 1,978 | [
"Theorems in mathematical analysis",
"Theorems in calculus",
"Calculus",
"Theorems in real analysis",
"Mathematical problems",
"Articles containing proofs",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
581,071 | https://en.wikipedia.org/wiki/Plus%20construction | In mathematics, the plus construction is a method for simplifying the fundamental group of a space without changing its homology and cohomology groups.
Explicitly, if X is a based connected CW complex and P is a perfect normal subgroup of π1(X), then a map f: X → Y is called a +-construction relative to P if f induces an isomorphism on homology, and P is the kernel of π1(X) → π1(Y).
The plus construction was introduced by Michel Kervaire, and was used by Daniel Quillen to define algebraic K-theory. Given a perfect normal subgroup of the fundamental group of a connected CW complex X, attach two-cells along loops in X whose images in the fundamental group generate the subgroup. This operation generally changes the homology of the space, but these changes can be reversed by the addition of three-cells.
The most common application of the plus construction is in algebraic K-theory. If R is a unital ring, we denote by GL_n(R) the group of invertible n-by-n matrices with elements in R. GL_n(R) embeds in GL_{n+1}(R) by attaching a 1 along the diagonal and 0s elsewhere. The direct limit of these groups via these maps is denoted GL(R), and its classifying space is denoted BGL(R). The plus construction may then be applied to the perfect normal subgroup E(R) of GL(R) = π1(BGL(R)), generated by matrices which only differ from the identity matrix in one off-diagonal entry. For n > 0, the n-th homotopy group of the resulting space, BGL(R)+, is isomorphic to the n-th K-group of R, that is, K_n(R) ≅ π_n(BGL(R)+).
See also
Semi-s-cobordism
References
External links
Algebraic topology
Homotopy theory | Plus construction | [
"Mathematics"
] | 308 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
581,124 | https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao%20bound | In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It is also known as Fréchet-Cramér–Rao or Fréchet-Darmois-Cramér-Rao lower bound. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance.
An unbiased estimator that achieves this bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is, therefore, the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information.
The Cramér–Rao bound can also be used to bound the variance of estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias.
Significant progress over the Cramér–Rao lower bound was proposed by Anil Kumar Bhattacharyya through a series of works, called Bhattacharyya bound.
Statement
The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section.
Scalar unbiased case
Suppose is an unknown deterministic parameter that is to be estimated from independent observations (measurements) of , each from a distribution according to some probability density function . The variance of any unbiased estimator of is then bounded by the reciprocal of the Fisher information :
where the Fisher information is defined by
and is the natural logarithm of the likelihood function for a single sample and denotes the expected value with respect to the density of . If not indicated, in what follows, the expectation is taken with respect to .
If is twice differentiable and certain regularity conditions hold, then the Fisher information can also be defined as follows:
The efficiency of an unbiased estimator measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as
or the minimum possible variance for an unbiased estimator divided by its actual variance.
The Cramér–Rao lower bound thus gives
.
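The displays of this subsection were lost in extraction. A standard rendering of the scalar unbiased bound for n independent observations, with our notation for the estimator and the single-sample density f(x; θ), is:

```latex
% Scalar unbiased Cramer-Rao bound (notation ours):
\operatorname{var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; n \,
  \operatorname{E}\!\left[\left(\frac{\partial}{\partial\theta}
  \log f(X;\theta)\right)^{\!2}\right].
```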
General scalar case
A more general form of the bound can be obtained by considering a biased estimator , whose expectation is not but a function of this parameter, say, . Hence is not generally equal to 0. In this case, the bound is given by
where is the derivative of (by ), and is the Fisher information defined above.
Bound on the variance of biased estimators
Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator with bias , and let . By the result above, any unbiased estimator whose expectation is has variance greater than or equal to . Thus, any estimator whose bias is given by a function satisfies
The unbiased version of the bound is a special case of this result, with .
It's trivial to have a small variance − an "estimator" that is constant has a variance of zero. But from the above equation, we find that the mean squared error of a biased estimator is bounded by
using the standard decomposition of the MSE. Note, however, that if this bound might be less than the unbiased Cramér–Rao bound . For instance, in the example of estimating variance below, .
Multivariate case
Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector
with probability density function which satisfies the two regularity conditions below.
The Fisher information matrix is a matrix with element defined as
Let be an estimator of any vector function of parameters, , and denote its expectation vector by . The Cramér–Rao bound then states that the covariance matrix of satisfies
,
where
The matrix inequality is understood to mean that the matrix is positive semidefinite, and
is the Jacobian matrix whose element is given by .
If is an unbiased estimator of (i.e., ), then the Cramér–Rao bound reduces to
If it is inconvenient to compute the inverse of the Fisher information matrix,
then one can simply take the reciprocal of the corresponding diagonal element
to find a (possibly loose) lower bound.
Regularity conditions
The bound relies on two weak regularity conditions on the probability density function, , and the estimator :
The Fisher information is always defined; equivalently, for all such that , exists, and is finite.
The operations of integration with respect to and differentiation with respect to can be interchanged in the expectation of ; that is, whenever the right-hand side is finite. This condition can often be confirmed by using the fact that integration and differentiation can be swapped when either of the following cases hold:
The function has bounded support in , and the bounds do not depend on ;
The function has infinite support, is continuously differentiable, and the integral converges uniformly for all .
Proof
Proof for the general case based on the Chapman–Robbins bound
A standalone proof for the general scalar case
For the general scalar case:
Assume that is an estimator with expectation (based on the observations ), i.e. that . The goal is to prove that, for all ,
Let be a random variable with probability density function .
Here is a statistic, which is used as an estimator for . Define as the score:
where the chain rule is used in the final equality above. Then the expectation of , written , is zero. This is because:
where the integral and partial derivative have been interchanged (justified by the second regularity condition).
If we consider the covariance of and , we have , because . Expanding this expression we have
again because the integration and differentiation operations commute (second condition).
The Cauchy–Schwarz inequality shows that
therefore
which proves the proposition.
Examples
Multivariate normal distribution
For the case of a d-variate normal distribution
the Fisher information matrix has elements
where "tr" is the trace.
For example, let X1, ..., Xn be a sample of n independent observations from a normal distribution with unknown mean μ and known variance σ².
Then the Fisher information is a scalar given by I(μ) = n/σ²,
and so the Cramér–Rao bound is var(μ̂) ≥ σ²/n.
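A hedged Monte Carlo sketch of this example (not from the article; parameter values are ours): the sample mean is unbiased for μ and its variance matches the bound σ²/n, so it is efficient.

```python
# Check that the sample mean attains the Cramer-Rao bound sigma^2/n
# for the mean of a normal distribution with known variance.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 1.0, 2.0, 50, 200_000

samples = rng.normal(mu, sigma, size=(trials, n))
mean_hat = samples.mean(axis=1)            # unbiased estimator of mu

print("empirical variance of the sample mean:", mean_hat.var())
print("Cramer-Rao bound sigma^2/n:           ", sigma**2 / n)
```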
Normal variance with known mean
Suppose X is a normally distributed random variable with known mean and unknown variance . Consider the following statistic:
Then T is unbiased for , as . What is the variance of T?
(the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value ; the second is the square of the variance, or .
Thus
Now, what is the Fisher information in the sample? Recall that the score is defined as
where is the likelihood function. Thus in this case,
where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of , or
Thus the information in a sample of independent observations is just times this, or
The Cramér–Rao bound states that
In this case, the inequality is saturated (equality is achieved), showing that the estimator is efficient.
However, we can achieve a lower mean squared error using a biased estimator. The estimator
obviously has a smaller variance, which is in fact
Its bias is
so its mean squared error is
which is less than what unbiased estimators can achieve according to the Cramér–Rao bound.
When the mean is not known, the minimum mean squared error estimate of the variance of a sample from a Gaussian distribution is achieved by dividing by n + 1, rather than n − 1 or n + 2.
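A hedged Monte Carlo sketch of the known-mean example above (parameter values are ours): the unbiased estimator dividing by n attains the bound 2σ⁴/n, while the biased estimator dividing by n + 2 achieves a smaller mean squared error.

```python
# Variance estimation for N(mu, sigma2) with known mean mu.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, trials = 0.0, 3.0, 20, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
ss = ((x - mu) ** 2).sum(axis=1)           # sum of squared deviations from mu

T_unbiased = ss / n                        # efficient unbiased estimator
T_biased = ss / (n + 2)                    # biased, but lower mean squared error

crb = 2 * sigma2**2 / n
print("Cramer-Rao bound:      ", crb)
print("MSE, unbiased (div n): ", ((T_unbiased - sigma2) ** 2).mean())  # ~ crb
print("MSE, biased (div n+2): ", ((T_biased - sigma2) ** 2).mean())    # ~ 2*sigma2**2/(n+2)
```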
See also
Chapman–Robbins bound
Kullback's inequality
Brascamp–Lieb inequality
Lehmann–Scheffé theorem
References and notes
Further reading
Posterior uncertainty, asymptotic law and Cramér-Rao bound, Structural Control and Health Monitoring 25(1851):e2113 DOI: 10.1002/stc.2113
External links
FandPLimitTool a GUI-based software to calculate the Fisher information and Cramér-Rao lower bound with application to single-molecule microscopy.
Articles containing proofs
Statistical inequalities
Estimation theory | Cramér–Rao bound | [
"Mathematics"
] | 1,879 | [
"Articles containing proofs",
"Theorems in statistics",
"Statistical inequalities",
"Inequalities (mathematics)"
] |
581,175 | https://en.wikipedia.org/wiki/Vertex-transitive%20graph | In the mathematical field of graph theory, an automorphism is a permutation of the vertices such that edges are mapped to edges and non-edges are mapped to non-edges. A graph is a vertex-transitive graph if, given any two vertices and of , there is an automorphism such that
In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical.
Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph).
Finite examples
Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices.
Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees.
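A hedged computational sketch (not from the article): for small graphs, vertex-transitivity can be checked by enumerating automorphisms with NetworkX and testing whether the orbit of one vertex is the whole vertex set.

```python
# Brute-force vertex-transitivity test via automorphism enumeration.
# Feasible only for small graphs; the Petersen graph is vertex-transitive,
# the Frucht graph (regular but with trivial automorphism group) is not.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_vertex_transitive(G):
    base = next(iter(G))                         # fixed reference vertex
    orbit = set()
    for auto in GraphMatcher(G, G).isomorphisms_iter():
        orbit.add(auto[base])
        if len(orbit) == G.number_of_nodes():    # orbit of `base` is everything
            return True
    return len(orbit) == G.number_of_nodes()

print(is_vertex_transitive(nx.petersen_graph()))  # True
print(is_vertex_transitive(nx.frucht_graph()))    # False
```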
Properties
The edge-connectivity of a connected vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3.
If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d.
Infinite examples
Infinite vertex-transitive graphs include:
infinite paths (infinite in both directions)
infinite regular trees, e.g. the Cayley graph of the free group
graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons
infinite Cayley graphs
the Rado graph
Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.
See also
Edge-transitive graph
Lovász conjecture
Semi-symmetric graph
Zero-symmetric graph
References
External links
A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012.
Vertex-transitive Graphs On Fewer Than 48 Vertices. Gordon Royle and Derek Holt, 2020.
Graph families
Algebraic graph theory
Regular graphs | Vertex-transitive graph | [
"Mathematics"
] | 671 | [
"Mathematical relations",
"Graph theory",
"Algebra",
"Algebraic graph theory"
] |
581,220 | https://en.wikipedia.org/wiki/Contraceptive%20sponge | The contraceptive sponge combines barrier and spermicidal methods to prevent conception. Sponges work in two ways. First, the sponge is inserted into the vagina, so it can cover the cervix and prevent any sperm from entering the uterus. Secondly, the sponge contains spermicide.
The sponges are inserted vaginally prior to intercourse and must be placed over the cervix to be effective.
Sponges provide no protection from sexually transmitted infections. Sponges can provide contraception for multiple acts of intercourse over a 24-hour period, but cannot be reused beyond that time or once removed.
Effectiveness
The sponge's effectiveness is 91% with perfect use by women who have never given birth, and 80% with perfect use by women who have given birth at least once. Since it is hard to use the sponge perfectly every time, its real-world effectiveness can be lower, and it is advised to combine the sponge with other birth control methods, such as withdrawal before ejaculation or condoms.
Use
To use the sponge, wet it, squeeze it, fold it, and insert it into the vagina so that it covers the cervix. A sponge works for 24 hours once inserted, during which the user can have sex multiple times. The sponge should be left in place for 6 hours after sex, but should not remain in the vagina for more than 30 hours in total. Once removed, it should not be reused and should be discarded in the trash, not flushed.
Spermicide
Sponges are a physical barrier, trapping sperm and preventing their passage through the cervix into the reproductive system. The spermicide is an important component of pregnancy prevention.
Side effects
People sensitive to nonoxynol-9, an ingredient in the spermicide used in the sponge, may experience unpleasant irritation and may face an increased risk of sexually transmitted infections. Sponge users may have a slightly higher risk of toxic shock syndrome.
In popular culture
Shortly after they were taken off the U.S. market, the sponge was featured in an episode of the sitcom Seinfeld titled "The Sponge". In the episode, Elaine Benes conserves her remaining sponges by choosing to not have intercourse unless she is certain her partner is "sponge-worthy".
References
External links
The Contraceptive Sponge – DrDonnica.com
Barrier contraception
Spermicide
Products introduced in 1983 | Contraceptive sponge | [
"Biology"
] | 484 | [
"Biocides",
"Spermicide"
] |
581,246 | https://en.wikipedia.org/wiki/Wobble%20base%20pair | A wobble base pair is a pairing between two nucleotides in RNA molecules that does not follow Watson-Crick base pair rules. The four main wobble base pairs are guanine-uracil (G-U), hypoxanthine-uracil (I-U), hypoxanthine-adenine (I-A), and hypoxanthine-cytosine (I-C). In order to maintain consistency of nucleic acid nomenclature, "I" is used for hypoxanthine because hypoxanthine is the nucleobase of inosine;
nomenclature otherwise follows the names of nucleobases and their corresponding nucleosides (e.g., "G" for both guanine and guanosine – as well as for deoxyguanosine). The thermodynamic stability of a wobble base pair is comparable to that of a Watson-Crick base pair. Wobble base pairs are fundamental in RNA secondary structure and are critical for the proper translation of the genetic code.
Brief history
In the genetic code, there are 4³ = 64 possible codons (three-nucleotide sequences). For translation, each of these codons requires a tRNA molecule with an anticodon with which it can stably complement. If each tRNA molecule is paired with its complementary mRNA codon using canonical Watson-Crick base pairing, then 64 types of tRNA molecule would be required. In the standard genetic code, three of these 64 mRNA codons (UAA, UAG and UGA) are stop codons. These terminate translation by binding to release factors rather than tRNA molecules, so canonical pairing would require 61 species of tRNA. Since most organisms have fewer than 45 types of tRNA, some tRNA types can pair with multiple, synonymous codons, all of which encode the same amino acid. In 1966, Francis Crick proposed the Wobble Hypothesis to account for this. He postulated that the 5' base on the anticodon, which binds to the 3' base on the mRNA, was not as spatially confined as the other two bases and could, thus, have non-standard base pairing. Crick creatively named it for the small amount of "play" or wobble that occurs at this third codon position. Movement ("wobble") of the base in the 5' anticodon position is necessary for small conformational adjustments that affect the overall pairing geometry of anticodons of tRNA.
As an example, yeast tRNAPhe has the anticodon 5'-GmAA-3' and can recognize the codons 5'-UUC-3' and 5'-UUU-3'. It is, therefore, possible for non-Watson–Crick base pairing to occur at the third codon position, i.e., the 3' nucleotide of the mRNA codon and the 5' nucleotide of the tRNA anticodon.
Wobble hypothesis
These notions led Francis Crick to the creation of the wobble hypothesis, a set of four relationships explaining these naturally occurring attributes.
The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
When reading 5' to 3' the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many nucleotides the tRNA actually distinguishes. If the first nucleotide in the anticodon is a C or an A, pairing is specific and acknowledges original Watson-Crick pairing, that is: only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific and in fact two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble, in that if that is the first nucleotide in the anticodon, any of three bases in the original codon can be matched with the tRNA.
Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons and those anticodons differ in either the second or third position (first or second position in the codon) then a different tRNA is required for that anticodon.
The minimum requirement to satisfy all possible codons (61 excluding three stop codons) is 32 tRNAs. That is 31 tRNAs for the amino acids and one initiation codon.
tRNA base pairing schemes
Wobble pairing rules. Watson-Crick base pairs are shown in bold. Parentheses denote bindings that work but will be favoured less. A leading x denotes derivatives (in general) of the base that follows.
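The table summarized by the caption above was flattened in extraction. A minimal sketch of Crick's original rules (derivative bases are omitted; the dictionary and function names are ours) and of how they recover the tRNA-Phe example from the text:

```python
# Crick wobble rules: 5' anticodon base -> codon third-position bases it reads.
WOBBLE_RULES = {
    "C": {"G"},            # Watson-Crick only
    "A": {"U"},            # Watson-Crick only
    "U": {"A", "G"},       # U also wobbles with G
    "G": {"C", "U"},       # G also wobbles with U
    "I": {"A", "C", "U"},  # inosine reads three different third-position bases
}
WATSON_CRICK = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codons_read(anticodon_5to3):
    """Return the mRNA codons (5'->3') readable by an anticodon given 5'->3'."""
    # Codon positions 1 and 2 pair (antiparallel) with anticodon positions 3 and 2.
    stem = WATSON_CRICK[anticodon_5to3[2]] + WATSON_CRICK[anticodon_5to3[1]]
    return {stem + third for third in WOBBLE_RULES[anticodon_5to3[0]]}

print(codons_read("GAA"))  # {'UUC', 'UUU'}, as for yeast tRNA-Phe above
```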
Biological importance
Aside from the necessity of wobble, that our cells have a limited amount of tRNAs and wobble allows for more flexibility, wobble base pairs have been shown to facilitate many biological functions, most clearly demonstrated in the bacterium Escherichia coli, a model organism. In fact, in a study of E. colis tRNA for alanine there is a wobble base pair that determines whether the tRNA will be aminoacylated. When a tRNA reaches an aminoacyl tRNA synthetase, the job of the synthetase is to join the t-shaped RNA with its amino acid. These aminoacylated tRNAs go on to the translation of an mRNA transcript, and are the fundamental elements that connect to the codon of the amino acid. The necessity of the wobble base pair is illustrated through experimentation where the Guanine-Uracil pairing is changed to its natural Guanine-Cytosine pairing. Oligoribonucleotides were synthesized on a Gene Assembler Plus, and then spread across a DNA sequence known to code a tRNA for alanine, 2D-NMRs are then run on the products of these new tRNAs and compared to the wobble tRNAs. The results indicate that with that wobble base pair changed, structure is also changed and an alpha helix can no longer be formed. The alpha helix was the recognizable structure for the aminoacyl tRNA synthetase and thus the synthetase does not connect the amino acid alanine with the tRNA for alanine. This wobble base pairing is essential for the use of the amino acid alanine in E. coli and its significance here would imply significance in many related species. More information can be seen on aminoacyl tRNA synthetase and the genomes of E. coli tRNA at the External links, Information on Aminoacyl tRNA Synthetases and Genomic tRNA Database.
See also
Base pair
Hoogsteen base pair
Synonymous substitution
Footnotes
References
External links
tRNA, the Adaptor Hypothesis and the Wobble Hypothesis
Wobble base-pairing between codons and anticodons
Genetic Code and Amino Acid Translation
Information of Aminoacyl tRNA Synthetases
Genomic tRNA Database
Nucleic acids
Protein biosynthesis | Wobble base pair | [
"Chemistry"
] | 1,499 | [
"Biomolecules by chemical classification",
"Protein biosynthesis",
"Gene expression",
"Biosynthesis",
"Nucleic acids"
] |
581,326 | https://en.wikipedia.org/wiki/Littoral%20zone | The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. In coastal ecology, the littoral zone includes the intertidal zone extending from the high water mark (which is rarely inundated), to coastal areas that are permanently submerged — known as the foreshore — and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves.
Etymology
The word littoral may be used both as a noun and as an adjective. It derives from the Latin noun litus, litoris, meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral.)
Description
The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists.
The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes, and estuaries. The natural movement of the littoral along the coast is called the littoral drift. Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms.
In oceanography and marine biology
In oceanography and marine biology, the idea of the littoral zone is extended roughly to the edge of the continental shelf. Starting from the shoreline, the littoral zone begins at the spray region just above the high tide mark. From here, it moves to the intertidal region between the high and low water marks, and then out as far as the edge of the continental shelf. These three subregions are called, in order, the supralittoral zone, the eulittoral zone, and the sublittoral zone.
Supralittoral zone
The supralittoral zone (also called the splash, spray or supratidal zone) is the area above the spring high tide line that is regularly splashed, but not submerged by ocean water. Seawater penetrates these elevated areas only during storms with high tides. Organisms that live here must cope with exposure to fresh water from rain, cold, heat, dryness and predation by land animals and seabirds. At the top of this area, patches of dark lichens can appear as crusts on rocks. Some types of periwinkles, Neritidae and detritus feeding Isopoda commonly inhabit the lower supralittoral.
Eulittoral zone
The eulittoral zone (also called the midlittoral or mediolittoral zone) is the intertidal zone, known also as the foreshore. It extends from the spring high tide line, which is rarely inundated, to the spring low tide line, which is rarely not inundated. It is alternately exposed and submerged once or twice daily. Organisms living here must be able to withstand the varying conditions of temperature, light, and salinity. Despite this, productivity is high in this zone. The wave action and turbulence of recurring tides shape and reform cliffs, gaps and caves, offering a huge range of habitats for sedentary organisms. Protected rocky shorelines usually show a narrow, almost homogenous, eulittoral strip, often marked by the presence of barnacles. Exposed sites show a wider extension and are often divided into further zones. For more on this, see intertidal ecology.
Sublittoral zone
The sublittoral zone starts immediately below the eulittoral zone. This zone is permanently covered with seawater and is approximately equivalent to the neritic zone.
In physical oceanography, the sublittoral zone refers to coastal regions with significant tidal flows and energy dissipation, including non-linear flows, internal waves, river outflows and oceanic fronts. In practice, this typically extends to the edge of the continental shelf, with depths around 200 meters.
In marine biology, the sublittoral zone refers to the areas where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone. This results in high primary production and makes the sublittoral zone the location of the majority of sea life. As in physical oceanography, this zone typically extends to the edge of the continental shelf. The benthic zone in the sublittoral is much more stable than in the intertidal zone; temperature, water pressure, and the amount of sunlight remain fairly constant. Sublittoral corals do not have to deal with as much change as intertidal corals. Corals can live in both zones, but they are more common in the sublittoral zone.
Within the sublittoral, marine biologists also identify the following:
The infralittoral zone is the algal dominated zone, which may extend to five metres below the low water mark.
The circalittoral zone is the region beyond the infralittoral, that is, below the algal zone and dominated by sessile animals such as mussels and oysters.
Shallower regions of the sublittoral zone, extending not far from the shore, are sometimes referred to as the subtidal zone.
Habitats in littoral zones
Many vertebrates (e.g., mammals, waterfowl, reptiles) and invertebrates (insects, etc.) use both the littoral zone as well as the terrestrial ecosystem for food and habitat. Biota that are commonly assumed to reside in the pelagic zone often rely heavily on resources from the littoral zone. Littoral areas of ponds and lakes are typically better oxygenated, structurally more complex, and afford more abundant and diverse food resources than do profundal sediments. All these factors lead to a high diversity of insects and very complex trophic interactions.
The great lakes of the world represent a global heritage of surface freshwater and aquatic biodiversity. Species lists for 14 of the world's largest lakes reveal that 15% of the global diversity (the total number of species) of freshwater fishes, 9% of non-insect freshwater invertebrate diversity, and 2% of aquatic insect diversity live in this handful of lakes. The vast majority (more than 93%) of species inhabit the shallow, nearshore littoral zone, and 72% are completely restricted to the littoral zone, even though littoral habitats are a small fraction of total lake areas.
Because the littoral zone is important for many recreational and industrial purposes, it is often severely affected by many human activities that increase nutrient loading, spread invasive species, cause acidification and climate change, and produce increased fluctuations in water level. Littoral zones are both more negatively affected by human activity and less intensively studied than offshore waters. Conservation of the remarkable biodiversity and biotic integrity of large lakes will require better integration of littoral zones into our understanding of lake ecosystem functioning and focused efforts to alleviate human impacts along the shoreline.
In freshwater ecosystems
In freshwater situations, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. Sometimes other definitions are used. For example, the Minnesota Department of Natural Resources defines littoral as that portion of the lake that is less than 15 feet in depth. Such fixed-depth definitions often do not accurately represent the true ecological zonation, but are sometimes used because they are simple measurements to make bathymetric maps or when there are no measurements of light penetration. The littoral zone comprises an estimated 78% of Earth's total lake area.
The littoral zone may form a narrow or broad fringing wetland, with extensive areas of aquatic plants sorted by their tolerance to different water depths. Typically, four zones are recognized, from higher to lower on the shore: wooded wetland, wet meadow, marsh and aquatic vegetation. The relative areas of these four types depends not only on the profile of the shoreline, but upon past water levels. The area of wet meadow is particularly dependent upon past water levels; in general, the area of wet meadows along lakes and rivers increases with natural water level fluctuations. Many of the animals in lakes and rivers are dependent upon the wetlands of littoral zones, since the rooted plants provide habitat and food. Hence, a large and productive littoral zone is considered an important characteristic of a healthy lake or river.
Littoral zones are at particular risk for two reasons. First, human settlement is often attracted to shorelines, and settlement often disrupts breeding habitats for littoral zone species. For example, many turtles are killed on roads when they leave the water to lay their eggs in upland sites. Fish can be negatively affected by docks and retaining walls which remove breeding habitat in shallow water. Some shoreline communities even deliberately try to remove wetlands since they may interfere with activities like swimming. Overall, the presence of human settlement has a demonstrated negative impact upon adjoining wetlands. An equally serious problem is the tendency to stabilize lake or river levels with dams. Dams removed the spring flood, which carries nutrients into littoral zones and reduces the natural fluctuation of water levels upon which many wetland plants and animals depend. Hence, over time, dams can reduce the area of wetland from a broad littoral zone to a narrow band of vegetation. Marshes and wet meadows are at particular risk.
Other definitions
For the purposes of naval operations, the US Navy divides the littoral zone in the ways shown on the diagram at the top of this article. The US Army Corps of Engineers and the US Environmental Protection Agency have their own definitions, which have legal implications.
The UK Ministry of Defence defines the littoral as those land areas (and their adjacent areas and associated air space) that are susceptible to engagement and influence from the sea.
See also
References
Sources
Haslett, Simon K (2001) Coastal Systems. Routledge.
Mann, Kenneth Henry (2000) Ecology of Coastal Waters Blackwell Publishing.
Yip, Maricela and Madl, Pierre (1999) Littoral University of Salzburg.
Aquatic biomes
Marine biology
Aquatic ecology
Habitats
Coasts
Fisheries science
Coastal geography
Oceanographical terminology
Limnology
Oceanography | Littoral zone | [
"Physics",
"Biology",
"Environmental_science"
] | 2,267 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Marine biology",
"Ecosystems",
"Aquatic ecology"
] |
581,370 | https://en.wikipedia.org/wiki/Acyl%20chloride | In organic chemistry, an acyl chloride (or acid chloride) is an organic compound with the functional group . Their formula is usually written , where R is a side chain. They are reactive derivatives of carboxylic acids (). A specific example of an acyl chloride is acetyl chloride, . Acyl chlorides are the most important subset of acyl halides.
Nomenclature
Where the acyl chloride moiety takes priority, acyl chlorides are named by taking the name of the parent carboxylic acid, and substituting -yl chloride for -ic acid. Thus:
butyric acid (C3H7COOH) → butyryl chloride (C3H7COCl)
(Idiosyncratically, for some trivial names, -oyl chloride substitutes -ic acid. For example, pivalic acid becomes pivaloyl chloride and acrylic acid becomes acryloyl chloride. The names pivalyl chloride and acrylyl chloride are less commonly used, although they are arguably more logical.)
When other functional groups take priority, acyl chlorides are considered prefixes — chlorocarbonyl-:
Properties
Lacking the ability to form hydrogen bonds, acyl chlorides have lower boiling and melting points than similar carboxylic acids. For example, acetic acid boils at 118 °C, whereas acetyl chloride boils at 51 °C. Like most carbonyl compounds, infrared spectroscopy reveals a band near 1750 cm−1.
The simplest stable acyl chloride is acetyl chloride; formyl chloride is not stable at room temperature, although it can be prepared at –60 °C or below.
Acyl chlorides hydrolyze (react with water) to form the corresponding carboxylic acid and hydrochloric acid:
RCOCl + H2O -> RCOOH + HCl
Synthesis
Industrial routes
The industrial route to acetyl chloride involves the reaction of acetic anhydride with hydrogen chloride:
(CH3CO)2O + HCl -> CH3COCl + CH3CO2H
Propionyl chloride is produced by chlorination of propionic acid with phosgene:
CH3CH2CO2H + COCl2 -> CH3CH2COCl + HCl + CO2
Benzoyl chloride is produced by the partial hydrolysis of benzotrichloride:
C6H5CCl3 + H2O -> C6H5C(O)Cl + 2 HCl
Similarly, benzotrichlorides react with carboxylic acids to the acid chloride. This conversion is practiced for the reaction of 1,4-bis(trichloromethyl)benzene to give terephthaloyl chloride:
C6H4(CCl3)2 + C6H4(CO2H)2 -> 2 C6H4(COCl)2 + 2 HCl
Laboratory methods
Thionyl chloride
In the laboratory, acyl chlorides are generally prepared by treating carboxylic acids with thionyl chloride (). The reaction is catalyzed by dimethylformamide and other additives.
Thionyl chloride is a well-suited reagent as the by-products (HCl, ) are gases and residual thionyl chloride can be easily removed as a result of its low boiling point (76 °C).
Phosphorus chlorides
Phosphorus trichloride () is popular, although excess reagent is required. Phosphorus pentachloride () is also effective, but only one chloride is transferred:
RCO2H + PCl5 -> RCOCl + POCl3 + HCl
Oxalyl chloride
Another method involves the use of oxalyl chloride:
RCO2H + ClCOCOCl ->[DMF] RCOCl + CO + CO2 + HCl
The reaction is catalysed by dimethylformamide (DMF), which reacts with oxalyl chloride to give the Vilsmeier reagent, an iminium intermediate that reacts with the carboxylic acid to form a mixed imino-anhydride. This structure undergoes acyl substitution by the liberated chloride, forming the acid chloride and regenerating DMF. Relative to thionyl chloride, oxalyl chloride is more expensive but also a milder reagent and therefore more selective.
Other laboratory methods
Acid chlorides can be used as a chloride source. Thus acetyl chloride can be distilled from a mixture of benzoyl chloride and acetic acid:
CH3CO2H + C6H5COCl -> CH3COCl + C6H5CO2H
Other methods that do not form HCl include the Appel reaction:
RCO2H + Ph3P + CCl4 -> RCOCl + Ph3PO + HCCl3
Another is the use of cyanuric chloride:
RCO2H + C3N3Cl3 -> RCOCl + C3N3Cl2OH
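A hedged sketch (not from the article): the overall functional-group change common to all of these routes, carboxylic acid to acyl chloride, can be written as a reaction SMARTS and applied with RDKit. The SMARTS pattern and the example inputs are our own choices, and the code says nothing about which chlorinating reagent is used.

```python
# Generic -COOH -> -COCl transformation applied on paper with RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem

rxn = AllChem.ReactionFromSmarts("[C:1](=[O:2])[OX2H1]>>[C:1](=[O:2])Cl")

for smiles in ["CC(=O)O", "OC(=O)c1ccccc1"]:        # acetic acid, benzoic acid
    acid = Chem.MolFromSmiles(smiles)
    product = rxn.RunReactants((acid,))[0][0]
    Chem.SanitizeMol(product)
    print(smiles, "->", Chem.MolToSmiles(product))   # acetyl chloride, benzoyl chloride
```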
Reactions
Acyl chloride are reactive, versatile reagents. Acyl chlorides have a greater reactivity than other carboxylic acid derivatives like acid anhydrides, esters or amides:
Acyl chlorides hydrolyze, yielding the carboxylic acid:
This hydrolysis is usually a nuisance rather than intentional.
Alcoholysis, aminolysis, and related reactions
Acid chlorides are useful for the preparation of amides, esters, and anhydrides; these reactions generate chloride, which can be undesirable. Acyl chlorides are converted to acid anhydrides, amides, and esters by reaction with a salt of a carboxylic acid, an amine, or an alcohol, respectively.
Acid halides are the most reactive acyl derivatives, and can easily be converted into any of the others. Acid halides will react with carboxylic acids to form anhydrides. If the structure of the acid and the acid chloride are different, the product is a mixed anhydride. First, the carboxylic acid attacks the acid chloride (1) to give tetrahedral intermediate 2. The tetrahedral intermediate collapses, ejecting chloride ion as the leaving group and forming oxonium species 3. Deprotonation gives the mixed anhydride, 4, and an equivalent of HCl.
Alcohols and amines react with acid halides to produce esters and amides, respectively, in a reaction formally known as the Schotten-Baumann reaction. Acid halides hydrolyze in the presence of water to produce carboxylic acids, but this type of reaction is rarely useful, since carboxylic acids are typically used to synthesize acid halides. Most reactions with acid halides are carried out in the presence of a non-nucleophilic base, such as pyridine, to neutralize the hydrohalic acid that is formed as a byproduct.
Mechanism
The alcoholysis of acyl halides (the alkoxy-dehalogenation) is believed to proceed via an SN2 mechanism (Scheme 10). However, the mechanism can also be tetrahedral or SN1 in highly polar solvents (while the SN2 reaction involves a concerted reaction, the tetrahedral addition-elimination pathway involves a discernible intermediate).
Bases, such as pyridine or N,N-dimethylformamide, catalyze acylations. These reagents activate the acyl chloride via a nucleophilic catalysis mechanism. The amine attacks the carbonyl bond and presumably first forms a transient tetrahedral intermediate, then forms a quaternary acylammonium salt by the displacement of the leaving group. This quaternary acylammonium salt is more susceptible to attack by alcohols or other nucleophiles.
The use of two phases (aqueous for amine, organic for acyl chloride) is called the Schotten-Baumann reaction. This approach is used in the preparation of nylon via the so-called nylon rope trick.
Reactions with carbanions
Acid halides react with carbon nucleophiles, such as Grignards and enolates, although mixtures of products can result. While a carbon nucleophile will react with the acid halide first to produce a ketone, the ketone is also susceptible to nucleophilic attack, and can be converted to a tertiary alcohol. For example, when benzoyl chloride (1) is treated with two equivalents of a Grignard reagent, such as methyl magnesium bromide (MeMgBr), 2-phenyl-2-propanol (3) is obtained in excellent yield. Although acetophenone (2) is an intermediate in this reaction, it is impossible to isolate because it reacts with a second equivalent of MeMgBr rapidly after being formed.
Unlike most other carbon nucleophiles, lithium dialkylcuprates – often called Gilman reagents – can add to acid halides just once to give ketones. The reaction between an acid halide and a Gilman reagent is not a nucleophilic acyl substitution reaction, however, and is thought to proceed via a radical pathway. The Weinreb ketone synthesis can also be used to convert acid halides to ketones. In this reaction, the acid halide is first converted to an N–methoxy–N–methylamide, known as a Weinreb amide. When a carbon nucleophile – such as a Grignard or organolithium reagent – adds to a Weinreb amide, the metal is chelated by the carbonyl and N–methoxy oxygens, preventing further nucleophilic additions.
Carbon nucleophiles such as Grignard reagents, convert acyl chlorides to ketones, which in turn are susceptible to the attack by second equivalent to yield the tertiary alcohol. The reaction of acyl halides with certain organocadmium reagents stops at the ketone stage. The reaction with Gilman reagents also afford ketones, reflecting the low nucleophilicity of these lithium diorganocopper compounds.
Reduction
Acyl chlorides are reduced by lithium aluminium hydride and diisobutylaluminium hydride to give primary alcohols. Lithium tri-tert-butoxyaluminium hydride, a bulky hydride donor, reduces acyl chlorides to aldehydes, as does the Rosenmund reduction using hydrogen gas over a poisoned palladium catalyst.
Acylation of arenes
In the Friedel–Crafts acylation, acid halides act as electrophiles for electrophilic aromatic substitution. A Lewis acid – such as zinc chloride (ZnCl2), iron(III) chloride (FeCl3), or aluminum chloride (AlCl3) – coordinates to the halogen on the acid halide, activating the compound towards nucleophilic attack by an activated aromatic ring. For especially electron-rich aromatic rings, the reaction will proceed without a Lewis acid.
Because of the harsh conditions and the reactivity of the intermediates, this otherwise quite useful reaction tends to be messy, as well as environmentally unfriendly.
Oxidative addition
Acyl chlorides react with low-valent metal centers to give transition metal acyl complexes. Illustrative is the oxidative addition of acetyl chloride to Vaska's complex, converting square planar Ir(I) to octahedral Ir(III):
IrCl(CO)(PPh3)2 + CH3COCl -> CH3COIrCl2(CO)(PPh3)2
Hazards
Low molecular weight acyl chlorides are often lachrymators, and they react violently with water, alcohols, and amines.
References
Functional groups | Acyl chloride | [
"Chemistry"
] | 2,542 | [
"Functional groups"
] |
581,391 | https://en.wikipedia.org/wiki/Alternative%20lifestyle | An alternative lifestyle or unconventional lifestyle is a lifestyle perceived to be outside the norm for a given culture. The term alternative lifestyle is often used pejoratively. Description of a related set of activities as alternative is a defining aspect of certain subcultures.
History
Alternative lifestyles and subcultures were first highlighted in the U.S. and the U.K. in the 1920s with the "flapper" movement. Women cut their hair and skirts short (as a symbol of freedom from oppression and the old ways of living). These women were the first large group of women to practice pre-marital sex, dancing, cursing, and driving in modern America without the ostracism that had occurred in earlier instances.
The American press in the 1970s frequently used the term "alternative lifestyle" as a euphemism for homosexuality out of fear of offending a mass audience. The term was also used to refer to hippies, who were seen as a threat to the social order.
Examples
The following is a non-exhaustive list of activities that have been described as alternative lifestyles:
A Stanford University cooperative house, Synergy, was founded in 1972 with the theme of "exploring alternative lifestyles".
Alternative child-rearing, such as homeschooling, coparenting, and home births
Environmentally-conscious ways of eating, such as veganism, freeganism, or raw foodism
Living in non-traditional communities, such as communes, intentional communities, ecovillages, off-the-grid, or the tiny house movement
Traveling subcultures, including lifestyle travellers, digital nomads, housetruckers, and New Age travellers
Countercultural movements and alternative subcultures such as Bohemianism, punk rock, emo, metal music subculture, antiquarian steampunk, hippies, and vampires
Body modification, including tattoos, body piercings, eye tattooing, scarification, non-surgical stretching like ears or genital stretching, and transdermal implants
Nudism and clothing optional lifestyles
Homosexual lifestyles and relationships.
Non-normative sexual lifestyles and gender identity-based subcultures, such as BDSM, LGBT culture, cross-dressing, transvestism, polyamory, cruising, swinging, down-low, and certain types of sexual fetishism, roleplays, or paraphilias
Adherents to alternative spiritual and religious communities, such as Freemasons, Ordo Templi Orientis, Thelemites, Satanists, Modern Pagans, and New Age communities
Certain traditional religious minorities, such as Anabaptist Christians (most notably Amish, Mennonites, Hutterites, the Bruderhof Communities, and Schwarzenau Brethren) and ultra-Orthodox Jews, who pursue simple living alongside a non-technological or anti-technology lifestyle
Secular anti-technology communities called neo-Luddites
Engagement in artistic pursuits, such as music, visual arts, or performance, often influenced by subcultures like punk, goth, or bohemianism.
Ethical clothing shopping, often involving sourcing garments through thrifting, garage sales, or crafting one’s own pieces.
See also
Alternative culture
Alternative housing
Intentional living
Lebensreform
Straight edge
Teetotalism
Temperance movement
Underground culture
References
1920s introductions
Deviance (sociology)
Lifestyle
Philosophy of life
Subcultures | Alternative lifestyle | [
"Biology"
] | 692 | [
"Deviance (sociology)",
"Behavior",
"Human behavior"
] |
581,417 | https://en.wikipedia.org/wiki/Table%20of%20standard%20reduction%20potentials%20for%20half-reactions%20important%20in%20biochemistry | The values below are standard apparent reduction potentials for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution.
The actual physiological potential depends on the ratio of the reduced () and oxidized () forms according to the Nernst equation and the thermal voltage.
When an oxidizer (Ox) accepts a number z of electrons (e⁻) to be converted into its reduced form (Red), the half-reaction is expressed as:
Ox + z e⁻ → Red
The reaction quotient (r) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aox). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci):
The Nernst equation is a function of and can be written as follows:
At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant (K) of the half-reaction, and in the absence of a driving force (ΔG = 0) the potential (E) also becomes zero.
The numerically simplified form of the Nernst equation is expressed as:
Where E° is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention, as it serves as the reference. The standard hydrogen electrode (SHE), with [H+] = 1 M, thus works at pH = 0.
At pH = 7, when [H+] = 10⁻⁷ M, the reduction potential of H+ differs from zero because it depends on pH.
Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas gives:
In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons (H+) into hydrogen gas is no longer zero as with the standard hydrogen electrode (SHE) at 1 M H+ (pH = 0) in classical electrochemistry, but that it is −0.414 V versus the standard hydrogen electrode (SHE).
The same also applies for the reduction potential of oxygen:
O2 + 4 H+ + 4 e− → 2 H2O
For O2, E°red = 1.229 V, so, applying the Nernst equation for pH = 7 gives:
Ered = 1.229 − (0.05916 × 7) = 0.815 V
For obtaining the values of the reduction potential at pH = 7 for the redox reactions relevant for biological systems, the same kind of conversion exercise is done using the corresponding Nernst equation expressed as a function of pH.
The conversion is simple, but care must be taken not to inadvertently mix reduction potential converted at pH = 7 with other data directly taken from tables referring to SHE (pH = 0).
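To make this conversion concrete, here is a minimal Python sketch of the pH correction described above; the function name is illustrative, the physical constants are standard values, and the two printed couples restate the worked H+/H2 and O2/H2O examples rather than values from the table.

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def reduction_potential_at_ph(e0_vs_she, h, z, ph, temp_c=25.0):
    """Shift a standard reduction potential (vs. SHE, pH 0) to a given pH.

    Applies only the proton-dependent term of the Nernst equation,
    E = E0 - ln(10)*R*T/F * (h/z) * pH, where h is the number of protons
    and z the number of electrons in the half-reaction.
    """
    t = temp_c + 273.15
    slope = math.log(10) * R * t / F      # ~0.05916 V per pH unit at 25 degrees C
    return e0_vs_she - slope * (h / z) * ph

# Worked examples from the text:
print(reduction_potential_at_ph(0.0, 2, 2, 7))    # 2H+ + 2e- -> H2: about -0.414 V
print(reduction_potential_at_ph(1.229, 4, 4, 7))  # O2 + 4H+ + 4e- -> 2H2O: about 0.815 V
```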
Expression of the Nernst equation as a function of pH
The Ered and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):
a A + h H+ + z e− ⇌ b B
The half-cell standard reduction potential E°red is given by
E°red = −ΔG° / (zF)
where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and Ered:
Ered = E°red − (0.05916 / z) log ({B}^b / {A}^a) − 0.05916 (h / z) pH
where curly braces { } indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Ered as a function of pH, with a slope of −0.05916 (h / z) volt (pH has no units).
This equation predicts lower Ered at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2.
Formal standard reduction potential combined with the pH dependency
To obtain the reduction potential as a function of the measured concentrations of the redox-active species in solution, it is necessary to express the activities as a function of the concentrations.
Given that the chemical activity denoted here by { } is the product of the activity coefficient γ by the concentration denoted by [ ]: ai = γi·Ci, here expressed as {X} = γX [X] and {X}^x = (γX)^x [X]^x, and replacing the logarithm of a product by the sum of the logarithms (i.e., log (a·b) = log a + log b), the log of the reaction quotient (Qr) (without {H+}, already isolated apart in the last term as −0.05916 (h/z) pH) expressed here above with activities { } becomes:
log ({B}^b / {A}^a) = log ((γB)^b / (γA)^a) + log ([B]^b / [A]^a)
This allows the Nernst equation to be reorganized as:
Ered = E°'red − (0.05916 / z) log ([B]^b / [A]^a) − 0.05916 (h / z) pH
where E°'red is the formal standard potential, independent of pH, including the activity coefficients.
Combining E°'red directly with the last term depending on pH gives:
Ered = (E°'red − 0.05916 (h / z) pH) − (0.05916 / z) log ([B]^b / [A]^a)
For a pH = 7:
E°'red (pH 7) = E°'red − 0.05916 (h / z) × 7 = E°'red − 0.414 (h / z)
So,
Ered = E°'red (pH 7) − (0.05916 / z) log ([B]^b / [A]^a)
It is therefore important to know to what exact definition the value of a reduction potential reported at pH = 7 for a given biochemical redox process refers, and to correctly understand the relationship used.
Is it simply:
Ered calculated at pH 7 (with or without corrections for the activity coefficients),
E°'red, a formal standard reduction potential including the activity coefficients but no pH calculations, or, is it,
E°'red (pH 7), an apparent formal standard reduction potential at pH 7 in given conditions and also depending on the ratio [Red] / [Ox].
This thus requires a clear definition of the considered reduction potential, and a sufficiently detailed description of the conditions in which it is valid, along with a complete expression of the corresponding Nernst equation. Were the reported values derived only from thermodynamic calculations, or determined from experimental measurements, and under what specific conditions? Without being able to correctly answer these questions, mixing data from different sources without appropriate conversion can lead to errors and confusion.
Determination of the formal standard reduction potential when [Red] / [Ox] = 1
The formal standard reduction potential E°'red can be defined as the measured reduction potential Ered of the half-reaction at unity concentration ratio of the oxidized and reduced species (i.e., when [Red] / [Ox] = 1) under given conditions.
Indeed:
Ered = E°'red − (0.05916 / z) log ([Red] / [Ox]),
so Ered = E°'red, when [Red] / [Ox] = 1,
because log 1 = 0, and the term depending on the activity coefficients (γRed / γOx) is included in E°'red.
The formal reduction potential makes it possible to work more simply with molar or molal concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective formal in the expression formal potential.
The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949), "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal".
The activity coefficients γOx and γRed are included in the formal potential E°'red, and because they depend on experimental conditions such as temperature, ionic strength, and pH, E°'red cannot be regarded as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions.
Formal reduction potentials are applied to simplify the interpretation of results and calculations for a considered system. Their relationship with the standard reduction potentials must be clearly expressed to avoid any confusion.
Main factors affecting the formal (or apparent) standard reduction potentials
The main factor affecting the formal (or apparent) reduction potentials in biochemical or biological processes is the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied, taking care to first express the relationship as a function of pH. The second factor to be considered is the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values and the hypotheses made on the activity coefficients must always be clearly indicated. When using, or comparing, several formal (or apparent) reduction potentials, they must also be internally consistent.
Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently directly mix standard reduction potentials (E°red versus SHE, pH = 0) with formal (or apparent) reduction potentials (E°'red at pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and directly mixing data from classical electrochemistry textbooks (E°red versus SHE, pH = 0) and microbiology textbooks (E°'red at pH = 7) without paying attention to the conventions on which they are based).
Example in biochemistry
For example, in a two-electron couple such as NAD+:NADH, the reduction potential becomes ~30 mV (or more exactly, 59.16 mV / 2 = 29.6 mV) more positive for every power of ten increase in the ratio of the oxidised to the reduced form.
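A one-line numerical check of this figure (a sketch; the constants are the standard gas constant, temperature, and Faraday constant, not values from the source):

```python
import math

R, T, F = 8.314462618, 298.15, 96485.33212  # J/(mol*K), K, C/mol
# Nernst slope per tenfold concentration change for a z = 2 couple, in mV:
print(1000 * math.log(10) * R * T / (2 * F))  # ~29.58 mV
```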
Some important apparent potentials used in biochemistry
See also
Nernst equation
Electron bifurcation
Pourbaix diagram
Reduction potential
Dependency of reduction potential on pH
Standard electrode potential
Standard reduction potential
Standard reduction potential (data page)
Standard state
References
Bibliography
Electrochemistry
Bio-electrochemistry
Microbiology
Biochemistry
Standard reduction potentials for half-reactions important in biochemistry
Electrochemical potentials
Thermodynamics databases
Biochemistry databases | Table of standard reduction potentials for half-reactions important in biochemistry | [
"Physics",
"Chemistry",
"Biology"
] | 2,091 | [
"Electrochemical potentials",
"Biochemistry databases",
"Electrochemistry",
"Thermodynamics",
"nan",
"Biochemistry",
"Thermodynamics databases"
] |
581,448 | https://en.wikipedia.org/wiki/Rhode%20Island%20School%20of%20Design | The Rhode Island School of Design (RISD, pronounced "Riz-D") is a private art and design school in Providence, Rhode Island. The school was founded as a coeducational institution in 1877 by Helen Adelia Rowe Metcalf, who sought to increase the accessibility of design education to women. Today, RISD offers bachelor's and master's degree programs across 19 majors and enrolls approximately 2,000 undergraduate and 500 graduate students. The Rhode Island School of Design Museum—which houses the school's art and design collections—is one of the largest college art museums in the United States.
The Rhode Island School of Design is affiliated with Brown University, whose campus sits immediately adjacent to RISD's on Providence's College Hill. The two institutions share social and community resources and since 1900 have permitted cross-registration. Together, RISD and Brown offer dual degree programs at the graduate and undergraduate levels. As of 2024, RISD alumni have received 11 MacArthur Fellowships, 9 Emmy Awards, 7 Guggenheim Fellowships, and 3 Academy Awards.
History
Founding of the school
The Rhode Island School of Design's founding is often traced back to Helen Adelia Rowe Metcalf's 1876 visit to the Centennial Exposition in Philadelphia. At the exposition, Metcalf visited the Women's Pavilion. Organized by the "Centennial Women," the pavilion showcased the work of female entrepreneurs, artists, and designers. Metcalf's visit to the pavilion profoundly impacted her and motivated her to address a deficiency in design education accessible to women.
Following the exhibition, the RI committee of the Centennial Women had $1,675 remaining in funds; the group spent some time negotiating how best to use the surplus. Metcalf lobbied the group to use the money to establish a coeducational, design school in Providence. On January 11, 1877, a majority of women on the committee voted for Metcalf's proposal.
On March 22, 1877, the Rhode Island General Assembly ratified "An Act to Incorporate the Rhode Island School of Design", "[f]or the purpose of aiding in the cultivation of the arts of design". For the next 129 years, the original by-laws set forth the following primary objectives:
The instruction of artisans in drawing, painting, modeling, and designing, that they may successfully apply the principles of Art to the requirements of trade and manufacture.
The systematic training of students in the practice of Art, in order that they may understand its principles, give instruction to others, or become artists.
The general advancement of public Art Education, by the exhibition of works of Art and of Art school studies, and by lectures on Art.
Metcalf directed the school until her death in 1895. Her daughter, Eliza Greene Metcalf Radeke, then took over until her own death in 1931.
Beginnings
The school opened in October 1877 in Providence. The first class consisted of 43 students, the majority of whom were women.
For the first 15 years of its existence, RISD occupied a suite of six rooms on the fourth floor of the Hoppin Homestead Building in Downtown Providence. On October 24, 1893, the school dedicated a new brick building at 11 Waterman Street on College Hill. Designed by Hoppin, Read & Hoppin, this building served as the first permanent home for the school.
Activism during the Vietnam War
Students at RISD played a key part in the national protest of the Vietnam War, producing various notable anti-war protest artworks from 1968 to 1973 and taking several on tour as part of a mobile artwork petition. The best known is Leave the Fear of Red to Horned Beasts, a reference to Victor Hugo's novel Les Misérables in the form of a watercolor-on-canvas painting of a charging red bull. An original print of this painting is on permanent display at the War Remnants Museum in Ho Chi Minh City, Vietnam, in a section dedicated to international protest of the Vietnam War; it also features subtly as a bar mural in the Vietnam War film Point Man.
In 1969 the Black Student Community of RISD published a manifesto demanding of university faculty the establishment of "a meaningful liaison with the spirit and expression of Black culture." RISD subsequently hired administrators to begin recruiting and admitting increased numbers of students of color.
COVID-19
After the outbreak of COVID-19 and the subsequent closure of the RISD campus in March 2020, RISD suggested a future of a hybrid of classes online and in-person.
In July 2020, President Somerson began negotiations with the RISD faculty union over the avoidance of possible layoffs by suggesting cost-cutting measures. The part-time faculty union, the National Education Association, rejected the initial proposal.
Racial diversity and equity
In the summer of 2020, after the Black Lives Matter and George Floyd protests, RISD students and alumni came forward to voice outrage at the institution for failing at social equity and inclusion. They formed a student-led RISD Anti-Racism Coalition (ARC) alongside BIPOC faculty. As a result, in July 2020, RISD announced that it would hire 10 new faculty members specializing in "race and ethnicity in arts and design", that the RISD Museum would return to Nigeria a sculpture that had been looted, that the curriculum would be expanded and diversified, and that the school would "remain committed to reform".
Labor strike
In April 2023, after months of negotiations, the RISD employees union held a picket line protest to demand better wages. The union, which represents custodians, groundskeepers and movers, was joined in the strike by student supporters and community members. The strike lasted two weeks, until workers approved a new contract and returned to work on April 19.
Pro-Palestine solidarity
Students at RISD, along with many across the country in the BDS movement, occupied a campus building for multiple days in support of a cease-fire of the Israeli–Palestinian conflict in early May 2024.
Presidents
RISD's current president is Crystal Williams. She was preceded by Rosanne Somerson who served in the role from 2015 to 2021.
Rankings and admission
In 2014, U.S. News & World Report ranked RISD first among fine arts programs nationwide. In 2020, graduate programs in Graphic Design, Painting, Sculpture, and Photography, among others, were ranked in the top 5 nationally. In 2023, however, RISD announced its withdrawal from the rankings, citing their inability to accurately assess art and design education and their tension with principles of social equity and inclusion. The school's undergraduate architecture program ranked 6th in DesignIntelligence's ranking of the Top Architecture Schools in the US for 2019. In 2018, the institution was also named among Forbes' America's Top Colleges and the Chronicle of Higher Education's Top Producers of US Fulbright Scholars.
RISD's acceptance rate is 19%. In August 2019, the school announced it would be adopting a test-optional policy for admissions.
Campus
In the past, RISD buildings were mostly located at the western edge of College Hill, between the Brown University campus and the Providence River. In recent decades, RISD has acquired or built buildings on the downslope nearer the river, or in Downtown Providence just on the other side of the waterway. The main library, undergrad dormitories, and graduate studios of the college are now located downtown.
RISD Museum
The RISD Museum was founded in 1877 on the belief that art, artists, and the institutions that support them play pivotal roles in promoting broad civic engagement and creating more open societies. With a permanent collection numbering approximately 100,000 works, the RISD museum is the third largest art museum attached to an educational facility.
Athletics
RISD has many athletic clubs and teams. The hockey team is called the "Nads", and their cheer is "Go Nads!" The logo for the Nads features a horizontal hockey stick with two hockey pucks at the end of the stick's handle.
The basketball team is known simply as "The Balls", and their slogan is, "When the heat is on, the Balls stick together!" The Balls' logo consists of two balls next to one another in an irregularly shaped net.
Lest the sexual innuendo of these team names and logos be lost or dismissed, the 2001 creation of the school's unofficial mascot, Scrotie, ended any ambiguity. Despite the name, Scrotie is not merely a representation of a scrotum, but a 7-foot-tall penis.
The school's color is a vivid blue.
Notable people
Alumni
Notable RISD alumni include Kara Walker (MFA 1994), Jenny Holzer (MFA 1977), Nicole Eisenman (BFA 1987), Do-Ho Suh (BFA 1994), Julie Mehretu (MFA 1997), Roni Horn (BFA 1975), Shahzia Sikander (MFA 1995), Glenn Ligon (attended 1978-80), Ryan Trecartin (BFA 2004), Lizzie Fitch (BFA 2004), Janine Antoni (MFA 1989), Rose B. Simpson (MFA 2011) as well as artist collectives including Fort Thunder (1995-2001) and Forcefield (1997-2003) and the band Lightning Bolt. Graduates in photography include Francesca Woodman (BFA 1978), Todd Hido (attended 1991-92), Deana Lawson (MFA 2004), and RaMell Ross (MFA 2014).
Among the school's alumni in illustration are Brian Selznick (BFA 1988), Chris Van Allsburg (MA 1975), Roz Chast (BFA 1977), and David Macaulay (BArch 1969). Alumni in graphic design include Shepard Fairey (BFA 1992), Tobias Frere-Jones (BFA 1992), and Pippin Frisbie-Calder (BFA 2008). Among the alumni of the school's architecture department are Hashim Sarkis (BArch 1987) Deborah Berke (BFA 1975, BArch 1977), Preston Scott Cohen (BArch 1983), and Nader Tehrani (BArch 1986).
Prominent RISD graduates in film include James Franco (MFA 2012), Seth MacFarlane (BFA 1995), Jemima Kirke (BFA 2008), Bryan Konietzko (BFA 1998), Michael Dante DiMartino (BFA 1996), Gus Van Sant (BFA 1975), and Robert Richardson (BFA 1979). Graduates in music include bassist Syd Butler (BFA 1996) and two founding members of Talking Heads: Tina Weymouth (BFA 1974) and Chris Frantz (BFA 1974); Talking Heads' other founder, David Byrne, is also a RISD alumnus and met Weymouth and Frantz at the art school, but left before graduation.
Among the school's alumni in business are Airbnb co-founders Joe Gebbia (BFA 2004) and Brian Chesky (BFA 2004).
Faculty
Notable RISD faculty include photographers Diane Arbus, Aaron Siskind, and Elle Pérez, sculptor Simone Leigh, painters Jennifer Packer, Aaron Gilbert, and Angela Dufresne, architect Friedrich St. Florian, designer Victor Papanek, and Pulitzer Prize-winning author Jhumpa Lahiri. Rockwell King DuMoulin was a professor and architecture department chair from 1972 to 1978.
References
External links
Art schools in Rhode Island
Design schools in the United States
Architecture schools in Rhode Island
Landscape architecture schools
Universities and colleges established in 1877
Graphic design schools in the United States
Animation schools in the United States
1877 establishments in Rhode Island
Private universities and colleges in Rhode Island
Glassmaking schools | Rhode Island School of Design | [
"Materials_science",
"Engineering"
] | 2,381 | [
"Glass engineering and science",
"Glassmaking schools"
] |
581,449 | https://en.wikipedia.org/wiki/Martin%20Seligman | Martin Elias Peter Seligman (born August 12, 1942) is an American psychologist, educator, and author of self-help books. Seligman is a strong promoter within the scientific community of his theories of well-being and positive psychology. His theory of learned helplessness is popular among scientific and clinical psychologists. A Review of General Psychology survey, published in 2002, ranked Seligman as the 31st most cited psychologist of the 20th century.
Seligman is the Zellerbach Family Professor of Psychology in the University of Pennsylvania's Department of Psychology. He was previously the Director of the Clinical Training Program in the department, and earlier taught at Cornell University. He is the director of the university's Positive Psychology Center. Seligman was elected president of the American Psychological Association for 1998. He is the founding editor-in-chief of Prevention and Treatment (the APA electronic journal) and is on the board of advisers of Parents magazine.
Seligman has written about positive psychology topics in books such as The Optimistic Child, Child's Play, Learned Optimism, Authentic Happiness, and Flourish. His most recent book, Tomorrowmind, co-written with Gabriella Rosen Kellerman, was published in 2023.
Early life and education
Seligman was born in Albany, New York, to a Jewish family. He was educated at a public school and at The Albany Academy. He earned a bachelor's degree in philosophy at Princeton University in 1964, graduating summa cum laude, and a Ph.D. in psychology from the University of Pennsylvania in 1967. In June 1989, Seligman received an honorary doctorate from the Faculty of Social Sciences at Uppsala University, Sweden.
Learned helplessness
Seligman's foundational experiments and theory of "learned helplessness" began at University of Pennsylvania in 1967, as an extension of his interest in depression. Seligman and colleagues accidentally discovered that the experimental conditioning protocol they used with dogs led to behaviors which were unexpected, in that under the experimental conditions, the recently conditioned dogs did not respond to opportunities to learn to escape from an unpleasant situation ('electric shocks of moderate intensity'). A fictionalised account of the experiment, and illustration of anti-vivisection arguments about the ethics of electrocuting dogs in the name of psychological research, occur in clinical psychologist Guy Holmes' novel The Black Dogs of Glaslyn. At the time, and in subsequent years, very little ethical critique of the experiment was published, although definitional and conceptual issues were critiqued.
Seligman developed the theory further, finding learned helplessness to be a psychological condition in which a human being or an animal has learned to act or behave helplessly in a particular situation—usually after experiencing some inability to avoid an adverse situation—even when it actually has the power to change its unpleasant or even harmful circumstance. Seligman saw a similarity with severely depressed patients, and argued that clinical depression and related mental illnesses result in part from a perceived absence of control over the outcome of a situation. In later years, alongside Abramson, Seligman reformulated his theory of learned helplessness to include attributional style.
Happiness
In his 2002 book Authentic Happiness, Seligman saw happiness as made up of positive emotion, engagement and meaning.
Positive psychology
Seligman worked with Christopher Peterson to create what they describe as a "positive" counterpart to the Diagnostic and Statistical Manual of Mental Disorders (DSM). While the DSM focuses on what can go wrong, Character Strengths and Virtues (2004) is designed to look at what can go right. In their research they looked across cultures and across millennia to attempt to distill a manageable list of virtues that have been highly valued from ancient China and India, through Greece and Rome, to contemporary Western cultures.
Their list includes six classes of virtue: wisdom/knowledge, courage, humanity, justice, temperance, and transcendence. Each of these has three to five sub-entries; for instance, temperance includes forgiveness, humility, prudence, and self-regulation. The authors do not believe that there is a hierarchy for the six virtues; no one is more fundamental than or a precursor to the others.
Well-being
In his book Flourish, 2011, Seligman wrote on "Well-Being Theory", and said, with respect to how he measures well-being:
Each element of well-being must itself have three properties to count as an element:
It contributes to well-being.
Many people pursue it for its own sake, not merely to get any of the other elements.
It is defined and measured independently of the other elements.
Seligman concluded that there are five elements to "well-being", which fall under the mnemonic PERMA:
Positive emotion—Can only be assessed subjectively
Engagement—Like positive emotion, can only be measured through subjective means. It is presence of a flow state
Relationships—The presence of friends, family, intimacy, or social connection
Meaning—Belonging to and serving something bigger than one's self
Achievement—Accomplishment that is pursued even when it brings no positive emotion, no meaning, and nothing in the way of positive relationships.
These theories have not been empirically validated.
In July 2011, Seligman encouraged the British Prime Minister, David Cameron, to look into well-being as well as financial wealth in ways of assessing the prosperity of a nation. On July 6, 2011, Seligman appeared on Newsnight and was interviewed by Jeremy Paxman about his ideas and his interest in the concept of well-being.
MAPP program
The Master of Applied Positive Psychology (MAPP) program at the University of Pennsylvania was established under the leadership of Seligman as the first educational initiative of the Positive Psychology Center in 2003.
Personal life
Seligman plays bridge and finished second in the 1998 installment of the Blue Ribbon Pairs, one of the three major North American pair championships; he has also won over 50 regional championships.
Seligman has seven children, four grandchildren, and two dogs. He and his second wife, Mandy, live in a house that was once occupied by Eugene Ormandy. They have home-schooled five of their seven children.
Seligman was inspired by the work of the psychiatrist Aaron T. Beck at the University of Pennsylvania in refining his own cognitive techniques and exercises.
Publications
(Paperback reprint edition, W.H. Freeman, 1992, )
(Paperback reprint edition, Penguin Books, 1998; reissue edition, Free Press, 1998)
(Paperback reprint edition, Ballantine Books, 1995, )
(Paperback edition, Harper Paperbacks, 1996, )
(Paperback edition, Free Press, 2004, )
References
External links
Authentic Happiness, Seligman's homepage at University of Pennsylvania
"Eudaemonia, the Good Life: A Talk with Martin Seligman", an article wherein Seligman speaks extensively on the topic of eudaemonia
"The Positive Psychology Center", a website devoted to positive psychology. Martin Seligman is director of the Positive Psychology Center of the University of Pennsylvania.
Program description for Master of Applied Positive Psychology degree established by Seligman
Martin E. P. Seligman's curriculum vitae at the University of Pennsylvania
TED Talk: Why is psychology good?
University of Pennsylvania's page on MAPP program
1942 births
20th-century American psychologists
21st-century American psychologists
American contract bridge players
American male non-fiction writers
American self-help writers
Animal testing
Fellows of the American Association for the Advancement of Science
Fellows of the Society of Experimental Psychologists
Jewish American non-fiction writers
Living people
Positive psychologists
Presidents of the American Psychological Association
Princeton University alumni
American social psychologists
The Albany Academy alumni
University of Pennsylvania alumni
University of Pennsylvania faculty
21st-century American Jews
James McKeen Cattell Fellow Award recipients
APA Distinguished Scientific Award for an Early Career Contribution to Psychology recipients | Martin Seligman | [
"Chemistry"
] | 1,594 | [
"Animal testing"
] |
581,610 | https://en.wikipedia.org/wiki/Kummer%20theory | In abstract algebra and number theory, Kummer theory provides a description of certain types of field extensions involving the adjunction of nth roots of elements of the base field. The theory was originally developed by Ernst Eduard Kummer around the 1840s in his pioneering work on Fermat's Last Theorem. The main statements do not depend on the nature of the field – apart from its characteristic, which should not divide the integer n – and therefore belong to abstract algebra. The theory of cyclic extensions of the field K when the characteristic of K does divide n is called Artin–Schreier theory.
Kummer theory is basic, for example, in class field theory and in general in understanding abelian extensions; it says that in the presence of enough roots of unity, cyclic extensions can be understood in terms of extracting roots. The main burden in class field theory is to dispense with extra roots of unity ('descending' back to smaller fields); which is something much more serious.
Kummer extensions
A Kummer extension is a field extension L/K, where for some given integer n > 1 we have
K contains n distinct nth roots of unity (i.e., roots of X^n − 1)
L/K has abelian Galois group of exponent n.
For example, when n = 2, the first condition is always true if K has characteristic ≠ 2. The Kummer extensions in this case include quadratic extensions L = K(√a), where a in K is a non-square element. By the usual solution of quadratic equations, any extension of degree 2 of K has this form. The Kummer extensions in this case also include biquadratic extensions and more general multiquadratic extensions. When K has characteristic 2, there are no such Kummer extensions.
Taking n = 3, there are no degree 3 Kummer extensions of the rational number field Q, since for three cube roots of 1 complex numbers are required. If one takes L to be the splitting field of X^3 − a over Q, where a is not a cube in the rational numbers, then L contains a subfield K with three cube roots of 1; that is because if α and β are roots of the cubic polynomial, we shall have (α/β)^3 = 1 and the cubic is a separable polynomial. Then L/K is a Kummer extension.
More generally, it is true that when K contains n distinct nth roots of unity, which implies that the characteristic of K does not divide n, then adjoining to K the nth root of any element a of K creates a Kummer extension (of degree m, for some m dividing n). As the splitting field of the polynomial X^n − a, the Kummer extension is necessarily Galois, with Galois group that is cyclic of order m. It is easy to track the Galois action via the root of unity in front of a^(1/n).
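To make that Galois action explicit (a worked restatement of the statement above, with ζ a fixed primitive nth root of unity in K; not additional material from the source):

\[
\sigma_k\!\left(a^{1/n}\right) = \zeta^{k}\, a^{1/n},
\qquad
\sigma_k \circ \sigma_l = \sigma_{k+l},
\]

so the map σk ↦ ζ^k embeds the Galois group into the group of nth roots of unity, and its image is the cyclic group of order m.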
Kummer theory provides converse statements. When K contains n distinct nth roots of unity, it states that any abelian extension of K of exponent dividing n is formed by extraction of roots of elements of K. Further, if K× denotes the multiplicative group of non-zero elements of K, abelian extensions of K of exponent n correspond bijectively with subgroups of
K× / (K×)^n,
that is, elements of K× modulo nth powers. The correspondence can be described explicitly as follows. Given a subgroup
Δ ⊆ K× / (K×)^n,
the corresponding extension is given by
K(Δ^(1/n)),
where
Δ^(1/n) = { a^(1/n) : a ∈ K×, a mod (K×)^n ∈ Δ }.
In fact it suffices to adjoin an nth root of one representative of each element of any set of generators of the group Δ. Conversely, if L is a Kummer extension of K, then Δ is recovered by the rule
Δ = (K× ∩ (L×)^n) / (K×)^n.
In this case there is an isomorphism
Δ ≅ Hom(Gal(L/K), μn)
given by
a ↦ (σ ↦ σ(α)/α),
where α is any nth root of a in L. Here μn denotes the multiplicative group of nth roots of unity (which belong to K) and Hom(Gal(L/K), μn) is the group of continuous homomorphisms from Gal(L/K) equipped with the Krull topology to μn with the discrete topology (with group operation given by pointwise multiplication). This group (with discrete topology) can also be viewed as the Pontryagin dual of Δ, assuming we regard μn as a subgroup of the circle group. If the extension L/K is finite, then Gal(L/K) is a finite discrete group and we have
Δ ≅ Hom(Gal(L/K), μn) ≅ Gal(L/K),
however the last isomorphism isn't natural.
Recovering a^(1/n) from a primitive element
For p prime, let K be a field containing ζp, a primitive pth root of unity, and let K(β)/K be a degree p Galois extension. Note the Galois group is cyclic, generated by σ. Let
α = β + ζp σ(β) + ζp^2 σ^2(β) + ⋯ + ζp^(p−1) σ^(p−1)(β)
(a Lagrange resolvent, assumed nonzero, which can be arranged by a suitable choice of β). Then
σ(α) = ζp^(−1) α.
Since σ(α^p) = α^p, so that α^p ∈ K, and
N(α) = ∏_{l=0}^{p−1} σ^l(α) = ζp^(−p(p−1)/2) α^p = ± α^p,
where the sign is + if p is odd and − if p = 2, it follows that α^p = ± N(α) ∈ K and K(β) = K(α).
When K(β)/K is an abelian extension of square-free degree n such that ζn ∈ K, apply the same argument to the subfields of K(β) Galois over K of prime degree p dividing n to obtain
K(β) = K(α1, …, αk),
where
αi^(pi) ∈ K for the distinct primes pi dividing n.
The Kummer Map
One of the main tools in Kummer theory is the Kummer map. Let n be a positive integer and let K be a field, not necessarily containing the nth roots of unity. Letting K̄ denote the algebraic closure of K, there is a short exact sequence
1 → μn → K̄× → K̄× → 1,
where the middle map is x ↦ x^n. Choosing an extension L/K and taking Galois cohomology one obtains the sequence
1 → L× / (L×)^n → H^1(L, μn) → H^1(L, K̄×)[n] → 1.
By Hilbert's Theorem 90, H^1(L, K̄×) = 0, and hence we get an isomorphism
L× / (L×)^n ≅ H^1(L, μn).
This is the Kummer map. A version of this map also exists when all n are considered simultaneously. Namely, since L× / (L×)^n = L× ⊗ Z/nZ, taking the direct limit over n yields an isomorphism
L× ⊗ Q/Z ≅ H^1(L, μ),
where μ = (K̄×)tors denotes the torsion subgroup of roots of unity in K̄×.
For Elliptic Curves
Kummer theory is often used in the context of elliptic curves. Let E be an elliptic curve over K. There is a short exact sequence
0 → E[n] → E → E → 0,
where the multiplication by n map E → E is surjective since E is divisible. Choosing an algebraic extension L/K and taking Galois cohomology, we obtain the Kummer sequence for E:
0 → E(L) / nE(L) → H^1(L, E[n]) → H^1(L, E)[n] → 0.
The computation of the weak Mordell–Weil group E(L) / nE(L) is a key part of the proof of the Mordell–Weil theorem. The failure of H^1(L, E) to vanish adds a key complexity to the theory.
Generalizations
Suppose that G is a profinite group acting on a module A with a surjective homomorphism π from the G-module A to itself. Suppose also that G acts trivially on the kernel C of π and that the first cohomology group H^1(G, A) is trivial. Then the exact sequence of group cohomology shows that there is an isomorphism between A^G / π(A^G) and Hom(G, C).
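Spelling out the cohomology step (a standard derivation, with notation as in the paragraph above; not additional material from the source):

\[
1 \to C \to A \xrightarrow{\;\pi\;} A \to 1
\quad\Longrightarrow\quad
A^{G} \xrightarrow{\;\pi\;} A^{G} \xrightarrow{\;\delta\;} H^{1}(G,C) \to H^{1}(G,A) = 1,
\]

so the connecting map δ induces A^G / π(A^G) ≅ H^1(G, C) = Hom(G, C), the last equality holding because G acts trivially on C.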
Kummer theory is the special case of this when A is the multiplicative group of the separable closure of a field k, G is the Galois group, π is the nth power map, and C the group of nth roots of unity. Artin–Schreier theory is the special case when A is the additive group of the separable closure of a field k of positive characteristic p, G is the Galois group, π is the Frobenius map minus the identity, and C the finite field of order p. Taking A to be a ring of truncated Witt vectors gives Witt's generalization of Artin–Schreier theory to extensions of exponent dividing pn.
See also
Quadratic field
References
Bryan Birch, "Cyclotomic fields and Kummer extensions", in J.W.S. Cassels and A. Frohlich (edd), Algebraic number theory, Academic Press, 1973. Chap.III, pp. 85–93.
Field (mathematics)
Algebraic number theory | Kummer theory | [
"Mathematics"
] | 1,496 | [
"Algebraic number theory",
"Number theory"
] |
581,638 | https://en.wikipedia.org/wiki/Native%20%28computing%29 | In computing, native software or data-formats are those that were designed to run on a particular operating system. In a more technical sense, native code is code written specifically for a certain processor. In contrast, cross-platform software can be run on multiple operating systems and/or computer architectures.
For example, a Game Boy receives its software through a cartridge, which contains code that runs natively on the Game Boy. The only way to run this code on another processor is to use an emulator, which simulates an actual Game Boy. This usually comes at the cost of speed.
Applications
Something running on a computer natively means that it runs directly on the platform it was designed for, without any intervening compatibility layer, and thus with fewer software layers. For example, in Microsoft Windows the Native API is an application programming interface specific to the Windows NT kernel, which can be used to access some kernel functions that cannot be directly accessed through the more universal Windows API.
Operating systems
The term is used to denote either the absence of virtualization or virtualization at its lowest level. When various levels of virtualization take place, the lowest-level operating system—the one that actually maintains direct control of the hardware—is referred to as the "Native VM," for example.
Machine code
Machine code, also known as native code, is a program written in machine language. Machine code is usually considered the lowest level of code for a computer; in its lowest-level form it is written in binary (0s and 1s), but it is often displayed in hexadecimal or octal to make it a little easier to handle. These instructions are executed directly by the processor, so no further translation is needed. Machine code is strictly numerical and is usually not what programmers write in, due to its complexity. It is also as close as one can get to the processor: a program in machine code targets a specific processor, since the machine code for each processor family may differ. Typically, programmers write in high-level languages such as C, C++, or Pascal (or other directly compiled languages), which are translated into assembly code and then into machine code (in most cases, the compiler generates machine code directly). Since each CPU architecture is different, programs need to be recompiled or rewritten in order to run on a different CPU.
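The boundary between interpreted and native code can be made visible from a high-level language. The following Python sketch calls a natively compiled function in the platform's C math library; it assumes a Unix-like system where ctypes can locate libm, and the library and function names are standard C-library names rather than anything taken from this article.

```python
import ctypes
import ctypes.util

# Locate the platform's natively compiled C math library
# (e.g. "libm.so.6" on Linux, "libm.dylib" on macOS).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Declare the native signature cos(double) -> double so that ctypes
# marshals values correctly across the interpreter/native boundary.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0, computed by machine code native to this CPU
```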
Data
Applied to data, native data formats or communication protocols are those supported by a certain computer hardware or software, with maximal consistency and minimal amount of additional components.
For example, EGA and VGA video adapters natively support code page 437. This does not preclude supporting other code pages, but doing so requires either uploading a font or using graphics modes.
Cloud computing
In cloud computing, "cloud native" refers to the approach of building, deploying, and managing modern applications in cloud computing environments, i.e., software optimised for running on a cloud-based platform. A cloud-native application typically consists of individual modular microservices.
References
Computer jargon | Native (computing) | [
"Technology"
] | 614 | [
"Computing terminology",
"Computer jargon",
"Natural language and computing"
] |
581,644 | https://en.wikipedia.org/wiki/Fundidora%20Park | Fundidora Park (Parque Fundidora in Spanish) is an urban park located in the Mexican city of Monterrey, built in what once were the grounds of the Monterrey Foundry, the first steel and iron foundry in Latin America, and, for many years, the most important one in the region.
History and location
The Parque Fundidora is located inside the grounds formerly occupied by Fundidora Monterrey, a steel foundry company of great importance to the economic development of the city during the 20th century. After its bankruptcy in 1986, Federal and State government showed interest in using the land to create a public park with the aim to preserve its history as well as being a center of culture, business, entertainment and ecological awareness for the people of the city. In 1988 the land was expropriated and the Fideicomiso Fundidora (Fundidora Trust) was created to manage it in an arrangement between the State government and private investment. Construction began in 1989, starting with the preservation of historically important buildings and structures within the foundry and the dismantling of the others, followed by the construction of the CINTERMEX convention center, Plaza Sesamo amusement park, a hotel and a cinematheque. Construction and rehabilitation continued during the rest of the 1990s.
The park opened on February 24, 2001, with an area of , receiving the additional name of Museum of Industrial Archaeology Site. In 2010 the Paseo Santa Lucía development, consisting of an artificial river and accompanying river-walk, was incorporated into the park, bringing it to its current state, with a total area of , of which are green space, 2 lakes, 23 fountains, 16 buildings, 27 large-scale industrial structures and 127 pieces of steel-making machinery and tools of historical importance to the state of Nuevo León. There is also a long track surrounding the original section of the park.
Buildings
Monterrey Arena
Arena Monterrey is an indoor arena in Monterrey, Mexico. It is primarily used for concerts, shows and indoor sports like indoor soccer or basketball. It used to be the home arena of the Monterrey Fury indoor soccer team and the Fuerza Regia, a professional basketball team in the Liga Nacional de Baloncesto Profesional and the Monterrey La Raza, a team in the NISL.
The Arena Monterrey is owned by Publimax S.A. de C.V. (TV Azteca Northeast), part of the Avalanz Group, which owns 80%, and by TV Azteca, which owns 20%. The arena is 480,000 square feet (45,000 m²) in size.
Cintermex
Parque Fiesta Aventuras
Parque Fiesta Aventuras (formerly Parque Plaza Sésamo) is a theme park located in the complex that originally opened in 1995. The park was operated under a license from Sesame Workshop, the owners of Sesame Street and Plaza Sésamo.
On May 18, 2022, the park announced that it would rebrand as Parque Fiesta Aventuras for the 2022 season following a two-year period of closure. The reason for the rebranding was not disclosed by the park, but it is likely that the owners had terminated the license to use the Plaza Sésamo branding and characters.
Auditorio Banamex
Auditorio Citibanamex (formerly named Auditorio Coca-Cola, Auditorio Fundidora and Auditorio Banamex) is an indoor amphitheatre with a capacity of 8,200.
The amphitheatre opened in 1994 with a sponsorship by The Coca-Cola Company, and it was the primary venue for concerts until the Arena Monterrey opened in 2003.
Centro de las Artes
Museo de Acero
Visualscapes
Events
The Champ Car World Series Grand Prix of Monterrey was held from 2001 to 2006 (see Tecate/Telmex Grand Prix of Monterrey).
Fundidora Park has been the venue of UN and OEA summits.
On February 26, 2006, the Fundidora Park raceway hosted the 2005–06 A1 Grand Prix of Nations, Mexico in A1 Grand Prix racing series.
The park was the center of the 2007 Universal Forum of Cultures. For this event, an area of was joined to the original 120 hectares, and the park was also joined to the city's Great Plaza by the Santa Lucía Riverwalk.
The Pal Norte music fest and the Machaca Fest are hosted in the park every year.
Champ Car race history
A1GP race history
Lap records
The fastest official race lap records at the Fundidora Park Circuit are listed as:
See also
List of preserved historic blast furnaces
Enrique Abaroa Castellanos
References
External links
Fundidora Park official website
Mabe Fundidora Ice Rink: localized at Fundidora Park, Monterrey, Nuevo León
Centro de las Artes CONARTE
Fundidora Park Slideshow
Buildings and structures in Monterrey
Champ Car circuits
Parks in Mexico
A1 Grand Prix circuits
Tourist attractions in Monterrey
Blast furnaces | Fundidora Park | [
"Chemistry"
] | 1,000 | [
"Blast furnaces",
"History of metallurgy"
] |
581,759 | https://en.wikipedia.org/wiki/Thrust%20vectoring | Thrust vectoring, also known as thrust vector control (TVC), is the ability of an aircraft, rocket or other vehicle to manipulate the direction of the thrust from its engine(s) or motor(s) to control the attitude or angular velocity of the vehicle.
In rocketry and ballistic missiles that fly outside the atmosphere, aerodynamic control surfaces are ineffective, so thrust vectoring is the primary means of attitude control. Exhaust vanes and gimbaled engines were used in the 1930s by Robert Goddard.
For aircraft, the method was originally envisaged to provide upward vertical thrust as a means to give aircraft vertical (VTOL) or short (STOL) takeoff and landing ability. Subsequently, it was realized that using vectored thrust in combat situations enabled aircraft to perform various maneuvers not available to conventional-engined planes. To perform turns, aircraft that use no thrust vectoring must rely on aerodynamic control surfaces only, such as ailerons or elevator; aircraft with vectoring must still use control surfaces, but to a lesser extent.
In missile literature originating from Russian sources, thrust vectoring is referred to as gas-dynamic steering or gas-dynamic control.
Methods
Rockets and ballistic missiles
Nominally, the line of action of the thrust vector of a rocket nozzle passes through the vehicle's centre of mass, generating zero net torque about the mass centre. It is possible to generate pitch and yaw moments by deflecting the main rocket thrust vector so that it does not pass through the mass centre. Because the line of action is generally oriented nearly parallel to the roll axis, roll control usually requires the use of two or more separately hinged nozzles or a separate system altogether, such as fins, or vanes in the exhaust plume of the rocket engine, deflecting the main thrust. Thrust vector control (TVC) is only possible when the propulsion system is creating thrust; separate mechanisms are required for attitude and flight path control during other stages of flight.
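As a minimal planar illustration of how a deflected thrust line produces a control moment (a sketch with illustrative numbers, not data from any particular vehicle):

```python
import math

def gimbal_pitch_moment(thrust_n, deflection_deg, lever_arm_m):
    """Pitching moment from deflecting a single gimbaled nozzle.

    The lateral thrust component T*sin(delta) acts at the nozzle pivot,
    a distance L behind the centre of mass, giving M = T*sin(delta)*L.
    """
    delta = math.radians(deflection_deg)
    return thrust_n * math.sin(delta) * lever_arm_m

# e.g. 1 MN of thrust deflected 3 degrees, pivot 10 m behind the centre of mass:
print(gimbal_pitch_moment(1.0e6, 3.0, 10.0))  # ~5.2e5 N*m
```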
Thrust vectoring can be achieved by four basic means:
Gimbaled engine(s) or nozzle(s)
Reactive fluid injection
Auxiliary "Vernier" thrusters
Exhaust vanes, also known as jet vanes
Gimbaled thrust
Thrust vectoring for many liquid rockets is achieved by gimbaling the whole engine. This involves moving the entire combustion chamber and outer engine bell as on the Titan II's twin first-stage motors, or even the entire engine assembly including the related fuel and oxidizer pumps. The Saturn V and the Space Shuttle used gimbaled engines.
A later method developed for solid propellant ballistic missiles achieves thrust vectoring by deflecting only the nozzle of the rocket using electric actuators or hydraulic cylinders. The nozzle is attached to the missile via a ball joint with a hole in the centre, or a flexible seal made of a thermally resistant material, the latter generally requiring more torque and a higher power actuation system. The Trident C4 and D5 systems are controlled via hydraulically actuated nozzle. The STS SRBs used gimbaled nozzles.
Propellant injection
Another method of thrust vectoring used on solid propellant ballistic missiles is liquid injection, in which the rocket nozzle is fixed but a fluid is introduced into the exhaust flow from injectors mounted around the aft end of the missile. If the liquid is injected on only one side of the missile, it modifies that side of the exhaust plume, resulting in different thrust on that side and thus an asymmetric net force on the missile. This was the control system used on the Minuteman II and the early SLBMs of the United States Navy.
Vernier thrusters
An effect similar to thrust vectoring can be produced with multiple vernier thrusters, small auxiliary combustion chambers which lack their own turbopumps and can gimbal on one axis. These were used on the Atlas and R-7 missiles and are still used on the Soyuz rocket, which is descended from the R-7, but are seldom used on new designs due to their complexity and weight. These are distinct from reaction control system thrusters, which are fixed and independent rocket engines used for maneuvering in space.
Exhaust vanes
One of the earliest methods of thrust vectoring in rocket engines was to place vanes in the engine's exhaust stream. These exhaust vanes or jet vanes allow the thrust to be deflected without moving any parts of the engine, but reduce the rocket's efficiency. They have the benefit of allowing roll control with only a single engine, which nozzle gimbaling does not. The V-2 used graphite exhaust vanes and aerodynamic vanes, as did the Redstone, derived from the V-2. The Sapphire and Nexo rockets of the amateur group Copenhagen Suborbitals provide a modern example of jet vanes. Jet vanes must be made of a refractory material or actively cooled to prevent them from melting. Sapphire used solid copper vanes for copper's high heat capacity and thermal conductivity, and Nexo used graphite for its high melting point, but unless actively cooled, jet vanes will undergo significant erosion. This, combined with jet vanes' inefficiency, mostly precludes their use in new rockets.
Tactical missiles and small projectiles
Some smaller sized atmospheric tactical missiles, such as the AIM-9X Sidewinder, eschew flight control surfaces and instead use mechanical vanes to deflect rocket motor exhaust to one side.
By using mechanical vanes to deflect the exhaust of the missile's rocket motor, a missile can steer itself even shortly after being launched (when the missile is moving slowly, before it has reached a high speed). This is because even though the missile is moving at a low speed, the rocket motor's exhaust has a high enough speed to provide sufficient forces on the mechanical vanes. Thus, thrust vectoring can reduce a missile's minimum range. For example, anti-tank missiles such as the Eryx and the PARS 3 LR use thrust vectoring for this reason.
Some other projectiles that use thrust-vectoring:
9M330
Strix mortar round uses twelve midsection lateral thruster rockets to provide terminal course corrections
AAD uses jet vanes
Astra (missile)
Akash (missile)
BrahMos
QRSAM uses jet vanes
MPATGM uses jet vanes
AAM-5
Barak 8 uses jet vanes
A-Darter uses jet vanes
ASRAAM uses jet vanes
R-73 (missile) uses jet vanes
HQ-9 uses jet vanes
PL-10 (ASR) uses jet vanes
MICA (missile) uses jet vanes
PARS 3 LR uses jet vanes
IRIS-T
Aster missile family combines aerodynamic control and the direct thrust vector control called "PIF-PAF"
AIM-9X uses four jet vanes inside the exhaust, that move as the fins move.
9M96E uses a gas-dynamic control system that enables maneuvering at altitudes of up to 35 km at forces of over 20g, which permits engagement of non-strategic ballistic missiles.
9K720 Iskander is controlled during the whole flight with gas-dynamic and aerodynamic control surfaces.
Dongfeng subclasses/JL-2/JL-3 ballistic missiles (allegedly fitted with TVC control)
Pralay uses jet vanes
Aircraft
Most currently operational vectored thrust aircraft use turbofans with rotating nozzles or vanes to deflect the exhaust stream. This method allows designs to deflect thrust through as much as 90 degrees relative to the aircraft centreline. If an aircraft uses thrust vectoring for VTOL operations the engine must be sized for vertical lift, rather than normal flight, which results in a weight penalty. Afterburning (or Plenum Chamber Burning, PCB, in the bypass stream) is difficult to incorporate and is impractical for take-off and landing thrust vectoring, because the very hot exhaust can damage runway surfaces. Without afterburning it is hard to reach supersonic flight speeds. A PCB engine, the Bristol Siddeley BS100, was cancelled in 1965.
Tiltrotor aircraft vector thrust via rotating turboprop engine nacelles. The mechanical complexities of this design are quite troublesome, including twisting flexible internal components and driveshaft power transfer between engines. Most current tiltrotor designs feature two rotors in a side-by-side configuration. If such a craft is flown in a way where it enters a vortex ring state, one of the rotors will always enter slightly before the other, causing the aircraft to perform a drastic and unplanned roll.
Thrust vectoring is also used as a control mechanism for airships. An early application was the British Army airship Delta, which first flew in 1912. It was later used on HMA (His Majesty's Airship) No. 9r, a British rigid airship that first flew in 1916 and the twin 1930s-era U.S. Navy rigid airships USS Akron and USS Macon that were used as airborne aircraft carriers, and a similar form of thrust vectoring is also particularly valuable today for the control of modern non-rigid airships. In this use, most of the load is usually supported by buoyancy and vectored thrust is used to control the motion of the aircraft. The first airship that used a control system based on pressurized air was Enrico Forlanini's Omnia Dir in 1930s.
A design for a jet incorporating thrust vectoring was submitted in 1949 to the British Air Ministry by Percy Walwyn; Walwyn's drawings are preserved at the National Aerospace Library at Farnborough. Official interest was curtailed when it was realised that the designer was a patient in a mental hospital.
Now being researched, Fluidic Thrust Vectoring (FTV) diverts thrust via secondary fluidic injections. Tests show that air forced into a jet engine exhaust stream can deflect thrust up to 15 degrees. Such nozzles are desirable for their lower mass and cost (up to 50% less), inertia (for faster, stronger control response), complexity (mechanically simpler, fewer or no moving parts or surfaces, less maintenance), and radar cross section for stealth. This will likely be used in many unmanned aerial vehicle (UAVs), and 6th generation fighter aircraft.
Vectoring nozzles
Thrust-vectoring flight control (TVFC) is obtained through deflection of the aircraft jets in some or all of the pitch, yaw and roll directions. In the extreme, deflection of the jets in yaw, pitch and roll creates desired forces and moments enabling complete directional control of the aircraft flight path without the implementation of the conventional aerodynamic flight controls (CAFC). TVFC can also be used to hold stationary flight in areas of the flight envelope where the main aerodynamic surfaces are stalled. TVFC includes control of STOVL aircraft during the hover and during the transition between hover and forward speeds below 50 knots where aerodynamic surfaces are ineffective.
When vectored thrust control uses a single propelling jet, as with a single-engined aircraft, the ability to produce rolling moments may not be possible. An example is an afterburning supersonic nozzle where nozzle functions are throat area, exit area, pitch vectoring and yaw vectoring. These functions are controlled by four separate actuators. A simpler variant using only three actuators would not have independent exit area control.
When TVFC is implemented to complement CAFC, agility and safety of the aircraft are maximized. Increased safety may occur in the event of malfunctioning CAFC as a result of battle damage.
To implement TVFC a variety of nozzles both mechanical and fluidic may be applied. This includes convergent and convergent-divergent nozzles that may be fixed or geometrically variable. It also includes variable mechanisms within a fixed nozzle, such as rotating cascades and rotating exit vanes. Within these aircraft nozzles, the geometry itself may vary from two-dimensional (2-D) to axisymmetric or elliptic. The number of nozzles on a given aircraft to achieve TVFC can vary from one on a CTOL aircraft to a minimum of four in the case of STOVL aircraft.
Definitions
Axisymmetric Nozzles with circular exits.
Conventional aerodynamic flight control (CAFC) Pitch, yaw-pitch, yaw-pitch-roll or any other combination of aircraft control through aerodynamic deflection using rudders, flaps, elevators and/or ailerons.
Converging-diverging nozzle (C-D) Generally used on supersonic jet aircraft where nozzle pressure ratio (npr) > 3. The engine exhaust is expanded through a converging section to achieve Mach 1 and then expanded through a diverging section to achieve supersonic speed at the exit plane, or less at low npr.
Converging nozzle Generally used on subsonic and transonic jet aircraft where npr < 3. The engine exhaust is expanded through a converging section to achieve Mach 1 at the exit plane, or less at low npr.
Effective Vectoring Angle The average angle of deflection of the jet stream centreline at any given moment in time.
Fixed nozzle A thrust-vectoring nozzle of invariant geometry or one of variant geometry maintaining a constant geometric area ratio, during vectoring. This will also be referred to as a civil aircraft nozzle and represents the nozzle thrust vectoring control applicable to passenger, transport, cargo and other subsonic aircraft.
Fluidic thrust vectoring The manipulation or control of the exhaust flow with the use of a secondary air source, typically bleed air from the engine compressor or fan.
Geometric vectoring angle Geometric centreline of the nozzle during vectoring. For those nozzles vectored at the geometric throat and beyond, this can differ considerably from the effective vectoring angle.
Three-bearing swivel duct nozzle (3BSD) Three angled segments of engine exhaust duct rotate relative to one another about duct centreline to produce nozzle thrust axis pitch and yaw.
Three-dimensional (3-D) Nozzles with multi-axis or pitch and yaw control.
Thrust vectoring (TV) The deflection of the jet away from the body-axis through the implementation of a flexible nozzle, flaps, paddles, auxiliary fluid mechanics or similar methods.
Thrust-vectoring flight control (TVFC) Pitch, yaw-pitch, yaw-pitch-roll, or any other combination of aircraft control through deflection of thrust generally issuing from an air-breathing turbofan engine.
Two-dimensional (2-D) Nozzles with square or rectangular exits. In addition to the geometrical shape 2-D can also refer to the degree-of-freedom (DOF) controlled which is single axis, or pitch-only, in which case round nozzles are included.
Two-dimensional converging-diverging (2-D C-D) Square, rectangular, or round supersonic nozzles on fighter aircraft with pitch-only control.
Variable nozzle A thrust-vectoring nozzle of variable geometry maintaining a constant, or allowing a variable, effective nozzle area ratio, during vectoring. This will also be referred to as a military aircraft nozzle as it represents the nozzle thrust vectoring control applicable to fighter and other supersonic aircraft with afterburning. The convergent section may be fully controlled with the divergent section following a pre-determined relationship to the convergent throat area. Alternatively, the throat area and the exit area may be controlled independently, to allow the divergent section to match the exact flight condition.
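The converging-diverging entry above ties throat and exit areas to the exit Mach number through the isentropic area-Mach relation. The sketch below is a minimal illustration of that relation, assuming ideal isentropic flow of a perfect gas; the ratio of specific heats (1.4, as for cold air rather than hot exhaust) and the example area ratio are assumptions for illustration, not values from the text.

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic area ratio A/A* for a given Mach number (perfect gas)."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return (1.0 / mach) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def supersonic_exit_mach(ratio, gamma=1.4):
    """Invert the area ratio on the supersonic branch by bisection."""
    lo, hi = 1.0 + 1e-9, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# An exit-to-throat area ratio of about 1.69 corresponds to an exit Mach number near 2.0
print(round(supersonic_exit_mach(1.69), 2))
```

A fully variable military nozzle, as defined above, in effect adjusts this area ratio in flight so that the divergent section matches the flight condition.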
Methods of nozzle control
Geometric area ratios Maintaining a fixed geometric area ratio from the throat to the exit during vectoring. The effective throat is constricted as the vectoring angle increases.
Effective area ratios Maintaining a fixed effective area ratio from the throat to the exit during vectoring. The geometric throat is opened as the vectoring angle increases.
Differential area ratios Maximizing nozzle expansion efficiency generally through predicting the optimal effective area as a function of the mass flow rate.
Methods of thrust vectoring
Type I Nozzles whose baseframe is mechanically rotated before the geometrical throat.
Type II Nozzles whose baseframe is mechanically rotated at the geometrical throat.
Type III Nozzles whose baseframe is not rotated. Rather, the addition of mechanical deflection post-exit vanes or paddles enables jet deflection.
Type IV Jet deflection through counter-flowing or co-flowing (by shock-vector control or throat shifting) auxiliary jet streams. Fluid-based jet deflection using secondary fluidic injection.
Additional type Nozzles whose upstream exhaust duct consists of wedge-shaped segments which rotate relative to each other about the duct centreline.
Operational examples
Aircraft
An example of 2D thrust vectoring is the Rolls-Royce Pegasus engine used in the Hawker Siddeley Harrier, as well as in the AV-8B Harrier II variant.
Widespread use of thrust vectoring for enhanced maneuverability in Western production-model fighter aircraft did not occur until the deployment of the Lockheed Martin F-22 Raptor fifth-generation jet fighter in 2005, with its afterburning, 2D thrust-vectoring Pratt & Whitney F119 turbofan.
While the Lockheed Martin F-35 Lightning II uses a conventional afterburning turbofan (Pratt & Whitney F135) to facilitate supersonic operation, its F-35B variant, developed for joint use by the US Marine Corps, Royal Air Force, Royal Navy, and Italian Navy, also incorporates a vertically mounted, low-pressure-shaft-driven remote lift fan, which is driven from the engine through a clutch during landing. Both the exhaust from this fan and that of the main engine are deflected by thrust-vectoring nozzles to provide the appropriate combination of lift and propulsive thrust. This arrangement is not conceived for enhanced maneuverability in combat, only for VTOL operation, and the F-35A and F-35C do not use thrust vectoring at all.
The Sukhoi Su-30MKI, produced by India under licence at Hindustan Aeronautics Limited, is in active service with the Indian Air Force. The TVC makes the aircraft highly maneuverable, capable of near-zero airspeed at high angles of attack without stalling, and of dynamic aerobatics at low speeds. The Su-30MKI is powered by two Al-31FP afterburning turbofans. The TVC nozzles of the MKI are mounted 32 degrees outward from the longitudinal engine axis (i.e. in the horizontal plane) and can be deflected ±15 degrees in the vertical plane. This produces a corkscrew effect, greatly enhancing the turning capability of the aircraft.
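Because the two nozzles deflect in planes that are canted outward rather than parallel, symmetric deflection produces a pitching moment while differential deflection produces a rolling moment, with some yaw coupling in general. The sketch below is a simplified vector-decomposition model of that geometry using the 32-degree cant and deflections within the ±15-degree range quoted above; the thrust magnitude, engine positions and the exact hinge orientation are hypothetical placeholders, not Su-30MKI data.

```python
import numpy as np

# Body axes: x forward, y to starboard, z down.
T = 80_000.0            # thrust per engine, N (hypothetical placeholder)
CANT = np.radians(32)   # hinge canted 32 deg outward from the lateral axis (simplified reading of the text)
R_ENG = np.array([[-6.0, -1.0, 0.0],    # hypothetical engine positions relative to the c.g., m
                  [-6.0,  1.0, 0.0]])

def thrust_vector(deflection_deg, cant_sign):
    """One engine's thrust after deflecting by deflection_deg about its canted hinge axis."""
    d = np.radians(deflection_deg)
    k = np.array([np.sin(cant_sign * CANT), np.cos(cant_sign * CANT), 0.0])   # unit hinge axis
    v = np.array([T, 0.0, 0.0])                                               # undeflected thrust
    # Rodrigues' rotation formula: rotate v by angle d about axis k
    return v * np.cos(d) + np.cross(k, v) * np.sin(d) + k * np.dot(k, v) * (1.0 - np.cos(d))

def total_moment(defl_left_deg, defl_right_deg):
    forces = [thrust_vector(defl_left_deg, -1), thrust_vector(defl_right_deg, +1)]
    return sum(np.cross(r, f) for r, f in zip(R_ENG, forces))

print(total_moment(10, 10).round(0))    # symmetric deflection: pure pitching moment (y component)
print(total_moment(10, -10).round(0))   # differential deflection: rolling moment (x component)
```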
A few computational studies have examined adding thrust vectoring to existing passenger airliners, such as the Boeing 727 and 747, to help avert catastrophic failures, while the experimental X-48C may be jet-steered in the future.
Other
Examples of rockets and missiles which use thrust vectoring include both large systems such as the Space Shuttle Solid Rocket Booster (SRB), S-300P (SA-10) surface-to-air missile, UGM-27 Polaris nuclear ballistic missile and RT-23 (SS-24) ballistic missile and smaller battlefield weapons such as Swingfire.
The principles of air thrust vectoring have recently been adapted to military sea applications in the form of fast water-jet steering that provides super-agility. Examples are the fast patrol boat Dvora Mk-III, the Hamina class missile boat and the US Navy's Littoral combat ships.
List of vectored thrust aircraft
Thrust vectoring can convey two main benefits: VTOL/STOL capability and higher maneuverability. Aircraft are usually optimized to exploit one benefit, though they may gain in the other.
For VTOL ability
Bell Model 65
Bell X-14
Bell Boeing V-22 Osprey
Boeing X-32
Dornier Do 31
EWR VJ 101
Harrier jump jet
British Aerospace Harrier II
British Aerospace Sea Harrier
Hawker Siddeley Harrier
McDonnell Douglas AV-8B Harrier II
Hawker Siddeley Kestrel
Hawker Siddeley P.1127
Lockheed Martin F-35B Lightning II
VFW VAK 191B
Yakovlev Yak-38
Yakovlev Yak-141
For higher maneuverability
Vectoring in two dimensions
McDonnell Douglas F-15 STOL/MTD (experimental)
Lockheed Martin F-22 Raptor (pitch only)
Chengdu J-20 (earlier variants with WS-10C, pitch and roll)
McDonnell Douglas X-36 (yaw only)
Boeing X-45A (yaw only)
Me 163 B experimentally used a rocket steering paddle for the yaw axis
Vectoring in three dimensions
McDonnell Douglas F-15 ACTIVE (experimental)
Mitsubishi X-2 (experimental)
McDonnell Douglas F-18 HARV (experimental)
General Dynamics F-16 VISTA (experimental)
Rockwell-MBB X-31 (experimental)
Chengdu J-10B TVC testbed (experimental)
Mikoyan MiG-35 (MiG-29OVT, not in production aircraft)
Sukhoi Su-37 (demonstrator)
Sukhoi Su-47 (experimental)
Sukhoi Su-57
Sukhoi Su-30MKI /MKM/ MKA/ SM (pitch and roll, with canted engine nozzles (for yaw), emulating but not direct 3D thrust vectoring)
Sukhoi Su-35S (roll and yaw vectoring; the engine nozzles are canted at an angle for pitch, providing 3D thrust vectoring; a more advanced arrangement than the Su-30MKI and derivatives)
Airships
23 class airship, a series of British, World War 1 airships
Airship Industries Skyship 600 modern airship
Zeppelin NT modern, thrust–vectoring airship
Helicopters
Sikorsky XV-2
NOTAR
See also
Index of aviation articles
Gimbaled thrust
Reverse thrust
Tiltjet
Tiltrotor
Tiltwing
Tail-sitter
VTOL
References
External links
Application of Thrust Vectoring to Reduce Vertical Tail Size
Jet engines
Airship technology | Thrust vectoring | [
"Technology"
] | 4,589 | [
"Jet engines",
"Engines"
] |
581,792 | https://en.wikipedia.org/wiki/Rambus | Rambus Inc. is an American technology company that designs, develops and licenses chip interface technologies and architectures that are used in digital electronics products. The company, founded in 1990, is well known for inventing RDRAM and for its intellectual property-based litigation following the introduction of DDR-SDRAM memory.
History
Rambus was founded in March 1990 by electrical and computer engineers, Mike Farmwald and Mark Horowitz. The company's early investors included premier venture capital and investment banking firms such as Kleiner Perkins Caufield and Byers, Merrill Lynch, Mohr Davidow Ventures, and Goldman Sachs.
Rambus was incorporated and founded as a California company in 1990 and then re-incorporated in the state of Delaware before the company went public in 1997 on the NASDAQ stock exchange under the symbol RMBS.
In the 1990s, Rambus was a high-speed interface technology development and marketing company that invented 600 MHz interface technology, which solved memory bottleneck issues faced by system designers. Rambus's technology was based on a very high speed, chip-to-chip interface that was incorporated on dynamic random-access-memory (DRAM) components, processors and controllers, which achieved performance rates over ten times faster than conventional DRAMs. Rambus's RDRAM transferred data at 600 MHz over a narrow byte-wide Rambus Channel to Rambus-compatible Integrated Circuits (ICs).
Rambus's interface was an open standard, accessible to all semiconductor companies, such as Intel. Rambus provided companies who licensed its technology a full range of reference designs and engineering services. Rambus's interface technology was broadly licensed to leading DRAM, ASIC and PC peripheral chipset suppliers in the 1990s. Licensees of Rambus's RDRAM technology included companies such as Creative Labs, Intel, Microsoft, Nintendo, Silicon Graphics, Hitachi, Hyundai, IBM, Molex, Macronix and NEC.
Rambus RDRAM technology was integrated into products such as Nintendo 64, Microsoft's Talisman 3D graphics chip set, Creative Labs Graphics Blaster 3D Graphics cards for PCs, workstations manufactured by Silicon Graphics and Intel's system memory chipsets for PCs.
In 2003, Rambus Incorporated announced that Toshiba Corp. and Elpida Memory Inc. would produce its new memory technology, known as XDR DRAM. The memory technology was capable of running at 3.2 GHz and was said to be faster than any memory technology available in consumer entertainment devices and PCs at the time.
Rambus purchased Cryptography Research on June 6, 2011, for $342.5M. The acquisition enabled Rambus to expand its semiconductor licensing portfolio to include CRI's content protection and security technologies. According to Rambus CEO Harold Hughes, the CRI security technologies would be applied to a variety of products in the company's IP portfolio.
Today, Rambus derives the majority of its annual revenue by licensing its technologies and patents for chip interfaces to its customers. According to The Wall Street Journal, the history of Rambus has been "marked by litigation, including patent battles with numerous chip makers".
In 2015, Rambus acquired Integrion Microelectronics, a small Toronto-based provider of high-speed analog SerDes PHY IP, for an undisclosed amount. Through this acquisition, Rambus opened its first Canadian office and expanded its high-speed SerDes IP portfolio.
On August 17, 2015, Rambus announced the new R+ DDR4 server memory chips RB26 DDR4 RDIMM and RB26 DDR4 LRDIMM. The chipset includes a DDR4 register clock driver and data buffer, and it is fully compliant with the JEDEC DDR4 standard.
In 2016, Rambus acquired Semtech's Snowbush IP for US$32.5 million. Snowbush IP provides analog and mixed-signal IP technologies, and the acquisition expanded Rambus' product offerings.
In 2016, Rambus also acquired the Inphi Memory Interconnect Business for US$90 million. The acquisition included all assets of the Inphi Memory Interconnect Business, such as customer contracts, product inventory, supply chain agreements, and intellectual property.
On November 2, 2017, Rambus announced partnership with Interac Association and Samsung Canada to assist in enabling Samsung Pay in Canada.
In 2018, Rambus agreed to renew a patent license with Nvidia. Rambus would be sharing its patent portfolio, including those covering serial links and memory controllers, with Nvidia.
On December 11, 2019, Rambus HBM2 PHY and Memory Controller IP were announced to be used in Inflame Technology's AI training chip.
In 2019, Rambus announced that it will move headquarters from Sunnyvale, California to North San Jose, California.
In 2021, Rambus announced that it started an expedited share buyback program with Deutsche Bank to buy up roughly $100 million in common stock. Rambus also acquired two companies, AnalogX and PLDA, which specialize in physical links for PCIe and CXL protocols.
In May 2022, it was announced Rambus had acquired the Montreal-headquartered electronic design company, Hardent.
In July 2023, Rambus sold its SerDes and memory interface PHY IP business to Cadence Design Systems for $110 million. In September 2023, it was announced the acquisition had been completed.
Technology
An early version of RDRAM, base RDRAM, was used in the Nintendo 64 that was released in 1996.
Disadvantages of RDRAM technology include significantly increased latency, power dissipation as heat, manufacturing complexity, and cost. PC800 RDRAM operated with a minimum latency of 45 ns, compared to 15 ns for PC133 SDRAM. RDRAMs can also be told to increase their latencies in order to prevent the possibility of two or more chips transmitting at the same time and causing a collision. However, SDRAM latency depends on the current state of memory so its latency can vary widely depending on what happened earlier and the strategy used by the SDRAM controller, while RDRAM latency is constant once it has been established by the memory controller. RDRAM memory chips also put out significantly more heat than SDRAM chips, necessitating heatsinks on all RIMM devices.
Rambus also developed and licensed its XDR DRAM technology, notably used in the PlayStation 3, and more recently XDR2 DRAM.
Lawsuits
In the early 1990s, Rambus was invited to join the JEDEC. Rambus had been trying to interest memory manufacturers in licensing their proprietary memory interface, and numerous companies had signed non-disclosure agreements to view Rambus' technical data. During the later Infineon v. Rambus trial, Infineon memos from a meeting with representatives of other manufacturers surfaced, including the line "[O]ne day all computers will be built this way, but hopefully without the royalties going to Rambus", and continuing with a strategy discussion for reducing or eliminating royalties to be paid to Rambus. As Rambus continued its participation in JEDEC, it became apparent that they were not prepared to agree to JEDEC's patent policy requiring owners of patents included in a standard to agree to license that technology under terms that are "reasonable and non-discriminatory", and Rambus withdrew from the organization in 1995. Memos from Rambus at that time showed they were tailoring new patent applications to cover features of SDRAM being discussed, which were public knowledge (JEDEC meetings are not secret) and perfectly legal for patent owners who have patented underlying innovations, but were seen as evidence of bad faith by the jury in the first Infineon v. Rambus trial. The Court of Appeals for the Federal Circuit (CAFC) rejected this theory of bad faith in its decision overturning the fraud conviction Infineon achieved in the first trial (see below).
Patent lawsuits
In 2000, Rambus began filing lawsuits against the largest memory manufacturers, claiming that they owned SDRAM and DDR technology. Seven manufacturers, including Samsung, quickly settled with Rambus and agreed to pay royalties on SDRAM and DDR memory. In May 2001, Rambus was found guilty of fraud for having claimed that it owned SDRAM and DDR technology, and all infringement claims against memory manufacturers were dismissed. In January 2003, the CAFC overturned the fraud verdict of the jury trial in Virginia under Judge Payne, issued a new claims construction, and remanded the case back to Virginia for re-trial on infringement. In October 2003, the U.S. Supreme Court refused to hear the case. Thus, the case returned to Virginia per the CAFC ruling.
In January 2005, Rambus filed four more lawsuits against memory chip makers Hynix Semiconductor, Nanya Technology, Inotera Memories and Infineon Technology, claiming that DDR2, GDDR2 and GDDR3 chips contain Rambus technology. In March 2005, Rambus had its claim for patent infringement against Infineon dismissed. When Rambus was accused of shredding key documents prior to court hearings, the judge agreed and dismissed Rambus's case against Infineon. This led Rambus to negotiate a settlement with Infineon, which agreed to pay Rambus quarterly license fees of $5.9 million; in return, both companies ceased all litigation against each other. The agreement ran from November 2005 to November 2007. After this date, if Rambus had enough remaining agreements in place, Infineon might make additional payments of up to $100 million. In June 2005, Rambus also sued one of its strongest proponents, Samsung, the world's largest memory manufacturer, and terminated Samsung's license. Samsung had promoted Rambus's RDRAM and remains a licensee of Rambus's XDR memory.
In February 2006, Micron Technology sued Rambus, alleging that Rambus had violated RICO and deliberately harmed Micron.
On April 29, 2008, the Court of Appeals for the Federal Circuit issued a ruling vacating the order of the U.S. District Court for the Eastern District of Virginia and saying the case with Samsung should be dismissed, because Judge Robert E. Payne's findings critical of Rambus were on a case that had already been settled and thus had no legal standing.
On January 9, 2009, a Delaware federal judge ruled that Rambus could not enforce patents against Micron Technology Inc., stating that Rambus had a "clear and convincing" show of bad faith, and ruled that Rambus' destruction of key related documents (spoliation of evidence) nullified its right to enforce its patents against Micron.
In July 2009, the United States Patent and Trademark Office (USPTO) rejected 8 claims by Rambus against Nvidia.
On November 24, 2009, the USPTO rejected all 17 claims in three Rambus patents that the company asserted against Nvidia in a complaint filed with the U.S. International Trade Commission (ITC). However, the ITC announced that, of five patents, Nvidia had violated three. Due to this ruling, Nvidia faced a potential U.S. import ban on some of its chips used in the nForce, Quadro, GeForce, Tesla, and Tegra series graphics products—nearly every video card type manufactured by Nvidia.
On June 20, 2011, Rambus went to trial against Micron and Hynix in California, seeking as much as $12.9 billion in damages for "a secret and unlawful conspiracy to kill a revolutionary technology, make billions of dollars and hang onto power", Rambus lawyer Bart Williams told jurors. Rambus lost on November 16, 2011, in the jury trial and its shares dropped drastically, from $14.04 to $4.00 per share.
On November 16, 2011, Rambus lost the antitrust case against Micron Technology and Hynix Semiconductor. The San Francisco County Superior Court jury ruled against Rambus in a 9–3 vote. In a statement posted on the company's website, Rambus CEO Harold Hughes said: "We are reviewing our options for appeal".
On January 24, 2012, a USPTO appeals board declared the third of three patents known as the "Barth patents" invalid. The first two had been declared invalid in September 2011. Rambus had used these patents to win infringement lawsuits against Nvidia Corp and Hewlett-Packard.
On June 28, 2013, The Court of Appeals for the Federal Circuit reversed the USPTO and the '109 Barth patent's validity was reinstated:
"In conclusion, the Board's determination that all 25 claims of the '109 Patent are invalid as anticipated by Farmwald is not supported by substantial evidence. Accordingly, this court reverses."
Federal Trade Commission antitrust suits
In May 2002, the United States Federal Trade Commission (FTC) filed charges against Rambus for antitrust violations. Specifically, the FTC complaint asserted that through the use of patent continuations and divisionals, Rambus pursued a strategy of expanding the scope of its patent claims to encompass the emerging SDRAM standard. The FTC's antitrust allegations against Rambus went to trial in the summer of 2003 after the organization formally accused Rambus of anti-competitive behavior the previous June, itself the result of an investigation launched in May 2002 at the behest of the memory manufacturers. The FTC's chief administrative-law judge, Stephen J. McGuire, dismissed the antitrust claims against Rambus in 2006, saying that the memory industry had no reasonable alternatives to Rambus technology and was aware of the potential scope of Rambus patent rights, according to the company. Soon after, FTC investigators filed a brief to appeal against that ruling.
On August 2, 2006, the FTC overturned McGuire's ruling, stating that Rambus illegally monopolized the memory industry under section 2 of the Sherman Antitrust Act, and also practiced deception that violated section 5 of the Federal Trade Commission Act.
On February 5, 2007, the FTC issued a ruling that limits the maximum royalties that Rambus may demand from manufacturers of dynamic random-access memory (DRAM): the rate was set to 0.5% for DDR SDRAM for 3 years from the date the commission's Order was issued and then drops to zero, while SDRAM's maximum royalty was set to 0.25%. The Commission claimed that halving the DDR SDRAM rate for SDRAM would reflect the fact that while DDR SDRAM utilizes four of the relevant Rambus technologies, SDRAM uses only two. In addition to collecting fees for DRAM chips, Rambus will also be able to receive 0.5% and 1.0% royalties for SDRAM and DDR SDRAM memory controllers or other non-memory chip components respectively. However, the ruling did not prohibit Rambus from collecting royalties on products based on DDR2 SDRAM, GDDR2, and other JEDEC post-DDR memory standards. Rambus has appealed the FTC Opinion/Remedy and awaits a court date for the appeal.
On March 26, 2008, the jury of the U.S. District Court for the Northern District of California determined that Rambus had acted properly while a member of the standard-setting organization JEDEC during its participation in the early 1990s, finding that the memory manufacturers did not meet their burden of proving antitrust and fraud claims.
On April 22, 2008, the U.S. Court of Appeals for the D.C. Circuit overturned the FTC reversal of McGuire's 2006 ruling, saying that the FTC had not established that Rambus had harmed the competition.
On February 23, 2009, the U.S. Supreme Court rejected the bids by the FTC to impose royalty sanctions on Rambus via antitrust penalties.
European Commission antitrust suit
On July 30, 2007, the European Commission launched antitrust investigations against Rambus, taking the view that Rambus engaged in intentional deceptive conduct in the context of the standard-setting process, for example by not disclosing the existence of the patents which it later claimed were relevant to the adopted standard. This type of behavior is known as a "patent ambush". Against this background, the Commission provisionally considered that Rambus breached the EC Treaty's rules on abuse of a dominant market position (Article 82 EC Treaty) by subsequently claiming unreasonable royalties for the use of those relevant patents. The Commission's preliminary view was that without its "patent ambush", Rambus would not have been able to charge the royalty rates it currently does.
Recent settlements
In 2013 and 2014, Rambus settled and agreed on licensing terms with several of the companies involved in long-running disputes. On December 13, 2013, Rambus entered an agreement with Micron to let the latter use some of its patents, in exchange for $280 million worth of royalties over seven years. In June 2013, the company settled with SK Hynix, with Hynix paying $240 million to settle the disputes.
In March 2014, Rambus and Nanya signed a 5-year patent licensing agreement, settling earlier claims.
Rambus said these deals were part of a change in strategy to a less litigious, more collaborative approach, distancing themselves from accusations of patent trolling. Ronald Black, Rambus's CEO, said, "Somehow we got thrown into the patent troll bunch...This is just not the case."
See also
Rambus Inc. v. Nvidia
References
External links
1990 establishments in California
1997 initial public offerings
Computer companies of the United States
Computer hardware companies
Companies based in San Jose, California
Companies listed on the Nasdaq
Computer companies established in 1990
Computer memory companies
Fabless semiconductor companies
Patent monetization companies of the United States
Semiconductor companies of the United States
Technology companies based in the San Francisco Bay Area
Technology companies established in 1990
Companies in the S&P 400 | Rambus | [
"Technology"
] | 3,674 | [
"Computer hardware companies",
"Computers"
] |
581,797 | https://en.wikipedia.org/wiki/Matching%20%28graph%20theory%29 | In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem.
Definitions
Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices.
A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated).
A maximal matching is a matching M of a graph G that is not a subset of any other matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs.
A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number of a graph is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs.
A perfect matching is a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph is incident to an edge of the matching. A matching is perfect if |M| = |V|/2. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover. Thus, the size of a maximum matching is no larger than the size of a minimum edge cover. A graph can only contain a perfect matching when the graph has an even number of vertices.
A near-perfect matching is one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has an odd number of vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is called factor-critical.
Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices. Berge's lemma states that a matching M is maximum if and only if there is no augmenting path with respect to M.
An induced matching is a matching that is the edge set of an induced subgraph.
Properties
In any graph without isolated vertices, the sum of the matching number and the edge covering number equals the number of vertices. If there is a perfect matching, then both the matching number and the edge cover number are |V|/2.
If M1 and M2 are two maximal matchings, then |M1| ≤ 2|M2| and |M2| ≤ 2|M1|. To see this, observe that each edge in M1 \ M2 can be adjacent to at most two edges in M2 \ M1 because M2 is a matching; moreover each edge in M2 \ M1 is adjacent to an edge in M1 \ M2 by maximality of M1, hence
|M2 \ M1| ≤ 2|M1 \ M2|.
Further we deduce that
|M2| = |M2 ∩ M1| + |M2 \ M1| ≤ |M1 ∩ M2| + 2|M1 \ M2| ≤ 2|M1|.
In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. This inequality is tight: for example, if G is a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2.
A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: Let G be a graph on n vertices, and let λ_1, λ_2, ..., λ_k be distinct nonzero purely imaginary numbers where 2k ≤ n. Then the matching number of G is k if and only if (a) there is a real skew-symmetric matrix A with graph G and eigenvalues ±λ_1, ±λ_2, ..., ±λ_k and n - 2k zeros, and (b) all real skew-symmetric matrices with graph G have at most 2k nonzero eigenvalues. Note that the (simple) graph of a real symmetric or skew-symmetric matrix A of order n has n vertices and edges given by the nonzero off-diagonal entries of A.
Matching polynomials
A generating function of the number of k-edge matchings in a graph is called a matching polynomial. Let G be a graph and m_k be the number of k-edge matchings. One matching polynomial of G is
Σ_k m_k x^k.
Another definition gives the matching polynomial as
Σ_k (-1)^k m_k x^(n-2k)
where n is the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials.
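As a concrete check of these definitions, the brute-force sketch below enumerates the matchings of a small graph to obtain the coefficients m_k; it runs in exponential time and is meant only as an illustration, not a practical algorithm (the function name and test graph are chosen for the example).

```python
from itertools import combinations

def matching_counts(vertices, edges):
    """Return m_k = number of k-edge matchings, for k = 0..floor(|V|/2)."""
    counts = [0] * (len(vertices) // 2 + 1)
    for k in range(len(counts)):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):   # no shared endpoints: a matching
                counts[k] += 1
    return counts

# Path with 4 vertices and 3 edges
print(matching_counts([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))  # [1, 3, 1]
```

For this path the first polynomial is 1 + 3x + x^2 and the second is x^4 - 3x^2 + 1.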
Algorithms and computational complexity
Maximum-cardinality matching
A fundamental problem in combinatorial optimization is finding a maximum matching. This problem has various algorithms for different classes of graphs.
In an unweighted bipartite graph, the optimization problem is to find a maximum cardinality matching. The problem is solved by the Hopcroft-Karp algorithm in O(E√V) time, and there are more efficient randomized algorithms, approximation algorithms, and algorithms for special classes of graphs such as bipartite planar graphs, as described in the main article.
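Hopcroft-Karp achieves the bound above by augmenting along many shortest paths at once; the shorter sketch below uses the simpler one-path-at-a-time augmenting approach (often attributed to Kuhn), which runs in O(V·E) time and directly illustrates Berge's lemma: the loop stops exactly when no augmenting path remains. The adjacency-list representation and names are illustrative choices.

```python
def max_bipartite_matching(adj, n_right):
    """adj[u] lists the right-side neighbours of left vertex u.
    Returns the size of a maximum matching and the right-side partner array."""
    match_right = [-1] * n_right                 # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                # v is free, or its current partner can be re-matched along another path
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    size = sum(try_augment(u, set()) for u in range(len(adj)))
    return size, match_right

adj = [[0, 1], [0], [1, 2]]                      # 3 left vertices, 3 right vertices
print(max_bipartite_matching(adj, 3)[0])         # 3
```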
Maximum-weight matching
In a weighted bipartite graph, the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often called maximum weighted bipartite matching, or the assignment problem. The Hungarian algorithm solves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used for this step, the running time of the Hungarian algorithm becomes O(V²E), or the edge cost can be shifted with a potential to achieve O(V² log V + VE) running time with the Dijkstra algorithm and Fibonacci heap.
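In practice the assignment problem is usually solved with a library routine rather than a hand-written Hungarian algorithm. The sketch below uses SciPy's linear_sum_assignment solver on a small cost matrix; the matrix values are made up for illustration, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = weight of the edge between left vertex i and right vertex j
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)                  # minimum-weight assignment
print(list(zip(rows, cols)))                              # pairs (0,1), (1,0), (2,2)
print(cost[rows, cols].sum())                             # total weight 5

rows, cols = linear_sum_assignment(cost, maximize=True)   # maximum-weight matching instead
```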
In a non-bipartite weighted graph, the problem of maximum weight matching can be solved in polynomial time using Edmonds' blossom algorithm.
Maximal matchings
A maximal matching can be found with a simple greedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find a largest maximal matching in polynomial time. However, no polynomial-time algorithm is known for finding a minimum maximal matching, that is, a maximal matching that contains the smallest possible number of edges.
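A minimal sketch of that greedy algorithm: scan the edges once and keep any edge whose two endpoints are both still unmatched. The result is maximal by construction and, by the bound derived above, at least half the size of a maximum matching; the edge ordering below is chosen to reproduce the tight path example.

```python
def greedy_maximal_matching(edges):
    """Keep an edge iff both endpoints are still unmatched; one pass over the edges."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path a-b-c-d: scanning the middle edge first yields the minimum maximal matching
# of size 1, while a maximum matching (a-b, c-d) has size 2.
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))  # [('b', 'c')]
```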
A maximal matching with k edges is an edge dominating set with k edges. Conversely, if we are given a minimum edge dominating set with k edges, we can construct a maximal matching with k edges in polynomial time. Therefore, the problem of finding a minimum maximal matching is essentially equal to the problem of finding a minimum edge dominating set. Both of these two optimization problems are known to be NP-hard; the decision versions of these problems are classical examples of NP-complete problems. Both problems can be approximated within factor 2 in polynomial time: simply find an arbitrary maximal matching M.
Counting problems
The number of matchings in a graph is known as the Hosoya index of the graph. It is #P-complete to compute this quantity, even for bipartite graphs. It is also #P-complete to count perfect matchings, even in bipartite graphs, because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix. However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings. A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm.
The number of perfect matchings in a complete graph Kn (with n even) is given by the double factorial (n − 1)!!. The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by the telephone numbers.
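A small sketch of both counts: the double factorial (n - 1)!! for perfect matchings of K_n, and the telephone numbers via their standard recurrence T(n) = T(n - 1) + (n - 1)·T(n - 2) for matchings of any size.

```python
def double_factorial(n):
    """n!! = n * (n - 2) * (n - 4) * ... (taken as 1 for n <= 0)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def telephone(n):
    """Telephone numbers: T(n) = T(n - 1) + (n - 1) * T(n - 2), with T(0) = T(1) = 1."""
    a, b = 1, 1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return a if n == 0 else b

print(double_factorial(6 - 1))   # 15 perfect matchings in K_6
print(telephone(4))              # 10 matchings of any size in K_4
```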
The number of perfect matchings in a graph is also known as the hafnian of its adjacency matrix.
Finding all maximally matchable edges
One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are called maximally matchable edges, or allowed edges). Algorithms for this problem include:
For general graphs, a deterministic algorithm running in O(VE) time and a randomized algorithm running in O(V^2.376) time.
For bipartite graphs, if a single maximum matching is found, a deterministic algorithm runs in O(V + E) time.
Online bipartite matching
The problem of developing an online algorithm for matching was first considered by Richard M. Karp, Umesh Vazirani, and Vijay Vazirani in 1990.
In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of the secretary problem and has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains a competitive ratio of about 0.696, better than the 1 − 1/e bound that applies to adversarial arrival orders.
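The sketch below simulates this setting: offline vertices are fixed, and arriving vertices are matched immediately to a free neighbour or discarded. With the default priority it is the plain greedy rule (1/2-competitive); passing a random permutation as the priority gives the classical RANKING algorithm of Karp, Vazirani and Vazirani, which attains the 1 − 1/e ratio in the adversarial-arrival model. The tiny instance and function names are made up for illustration.

```python
import random

def online_match(arrivals, n_offline, priority=None):
    """arrivals[i] = offline neighbours of the i-th arriving vertex.
    Each arrival is matched at once to its best free neighbour (or discarded)."""
    if priority is None:
        priority = list(range(n_offline))                 # plain greedy
    taken, matching = set(), []
    for i, neighbours in enumerate(arrivals):
        free = [v for v in neighbours if v not in taken]
        if free:
            v = min(free, key=lambda x: priority[x])      # highest-priority free neighbour
            taken.add(v)
            matching.append((i, v))
    return matching

arrivals = [[0, 1], [0], [1, 2]]
print(len(online_match(arrivals, 3)))                               # greedy gets 2; the offline optimum is 3
print(len(online_match(arrivals, 3, random.sample(range(3), 3))))   # RANKING with a random priority
```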
Characterizations
Kőnig's theorem states that, in bipartite graphs, the maximum matching is equal in size to the minimum vertex cover. Via this result, the minimum vertex cover, maximum independent set, and maximum vertex biclique problems may be solved in polynomial time for bipartite graphs.
Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching and the Tutte theorem provides a characterization for arbitrary graphs.
Applications
Matching in general graphs
A Kekulé structure of an aromatic compound consists of a perfect matching of its carbon skeleton, showing the locations of double bonds in the chemical structure. These structures are named after Friedrich August Kekulé von Stradonitz, who showed that benzene (in graph theoretical terms, a 6-vertex cycle) can be given such a structure.
The Hosoya index is the number of non-empty matchings plus one; it is used in computational chemistry and mathematical chemistry investigations for organic compounds.
The Chinese postman problem involves finding a minimum-weight perfect matching as a subproblem.
Matching in bipartite graphs
Graduation problem is about choosing minimum set of classes from given requirements for graduation.
Hitchcock transport problem involves bipartite matching as sub-problem.
Subtree isomorphism problem involves bipartite matching as sub-problem.
See also
Matching in hypergraphs - a generalization of matching in graphs.
Fractional matching.
Dulmage–Mendelsohn decomposition, a partition of the vertices of a bipartite graph into subsets such that each edge belongs to a perfect matching if and only if its endpoints belong to the same subset
Edge coloring, a partition of the edges of a graph into matchings
Matching preclusion, the minimum number of edges to delete to prevent a perfect matching from existing
Rainbow matching, a matching in an edge-colored bipartite graph with no repeated colors
Skew-symmetric graph, a type of graph that can be used to model alternating path searches for matchings
Stable matching, a matching in which no two elements prefer each other to their matched partners
Independent vertex set, a set of vertices (rather than edges) no two of which are adjacent to each other
Stable marriage problem (also known as stable matching problem)
References
Further reading
External links
A graph library with Hopcroft–Karp and Push–Relabel-based maximum cardinality matching implementation
Combinatorial optimization
Polynomial-time problems
Computational problems in graph theory | Matching (graph theory) | [
"Mathematics"
] | 2,414 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Polynomial-time problems",
"Mathematical relations",
"Matching (graph theory)",
"Mathematical problems"
] |
581,799 | https://en.wikipedia.org/wiki/Environmental%20design | Environmental design is the process of addressing surrounding environmental parameters when devising plans, programs, policies, buildings, or products. It seeks to create spaces that will enhance the natural, social, cultural and physical environment of particular areas. Classical prudent design may have always considered environmental factors; however, the environmental movement beginning in the 1940s has made the concept more explicit.
Environmental design can also refer to the applied arts and sciences dealing with creating the human-designed environment. These fields include architecture, geography, urban planning, landscape architecture, and interior design. Environmental design can also encompass interdisciplinary areas such as historical preservation and lighting design. In terms of a larger scope, environmental design has implications for the industrial design of products: innovative automobiles, wind power generators, solar-powered equipment, and other kinds of equipment could serve as examples. Currently, the term has expanded to apply to ecological and sustainability issues.
Core Principles
1. Sustainability - Minimizing the environmental impact of human activities through the use of renewable resources, energy-efficient technologies, and eco-friendly materials.
2. Functionality - Designing spaces that are practical, accessible, and tailored to the needs and behaviors of the people who will use them.
3. Aesthetics - Incorporating elements of visual appeal, sensory experience, and emotional connection into the design.
4. Holistic Approach - Considering the interconnected social, economic, and ecological factors that shape the environment.
Modern Uses
Today, environmental design is applied across a wide range of scales, from small-scale residential projects to large-scale urban planning initiatives. Key areas of focus include:
- Sustainable architecture and green building
- Landscape architecture and urban planning
- Transportation design and infrastructure
- Industrial design and product development
- Interior design and space planning
Environmental designers often collaborate with experts from disciplines such as engineering, ecology, sociology, and public policy to create holistic solutions that address the complex challenges of modern environments.
History
The first traceable concepts of environmental design focused primarily on solar heating, which began in Ancient Greece around 500 BCE. At the time, most of Greece had exhausted its supply of wood for fuel, leading architects to design houses that would capture the solar energy of the sun. The Greeks understood that the position of the sun varies throughout the year. At a latitude of 40 degrees, the midday sun stands high in the south in summer, about 70 degrees above the horizon, while in winter it follows a lower trajectory, peaking at about 26 degrees. Greek houses were built with south-facing façades which received little to no sun in the summer but would receive full sun in the winter, warming the house. Additionally, the southern orientation also protected the house from the colder northern winds. This clever arrangement of buildings influenced the use of the grid pattern of ancient cities. With the north–south orientation of the houses, the streets of Greek cities mainly ran east–west.
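Those seasonal figures follow from simple solar geometry: at solar noon the sun's altitude is roughly 90° minus the latitude plus the solar declination, which swings between about +23.4° and −23.4° over the year. A minimal sketch of that estimate is below; the formula ignores atmospheric refraction and is an approximation, not a value from the text.

```python
def noon_altitude(latitude_deg, declination_deg):
    """Approximate altitude of the sun above the horizon at solar noon, in degrees."""
    return 90.0 - latitude_deg + declination_deg

for season, declination in [("summer solstice", 23.4), ("winter solstice", -23.4)]:
    print(season, round(noon_altitude(40.0, declination), 1))
# summer solstice 73.4 / winter solstice 26.6 -- close to the roughly 70 and 26 degrees cited above
```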
The practice of solar architecture continued with the Romans, who had similarly deforested much of their native Italian Peninsula by the first century BCE. The Roman heliocaminus, literally 'solar furnace', functioned on the same principles as the earlier Greek houses. The numerous public baths were oriented to the south. Roman architects added glass to windows to allow for the passage of light and to conserve interior heat so it could not escape. The Romans also used greenhouses to grow crops all year long and to cultivate the exotic plants coming from the far corners of the Empire. Pliny the Elder wrote of greenhouses that supplied the kitchen of the Emperor Tiberius throughout the year.
Along with the solar orientation of buildings and the use of glass as a solar heat collector, the ancients knew other ways of harnessing solar energy. The Greeks, Romans and Chinese developed curved mirrors that could concentrate the sun's rays on an object with enough intensity to make it burn in seconds. The solar reflectors were often made of polished silver, copper or brass.
Early roots of modern environmental design began in the late 19th century with writer/designer William Morris, who rejected the use of industrialized materials and processes in wallpaper, fabrics and books his studio produced. He and others, such as John Ruskin felt that the industrial revolution would lead to harm done to nature and workers.
The narrative of Brian Danitz and Chris Zelov's documentary film Ecological Design: Inventing the Future asserts that in the decades after World War II, "The world was forced to confront the dark shadow of science and industry." From the middle of the twentieth century, thinkers like Buckminster Fuller have acted as catalysts for a broadening and deepening of the concerns of environmental designers. Nowadays, energy efficiency, appropriate technology, organic horticulture and agriculture, land restoration, New Urbanism, and ecologically sustainable energy and waste systems are recognized considerations or options and may each find application.
By integrating renewable energy sources such as solar photovoltaic, solar thermal, and even geothermal energy into structures, it is possible to create zero emission buildings, where energy consumption is self-generating and non-polluting. It is also possible to construct "energy-plus buildings" which generate more energy than they consume, and the excess could then be sold to the grid. In the United States, the LEED Green Building Rating System rates structures on their environmental sustainability.
Environmental design and planning
Environmental design and planning is the moniker used by several Ph.D. programs that take a multidisciplinary approach to the built environment. Typically environmental design and planning programs address architectural history or design (interior or exterior), city or regional planning, landscape architecture history or design, environmental planning, construction science, cultural geography, or historic preservation. Social science methods are frequently employed; aspects of sociology or psychology can be part of a research program.
The concept of "environmental" in these programs is quite broad and can encompass aspects of the natural, built, work, or social environments.
Areas of research
Architecture
Construction science
Ecology
Environmental impact design
Environmental planning
Environmental psychology
Environmental sociology
Historic preservation
Landscape architecture
Sociology of architecture
Sustainability
Urban planning
Academic programs
The following universities offer a Ph.D. in environmental design and planning:
Clemson University, College of Architecture, Arts and Humanities (Now called "Planning, Design, and the Built Environment")
Arizona State University, College of Design
Kansas State University
University of Calgary (technically the Ph.D. is in "environmental design," but encompasses the same scope as the other programs)
Virginia Tech until recently offered the degree program, but has since replaced it with programs in "architecture and design research" and "planning, governance, and globalization".
Fanshawe College in London, Ontario, Canada offers an honours bachelor's degree called "Environmental Design and Planning".
Related programs
University of Missouri, Columbia: Ph.D. in Human Environmental Sciences (PDF file) with emphasis in Architectural Studies.
Texas A & M University offers a Ph.D. in architecture that emphasizes environmental design.
Examples
Examples of the environmental design process include use of roadway noise computer models in design of noise barriers and use of roadway air dispersion models in analyzing and designing urban highways.
Designers consciously working within this more recent framework of philosophy and practice seek a blending of nature and technology, regarding ecology as the basis for design. Some believe that strategies of conservation, stewardship, and regeneration can be applied at all levels of scale from the individual building to the community, with benefit to the human individual and local and planetary ecosystems.
Specific examples of large scale environmental design projects include:
Boston Transportation Planning Review
BART – Bay Area Rapid Transit System Daly City Turn-back project and airport extension.
Metropolitan Portland, Oregon light rail system
See also
Green building
Green development
Land recycling
Passive solar building design
Sustainable development
Ecological design
References
External links
Environment
Historic preservation | Environmental design | [
"Engineering"
] | 1,556 | [
"Environmental design",
"Design"
] |
2,170,537 | https://en.wikipedia.org/wiki/Lemniscus%20%28anatomy%29 | A lemniscus (Greek for ribbon or band) is a bundle of secondary sensory fibers in the brainstem. The medial lemniscus and lateral lemniscus terminate in specific relay nuclei of the diencephalon. The trigeminal lemniscus is sometimes considered as the cephalic part of the medial lemniscus. The spinal lemniscus constitutes the spinothalamic tract.
References
External links
Trigeminal lemniscus from Online medical Dictionary
Nervous system | Lemniscus (anatomy) | [
"Biology"
] | 107 | [
"Organ systems",
"Nervous system"
] |
2,170,744 | https://en.wikipedia.org/wiki/Tapachula%20International%20Airport | Tapachula International Airport () is an international airport located in Tapachula, Chiapas, Mexico, near the Mexico–Guatemala border. It serves the Metropolitan Area of Tapachula and the Soconusco region, facilitating multiple domestic destinations, flight training, and general aviation activities. Operated by Grupo Aeroportuario del Sureste (ASUR), it holds the distinction of being the southernmost airport in Mexico. In 2022, it served 503,254 passengers, and this number increased to 553,744 passengers in 2023.
Facilities
Tapachula Airport is situated at an elevation of above sea level and features an asphalt runway. With an ICAO classification of 4D, the airport has a capacity for 18 operations per hour. The apron includes 3 C-type disembarkation hardstands and 1 D-type.
The terminal spans within a two-level structure. The lower level houses essential services such as the check-in area, an arrivals hall with a baggage claim area, immigration and customs facilities, car rental services, taxi stands, and snack bars. Upstairs, the upper floor features a security checkpoint and the departures concourse, offering a commercial area, bars, a VIP lounge, and four gates—two of which are equipped with jet bridges. Adjacent to the terminal, additional facilities include hangars and designated spaces for general aviation.
Tapachula Airport also hosts the Tapachula Naval Air Base, situated near the threshold of runway 23. This base comprises an apron spanning , 1 heliport, and 3 hangars. Operating from this base are the 4th Naval Patrol Air Squadron with Mil Mi-17 aircraft and the 4th Naval Air Mobility, Observation, and Transport Squadron with Maule MX-7 and Lancair ES aircraft.
Airlines and destinations
Passenger
Destinations map
Statistics
Passengers
Busiest routes
See also
List of the busiest airports in Mexico
List of airports in Mexico
List of airports by ICAO code: M
List of busiest airports in North America
List of the busiest airports in Latin America
Transportation in Mexico
Tourism in Mexico
Grupo Aeroportuario del Sureste
Mexican Naval Aviation
References
External links
Official website
Grupo Aeroportuario del Sureste ASUR
Tapachula Airport information at Great Circle Mapper
Airports in Chiapas
WAAS reference stations
Airfields of the United States Army Air Forces | Tapachula International Airport | [
"Technology"
] | 467 | [
"Global Positioning System",
"WAAS reference stations"
] |
2,170,901 | https://en.wikipedia.org/wiki/Cross-presentation | Cross-presentation is the ability of certain professional antigen-presenting cells (mostly dendritic cells) to take up, process and present extracellular antigens with MHC class I molecules to CD8 T cells (cytotoxic T cells). Cross-priming, the result of this process, describes the stimulation of naive cytotoxic CD8+ T cells into activated cytotoxic CD8+ T cells. This process is necessary for immunity against most tumors and against viruses that infect dendritic cells and sabotage their presentation of virus antigens. Cross presentation is also required for the induction of cytotoxic immunity by vaccination with protein antigens, for example, tumour vaccination.
Cross-presentation is of particular importance, because it permits the presentation of exogenous antigens, which are normally presented by MHC II on the surface of dendritic cells, to also be presented through the MHC I pathway. The MHC I pathway is normally used to present endogenous antigens that have infected a particular cell. However, cross presenting cells are able to utilize the MHC I pathway to present exogenous antigens (ones not from the cell itself) to trigger an adaptive immune response by activating cytotoxic CD8+ T cells recognizing the exogenous antigens on the MHC class I complexes.
History
The first evidence of cross-presentation was reported in 1976 by Michael J. Bevan after injection of grafted cells carrying foreign minor histocompatibility (MiHA) molecules. This resulted in a CD8+ T cell response induced by antigen-presenting cells of the recipient against the foreign MiHA cells. Because of this, Bevan implied that these antigen presenting cells must have engulfed and cross presented these foreign MiHA cells to host cytotoxic CD8+ cells, thus triggering an adaptive immune response against the grafted tissue. This observation was termed "cross-priming".
Later, there had been much controversy about cross-presentation, which now is believed to have been due to particularities and limitations of some experimental systems used.
Cross-presenting cells
The primary and most efficient cross-presenting cells are dendritic cells, though macrophages, B lymphocytes and sinusoidal endothelial cells have also been observed to cross present antigens in vivo and in vitro. However, in vivo dendritic cells have been found to be the most efficient and common antigen presenting cells to cross present antigens in MHC I molecules. There are two dendritic cell subtypes: plasmacytoid (pDC) and myeloid (mDC) dendritic cells. pDCs are found within the blood and are able to cross present antigens directly or from neighboring apoptotic cells, but the main physiological significance of pDCs is the secretion of type I IFN in response to viral infections. mDCs are categorized as migratory DCs, resident DCs, Langerhans cells, and inflammatory dendritic cells. All mDCs have specialized functions and secretory factors, but they are all still able to cross present antigens in order to activate cytotoxic CD8+ T cells.
There are many factors that determine cross presentation function such as antigen uptake and processing mechanism, as well as environmental signals and activation of cross presenting dendritic cells. The activation of cross presenting dendritic cells is dependent on stimulation by CD4+ T helper cells. The co-stimulatory molecule CD40/CD40L along with the danger presence of an exogenous antigen are catalysts for dendritic cell licensing, and thus the cross presentation and activation of naive CD8+ cytotoxic T cells.
Vacuolar and cytosolic diversion
In addition to solid structure uptake, dendritic cell phagocytosis simultaneously modifies the kinetics of endosomal trafficking and maturation. As a consequence, external soluble antigens are targeted into the MHC class I cross-presentation pathway instead of the MHC Class II pathway. However, there is still uncertainty in regard to a mechanistic pathway for cross presentation within an antigen presenting cell. Currently, there are two main pathways proposed, cytosolic and vacuolar.
The vacuolar pathway is initiated through the endocytosis of an extracellular antigen by a dendritic cell. Endocytosis results in the formation of a phagocytic vesicle, where an increasingly acidic environment along with the activation of enzymes such as lysosomal proteases triggers the degradation of antigen into peptides. The peptides can then be loaded onto MHC I binding grooves within the phagosome. It is unclear whether the MHC I molecule is being exported from the endoplasmic reticulum before peptide loading, or is being recycled from the cell membrane prior to peptide loading. Once the exogenous antigen peptide is loaded onto the MHC class I molecule, the complex is exported to the cell surface for antigen cross presentation.
There is also evidence that suggests that cross-presentation requires a separate pathway in a proportion of CD8(+) dendritic cells that are able to cross-present. This pathway is called the cytosolic diversion pathway. Similarly to the vacuolar pathway, antigens are taken into the cell through endocytosis. Antigen proteins are transported out of this compartment into the cytoplasm by unknown mechanisms. Within the cytoplasm, exogenous antigens are processed by the proteasome and degraded into peptides. These processed peptides can either be transported by the TAP transporter into the endoplasmic reticulum, or back into the same endosome for loading onto MHC class I complexes. It is believed that MHC I loading occurs both in the ER and in phagocytic vesicles such as endosomes in the cytosolic pathway. For MHC class I loading within the endoplasmic reticulum, exogenous antigen peptides are loaded onto MHC class I molecules with the help of the peptide loading complex and chaperone proteins such as beta-2 microglobulin, ERAP, tapasin, and calreticulin. After antigen peptide loading, the MHC molecule is transported out of the ER, through the Golgi complex, and then onto the cell surface for cross presentation.
It appears that both pathways are able to occur within an antigen presenting cell, and may be influenced by environmental factors such as proteasome and phagocytic inhibitors.
Relevance for immunity
Cross-presentation has been shown to play a role in the immune defense against many viruses (herpesvirus, influenzavirus, CMV, EBV, SIV, papillomavirus, and others), bacteria (listeria, salmonella, E. coli, M. tuberculosis, and others) and tumors (brain, pancreas, melanoma, leukemia, and others). Even though many viruses can inhibit and degrade dendritic cell activity, cross-presenting dendritic cells that are unaffected by the virus are able to intake the infected peripheral cell and still cross present the exogenous antigen to cytotoxic T cells. The action of cross priming can bolster immunity against antigens that target intracellular peripheral tissues that are unable to be mediated by antibodies produced through B cells. Also, cross-priming avoids viral immune evasion strategies, such as suppression of antigen processing. Consequently, immune responses against viruses that are able to do so, such as herpes viruses, are largely dependent on cross-presentation for a successful immune response. Overall, cross presentation aids in facilitating an adaptive immune response against intracellular viruses and tumor cells.
Dendritic cell-dependent cross-presentation also has implications for cancer immunotherapy vaccines. The injection of anti-tumor specific vaccines can be targeted to specific dendritic cell subsets within peripheral skin tissues, such as migratory dendritic cells and Langerhans cells. After vaccine induced activation, dendritic cells are able to migrate to lymph nodes and activate CD4+ T helper cells as well as cross prime CD8+ T cytotoxic cells. This mass generation of activated tumor specific CD8+ T cells increases anti-tumor immunity, and is also able to overcome many of the immune suppressive effects of tumor cells.
Relevance for immune tolerance
Cross-presenting dendritic cells have a significant impact on the promotion of central and peripheral immune tolerance. In central tolerance, dendritic cells are present within the thymus, or the location of T cell development and maturation. Thymic dendritic cells can intake dead medullary thymic epithelial cells, and cross present "self" peptides on MHC class I as a negative selection check on cytotoxic T cells that have a high affinity for self peptides. Presentation of tissue specific antigens is initiated by medullary thymic epithelial cells (mTEC), but is reinforced by thymic dendritic cells after expression of AIRE and engulfment of mTECs. Although the function of dendritic cells in central tolerance is still relatively unknown, it appears that thymic dendritic cells act as a complement to mTECs during negative selection of T cells.
With regard to peripheral tolerance, resting dendritic cells in peripheral tissues can promote self-tolerance by restraining cytotoxic T cells that have an affinity for self peptides. They can present tissue-specific antigens within the lymph node in order to prevent cytotoxic T cells from initiating an adaptive immune response, as well as to regulate cytotoxic T cells that have a high affinity for self tissues but were still able to escape central tolerance. Cross-presenting DCs are able to induce anergy, apoptosis, or regulatory T cell states in cytotoxic T cells with high affinity for self. This has significant implications for protection against autoimmune disorders and for the regulation of self-specific cytotoxic T cells.
References
External links
Immunology | Cross-presentation | [
"Biology"
] | 2,090 | [
"Immunology"
] |
2,170,919 | https://en.wikipedia.org/wiki/Evert%20Willem%20Beth | Evert Willem Beth (7 July 1908 – 12 April 1964) was a Dutch philosopher and logician, whose work principally concerned the foundations of mathematics. He was a member of the Significs Group.
Biography
Beth was born in Almelo, a small town in the eastern Netherlands. His father had studied mathematics and physics at the University of Amsterdam, where he had been awarded a PhD. Evert Beth studied the same subjects at Utrecht University, but then also studied philosophy and psychology. His 1935 PhD was in philosophy.
In 1946, he became professor of logic and the foundations of mathematics in Amsterdam. Apart from two brief interruptions – a stint in 1951 as a research assistant to Alfred Tarski, and in 1957 as a visiting professor at Johns Hopkins University – he held the post in Amsterdam continuously until his death in 1964. His was the first academic post in his country in logic and the foundations of mathematics, and during this time he contributed actively to international cooperation in establishing logic as an academic discipline.
In 1953 he became a member of the Royal Netherlands Academy of Arts and Sciences.
He died in Amsterdam.
Contributions to logic
Beth definability theorem
The Beth definability theorem states that for first-order logic a property (or function or constant) is implicitly definable if and only if it is explicitly definable. Further explanation is provided under Beth definability.
Semantic tableaux
Beth's most famous contribution to formal logic is semantic tableaux, which are proof procedures for propositional logic and first-order logic. It is a semantic method—like Wittgenstein's truth tables or J. Alan Robinson's resolution—as opposed to the proof of theorems in a formal system, such as the axiomatic systems employed by Frege, Russell and Whitehead, and Hilbert, or even Gentzen's natural deduction. Semantic tableaux are an effective decision procedure for propositional logic, whereas they are only semi-effective for first-order logic, since first-order logic is undecidable, as shown by Church's theorem. This method is considered by many to be intuitively simple, particularly for students who are not acquainted with the study of logic, and it is faster than the truth-table method (which requires a table with 2^n rows for a sentence with n propositional letters). For these reasons, Wilfrid Hodges for example presents semantic tableaux in his introductory textbook, Logic, and Melvin Fitting does the same in his presentation of first-order logic for computer scientists, First-order logic and automated theorem proving.
One starts out with the intention of proving that a certain set of formulae Γ entails another formula φ, given a set of rules determined by the semantics of the formulae's connectives (and quantifiers, in first-order logic). The method is to assume the concurrent truth of every member of Γ and of ¬φ (the negation of φ), and then to apply the rules to branch this list into a tree-like structure of (simpler) formulae until every possible branch contains a contradiction. At this point it will have been established that Γ together with ¬φ is inconsistent, and thus that the formulae of Γ together entail φ.
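To make the branching procedure concrete, the following is a minimal propositional-only sketch in Python; the tuple encoding of formulae and the function names are illustrative choices for this example, not anything taken from Beth's or Hodges' presentations. Entailment of φ from Γ is tested by checking that every branch of the tableau built from Γ together with ¬φ closes.

```python
# Minimal propositional semantic tableau (illustrative sketch only).
# Formulae are nested tuples: ('var', 'p'), ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g).

def is_literal(f):
    """A literal is a variable or a negated variable."""
    return f[0] == 'var' or (f[0] == 'not' and f[1][0] == 'var')

def closed(branch):
    """A branch closes when it contains some formula together with its negation."""
    return any(('not', f) in branch for f in branch)

def expand(branch):
    """Return True if every branch of the tableau closes (the set is unsatisfiable)."""
    if closed(branch):
        return True
    for f in branch:
        if is_literal(f):
            continue
        rest = [g for g in branch if g is not f]
        op = f[0]
        if op == 'and':                      # alpha rule: keep both conjuncts on one branch
            return expand(rest + [f[1], f[2]])
        if op == 'or':                       # beta rule: split the branch
            return expand(rest + [f[1]]) and expand(rest + [f[2]])
        if op == 'implies':                  # A -> B behaves like (not A) or B
            return expand(rest + [('not', f[1])]) and expand(rest + [f[2]])
        if op == 'not':
            g = f[1]
            if g[0] == 'not':                # double negation
                return expand(rest + [g[1]])
            if g[0] == 'and':                # not(A and B): split into not A / not B
                return expand(rest + [('not', g[1])]) and expand(rest + [('not', g[2])])
            if g[0] == 'or':                 # not(A or B): both negations on one branch
                return expand(rest + [('not', g[1]), ('not', g[2])])
            if g[0] == 'implies':            # not(A -> B): A and not B on one branch
                return expand(rest + [g[1], ('not', g[2])])
    return False                             # open, fully expanded branch: a counter-model exists

def entails(premises, conclusion):
    """Premises entail the conclusion iff premises plus the negated conclusion close."""
    return expand(list(premises) + [('not', conclusion)])

p, q = ('var', 'p'), ('var', 'q')
print(entails([p, ('implies', p, q)], q))    # True  (modus ponens)
print(entails([('or', p, q)], p))            # False (an open branch gives a counter-model)
```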
Beth models
These are a class of relational models for non-classical logic (cf. Kripke semantics).
Books
Evert W. Beth, The foundations of mathematics. A study in the philosophy of science. XXVI + 722 pp. Amsterdam, North-Holland 1959.
Evert W. Beth, Épistémologie mathématique et psychologie (with J. Piaget). 352 pp. Paris P.U.F. 1961.
Evert W. Beth, Formal Methods: An introduction to symbolic logic and to the study of effective operations in arithmetic and logic. D. Reidel Publishing Company / Dordrecht-Holland, 1962.
Evert W. Beth, Aspects of Modern Logic. D. Reidel Publishing Company / Dordrecht-Holland, 1971.
See also
Gerrit Mannoury
References
External links
Beth Prize 2013
Evert Willem Beth Foundation
1908 births
1964 deaths
Mathematical logicians
Dutch logicians
Formal methods people
Academic staff of the University of Amsterdam
Utrecht University alumni
People from Almelo
20th-century Dutch mathematicians
Members of the Royal Netherlands Academy of Arts and Sciences
20th-century Dutch philosophers | Evert Willem Beth | [
"Mathematics"
] | 876 | [
"Mathematical logic",
"Mathematical logicians"
] |
2,171,222 | https://en.wikipedia.org/wiki/Intern%20architect | An intern architect or architectural intern is a person who is working professionally in the field of architecture in preparation for registration or licensure as an architect. An intern need not have attained a professional degree in architecture to begin accruing experience hours, but said degree is a prerequisite for licensure.
In the United States, Canada, and other countries, an intern architect is enrolled in a regulated program, such as the Intern Development Program (IDP) in the United States or the Intern Architect Program (IAP) in Canada, while working under the supervision of a licensed architect and preparing for professional registration exams.
The use of the title "architect" (or any derivation thereof) is legally protected in the United States, Canada, and other countries. Most U.S. states and all Canadian provinces, however, allow the use of the terms "intern architect" or "architectural intern" for a person enrolled in an architectural internship program.
Intern Development Program (United States)
The Intern Development Program (IDP) is a national program in the United States, developed and administered by NCARB, designed to provide structured training for Intern Architects to ensure that they are exposed to most aspects of the architectural profession prior to attaining professional licensure.
A candidate works under the tutelage of one or more architects as mentor(s) on a regular basis. Additionally, the intern architect selects a sponsor, who is an architect who does not work for the firm where the intern is employed. Together, the mentor and the sponsor work with the intern to make sure that the intern is actively working towards satisfying the requirements of the IDP program.
The program prescribes the minimum experience hours required in various activities pertaining to practice in architecture before attaining professional licensure. Interns track these hours using experience reports that are verified and confirmed by their mentor. These activities fall into four categories: Pre-Design, Design, Project Management, and Practice Management, each of which includes tasks that architects will perform as part of their professional responsibilities. In total, interns must complete 5600 hours of reported experience before attaining professional licensure. In most states Interns may begin taking the Architect Registration Examination (ARE) while they are participating in the Intern Development Program, but will not attain professional licensure before successful completion of both the ARE and IDP.
History
The IDP was created jointly in the 1970s by the National Council of Architectural Registration Boards (NCARB) and the American Institute of Architects (AIA) and is administered by NCARB.
Intern Architect Program (Canada)
The Intern Architect Program (IAP) is a national program in Canada that documents and evaluates internship activities, provides structure to the transition between education and registration, and encourages involvement of practitioners in the development of new architects. The IAP was established by the Committee of Canadian Architectural Councils (CCAC), which has representatives from each of the ten provincial associations of architects.
See also
Architect
National Council of Architectural Registration Boards
Intern Development Program
Professional requirements for architects
References
External links
AIArchitect Article – Let Them Be Architects
State Law and Penalty
Intern Development Program (IDP) in USA
Intern Architect Program (IAP) in Canada
Get Licensed AIA Help
Professional certification in architecture
Architecture occupations | Intern architect | [
"Engineering"
] | 659 | [
"Architecture occupations",
"Architecture"
] |
2,171,258 | https://en.wikipedia.org/wiki/Interspersed%20repeat | Interspersed repetitive DNA is found in all eukaryotic genomes. They differ from tandem repeat DNA in that rather than the repeat sequences coming right after one another, they are dispersed throughout the genome and nonadjacent. The sequence that repeats can vary depending on the type of organism, and many other factors. Certain classes of interspersed repeat sequences propagate themselves by RNA mediated transposition; they have been called retrotransposons, and they constitute 25–40% of most mammalian genomes. Some types of interspersed repetitive DNA elements allow new genes to evolve by uncoupling similar DNA sequences from gene conversion during meiosis.
Intrachromosomal and interchromosomal gene conversion
Gene conversion acts on DNA sequence homology as its substrate. There is no requirement that the sequence homologies lie at the allelic positions on their respective chromosomes or even that the homologies lie on different chromosomes. Gene conversion events can occur between different members of a gene family situated on the same chromosome. When this happens, it is called intrachromosomal gene conversion as distinguished from interchromosomal gene conversion. The effect of homogenizing DNA sequences is the same.
Role of interspersed repetitive DNA
Repetitive sequences play the role of uncoupling the gene conversion network, thereby allowing new genes to evolve. The shorter Alu or SINE repetitive DNA are specialized for uncoupling intrachromosomal gene conversion while the longer LINE repetitive DNA are specialized for uncoupling interchromosomal gene conversion. In both cases, the interspersed repeats block gene conversion by inserting regions of non-homology within otherwise similar DNA sequences. The homogenizing forces linking DNA sequences are thereby broken and the DNA sequences are free to evolve independently. This leads to the creation of new genes and new species during evolution. By breaking the links that would otherwise overwrite novel DNA sequence variations, interspersed repeats catalyse evolution, allowing the new genes and new species to develop.
Interspersed DNA elements catalyze the evolution of new genes
DNA sequences are linked together in a gene pool by gene conversion events. Insertion of an interspersed DNA element breaks this linkage, allowing independent evolution of a new gene. The interspersed repeat is an isolating mechanism enabling new genes to evolve without interference from the progenitor gene. Because insertion of an interspersed repeat is a saltatory event the evolution of the new gene will also be saltatory. Because speciation ultimately depends on the creation of new genes, this naturally causes punctuated equilibria. Interspersed repeats are thus responsible for punctuated evolution and rapid modes of evolution.
See also
Eukaryotic chromosome fine structure
Genomic organization
L1Base
References
Mobile genetic elements
Repetitive DNA sequences | Interspersed repeat | [
"Biology"
] | 546 | [
"Molecular genetics",
"Mobile genetic elements",
"Repetitive DNA sequences"
] |
2,171,409 | https://en.wikipedia.org/wiki/Nickel%28II%29%20fluoride | Nickel(II) fluoride is the chemical compound with the formula NiF2. It is an ionic compound of nickel and fluorine and forms yellowish to green tetragonal crystals. Unlike many fluorides, NiF2 is stable in air.
Nickel(II) fluoride is also produced when nickel metal is exposed to fluorine. In fact, NiF2 comprises the passivating surface that forms on nickel alloys (e.g. monel) in the presence of hydrogen fluoride or elemental fluorine. For this reason, nickel and its alloys are suitable materials for the storage and transport of fluorine and related fluorinating agents. NiF2 is also used as a catalyst for the synthesis of chlorine pentafluoride.
Preparation and structure
NiF2 is prepared by treatment of anhydrous nickel(II) chloride with fluorine at 350 °C:
NiCl2 + F2 → NiF2 + Cl2
The corresponding reaction of cobalt(II) chloride results in oxidation of the cobalt, whereas nickel remains in the +2 oxidation state after fluorination because its +3 oxidation state is less stable. Chloride is more easily oxidized than nickel(II). This is a typical halogen displacement reaction, in which a more active halogen plus a less active halide gives the less active halogen and the more active halide.
Like some other metal difluorides, NiF2 crystallizes in the rutile structure, which features octahedral Ni centers and planar fluorides.
At low temperatures, its magnetic structure is antiferromagnetic.
Reactions
A melt of NiF2 and KF reacts to give successively potassium trifluoronickelate and potassium tetrafluoronickelate:
NiF2 + KF → K[NiF3]
K[NiF3] + KF → K2[NiF4]
The structure of this material is closely related to some superconducting oxide materials.
Nickel(II) fluoride reacts with strong bases to give nickel(II) hydroxide:
NiF2 + 2 NaOH → Ni(OH)2 + 2 NaF
References
External links
IARC Monograph "Nickel and Nickel compounds"
National Pollutant Inventory - Fluoride compounds fact sheet
National Pollutant Inventory - Nickel and compounds fact sheet
Nickel compounds
Fluorides
Metal halides | Nickel(II) fluoride | [
"Chemistry"
] | 497 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
2,171,432 | https://en.wikipedia.org/wiki/Dioxygen%20difluoride | Dioxygen difluoride is a compound of fluorine and oxygen with the molecular formula O2F2. It can exist as an orange-red colored solid which melts into a red liquid at . It is an extremely strong oxidant and decomposes into oxygen and fluorine even at at a rate of 4% per its lifetime at room temperature is thus extremely short. Dioxygen difluoride reacts vigorously with nearly every chemical it encounters (including ordinary ice) leading to its onomatopoeic nickname FOOF (a play on its chemical structure and its explosive tendencies).
Preparation
Dioxygen difluoride can be obtained by subjecting a 1:1 mixture of gaseous fluorine and oxygen at low pressure (7–17 mmHg (0.9–2.3 kPa) is optimal) to an electric discharge of 25–30 mA at 2.1–2.4 kV.
A similar method was used for the first synthesis by Otto Ruff in 1933. Another synthesis involves mixing O2 and F2 in a stainless steel vessel cooled to −196 °C, followed by exposing the elements to bremsstrahlung for several hours. A third method requires heating a mix of fluorine and oxygen to about 700 °C and then rapidly cooling it using liquid oxygen. All of these methods involve synthesis according to the equation
O2 + F2 → O2F2
It also arises from the thermal decomposition of ozone difluoride:
2 O3F2 → 2 O2F2 + O2
Structure and properties
In O2F2, oxygen is assigned the unusual oxidation state of +1. In most of its other compounds, oxygen has an oxidation state of −2.
The structure of dioxygen difluoride resembles that of hydrogen peroxide, H2O2, in its large dihedral angle, which approaches 90°, and in its C2 symmetry. This geometry conforms with the predictions of VSEPR theory.
The bonding within dioxygen difluoride has been the subject of considerable speculation, particularly because of the very short O−O distance and the long O−F distances. The O−O bond length is within 2 pm of the 120.7 pm distance for the O=O double bond in the dioxygen molecule, O2. Several bonding systems have been proposed to explain this, including an O−O triple bond with O−F single bonds destabilised and lengthened by repulsion between the lone pairs on the fluorine atoms and the π orbitals of the O−O bond. Repulsion involving the fluorine lone pairs is also responsible for the long and weak covalent bonding in the fluorine molecule.
Computational chemistry indicates that dioxygen difluoride has an exceedingly high barrier to rotation of 81.17 kJ/mol around the O−O bond (in hydrogen peroxide the barrier is 29.45 kJ/mol); this is close to the O−F bond disassociation energy of 81.59 kJ/mol.
The 19F NMR chemical shift of dioxygen difluoride is 865 ppm, which is by far the highest chemical shift recorded for a fluorine nucleus, thus underlining the extraordinary electronic properties of this compound. Despite its instability, thermochemical data for O2F2 have been compiled.
Reactivity
The compound readily decomposes into oxygen and fluorine. Even at a temperature of −160 °C, 4% decomposes each day by this process:
O2F2 → O2 + F2
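To put the 4% figure in perspective, the short calculation below treats it as a constant fractional loss per day, which is an assumption made only for this rough estimate.

```python
# Fraction of an O2F2 sample remaining after a month, assuming a constant 4% loss per day.
remaining = 1.0
for day in range(30):
    remaining *= 0.96
print(f"after 30 days at low temperature, about {remaining:.0%} remains")   # roughly 29%
```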
The other main property of this unstable compound is its oxidizing power, although most experimental reactions have been conducted at very low temperatures. Several experiments with the compound resulted in a series of fires and explosions. Some of the compounds that produced violent reactions with it include ethyl alcohol, methane, ammonia, and even water ice.
With BF3 and PF5, it gives the corresponding dioxygenyl salts:
2 O2F2 + 2 PF5 → 2 [O2][PF6] + F2
Uses
The compound currently has no practical applications, but has been of theoretical interest. Los Alamos National Laboratory used it to synthesize plutonium hexafluoride at unprecedentedly low temperatures, which was significant because previous methods for preparation needed temperatures so high that the plutonium hexafluoride created would decompose rapidly.
See also
Chlorine trifluoride
A. G. Streng
References
External links
Nonmetal halides
Oxygen fluorides
Oxidizing agents
Chalcohalides
Substances discovered in the 1930s
Peroxides | Dioxygen difluoride | [
"Chemistry"
] | 897 | [
"Oxygen fluorides",
"Inorganic compounds",
"Redox",
"Chalcohalides",
"Oxidizing agents"
] |
2,171,486 | https://en.wikipedia.org/wiki/Ponderomotive%20force | In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field. It causes the particle to move towards the area of the weaker field strength, rather than oscillating around an initial point as happens in a homogeneous field. This occurs because the particle sees a greater magnitude of force during the half of the oscillation period while it is in the area with the stronger field. The net force during its period in the weaker area in the second half of the oscillation does not offset the net force of the first half, and so over a complete cycle this makes the particle move towards the area of lesser force.
The ponderomotive force Fp is expressed by
$F_\text{p} = -\frac{e^2}{4 m \omega^2} \nabla\left(E^2\right),$
which has units of newtons (in SI units) and where e is the electrical charge of the particle, m is its mass, ω is the angular frequency of oscillation of the field, and E is the amplitude of the electric field. At low enough amplitudes the magnetic field exerts very little force.
This equation means that a charged particle in an inhomogeneous oscillating field not only oscillates at the frequency ω of the field, but is also accelerated by Fp toward the weak field direction. This is a rare case in which the direction of the force does not depend on whether the particle is positively or negatively charged.
Etymology
The term ponderomotive comes from the Latin ponder- (meaning weight) and the English motive (having to do with motion).
Derivation
The derivation of the ponderomotive force expression proceeds as follows.
Consider a particle under the action of a non-uniform electric field oscillating at frequency $\omega$ in the x-direction. The equation of motion is given by:
$\ddot{x} = \frac{e}{m} E(x) \cos(\omega t),$
neglecting the effect of the associated oscillating magnetic field.
If the length scale of variation of $E(x)$ is large enough, then the particle trajectory can be divided into a slow time (secular) motion and a fast time (micro)motion:
$x = x_0 + x_1,$
where $x_0$ is the slow drift motion and $x_1$ represents fast oscillations. Now, let us also assume that $|x_1| \ll |x_0|$. Under this assumption, we can use a Taylor expansion of the force equation about $x_0$, to get:
$\ddot{x}_0 + \ddot{x}_1 = \frac{e}{m}\left[E(x_0) + x_1 E'(x_0)\right]\cos(\omega t),$
and because $x_1$ is small, $\ddot{x}_0 \ll \ddot{x}_1$, so
$\ddot{x}_1 = \frac{e}{m} E(x_0)\cos(\omega t).$
On the time scale on which $x_1$ oscillates, $x_0$ is essentially a constant. Thus, the above can be integrated to get:
$x_1 = -\frac{e}{m\omega^2} E(x_0)\cos(\omega t).$
Substituting this in the force equation and averaging over the $2\pi/\omega$ timescale, we get
$m\ddot{x}_0 = -\frac{e^2}{4 m \omega^2}\frac{\partial}{\partial x}\left(E^2(x_0)\right) \equiv F_\text{p}.$
Thus, we have obtained an expression for the drift motion of a charged particle under the effect of a non-uniform oscillating field.
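A quick numerical check of this result is straightforward. The sketch below uses made-up values for the charge, mass, frequency, field amplitude and gradient length (none of them from the article), integrates the full equation of motion directly, and compares the slowly drifting average position against the displacement predicted by the ponderomotive acceleration derived above.

```python
# Numerical check of the ponderomotive drift (illustrative values only).
import math

e, m = 1.0, 1.0            # particle charge and mass (arbitrary units)
omega = 50.0               # fast oscillation frequency of the field
E0, L = 1.0, 10.0          # field amplitude and gradient length scale

def E(x):
    """Non-uniform field amplitude: stronger at larger x."""
    return E0 * (1.0 + x / L)

# integrate x'' = (e/m) * E(x) * cos(omega * t) with small symplectic Euler steps
dt = 2.0 * math.pi / omega / 200.0       # 200 steps per fast period
x, v, t = 0.0, 0.0, 0.0
positions = []
for _ in range(200 * 200):               # run for 200 fast periods
    v += (e / m) * E(x) * math.cos(omega * t) * dt
    x += v * dt
    t += dt
    positions.append(x)

# averaging over the last three full fast periods exposes the slow (secular) drift
measured = sum(positions[-600:]) / 600.0

# predicted drift acceleration a_p = F_p / m = -(e^2 / (4 m^2 omega^2)) d(E^2)/dx near x = 0
a_p = -(e**2 / (4.0 * m**2 * omega**2)) * (2.0 * E0**2 / L)
predicted = 0.5 * a_p * t**2             # secular displacement after time t

print("measured secular displacement :", measured)    # about -6e-3, toward the weak field
print("predicted 0.5 * a_p * t^2     :", predicted)   # rough agreement (small initial transient)
```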
Time averaged density
Instead of a single charged particle, there could be a gas of charged particles confined by the action of such a force. Such a gas of charged particles is called a plasma. The distribution function and density of the plasma will fluctuate at the applied oscillating frequency, and to obtain an exact solution we need to solve the Vlasov equation. But it is usually assumed that the time averaged density of the plasma can be obtained directly from the expression for the drift motion of individual charged particles:
$n(\mathbf{r}) = n_0 \exp\left(-\frac{\Phi_\text{P}(\mathbf{r})}{k_\text{B} T}\right),$
where $\Phi_\text{P}$ is the ponderomotive potential and is given by
$\Phi_\text{P}(\mathbf{r}) = \frac{e^2}{4 m \omega^2} E^2(\mathbf{r}).$
Generalized ponderomotive force
Instead of just an oscillating field, a permanent field could also be present. In such a situation, the force equation of a charged particle becomes:
$\ddot{x} = \frac{e}{m}\left[E_0(x) + E_1(x)\cos(\omega t)\right].$
To solve the above equation, we can make a similar assumption as we did for the case when no static field was present. This gives a generalized expression for the drift motion of the particle:
$m\ddot{x}_0 = e E_0(x_0) - \frac{e^2}{4 m \omega^2}\frac{\partial}{\partial x}\left(E_1^2(x_0)\right).$
Applications
The idea of a ponderomotive description of particles under the action of a time-varying field has applications in areas like:
High harmonic generation
Plasma acceleration of particles
Plasma propulsion engine especially the Electrodeless plasma thruster
Quadrupole ion trap
Terahertz time-domain spectroscopy as a source of high energy THz radiation in laser-induced air plasmas
The quadrupole ion trap uses an electric field that varies linearly along its principal axes. This gives rise to a harmonic oscillator in the secular motion, with a trapping frequency set by the charge and mass of the ion, the peak amplitude and the frequency of the radiofrequency (rf) trapping field, and the ion-to-electrode distance. Note that a larger rf frequency lowers the trapping frequency.
The ponderomotive force also plays an important role in laser induced plasmas as a major density lowering factor.
Often, however, the assumed slow-time independence of the oscillating field amplitude is too restrictive, an example being the ultra-short, intense laser pulse-plasma (target) interaction. Here a new ponderomotive effect comes into play, the ponderomotive memory effect. The result is a weakening of the ponderomotive force and the generation of wake fields and ponderomotive streamers. In this case the fast-time averaged density of a Maxwellian plasma takes a modified form.
References
General
Citations
Journals
Electrodynamics
Force | Ponderomotive force | [
"Physics",
"Mathematics"
] | 1,011 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Electrodynamics",
"Wikipedia categories named after physical quantities",
"Matter",
"Dynamical systems"
] |
2,171,631 | https://en.wikipedia.org/wiki/Apple%20Design%20Awards | The Apple Design Awards (ADAs) is an event hosted by Apple Inc. at its annual Worldwide Developers Conference. The purpose of the event is to recognize the best and most innovative Macintosh and iOS software and hardware produced by independent developers, as well as the best and most creative uses of Apple's products. The ADAs are awarded in categories that vary each year. The awards have been presented annually since 1997. For the first two years of their existence, they were known as the "Human Interface Design Excellence Awards" (HIDE Awards).
Since 2003, the physical award given to those recognized at the awards event bore an Apple logo that would glow when touched. The trophy is an aluminum cube. The trophies were engineered and built by Sparkfactor Design.
Winners
1997
1998
1999
2000
2001
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
Student Scholarship Design Award Winners
Louis Harboe
Bryan Keller
Puck Meerburg
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024
References
External links
Apple Inc. industrial design
Design awards
Apple Worldwide Developers Conference
Video game awards
Computer-related awards | Apple Design Awards | [
"Engineering"
] | 224 | [
"Design",
"Design awards"
] |
2,172,079 | https://en.wikipedia.org/wiki/Rectified%20spirit | Rectified spirit, also known as neutral spirits, rectified alcohol or ethyl alcohol of agricultural origin, is highly concentrated ethanol that has been purified by means of repeated distillation in a process called rectification. In some countries, denatured alcohol or denatured rectified spirit may commonly be available as "rectified spirit", because in some countries (though not necessarily the same) the retail sale of rectified alcohol in its non-denatured form is prohibited.
The purity of rectified spirit has a practical limit of 97.2% ABV (95.6% by mass) when produced using conventional distillation processes, as a mixture of ethanol and water becomes a minimum-boiling azeotrope at this concentration. However, rectified spirit is typically distilled in continuous multi-column stills at 96–96.5% ABV and diluted as necessary. Ethanol is a commonly used medical alcohol; spiritus fortis is a medical term for ethanol solutions with 95% ABV.
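As a rough illustration of the dilution step, the sketch below assumes that volumes simply add, which ignores the volume contraction that actually occurs when ethanol and water are mixed, so it is only an approximation rather than a beverage-industry formula.

```python
# Approximate water addition for diluting rectified spirit (volumes assumed additive).

def water_to_add(volume_l, abv_start, abv_target):
    """Litres of water needed to bring volume_l litres of spirit from abv_start to abv_target."""
    final_volume = volume_l * abv_start / abv_target
    return final_volume - volume_l

# diluting 1 L of 96% ABV neutral spirit down to 40% ABV
print(round(water_to_add(1.0, 96.0, 40.0), 2), "L of water")   # about 1.4 L
```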
Neutral spirits can be produced from grains, corn, grapes, sugar beets, sugarcane, tubers, or other fermentable materials such as whey. In particular, large quantities of neutral alcohol are distilled from wine and by-products of wine production (pomace, lees). A product made from grain is "neutral grain spirit", while a spirit made from grapes is called "grape neutral spirit" or "vinous alcohol". These terms are commonly abbreviated as either GNS or NGS.
Neutral spirits are used in the production of several spirit drinks, such as blended whisky, cut brandy, most gins, some liqueurs and some bitters. As a consumer product, it is generally mixed with other beverages, either to create drinks like alcoholic punch or Jello shots or to substitute for other spirits, such as vodka or rum, in cocktails. It is also used to make home made liqueurs, such as limoncello or Crème de cassis, and in cooking because its high concentration of alcohol acts as a solvent to extract flavors. Rectified spirit is also used for medicinal tinctures and as a household solvent. It is sometimes consumed undiluted; however, because the alcohol is so high-proof, overconsumption can cause alcohol poisoning more quickly than more traditional distilled spirits.
Regional
United States
Neutral spirit is legally defined as spirit distilled from any material at or above 95% ABV (190 US proof) and bottled at or above 40% ABV. When the term is used in an informal context rather than as a term of U.S. law, any distilled spirit of high alcohol purity (e.g., 170 proof or higher) that does not contain added flavoring may be referred to as neutral alcohol. Prominent brands of neutral spirits sold in the U.S. include:
Brands made by Luxco:
Everclear
Crystal Clear
Golden Grain
Gem Clear
Graves Grain Alcohol
"Grain spirit" is a legal classification for neutral spirit that is distilled from fermented grain mash and stored in oak containers.
Retail availability
Availability of neutral spirit for retail purchase varies between states. States where consumer sales of high-ABV neutral spirit are prohibited include California, Florida, Hawaii, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Hampshire, Nevada, North Carolina, Pennsylvania, Iowa, and West Virginia. In Virginia, the purchase of neutral spirits requires a no-cost "Grain Alcohol Permit", issued "strictly for industrial, commercial, culinary or medicinal use". In 2017, Virginia approved the sale of up to 151 proof neutral spirits at its ABC stores without a permit. Pennsylvania sells 151 proof without a permit but requires one for 190 proof.
European Union
Legal definition
Under EU regulations, alcohol used in the production of some spirit drinks must be "ethyl alcohol of agricultural origin", which has to comply with the following requirements:
Organoleptic properties: no detectable taste other than that of the raw materials used in its production;
minimum alcoholic strength by volume: 96.0%;
maximum levels of residues do not exceed (in grams per hectolitre of 100% vol. alcohol):
acetic acid (total acidity): 1.5;
ethyl acetate (esters): 1.3;
acetaldehyde (aldehydes): 0.5;
2-methyl-1-propanol (higher alcohols): 0.5;
methanol: 30;
nitrogen (volatile bases containing nitrogen): 0.1;
dry extract: 1.5;
furfural: not detectable.
Germany
In Germany, rectified spirit is generically called Primasprit (colloquial) or, more technically, Neutralalkohol. It is available in pharmacies, bigger supermarkets, and East European markets. In the former East Germany, it was available in regular stores. Primasprit is most often used for making homemade liqueurs; other types of use are rare. Most of the Primasprit produced in Germany is made from grain and is, therefore, a neutral grain spirit.
Neutralalkohol by Lautergold and Weinhof Peschke both have an ABV of 96.6%.
Poland
Spirytus Rektyfikowany, made by Polmos, is the most notable brand, at 96% ABV, increasing to 96.5% ABV in fine and luxury versions. It is often claimed to be the strongest liquor in the world. Spirytus Delikatesowy by Polmos is at 95% ABV. Spirytus Luksusowy by Dębowa Polska is at 96.5% ABV.
Norway
The import and sale of spirits containing more than 60% alcohol by volume is prohibited, so only weaker grain spirits are permitted.
Latin America
Bolivia
Bolivia has its own form of rectified spirit made using sugar cane or coca leaves, called cocoroco, which is as high as 96% ABV.
Moonshine
A column still or spiral still can achieve a vapor alcohol content of 95% ABV.
Moonshine is usually distilled to 40% ABV, and seldom above 66% based on 48 samples. For example, conventional pot stills commonly produce 40% ABV, and top out between 60 and 80% after multiple distillations. However, ethanol can be dried to 95% ABV by heating 3Å molecular sieves such as 3Å zeolite.
See also
Distilled beverage
Vodka
Edward Adam
References
Distilled drinks
Vodkas | Rectified spirit | [
"Chemistry"
] | 1,368 | [
"Distillation",
"Distilled drinks"
] |
2,172,095 | https://en.wikipedia.org/wiki/Lithium%20perchlorate | Lithium perchlorate is the inorganic compound with the formula LiClO4. This white or colourless crystalline salt is noteworthy for its high solubility in many solvents. It exists both in anhydrous form and as a trihydrate.
Applications
Inorganic chemistry
Lithium perchlorate is used as a source of oxygen in some chemical oxygen generators. It decomposes at about 400 °C, yielding lithium chloride and oxygen:
LiClO4 → LiCl + 2 O2
Over 60% of the mass of the lithium perchlorate is released as oxygen. It has both the highest oxygen-to-weight and oxygen-to-volume ratios of all practical perchlorate salts, as well as a higher oxygen-to-volume ratio than liquid oxygen.
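The figure can be checked with a short calculation from standard atomic weights; the values below are rounded, so the result is approximate.

```python
# Mass fraction of oxygen released by LiClO4 -> LiCl + 2 O2 (rounded atomic weights).
Li, Cl, O = 6.94, 35.45, 16.00
m_LiClO4 = Li + Cl + 4 * O        # about 106.4 g/mol
m_O2 = 2 * (2 * O)                # 64.0 g of O2 released per mole of LiClO4
print(round(100 * m_O2 / m_LiClO4, 1), "% of the salt's mass is released as oxygen")  # ~60.2
```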
Lithium perchlorate is used as an oxidizer in some experimental solid rocket propellants, and to produce red colored flame in pyrotechnic compositions.
Organic chemistry
LiClO4 is highly soluble in organic solvents, even diethyl ether. Such solutions are employed in Diels–Alder reactions, where it is proposed that the Lewis acidic Li+ binds to Lewis basic sites on the dienophile, thereby accelerating the reaction.
Lithium perchlorate is also used as a co-catalyst in the coupling of α,β-unsaturated carbonyls with aldehydes, also known as the Baylis–Hillman reaction.
Solid lithium perchlorate is found to be a mild and efficient Lewis acid for promoting cyanosilylation of carbonyl compounds under neutral conditions.
Batteries
Lithium perchlorate is also used as an electrolyte salt in lithium-ion batteries. Lithium perchlorate is chosen over alternative salts such as lithium hexafluorophosphate or lithium tetrafluoroborate when its superior electrical impedance, conductivity, hygroscopicity, and anodic stability properties are of importance to the specific application. However, these beneficial properties are often overshadowed by the electrolyte's strong oxidizing properties, making the electrolyte reactive toward its solvent at high temperatures and/or high current loads. Due to these hazards the battery is often considered unfit for industrial applications.
Biochemistry
Concentrated solutions of lithium perchlorate (4.5 mol/L) are used as a chaotropic agent to denature proteins.
Production
Lithium perchlorate can be manufactured by reaction of sodium perchlorate with lithium chloride. It can also be prepared by electrolysis of lithium chlorate at 200 mA/cm2 at temperatures above 20 °C.
Safety
Perchlorates often give explosive mixtures with organic compounds, finely divided metals, sulfur, and other reducing agents.
References
Further reading
External links
WebBook page for LiClO4
Perchlorates
Lithium salts
Oxidizing agents
Electrolytes | Lithium perchlorate | [
"Chemistry"
] | 572 | [
"Lithium salts",
"Electrolytes",
"Redox",
"Perchlorates",
"Oxidizing agents",
"Salts",
"Electrochemistry"
] |
2,172,106 | https://en.wikipedia.org/wiki/Silver%20perchlorate | Silver perchlorate is the chemical compound with the formula AgClO4. This white solid forms a monohydrate and is mildly deliquescent. It is a useful source of the Ag+ ion, although the presence of perchlorate presents risks. It is used as a catalyst in organic chemistry.
Production
Silver perchlorate is created by heating a mixture of perchloric acid with silver nitrate.
Alternatively, it can be prepared by the reaction between barium perchlorate and silver sulfate, or from the reaction of perchloric acid with silver oxide.
Solubility
Silver perchlorate is noteworthy for its solubility in aromatic solvents such as benzene (52.8 g/L) and toluene (1010 g/L). In these solvents, the silver cation binds to the arene, as has been demonstrated by extensive crystallographic studies on crystals obtained from such solutions. Its solubility in water is extremely high, up to 500 g per 100 mL water.
Related reagents
Similar to silver nitrate, silver perchlorate is an effective reagent for replacing halide ligands with perchlorate, which is a weakly or non-coordinating anion. The use of silver perchlorate in chemical synthesis has declined due to concerns about the explosiveness of perchlorate salts. Other silver reagents are silver tetrafluoroborate, and the related silver trifluoromethanesulfonate and silver hexafluorophosphate.
References
Perchlorates
Silver compounds
Deliquescent materials
Oxidizing agents | Silver perchlorate | [
"Chemistry"
] | 327 | [
"Redox",
"Perchlorates",
"Oxidizing agents",
"Salts",
"Deliquescent materials"
] |
2,172,175 | https://en.wikipedia.org/wiki/Planogram | Planograms, also known as plano-grams, plan-o-grams, schematics, POGs or simply plans, are visual representations of a store's products or services on display. They are considered a tool for visual merchandising. According to the Merriam-Webster Dictionary, a planogram is "a schematic drawing or plan for displaying merchandise in a store so as to maximize sales." The effectiveness of the planogram can be measured by the sales volume generated from the specific area being diagrammed.
Overview
Planograms are predominantly used in retail businesses. A planogram defines the location and quantity of products to be placed on display, often with detailed specifications on the number of product facings and spacing; shelf layout, height, width, slant and depth and necessary or recommended chiller conditions (e.g. fresh meat versus white wine). Any other information deemed necessary or useful can be included. The rules and theories for creating planograms are set under the terms of merchandising. For example, given limited shelf space, a vendor may prefer to provide a wide assortment of products, or may limit the assortment but increase the facings of each product to avoid stock-outs.
Placement methods
Visual
Visual product placement is supported by different theories, including horizontal, vertical, and block placement. Horizontal product placement increases the concentration of a certain article. Research studies suggest that a product's position relative to customer eye level directly correlates with its sales; this also depends on the customer's distance from the unit. Vertical product placement puts products on more than one shelf level to achieve a sufficiently large area of placement space. Similar products are placed in blocks.
A planogram can be compared to a book. The store is the book and its individual modules represent the pages. The customer gradually “reads” individual modules, automatically proceeding from left to right and from top to bottom, as if reading a book. This principle underlies a majority of the rules for displaying goods. The rules say that goods should be arranged on a shelf from the least to the most expensive; goods may also be arranged in the reverse order, depending on the kind of goods the dealer wishes to promote. This is a point of difference between dealers in cheap goods and dealers in luxury goods.
Commercial
Commercial placement is determined by both market share placement and margin placement.
References
External links
Retail processes and techniques
Diagrams
Retail store elements | Planogram | [
"Technology"
] | 499 | [
"Components",
"Retail store elements"
] |
2,172,265 | https://en.wikipedia.org/wiki/Sodium%20periodate | Sodium periodate is an inorganic salt, composed of a sodium cation and the periodate anion. It may also be regarded as the sodium salt of periodic acid. Like many periodates, it can exist in two different forms: sodium metaperiodate (formula NaIO4) and sodium orthoperiodate (normally Na2H3IO6, but sometimes the fully reacted salt Na5IO6). Both salts are useful oxidising agents.
Preparation
Classically, periodate was produced in the form of sodium hydrogen periodate. This is commercially available, but can also be produced by the oxidation of iodates with chlorine and sodium hydroxide, or, similarly, from iodides by oxidation with bromine and sodium hydroxide.
Modern industrial-scale production involves the electrochemical oxidation of iodates on a lead dioxide (PbO2) anode.
Sodium metaperiodate can be prepared by the dehydration of sodium hydrogen periodate with nitric acid.
Structure
Sodium metaperiodate (NaIO4) forms tetragonal crystals (space group I41/a) consisting of slightly distorted ions with average I–O bond distances of 1.775 Å; the Na+ ions are surrounded by 8 oxygen atoms at distances of 2.54 and 2.60 Å.
Sodium hydrogen periodate (Na2H3IO6) forms orthorhombic crystals (space group Pnnm). Iodine and sodium atoms are both surrounded by an octahedral arrangement of 6 oxygen atoms; however the NaO6 octahedron is strongly distorted. IO6 and NaO6 groups are linked via common vertices and edges.
Powder diffraction indicates that Na5IO6 crystallises in the monoclinic system (space group C2/m).
Uses
Sodium periodate can be used in solution to open saccharide rings between vicinal diols leaving two aldehyde groups. This process is often used in labeling saccharides with fluorescent molecules or other tags such as biotin. Because the process requires vicinal diols, periodate oxidation is often used to selectively label the 3′-ends of RNA (ribose has vicinal diols) instead of DNA as deoxyribose does not have vicinal diols.
NaIO4 is used in organic chemistry to cleave diols to produce two aldehydes.
In 2013 the US Army announced that it would replace environmentally harmful chemicals barium nitrate and potassium perchlorate with sodium metaperiodate for use in their tracer ammunition.
See also
lead tetraacetate - also effective for diol cleavage via the Criegee oxidation
References
See Fatiadi, Synthesis (1974) 229–272 for a review of periodate chemistry.
Periodates
Sodium compounds
Oxidizing agents | Sodium periodate | [
"Chemistry"
] | 585 | [
"Periodates",
"Redox",
"Oxidizing agents"
] |
2,172,275 | https://en.wikipedia.org/wiki/Potassium%20periodate | Potassium periodate is an inorganic salt with the molecular formula KIO4. It is composed of a potassium cation and a periodate anion and may also be regarded as the potassium salt of periodic acid. Note that the pronunciation is per-iodate, not period-ate.
Unlike other common periodates, such as sodium periodate and periodic acid, it is only available in the metaperiodate form; the corresponding potassium orthoperiodate (K5IO6) has never been reported.
Preparation
Potassium periodate can be prepared by the oxidation of an aqueous solution of potassium iodate by chlorine and potassium hydroxide.
It can also be generated by the electrochemical oxidation of potassium iodate, however the low solubility of KIO3 makes this approach of limited use.
Chemical properties
Potassium periodate decomposes at 582 °C to form potassium iodate and oxygen.
The low solubility of KIO4 makes it useful for the determination of potassium and cerium.
It is slightly soluble in water (one of the less soluble potassium salts, owing to its large anion), giving rise to a solution that is slightly alkaline. On heating (especially with manganese(IV) oxide as a catalyst), it decomposes to form potassium iodate, releasing oxygen gas.
KIO4 forms tetragonal crystals of the Scheelite type (space group I41/a).
References
Periodates
Potassium compounds
Oxidizing agents | Potassium periodate | [
"Chemistry"
] | 304 | [
"Periodates",
"Redox",
"Oxidizing agents"
] |
2,172,567 | https://en.wikipedia.org/wiki/Prodikeys | Prodikeys is a music and computer keyboard. It was created by Singaporean audio company Creative Technology. So far, three different versions of Prodikeys have been launched: Creative Prodikeys, Creative Prodikeys DM and Creative Prodikeys PC-MIDI.
The products are computer keyboards extended by a 37-key mini-sized MIDI keyboard at the bottom. The MIDI keyboard is placed under a detachable palm cover.
Prodikeys can also be used as a MIDI controller for third-party MIDI software. It supports Windows XP, Windows 2000 and Linux, but is incompatible with newer versions of Windows or Mac OS.
Prodikeys Software
The products are shipped with additional software, which includes the following features:
EasyNotes
A tool for learning to play melodies on the keyboard. It comes with an included song library which can be extended with MIDI files by the user. EasyNotes supports music in SEQ and MIDI formats.
FunMix
Lets the user create and record their own music with pre-arranged mixes.
HotKeys Manager
A tool to customize the keyboard's hotkey functions and change access to the software suite.
Mini Keyboard
Keyboard software with a sound library of more than 100 instrument sounds such as piano, flute, guitar and drums.
Prodikeys Launcher
A launcher for the software suite. It also has an interactive tutorial.
Prodikeys DM
The Prodikeys DM has one single mini-DIN connector for the PS/2 port and is therefore detected as a regular typing keyboard. The included Windows software communicates with the keyboard driver in order to send and receive MIDI data over the PS/2 line.
This protocol has been partly reverse-engineered, making it possible to use the Prodikeys DM on a regular USB port using an Arduino microcontroller as an adaptor.
References
Singaporean brands
Keyboard instruments
Computer keyboards | Prodikeys | [
"Technology"
] | 392 | [
"Computing stubs",
"Computer hardware stubs"
] |
2,172,717 | https://en.wikipedia.org/wiki/Kingdom%20of%20Crystal | The Kingdom of Crystal (Swedish: Glasriket, The glass realm) is a geographical area today containing a total of 14 glassworks in the municipalities of Emmaboda, Nybro, Uppvidinge, and Lessebo in southern Sweden. The two municipalities Emmaboda and Nybro belong to Kalmar County and Lessebo and Uppvidinge belong to Kronoberg County. The area is part of the province Småland, and Nybro is considered the capital of the Kingdom of Crystal area. The Kingdom of Crystal is known for its handblown glass with a continuous story since 1742. The glassworks have become part of the culture of Sweden; examples can be found in many Swedish homes, recognisable by a small sticker at the bottom with the name Orrefors, Kosta Boda, etc. The height of glass production was the end of the 19th century during which 77 glass factories were established with more than half of them situated in Småland.
When touring the forested province of Småland in Sweden, it is normal to visit at least one of the glassworks. The larger ones have adjacent museums and are open for visitors to see the glass blowing hall, normally looking down from a platform. Food is available as well as shopping for various glass products such as glasses, bowls, vases and unique glass ornaments. The Kingdom of Crystal is a popular and a well known tourist destination. The Regional Council of Kalmar County conducts a study every four years to survey the Swedish public regarding their knowledge and awareness about Kalmar County, its places to visit and tourist attractions. The survey was conducted by Kantar Sifo and included 1500 respondents aged 20–79 years old. The survey results found that the Kingdom of Crystal area was the most visited tourist attraction in the Kalmar County attracting a wide range of people. One in five aged 40–79 years old had visited the area in the last five years. The visitors had no significant difference in terms of geographical belonging, income or gender. The only difference that could be noted was that the group visiting the area tended to be of older age rather than young. Among the respondents aged 40–65+ years old only 3% claimed they never heard of the Kingdom of Crystal.
The more notable are Orrefors Glasbruk, with the adjacent National School of Glass and Kosta Boda. Each one of the glassworks have distinctive design traditions, character and atmosphere.
Companies
Glassworks
Hitorp Glasbruk
Målerås
Kosta Boda
Orrefors
Sea
Nybro
Sandvik
Bergdala
Rosdala
Johansfors
Lindshammar
Strömbergshyttan
Pukeberg (founded 1871)
Åfors
Transjö hytta
Smaller glasswork-related companies
In the Kingdom of Crystal, there are a large number of small businesses in the glass industry, which are often spin-offs from some of the larger companies. Activities of these businesses include:
Studio glass
Glass engraving
Glass repair
Glass painting
Design
Training
Riksglasskolan in Orrefors
Glasskolan in Kosta (Secondary school, the Nordic line, Commissioned, Vocational Education)
References
External links
Official website in Swedish, English and German
Glassmaking companies of Sweden
Småland
Museums in Kalmar County
Glass museums and galleries
Art museums and galleries in Sweden
Museums in Kronoberg County | Kingdom of Crystal | [
"Materials_science",
"Engineering"
] | 686 | [
"Glass engineering and science",
"Glass museums and galleries"
] |
2,172,777 | https://en.wikipedia.org/wiki/David%20Masser | David William Masser (born 8 November 1948) is Professor Emeritus in the Department of Mathematics and Computer Science at the University of Basel. He is known for his work in transcendental number theory, Diophantine approximation, and Diophantine geometry. With Joseph Oesterlé in 1985, Masser formulated the abc conjecture, which has been called "the most important unsolved problem in Diophantine analysis".
Early life and education
Masser was born on 8 November 1948 in London, England. He graduated from Trinity College, Cambridge with a B.A. (Hons) in 1970. In 1974, he obtained his M.A. and Ph.D. at the University of Cambridge, with a doctoral thesis under the supervision of Alan Baker titled Elliptic Functions and Transcendence.
Career
Masser was a Lecturer at the University of Nottingham from 1973 to 1975, before spending the 1975–1976 year as a Research Fellow of Trinity College at the University of Cambridge. He returned to the University of Nottingham to serve as a Lecturer from 1976 to 1979 and then as a Reader from 1979 to 1983. He was a professor at the University of Michigan from 1983 to 1992. He then moved to the Mathematics Institute at the University of Basel and became emeritus there in 2014.
Research
Masser's research focuses on transcendental number theory, Diophantine approximation, and Diophantine geometry. The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves.
Awards
Masser was an invited speaker at the International Congress of Mathematicians in Warsaw in 1983. In 1991, he received the Humboldt Prize. He was elected as a Fellow of the Royal Society in 2005. In 2014, he was elected as a Member of the Academia Europaea.
See also
Analytic subgroup theorem
Bézout's theorem
Zilber–Pink conjecture
References
1948 births
Living people
20th-century British mathematicians
21st-century British mathematicians
Number theorists
Alumni of Trinity College, Cambridge
Fellows of the Royal Society
University of Michigan faculty
Academic staff of the University of Basel
Abc conjecture
Members of Academia Europaea | David Masser | [
"Mathematics"
] | 434 | [
"Abc conjecture",
"Number theorists",
"Number theory"
] |
2,172,795 | https://en.wikipedia.org/wiki/MediaPortal | MediaPortal is an open-source media player and digital video recorder software project, often considered an alternative to Windows Media Center. It provides a 10-foot user interface for performing typical PVR/TiVo functionality, including playing, pausing, and recording live TV; playing DVDs, videos, and music; viewing pictures; and other functions. Plugins allow it to perform additional tasks, such as watching online video, listening to music from online services such as Last.fm, and launching other applications such as games. It interfaces with the hardware commonly found in HTPCs, such as TV tuners, infrared receivers, and LCD displays.
The MediaPortal source code was initially forked from XBMC (now Kodi), though it has been almost completely re-written since then. MediaPortal is designed specifically for Microsoft Windows, unlike most other open-source media center programs such as MythTV and Kodi, which are usually cross-platform.
Features
DirectX GUI
Video Hardware Acceleration
VMR / EVR on Windows Vista / 7
TV / Radio (DVB-S, DVB-S2, DVB-T, DVB-C, Analog television (Common Interface, DVB radio, DVB EPG, Teletext, etc...)
IPTV
Recording, pause and time shifting of TV and Radio broadcasts
Music player
Video/DVD player
Picture player
Internet Streams
Integrated Weather Forecasts
Built-in RSS reader
Metadata web scraping from TheTVDB and The Movie Database
Plug ins
Skins
Graphical user interfaces
Control
MediaPortal can be controlled by any input device that is supported by the Windows operating system.
PC Remote
Keyboard / Mouse
Gamepad
Kinect
Wii Remote
Android / iOS/ WebOS / S60 handset devices
Television
MediaPortal uses its own TV-Server to allow setting up one central server with one or more TV cards. All TV-related tasks are handled by the server and streamed over the network to one or more clients. Clients can then install the MediaPortal Client software and use the TV-Server to watch live or recorded TV, schedule recordings, and view and search EPG data over the network. Since version 1.0.1, the client plugin of the TV-Server has replaced the default built-in TV Engine.
Even without a network (i.e. a singleseat installation), the TV-Server treats the PC as both the server and the client.
The TV-Server supports watching and recording TV at the same time with only one DVB/ATSC TV card, provided the channels are on the same transponder (multiplex).
Broadcast Driver Architecture is used to support as many TV cards as possible.
The major brands of cards, like digital-everywhere, Hauppauge, Pinnacle, TechnoTrend and TechniSat, including analog cards, provide BDA drivers for their cards.
Video/DVD player
The video player of MediaPortal is a DirectShow player, so any codec/filter can be used. MediaPortal uses the codec from LAV Filters by default, but the codec can be changed to any installed one, such as ffdshow, PowerDVD, CoreAVC, Nvidia PureVideo, etc.
MediaPortal also supports video post-processing with any installed codec.
Due to the DirectShow player implementation, MediaPortal can play all media files that can be played on Windows.
Music player
The default internal music player uses the BASS Engine with the BASS audio library. The alternative player is the internal DirectShow player.
With the BASS Engine MediaPortal supports visualizations from Windows Media Visualizations, Winamp Visualizations including MilkDrop, Sonic and Soundspectrum G-Force.
Picture player/organizer
Digital pictures/photos can be browsed, managed and played as slide shows with background music or radio. The picture player uses different transitions or the Ken Burns effect between each picture.
Exif data are used to rotate the pictures automatically, but the rotation can be done manually too. Zooming of pictures is also possible.
Online videos
OnlineVideos is a plugin for MediaPortal to integrate seamless online video support into MediaPortal.
OnlineVideos supports almost 200 sites/channels in a variety of languages and genres, such as YouTube, iTunes Movie Trailers, Discovery Channel, etc.
Series
MP-TVSeries is a popular TV Series plug-in for MediaPortal. It focuses on managing the user's TV Series library.
The MP-TVSeries plugin will scan the hard drive (including network and removable drives) for video files; it then analyzes them by their path structures to determine whether they are TV shows. If the files are recognized, the plugin will go online and retrieve information about them. You can then browse, manage and play your episodes from inside MediaPortal in a graphical layout.
The information and fan art it retrieves is coming from TheTVDB.com which allows any user to add and update information. The extension will automatically update any information when new episodes/files are added.
Movies
Moving Pictures is a plug-in that focuses on ease of use and flexibility. Point it to your movies collection and Moving Pictures will automatically load media rich details about your movies as quickly as possible with as little user interaction as possible. Once imported you can browse your collection via an easy to use but highly customizable interface.
Ambilight
AtmoLight is a plug-in that makes it possible to use all sorts of Ambilight solutions which currently are:
AmbiBox
AtmoOrb
AtmoWin
BobLight
Hue
Hyperion
It also allows easy expansion for any future Ambilight solutions.
Hardware
Hardware, SD single tuner
For standard definition resolution playback and recording with MPEG-2 video compression using a single TV tuner:
1.4 GHz Intel Pentium III or equivalent processor
256 MB (256 MiB) of system RAM
Hardware, HDTV
For HDTV (720p/1080i/1080p) playback/recording, recording from multiple tuners, and playback of MPEG-4 AVC (H.264) video:
2.8 GHz Intel Pentium 4 or equivalent processor
512 MB of system RAM
Display and storage, SD and HD
DirectX 9.0 hardware-accelerated GPU with at least 128MB of video memory
Graphics chips which support this and are compatible with MediaPortal:
ATI Radeon series 9600 (or above)
NVIDIA GeForce 6600 (or above), GeForce FX 5200 (or above) and nForce 6100 series (or above)
Intel Extreme Graphics 2 (integrated i865G)
Matrox Parhelia
SiS Xabre series
XGI Volari V Series and XP Series
200 MB free harddisk-drive space for the MediaPortal software
12 GB or more free harddisk-drive space for Hardware Encoding or Digital TV based TV cards for timeshifting purposes
Operating system and software
Supported operating systems - version 1.7.1
Windows Media Center Edition 2005 with Service Pack 3
Windows Vista 32 and 64-bit with Service Pack 2 or later
Windows 7 32 and 64-bit
Windows 8 32 and 64-bit (as of v1.3.0)
Windows 8.1 32 and 64-bit (as of v1.5.0)
As of version 1.7, MediaPortal is not officially supported on Windows XP
It will install, but warn the user of the unsupported status while doing so.
Software prerequisites - version 1.7.1
Microsoft .NET Framework 4.0 - with the .NET 3.5 features enabled, (as of v1.6.0)
DirectX 9.0c
Windows Media Player 11 (Only required on XP SP3, Windows Vista comes with WMP11 and Windows 7 comes with WMP12 already)
See also
Media PC
Windows XP Media Center Edition (MCE)
Windows Media Center Extender
Windows Media Connect
Windows Media Player
XBMC – the GPL open source software that MediaPortal was originally based upon.
Comparison of PVR software packages
Microsoft PlaysForSure
2Wire MediaPortal
List of codecs
List of free television software
References
External links
2004 software
Free television software
Video recording software
Free video software
Television technology
Television time shifting technology
Software forks
Internet television software
Windows-only free software | MediaPortal | [
"Technology"
] | 1,710 | [
"Information and communications technology",
"Television technology"
] |
2,172,840 | https://en.wikipedia.org/wiki/Luby%20transform%20code | In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002. Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is in employing a particularly simple algorithm based on the exclusive or operation (⊕) to encode and decode the message.
LT codes are rateless because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are erasure correcting codes because they can be used to transmit digital data reliably on an erasure channel.
The next generation beyond LT codes are Raptor codes (see for example IETF RFC 5053 or IETF RFC 6330), which have linear time encoding and decoding. Raptor codes are fundamentally based on LT codes, i.e., encoding for Raptor codes uses two encoding stages, where the second stage is LT encoding. Similarly, decoding with Raptor codes primarily relies upon LT decoding, but LT decoding is intermixed with more advanced decoding techniques. The RaptorQ code specified in IETF RFC 6330, which is the most advanced fountain code, has vastly superior decoding probabilities and performance compared to using only an LT code.
Why use an LT code?
The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication.
The sender encodes and sends a packet of information.
The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again.
This two-way process continues until all the packets in the message have been transferred successfully.
Certain networks, such as ones used for cellular wireless broadcasting, do not have a feedback channel. Applications on these networks still require reliability. Fountain codes in general, and LT codes in particular, get around this problem by adopting an essentially one-way communication protocol.
The sender encodes and sends packet after packet of information.
The receiver evaluates each packet as it is received. If there is an error, the erroneous packet is discarded. Otherwise the packet is saved as a piece of the message.
Eventually the receiver has enough valid packets to reconstruct the entire message. When the entire message has been received successfully the receiver signals that transmission is complete.
As mentioned above, the RaptorQ code specified in IETF RFC 6330 outperforms an LT code in practice.
LT encoding
The encoding process begins by dividing the uncoded message into n blocks of roughly equal length. Encoded packets are then produced with the help of a pseudorandom number generator.
The degree d, 1 ≤ d ≤ n, of the next packet is chosen at random.
Exactly d blocks from the message are randomly chosen.
If Mi is the i-th block of the message, the data portion of the next packet is computed as Mi1 ⊕ Mi2 ⊕ ... ⊕ Mid, where {i1, i2, ..., id} are the randomly chosen indices for the d blocks included in this packet.
A prefix is appended to the encoded packet, defining how many blocks n are in the message, how many blocks d have been exclusive-ored into the data portion of this packet, and the list of indices {i1, i2, ..., id}.
Finally, some form of error-detecting code (perhaps as simple as a cyclic redundancy check) is applied to the packet, and the packet is transmitted.
This process continues until the receiver signals that the message has been received and successfully decoded.
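To make the procedure concrete, the following is a minimal Python sketch of the encoder; the uniform degree choice, the 16-byte block size, and the (n, indices, data) tuple standing in for the packet prefix are simplifying assumptions of this sketch (a practical encoder draws d from a soliton distribution, as discussed under Optimization below, and appends the error-detecting code).

import os
import random

def lt_encode_packet(blocks, rng):
    """Produce one LT-encoded packet from the list of message blocks."""
    n = len(blocks)
    d = rng.randint(1, n)              # degree d, 1 <= d <= n (uniform here for simplicity)
    indices = rng.sample(range(n), d)  # exactly d distinct blocks chosen at random
    data = bytes(len(blocks[0]))       # all-zero bytes: the XOR identity
    for i in indices:
        data = bytes(a ^ b for a, b in zip(data, blocks[i]))
    return n, indices, data            # stands in for the prefix plus data portion

# Example: divide a message into n equal-length blocks and emit one packet.
message = os.urandom(4 * 16)
blocks = [message[i:i + 16] for i in range(0, len(message), 16)]
packet = lt_encode_packet(blocks, random.Random(42))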
LT decoding
The decoding process uses the "exclusive or" operation to retrieve the encoded message.
If the current packet isn't clean, or if it replicates a packet that has already been processed, the current packet is discarded.
If the current cleanly received packet is of degree d > 1, it is first processed against all the fully decoded blocks in the message queuing area (as described more fully in the next step), then stored in a buffer area if its reduced degree is greater than 1.
When a new, clean packet of degree d = 1 (block Mi) is received (or the degree of the current packet is reduced to 1 by the preceding step), it is moved to the message queueing area, and then matched against all the packets of degree d > 1 residing in the buffer. It is exclusive-ored into the data portion of any buffered packet that was encoded using Mi, the degree of that matching packet is decremented, and the list of indices for that packet is adjusted to reflect the application of Mi.
When this process unlocks a block of degree d = 2 in the buffer, that block is reduced to degree 1 and is in its turn moved to the message queueing area, and then processed against the packets remaining in the buffer.
When all n blocks of the message have been moved to the message queueing area, the receiver signals the transmitter that the message has been successfully decoded.
This decoding procedure works because A ⊕ A = 0 for any bit string A. After d − 1 distinct blocks have been exclusive-ored into a packet of degree d, the original unencoded content of the unmatched block is all that remains. In symbols we have (Mi1 ⊕ ... ⊕ Mid) ⊕ (Mi1 ⊕ ... ⊕ Mik−1 ⊕ Mik+1 ⊕ ... ⊕ Mid) = Mik.
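A rough Python sketch of this peeling process, under the same assumptions as the encoder sketch above (here each packet is just an (indices, data) pair and the block count n is passed separately; error detection and duplicate filtering are omitted):

def lt_decode(packets, n):
    """Peeling decoder; packets is a list of (indices, data) pairs."""
    decoded = [None] * n  # the message queueing area
    buffer = [(set(idx), bytearray(data)) for idx, data in packets]
    progress = True
    while progress and any(block is None for block in decoded):
        progress = False
        for indices, data in buffer:
            # Process this packet against all blocks already decoded.
            for i in [i for i in indices if decoded[i] is not None]:
                data[:] = bytes(a ^ b for a, b in zip(data, decoded[i]))
                indices.discard(i)
            # A packet of (reduced) degree 1 releases one message block.
            if len(indices) == 1:
                i = indices.pop()
                if decoded[i] is None:
                    decoded[i] = bytes(data)
                    progress = True
    return decoded  # any entry left as None could not be recovered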
Variations
Several variations of the encoding and decoding processes described above are possible. For instance, instead of prefixing each packet with a list of the actual message block indices {i1, i2, ..., id}, the encoder might simply send a short "key" that serves as the seed for the pseudorandom number generator (PRNG) or index table used to construct the list of indices. Since a receiver equipped with the same PRNG or index table can reliably recreate the "random" list of indices from this seed, the decoding process can be completed successfully. Alternatively, by combining a simple LT code of low average degree with a robust error-correcting code, a raptor code can be constructed that will outperform an optimized LT code in practice.
Optimization of LT codes
There is only one parameter that can be used to optimize a straight LT code: the degree distribution function (described as a pseudorandom number generator for the degree d in the LT encoding section above). In practice the other "random" numbers (the list of indices { i1, i2, ..., id } ) are invariably taken from a uniform distribution on [0, n), where n is the number of blocks into which the message has been divided.
Luby himself discussed the "ideal soliton distribution", defined by p(1) = 1/n and p(d) = 1/(d(d − 1)) for d = 2, 3, ..., n.
This degree distribution theoretically minimizes the expected number of redundant code words that will be sent before the decoding process can be completed. However the ideal soliton distribution does not work well in practice because any fluctuation around the expected behavior makes it likely that at some step in the decoding process there will be no available packet of (reduced) degree 1 so decoding will fail. Furthermore, some of the original blocks will not be xor-ed into any of the transmission packets. Therefore, in practice, a modified distribution, the "robust soliton distribution", is substituted for the ideal distribution. The effect of the modification is, generally, to produce more packets of very small degree (around 1) and fewer packets of degree greater than 1, except for a spike of packets at a fairly large quantity chosen to ensure that all original blocks will be included in some packet.
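A sketch of both distributions in Python follows; the constants c and delta in the robust variant are the usual tunable parameters, and the indexing convention (a list whose entry at position d is the probability of degree d) is an assumption of this sketch.

import math

def ideal_soliton(n):
    """Ideal soliton: p(1) = 1/n, p(d) = 1/(d(d - 1)) for d = 2..n."""
    return [0.0, 1.0 / n] + [1.0 / (d * (d - 1)) for d in range(2, n + 1)]

def robust_soliton(n, c=0.1, delta=0.5):
    """Robust soliton: extra mass at small degrees plus a spike at d = n/R."""
    R = c * math.log(n / delta) * math.sqrt(n)
    spike = round(n / R)                       # location of the spike
    tau = [0.0] * (n + 1)
    for d in range(1, spike):
        tau[d] = R / (d * n)                   # boost very small degrees
    tau[spike] = R * math.log(R / delta) / n   # the spike itself
    rho = ideal_soliton(n)
    Z = sum(rho) + sum(tau)                    # normalization constant
    return [(r + t) / Z for r, t in zip(rho, tau)]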
See also
Online codes
Raptor codes
Tornado codes
Notes and references
External links
"Implementation of Luby transform Code in C#"
"Implementation of Fountain Codes for storage in C/C++"
"Introduction to fountain codes: LT Codes with Python"
Coding theory | Luby transform code | [
"Mathematics"
] | 1,659 | [
"Discrete mathematics",
"Coding theory"
] |
2,172,901 | https://en.wikipedia.org/wiki/John%20Irving%20Bentley | John Irving Bentley (15 April 1874 – 5 December 1966) was a physician who burned to death at the age of 92 in the bathroom of his house in Coudersport, Pennsylvania. His death was allegedly caused by spontaneous human combustion.
Discovery of Bentley's remains
Bentley was last seen alive on 4 December 1966, when friends visiting his home wished him good night at about 9 p.m. The following morning, meter reader Don Gosnell let himself into Bentley's house, as he had permission to do due to Bentley's infirmity, and went to the basement to check the meter. While in the basement, Gosnell noticed a strange smell and a light blue smoke. He described the smoke as "somewhat sweet, like starting up a new oil-burning central heating system". On the ground was a neat pile of ash, about in height. The floor underneath the ash was unmarked. Had he looked up, he would have seen a hole about a foot long square in the floorboards above. Intrigued, he went upstairs to investigate. The bedroom was smoky, and, in the bathroom, he found Bentley's cremated remains.
All that was left of Bentley was the lower half of his right leg with the slipper still on it. The rest of his body had been reduced to a pile of ashes in the basement below. His walker lay across the hole in the floor generated by the fire. The rubber tips on it were still intact and the nearby bathtub was barely scorched. Gosnell ran from the building to get help. He reached the gas company office screaming "Doctor Bentley's burned up!" to his colleagues. They later stated that he looked as white as a sheet.
Theories
The first theory put forward was that Bentley had set himself on fire with his pipe, but his pipe was still on its stand by the bed in the next room. Perplexed, the coroner could only record a verdict of 'death by asphyxiation and 90 percent burning of the body.'
Joe Nickell, in his book Secrets of the Supernatural, gives an account of this event that he got from Larry E. Arnold's article "The Flaming Fate of Dr. John Irving Bentley", printed in the Fall 1976 issue of Pursuit. Nickell mentions that the hole in the bathroom floor measured 2½ feet by 4 feet, and details the remains as being Bentley's lower leg burned off at the knee.
Nickell mentions that Bentley's robe was found smoldering in the bathtub next to the hole, and that the broken remains of "what was apparently a water pitcher" were found in the toilet; he adds that the doctor had dropped hot ashes from his pipe onto his clothing previously (which "were dotted with burn spots from previous incidents"), and that he kept wooden matches in his pockets which could transform a small ember into a blazing flame.
Nickell believes that Bentley woke up to find his clothes on fire, walked to the bathroom, and passed out before he could extinguish the flames. Then, he suggests that the burning clothes ignited the flammable linoleum floor, and cool air drawn from the basement in what is known as "the stack effect" kept the fire burning hotly.
See also
Spontaneous human combustion
References
Nickell, Joe (2001). Real-Life X-Files: Investigating the Paranormal.
Hitching, Francis (1978). The World Atlas of Mysteries.
1874 births
1966 deaths
People from Potter County, Pennsylvania
Deaths from fire in the United States
Spontaneous human combustion | John Irving Bentley | [
"Chemistry"
] | 722 | [
"Combustion",
"Spontaneous human combustion"
] |
2,172,915 | https://en.wikipedia.org/wiki/Scattering%20channel | In scattering theory, a scattering channel is a quantum state of the colliding system before or after the collision. The Hilbert space spanned by the states before collision (in states) is equal to the space spanned by the states after collision (out states); both are Fock spaces if there is a mass gap. This is the reason why the S matrix, which maps the in states onto the out states, must be unitary. Scattering channels are also called scattering asymptotes. The Møller operators map the scattering channels onto the corresponding states that solve the Schrödinger equation with the interaction Hamiltonian taken into account. The Møller operators are isometric.
See also
LSZ formalism
Scattering | Scattering channel | [
"Physics",
"Chemistry",
"Materials_science"
] | 150 | [
"Scattering stubs",
"Quantum mechanics",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"Quantum physics stubs"
] |
2,173,021 | https://en.wikipedia.org/wiki/Lamport%20timestamp | The Lamport timestamp algorithm is a simple logical clock algorithm used to determine the order of events in a distributed computer system. As different nodes or processes will typically not be perfectly synchronized, this algorithm is used to provide a partial ordering of events with minimal overhead, and conceptually provide a starting point for the more advanced vector clock method. The algorithm is named after its creator, Leslie Lamport.
Distributed algorithms such as resource synchronization often depend on some method of ordering events to function. For example, consider a system with two processes and a disk. The processes send messages to each other, and also send messages to the disk requesting access. The disk grants access in the order the messages were received. For example, process A sends a message to the disk requesting write access, and then sends a read instruction message to process B. Process B receives the message, and as a result sends its own read request message to the disk. If there is a timing delay causing the disk to receive both messages at the same time, it can determine which message happened-before the other: a happens-before b if one can get from a to b by a sequence of moves of two types: moving forward while remaining in the same process, and following a message from its sending to its reception. A logical clock algorithm provides a mechanism to determine facts about the order of such events. Note that if two events happen in different processes that do not exchange messages directly or indirectly via third-party processes, then we say that the two processes are concurrent, that is, nothing can be said about the ordering of the two events.
Lamport invented a simple mechanism by which the happened-before ordering can be captured numerically. A Lamport logical clock is a numerical software counter value maintained in each process.
Conceptually, this logical clock can be thought of as a clock that only has meaning in relation to messages moving between processes. When a process receives a message, it re-synchronizes its logical clock with that sender. The above-mentioned vector clock is a generalization of the idea into the context of an arbitrary number of parallel, independent processes.
Algorithm
The algorithm follows some simple rules:
A process increments its counter before each local event (e.g., message sending event);
When a process sends a message, it includes its counter value with the message after executing step 1;
On receiving a message, the counter of the recipient is updated, if necessary, to the greater of its current counter and the timestamp in the received message. The counter is then incremented by 1 before the message is considered received.
In pseudocode, the algorithm for sending is:
# event is known
time = time + 1
# event happens
send(message, time)
The algorithm for receiving a message is:
(message, timestamp) = receive()
time = max(timestamp, time) + 1
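A minimal runnable Python version of these two routines is sketched below; the LamportClock class name, the list-based channel, and the lock are illustrative assumptions rather than part of Lamport's original presentation.

import threading

class LamportClock:
    """Logical counter implementing the three rules above."""
    def __init__(self):
        self.time = 0
        self._lock = threading.Lock()  # guards the counter if threads share a clock

    def tick(self):
        # Rule 1: increment the counter before each local event.
        with self._lock:
            self.time += 1
            return self.time

    def send(self, message, channel):
        # Rule 2: include the counter value with the message, after ticking.
        channel.append((message, self.tick()))

    def receive(self, channel):
        # Rule 3: advance to the max of local and received time, then add 1.
        message, timestamp = channel.pop(0)
        with self._lock:
            self.time = max(self.time, timestamp) + 1
        return message

# Usage: process a sends one message to process b over a list-based channel.
channel = []
a, b = LamportClock(), LamportClock()
a.send("write request", channel)   # a.time becomes 1
b.receive(channel)                 # b.time becomes max(0, 1) + 1 = 2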
Considerations
For every two different events a and b occurring in the same process, with C(x) being the timestamp for a certain event x, it is necessary that C(a) never equals C(b).
Therefore it is necessary that:
The logical clock be set so that there is a minimum of one clock "tick" (increment of the counter) between events a and b;
In a multi-process or multi-threaded environment, it might be necessary to attach the process ID (PID) or any other unique ID to the timestamp so that it is possible to differentiate between events a and b which may occur simultaneously in different processes.
Causal ordering
For any two events a and b, if there is any way that a could have influenced b, then the Lamport timestamp of a will be less than the Lamport timestamp of b. It's also possible to have two events where we can't say which came first; when that happens, it means that they couldn't have affected each other. If a and b can't have any effect on each other, then it doesn't matter which one came first.
Implications
A Lamport clock may be used to create a partial ordering of events between processes. Given a logical clock following these rules, the following relation is true: if a → b then C(a) < C(b), where → means happened-before.
This relation only goes one way, and is called the clock consistency condition: if one event comes before another, then that event's logical clock comes before the other's. The strong clock consistency condition, which is two-way (if C(a) < C(b) then a → b), can be obtained by other techniques such as vector clocks. Using only a simple Lamport clock, only a partial causal ordering can be inferred from the clock.
However, via the contrapositive, it's true that C(a) ≥ C(b) implies that a did not happen-before b. So, for example, if C(a) ≥ C(b) then a cannot have happened-before b.
Another way of putting this is that C(a) < C(b) means that a may have happened-before b, or be incomparable with b in the happened-before ordering, but a did not happen after b.
Nevertheless, Lamport timestamps can be used to create a total ordering of events in a distributed system by using some arbitrary mechanism to break ties (e.g., the ID of the process). The caveat is that this ordering is artificial and cannot be depended on to imply a causal relationship.
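As a small illustration (assuming events are stamped as (counter, process ID) pairs), Python's tuple comparison yields exactly such a total order:

# Ties on the counter are broken by the arbitrary-but-fixed process ID.
events = [(3, "P2"), (3, "P1"), (1, "P2"), (2, "P1")]
print(sorted(events))  # [(1, 'P2'), (2, 'P1'), (3, 'P1'), (3, 'P2')]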
Lamport's logical clock in distributed systems
In a distributed system, it is not possible in practice to synchronize time across entities (typically thought of as processes) within the system; hence, the entities can use the concept of a logical clock based on the events through which they communicate.
If two entities do not exchange any messages, then they probably do not need to share a common clock; events occurring on those entities are termed as concurrent events.
Among the processes on the same local machine we can order the events based on the local clock of the system.
When two entities communicate by message passing, then the send event is said to happen-before the receive event, and the logical order can be established among the events.
A distributed system is said to have partial order if we can have a partial order relationship among the events in the system. If 'totality', i.e., causal relationship among all events in the system, can be established, then the system is said to have total order.
A single entity cannot have two events occur simultaneously. If the system has total order we can determine the order among all events in the system. If the system has partial order between processes, which is the type of order Lamport's logical clock provides, then we can only tell the ordering between entities that interact. Lamport addressed ordering two events with the same timestamp (or counter): "To break ties, we use any arbitrary total ordering of the processes." Thus two timestamps or counters may be the same within a distributed system, but in applying the logical clocks algorithm events that occur will always maintain at least a strict partial ordering.
Lamport clocks lead to a situation where all events in a distributed system are totally ordered. That is, if a → b, then we can say a actually happened before b.
Note that with Lamport's clocks, nothing can be said about the actual time of a and b. If the logical clock says a → b, that does not mean in reality that a actually happened before b in terms of real time.
Lamport clocks show non-causality, but do not capture all causality. Knowing a → c and b → c shows that c did not cause a or b, but we cannot say which of a and b initiated c.
This kind of information can be important when trying to replay events in a distributed system (such as when trying to recover after a crash). If one node goes down, and we know the causal relationships between messages, then we can replay those messages and respect the causal relationship to get that node back up to the state it needs to be in.
Alternatives to potential causality
The happened-before relation captures potential causality, not true causality. In 2011-12, Munindar Singh proposed a declarative, multiagent approach based on true causality called information protocols. An information protocol specifies the constraints on communications between the agents that constitute a distributed system. However, instead of specifying message ordering (e.g., via a state machine, a common way of representing protocols in computing), an information protocol specifies the information dependencies between the communications that agents (the protocol's endpoints) may send. An agent may send a communication in a local state (its communication history) only if the communication and the state together satisfy the relevant information dependencies. For example, an information protocol for an e-commerce application may specify that to send a Quote with parameters ID (a uniquifier), item, and price, Seller must already know the ID and item from its state but can generate whatever price it wants. A remarkable thing about information protocols is that although emissions are constrained, receptions are not. Specifically, agents may receive communications in any order whatsoever -- receptions simply bring information and there is no point delaying them. This means that information protocols can be enacted over unordered communication services such as UDP.
The bigger idea is that of application semantics, the idea of designing distributed systems based on the content of the messages, an idea implicated in the end-to-end principle. Current approaches largely ignore semantics and focus on providing application-agnostic ("syntactic") message delivery and ordering guarantees in communication services, which is where ideas like potential causality help. But if we had a suitable way of doing application semantics, then we wouldn't need such communication services. An unordered, unreliable communication service would suffice. The real value of information protocols approach is that it provides the foundations for an application semantics approach.
See also
Matrix clock
Vector clock
Version vector
References
Logical clock algorithms | Lamport timestamp | [
"Physics"
] | 1,967 | [
"Spacetime",
"Logical clock algorithms",
"Physical quantities",
"Time"
] |
2,173,109 | https://en.wikipedia.org/wiki/Nick%20translation | Nick translation (or head translation), developed in 1977 by Peter Rigby and Paul Berg, is a tagging technique in molecular biology in which DNA polymerase I is used to replace some of the nucleotides of a DNA sequence with their labeled analogues, creating a tagged DNA sequence which can be used as a probe in fluorescent in situ hybridization (FISH) or blotting techniques. It can also be used for radiolabeling.
This process is called "nick translation" because the DNA to be processed is treated with DNase to produce single-stranded "nicks", where one of the strands is missing nucleotides. This is followed by replacement at nicked sites by DNA polymerase I, which removes nucleotides from the 5' (downstream) side of a nick with its 5'→3' exonuclease activity and adds new, labeled dNTPs from the medium to the free 3'-hydroxyl end of the nick, moving the nick downstream in the process. To radioactively label a DNA fragment for use as a probe in blotting procedures, one of the incorporated nucleotides provided in the reaction is radiolabeled in the alpha phosphate position, often using phosphorus-32. Similarly, a fluorophore can be attached instead for fluorescent labelling, or an antigen for immunodetection. When DNA polymerase I eventually detaches from the DNA, it leaves another nick in the phosphate backbone. The nick has "translated" some distance depending on the processivity of the polymerase. This nick could be sealed by DNA ligase, or its 3' hydroxyl group could serve as the primer for further DNA polymerase I activity. Proprietary enzyme mixes are available commercially to perform all steps in the procedure in a single incubation.
Nick translation can cause double-stranded DNA breaks if DNA polymerase I encounters another nick on the opposite strand, resulting in two shorter fragments. This does not influence the performance of the labelled probe in in-situ hybridization.
References
Biochemistry detection methods
Genetics techniques
Laboratory techniques
Molecular biology techniques | Nick translation | [
"Chemistry",
"Engineering",
"Biology"
] | 429 | [
"Biochemistry methods",
"Genetics techniques",
"Genetic engineering",
"Chemical tests",
"Molecular biology techniques",
"nan",
"Biochemistry detection methods",
"Molecular biology"
] |
2,173,296 | https://en.wikipedia.org/wiki/Beijing%E2%80%93Tianjin%20intercity%20railway | The Beijing–Tianjin intercity railway () is a Chinese high-speed railway between Beijing and Tianjin. Designed for passenger traffic only, the line was built to accommodate trains traveling at a maximum speed of , and it has carried CRH high-speed trains running at speeds up to since August 2018.
When the line opened on August 1, 2008, it set the record for the fastest conventional train service in the world by top speed, and reduced travel time between the two largest cities in northern China from 70 to 30 minutes. A second phase of construction, extending the line from the urban area of Tianjin to Yujiapu railway station (now Binhai railway station) in Tianjin's Binhai New Area, opened on September 20, 2015.
The line is projected to approach operating capacity in the first half of 2016. Anticipating this, a second parallel line, the Beijing–Binhai intercity railway, commenced construction on December 29, 2015. It will run from Beijing Sub-Center railway station to Binhai railway station via Baodi and Tianjin Binhai International Airport, along a new route to the northeast of the Beijing–Tianjin ICR.
Route and stations
Beijing to Tianjin
From Beijing South railway station, the line runs in a southeasterly direction, following the Beijing–Tianjin–Tanggu Expressway to Tianjin. It has three intermediate stations at Yizhuang, Yongle (reserved station) and Wuqing. The service has peak speed between cities.
As an intercity line, it will provide train service only between the two metropolitan areas, unlike the Beijing–Shanghai High-Speed Railway which will continue beyond Shanghai.
The Beijing–Tianjin intercity railway has a current length of (fare mileage: ), of which roughly is built on viaducts and the last on an embankment. The elevated track ensures level tracks over uneven terrain and eliminates the trains having to slow down to safely navigate through at-grade road crossings.
Extension to Binhai New Area
Sometimes known as the Tianjin–Binhai intercity railway, this extension continues southeasterly from Tianjin railway station, following the conventional railway to Tanggu railway station but built on elevated piers, with three new stations added. It passes a blockpost at Airport West and runs through Junliangcheng North railway station to Tanggu railway station before entering a tunnel to an underground station, Yujiapu railway station (now Binhai railway station).
Junctions have been built along the line allowing services to branch off to a new station under Tianjin Binhai International Airport and to Binhai West railway station on the Tianjin–Qinhuangdao high-speed railway. Trial operations of the extension started on August 14, 2015, with an official opening on September 20, 2015. The extension reduced travel times from Beijing South railway station to Binhai railway station to 1 hour 2 minutes and from Tianjin railway station to Binhai railway station to 23 minutes.
Service
These intercity trains are designated by the prefix "C" (城) followed by four digits, from C2001 to C2298. Of these, C2001–C2198 are non-stop trains between Beijing South and Tianjin, with odd numbers for trains departing from Beijing South and even numbers for those running to Beijing South. Trains numbered C2201–C2268 run between Beijing South and Tianjin with a stop at Wuqing station. Trains C2271–C2298 run from Beijing South to Tanggu.
The line opened on August 1, 2008, with 47 daily pairs of intercity trains between Beijing South and Tianjin. On September 14, 2008, 10 more pairs of trains were added, reducing the minimum interval from 15 minutes to 10 minutes. On September 24, 2008, 4 pairs of trains were extended to Tanggu along the conventional railway. On September 28, 2008, 2 more pairs of trains entered service. Frequencies have been increased consistently since then to cope with rising demand, reaching 136 pairs of trains operating daily by 2018.
In addition to the intercity service, 13 pairs of trains were diverted to this line from the preexisting Beijing-Shanghai (Jinghu) Railway, including trains from Beijing South to Jinan, Qingdao, Shanghai, and Tianjin West. With the opening of the Beijing–Shanghai high-speed railway, these trains have been diverted to the new line.
Tickets
Manual ticket windows and automatic ticket machines developed by GDT are installed at each station along the Beijing-Tianjin intercity railroad (including Beijing South Station, Tianjin Station, and Wuqing Station). The vending machines accept RMB 100, RMB 50, RMB 20, RMB 10, and RMB 5 bills and support payment by bank card. As of June 2009, tickets sold through the vending machines accounted for 24% of all Beijing-Tianjin intercity train tickets.
On March 7, 2009, the Beijing-Tianjin Intercity Railway launched the "Beijing-Tianjin Intercity Railway Express Card", which uses a non-contact IC card system and can be recharged repeatedly. The card is divided into two types: Ordinary Card and Gold Card. The Ordinary Card is priced at RMB 1,000 and can be used in the second-class compartment of the Beijing-Tianjin Intercity Railway. The Gold Card is priced at RMB 3,000 and allows passengers to travel in the first-class compartment, with the maximum amount of stored value not exceeding RMB 5,000. As of July 22, 2009, 13,700 Express Cards had been sold and the total number of cardholders was 82,000, with the highest number of cardholders on a single train being 93. On June 1, 2011, the Express Card system was upgraded to match the real-name system of national train tickets; all cardholders are required to bring their original valid IDs, otherwise they cannot enter the station and board. Since July 22, 2012, the "Beijing-Tianjin Intercity Railway Express Card" has been discontinued and replaced by the newly introduced "China Railway Silver Pass Card".
From June 12, 2011, online railroad ticketing was first tested on the Beijing-Tianjin Intercity Railway. Passengers can pay for their tickets online using bank cards (including debit and credit cards) from the Industrial and Commercial Bank of China, the Agricultural Bank of China, the Bank of China and China Merchants Bank or through the China UnionPay online payment system and can make subsequent ticket changes and refunds. After purchasing a ticket, if passengers use a second-generation ID card, passengers can use their ID card as a travel voucher to check their ticket directly at the entry/exit gates; if passengers use other documents or need reimbursement vouchers, they can exchange their ID card for a paper ticket at the station ticket window or at a sales outlet.
On February 6, 2017, the mayor of Tianjin, Wang Dongfeng, said at a symposium of the central media's "Beijing-Tianjin-Hebei Cooperative Development Research Line" interview group that the Beijing–Tianjin intercity railway was expected to implement a monthly ticket system. From May 1, 2017, to facilitate the travel of commuters between Beijing and Tianjin, China Railway Yin Tong Payment Co. launched the "China Railway Beijing-Tianjin Intercity City Discount Card" for frequent commuters of the Beijing-Tianjin Intercity Railway, who can enjoy certain discounts for a certain number of purchases. As of April 30, 2018, a total of 20,160 "Beijing-Tianjin Intercity City Discount Cards" had been sold, including 2,911 gold cards and 17,249 silver cards, with a total of 962,600 passenger trips taken using the card.
Technical information
The line is the first railway in China to be built for operational speeds above . This railway line allows speeds up to . A trip between Beijing and Tianjin takes 30 minutes.
High-speed lines
of the line is ballast-free track, using slab-track technology imported from the German company Max Bögl. A total of 36,092 Bögl track slabs were used on the whole line. In addition, in order to save land, the Beijing-Tianjin Intercity Railway uses bridges instead of embankments on a large scale, with 87% of the entire line being bridges, with a cumulative length of , including 5 special bridges. Each kilometer of bridge can save of land compared with a traditional roadbed. Roadbed designs for soft and loose soil areas, and technologies for controlling bridge deformation and foundation settlement, are also adopted. The track uses an advanced welding process for long steel rails, and comprehensive inspection shows that the line quality is stable. The expected service life of the ballast-free track is 60 years, while the service life of the main bridge structures is up to 100 years, reducing overall maintenance costs.
Traction power supply
When a high-speed train runs at a speed of , that is, nearly 100 meters per second, there are high requirements for the stability of current collection between the catenary and the pantograph, as well as for the power supply equipment of the electrified railway. The Beijing–Tianjin intercity railway adopts a SCADA system for remote monitoring and, for the first time in China, a lightweight simple catenary system with a reinforcing wire, using magnesium-copper alloy contact wire of small cross-section () under high tension. There are 3 traction substations, 4 sectioning substations and 2 switching substations along the whole line.
Communication Signal and Dispatch
The Beijing-Tianjin intercity railroad adopts the GSM-R railroad digital mobile communication system for the whole line, providing mobile voice communication and wireless data transmission. It uses ETCS-1 and CTCS-2 level signaling systems, as well as the CTCS-3D level signaling system (a Chinese high-speed railroad train control system with terrestrial digital transmission), with a minimum designed departure interval of 3 minutes. For operation scheduling, the railway applies a dispatching centralized traffic control (CTC) system with decentralized self-regulation to realize centralized dispatching control for trains running on the whole line.
Safety and security
In order to avoid natural disasters affecting the safety of railroad traffic, the whole Beijing-Tianjin Intercity Railway has a disaster prevention and safety monitoring system including wind warning monitoring, earthquake monitoring and foreign object intrusion monitoring. In addition, the whole line has a comprehensive grounding system: the ballastless track, catenary masts, station platforms, sound barriers and retaining walls are all connected to ground.
History
Preliminary preparation
In February 2002, Binglian Liu, a professor at Nankai University in Tianjin, first proposed the construction of a high-speed railroad between Beijing and Tianjin during a discussion on the "Beijing-Tianjin Economic Integration Strategy Study and Proposal". In June 2003, the Ministry of Railways and the municipal governments of Beijing and Tianjin began preliminary discussions. In January 2004, the executive meeting of the State Council adopted the Medium and Long-Term Railway Network Plan, in which the Beijing-Tianjin Intercity Railway was included, and on October 24, 2004, the Ministry of Railways, Beijing Municipal Government and Tianjin Municipal Government jointly determined the route plan. On March 3, 2005, Zhijun Liu, Minister of Railways, Qishan Wang, Mayor of Beijing, and Xianglong Dai, Mayor of Tianjin, co-chaired the meeting of the Beijing-Tianjin Intercity Rail Transit System Construction Leading Group. The meeting determined the financing method of the railroad construction.
Construction history
On July 4, 2005, construction of the Beijing-Tianjin Intercity Railway officially started. The Beijing-Tianjin intercity rail transit system construction project adopted a "two points and one line" strategy, divided into three parts carried out simultaneously: the Beijing–Tianjin intercity railway, Beijing South Railway Station, and the reconstruction of the Tianjin Station transportation hub. In August 2007, the roadbed and bridge works were completed. On October 31, 2007, the laying of the Bögl track slabs was completed. On November 13, 2007, the laying of the track began. The 100-meter steel rails used on the entire line were all produced by Panzhihua Iron and Steel Company, with a total weight of 27,000 tons; 500-meter rails were welded on site. On December 16, 2007, the entire track was laid. On December 19, 2007, construction of the electrified catenary, under the charge of the China Railway Electrification Bureau, was in full swing. On February 2, 2008, the electrification project was completed and the catenary was energized. In March 2008, construction entered the stage of joint system debugging and testing, including four parts: EMU type tests, integration tests, comprehensive tests and trial operation. On May 13, 2008, a CRH2C electric multiple unit running from Beijing South Railway Station to Tianjin Railway Station set what was then the Chinese speed record for a wheel-rail train, 372 kilometers per hour. On June 24, 2008, a CRH3C EMU reached 394.3 kilometers per hour during testing on the line, breaking the record set by the CRH2C on May 13 and setting a new national speed record for wheel-rail trains. On July 1, 2008, the railway began trial operation without passengers under official operating conditions to verify train tracking intervals and the intervals for receiving and departing trains at stations, and at the same time to run drills simulating equipment failures and emergency response in bad weather.
Open for operation
The Beijing-Tianjin Intercity Railway was officially opened to traffic on August 1, 2008, with Vice Premier Zhang Dejiang, Beijing Municipal Party Secretary Liu Qi and Tianjin Municipal Party Secretary Zhang Gaoli attending the opening ceremony. On September 27, 2008, Premier Wen Jiabao visited the Beijing-Tianjin Intercity Railway and put forward the requirements of "quality, energy saving, land saving and environmental protection" for railroad construction. On March 4, 2009, Hong Kong Chief Executive Donald Tsang visited Tianjin on a Beijing-Tianjin intercity high-speed train, and on August 1, 2009, the Beijing-Tianjin Intercity Railway celebrated its first anniversary of operation. Within one year, the Beijing-Tianjin intercity high-speed trains carried a total of 18.7 million passengers, the train punctuality rate reached 98%, the average train occupancy rate reached nearly 70%, and more than 200 heads of state, dignitaries and railroad inspection teams from all over the world, including the United States, Russia and Japan, visited the railroad.
On October 1, 2009, construction of the Beijing-Tianjin Intercity Extension, or Jinbin Intercity Railway, began, with an estimated travel time of just over 40 minutes from Beijing South Station to Binhai Station in the core area of Binhai New Area. The Yujiapu extension opened on September 20, 2015; the Beijing-Tianjin intercity trains no longer travel over the Jinshan Railway when running between the central city of Tianjin and the core area of the Binhai New Area, and the running time has been significantly shortened.
Ridership
Before the line was finished, it was expected that the railway line would handle 32 million passengers in 2008 and 54 million passengers in 2015.
The line opened on August 1, 2008, just before the opening of the 2008 Summer Olympics, which held some football matches in Tianjin. The introduction of high-speed rail service significantly boosted rail travel between the two cities. In 2007, conventional train service between Beijing and Tianjin delivered 8.3 million rides. In the first year of high-speed rail service, from August 2008 to July 2009, total rail passenger volume between Beijing and Tianjin reached 18.7 million, of which 15.85 million rode the Intercity trains. Meanwhile, during the same period, ridership on intercity buses fell by 36.8%. As of September 2010, daily ridership averaged 69,000 or an annual rate of 25.2 million. The line has a capacity of delivering 100 million rides annually and initial estimated repayment period of 16 years.
From 2008 to 2013, ridership grew at an annual rate of 20% reaching a cumulative 88 million passengers. In the first half of 2018 the line was carrying over 82,000 passengers each day.
Finances
At the start of construction, investment in the Beijing–Tianjin intercity railway was expected to total ¥12.3 billion (US$1.48 billion). At the time of construction, the Ministry of Railways and the Tianjin government had each contributed ¥2.6 billion (US$325 million) to the project, while the central government requisitioned land and paid for the resettlement of those affected. The line ultimately cost US$2.34 billion to build.
As of 2010, the line cost ¥1.8 billion per annum to operate, including ¥0.6 billion in interest payments on its ¥10 billion of loan obligations. The terms of the loans range from 5–10 years at interest rates of 6.3 to 6.8 percent. In its first year of operation, from August 1, 2008, to July 31, 2009, the line generated ¥1.1 billion in revenue on 18.7 million rides delivered and incurred a loss of ¥0.7 billion. In the second year, ridership rose to 22.3 million and revenue improved to ¥1.4 billion, narrowing the loss to below ¥0.5 billion. To break even, the line must deliver 30 million rides annually. To be able to repay principal, ridership would need to exceed 40 million. As of 2012, the Beijing–Tianjin Intercity Railway officially reported breaking even financially, defined as revenue matching operational costs plus debt payments. By 2015, the line was operating at an operational profit.
Long-term planning
In the future, a liaison line over the Nan Cang Bridge will connect the Beijing-Tianjin intercity railroad to the intercity platforms at Tianjin West Station. The Nan Cang Special Bridge project of the Beijing-Shanghai high-speed railway's Beijing-Tianjin intercity liaison line starts at the Pu Ji He Dao overpass and extends southwesterly to the east of Tianjin West Station, with a total length of 4.38 km, thereby connecting the Beijing-Tianjin intercity railroad with the Beijing-Shanghai high-speed railroad. The liaison line has been largely completed and will enter operation once the intercity platforms at Tianjin West Station are finished.
See also
Fastest trains in China
References
External links
Beijing–Tianjin elevated line anticipates 350 km/h, Railway Gazette International, March 2006
Beijing-Tianjin High-Speed Commuter Link, China
Beijing-Tianjin High-Speed Train Schedule
High-speed railway lines in China
Rail transport in Beijing
Rail transport in Tianjin
Siemens Mobility projects
Railway lines opened in 2008
Standard gauge railways in China
25 kV AC railway electrification
2008 in Tianjin
2008 in Beijing | Beijing–Tianjin intercity railway | [
"Technology",
"Engineering"
] | 3,875 | [
"Siemens Mobility projects",
"Transport systems"
] |
2,173,686 | https://en.wikipedia.org/wiki/Logical%20holism | In philosophy, logical holism is the belief that the world operates in such a way that no part can be known without the whole being known first.
Theoretical holism, introduced by Pierre Duhem, is the view in the philosophy of science that a scientific theory can only be understood in its entirety. Different total theories of science are understood by making them commensurable, allowing statements in one theory to be converted into sentences in another. Richard Rorty argued that when two theories are incompatible, a process of hermeneutics is necessary.
Practical holism is a concept in the work of Martin Heidegger that posits it is not possible to produce a complete understanding of one's own experience of reality, because one's mode of existence is embedded in cultural practices and in the constraints of the task one is doing.
Bertrand Russell concluded that "Hegel's dialectical logical holism should be dismissed in favour of the new logic of propositional analysis", and introduced a form of logical atomism.
See also
Doctrine of internal relations
Meaning holism
References
Holism
Epistemological theories
Theories of deduction | Logical holism | [
"Mathematics"
] | 224 | [
"Theories of deduction"
] |
2,173,775 | https://en.wikipedia.org/wiki/John%20E.%20Heymer | John Edward Heymer is a British former police officer and author who has written extensively on spontaneous human combustion (SHC).
Heymer was born in Bow, East London, in 1934 and went to South Wales at the age of 16 to become a coal miner. He returned to London two years later for National Service and spent three years in the Royal Fusiliers. He then returned to work as a miner but left after being injured during a roof fall. He joined the Monmouthshire Constabulary and spent a few years as a police constable on patrol, followed by a few years in the photography department at Police Headquarters in Croesyceiliog. He then became a Scenes of Crimes Officer and Crime Prevention Officer.
Heymer describes himself as an autodidact, with a lifelong passion for knowledge, and has written that he is not afraid to pursue this into areas where other people might fear ridicule or contempt.
He was a gradual convert to belief in SHC, mainly as a result of his attendance as scene of crime officer at the apparent death by SHC of an elderly man in Ebbw Vale (Henry Thomas).
Heymer believes that SHC is not a supernatural phenomenon, but a rare natural phenomenon that has not yet been examined sufficiently (mainly due to the difficulty presented by the results of SHC).
He has published articles about SHC in New Scientist and Fortean Times, and has appeared on the BBC television programmes Newsnight and QED ("The Burning Question").
In 1996, he published a book entitled The Entrancing Flame, which was about his personal experience of dealing with the results of SHC and attempted to analyse the phenomenon.
Notes
1934 births
2011 deaths
People from Bow, London
British police officers
English miners
English writers on paranormal topics
Spontaneous human combustion | John E. Heymer | [
"Chemistry"
] | 363 | [
"Combustion",
"Spontaneous human combustion"
] |
2,173,845 | https://en.wikipedia.org/wiki/Omamori | are Japanese amulets commonly sold at Shinto shrines and Buddhist temples, dedicated to particular Shinto as well as Buddhist figures and are said to provide various forms of luck and protection.
Origin and usage
The word mamori means 'protection', with omamori being the honorific form of the word. Originally made from paper or wood, modern amulets are small items usually kept inside a brocade bag and may contain a prayer or religious inscription of invocation. Omamori are available at both Shinto shrines and Buddhist temples with few exceptions, and may be purchased regardless of one's religious affiliation.
Omamori are then made sacred through the use of ritual, and are said to contain spiritual offshoots in a Shinto context or manifestations in a Buddhist context.
While omamori are intended for temple tourists' personal use, they are mainly viewed as a donation to the temple or shrine the person is visiting. Visitors often give an omamori to another person as a gift, a physical form of well-wishing.
Design and function
Omamori are usually covered with brocaded silk and enclose paper or pieces of wood with prayers written on them, which are supposed to bring good luck to the bearer on particular occasions, tasks, or ordeals. They are also used to ward off bad luck and are often spotted on bags, hung on cellphone straps, in cars, etc.
Omamori have changed over the years from being made mostly of paper and/or wood to being made out of a wide variety of materials (e.g. bumper decals, bicycle reflectors, credit cards, etc.). Modern commercialism has also taken over a small part of the production of omamori. Usually this happens when more popular shrines and temples cannot keep up with the high demand for certain charms. They then turn to factories to manufacture the omamori. However, priests have been known to complain about the quality and authenticity of the products made by factories.
According to Yanagita Kunio (1969):
Usage
Omamori may provide general blessings and protection, or may have a specific focus such as:
Kōtsū-anzen: traffic safety, protection for drivers and travelers of all sorts
Yakuyoke: avoidance of evil
Kaiun: open luck, better fortune
Gakugyō-jōju: education and passing examinations—for students and scholars
Shōbai-hanjō: prosperity in business—success in business and matters of money
En-musubi: acquisition of a mate and marriage—available for singles and couples to ensure love and marriage
Anzan: protection for pregnant women for a healthy pregnancy and easy delivery
Kanai-anzen: safety (well-being) of one's family, peace and prosperity in the household
Customarily, omamori are not opened in order to avoid losing their protective benefits. They are instead carried on one's person, or tied to something like a backpack or a purse. It is not necessary, but amulets are customarily replaced once a year to ward off bad luck from the previous year. Old amulets are usually returned to the same shrine or temple they were purchased at so they can be disposed of properly. Amulets are commonly returned on or slightly after the Japanese New Year so the visitor has a fresh start for the New Year with a new omamori.
Old omamori traditionally should not be simply disposed of, but burned, as a sign of respect to the deity that assisted the person throughout the year.
If a shrine or temple visitor cannot find an omamori that meets their need, they can ask a priest to have one made. If enough people request the same type of omamori, the temple or shrine may start producing them for everyday availability.
Modern commercial uses
There are modern commercial versions of omamori that are typically not spiritual in nature and are not issued by a shrine or temple. It has become popular for stores in Japan to feature generic omamori with popular characters such as Mickey Mouse, Hello Kitty, Snoopy, Kewpie, etc.
See also
References
Further reading
External links
Omamori.com
Amulets
Talismans
Shinto
Buddhist religious objects
Religious objects
Shinto religious objects
Superstitions of Japan
Eastern esotericism
Japanese words and phrases | Omamori | [
"Physics"
] | 789 | [
"Religious objects",
"Physical objects",
"Matter"
] |
2,173,873 | https://en.wikipedia.org/wiki/Small%20subgroup%20confinement%20attack | In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is where an attacker attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group.
Several methods have been found to be vulnerable to subgroup confinement attack, including some forms or applications of Diffie–Hellman key exchange and DH-EKE.
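To make the idea concrete, here is a toy Python sketch with deliberately tiny, assumed parameters (p = 23 and generator 5, so that the group order p − 1 = 2 × 11 has a small factor); a real attack exploits small factors of the group order in full-size groups in the same way.

import random

p, g = 23, 5  # toy parameters (assumed): 5 generates the full group of order 22

def small_order_element(r):
    """Return an element confined to the subgroup of order r."""
    assert (p - 1) % r == 0
    return pow(g, (p - 1) // r, p)

# Honest Bob derives the shared key from whatever "public value" he receives.
b = random.randrange(2, p - 1)

# The attacker substitutes an element of order r = 2 for the peer's public
# value, confining Bob's shared key to only r possible values.
r = 2
s = small_order_element(r)
bob_key = pow(s, b, p)

# The attacker recovers the key by brute force over the tiny subgroup.
candidates = {pow(s, e, p) for e in range(r)}
assert bob_key in candidates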
References
Cryptographic attacks
Finite groups | Small subgroup confinement attack | [
"Mathematics",
"Technology"
] | 93 | [
"Mathematical structures",
"Cryptographic attacks",
"Finite groups",
"Algebraic structures",
"Computer security exploits"
] |
2,174,011 | https://en.wikipedia.org/wiki/Avoided%20crossing | In quantum physics and quantum chemistry, an avoided crossing (AC, sometimes called intended crossing, non-crossing or anticrossing) is the phenomenon where two eigenvalues of a Hermitian matrix representing a quantum observable and depending on N continuous real parameters cannot become equal in value ("cross") except on a manifold of dimension N − 2. The phenomenon is also known as the von Neumann–Wigner theorem. In the case of a diatomic molecule (with one parameter, namely the bond length), this means that the eigenvalues cannot cross at all. In the case of a triatomic molecule, this means that the eigenvalues can coincide only at a single point (see conical intersection).
This is particularly important in quantum chemistry. In the Born–Oppenheimer approximation, the electronic molecular Hamiltonian is diagonalized on a set of distinct molecular geometries (the obtained eigenvalues are the values of the adiabatic potential energy surfaces). The geometries at which the potential energy surfaces avoid crossing are the locus where the Born–Oppenheimer approximation fails.
Avoided crossings also occur in the resonance frequencies of undamped mechanical systems, where the stiffness and mass matrices are real symmetric. There the resonance frequencies are the square roots of the generalized eigenvalues.
In two-state systems
Emergence
Study of a two-level system is of vital importance in quantum mechanics because it embodies simplification of many physically realizable systems. The effect of perturbation on a two-state system Hamiltonian is manifested through avoided crossings in the plot of individual energy versus energy difference of the eigenstates. The two-state Hamiltonian can be written as H = [E1, 0; 0, E2].
Its eigenvalues are E1 and E2, with eigenvectors (1, 0)T and (0, 1)T. These two eigenvectors designate the two states of the system. If the system is prepared in either of the states it would remain in that state. If E1 happens to be equal to E2 there will be a twofold degeneracy in the Hamiltonian. In that case any superposition of the degenerate eigenstates is evidently another eigenstate of the Hamiltonian. Hence the system prepared in any state will remain in that forever.
However, when subjected to an external perturbation, the matrix elements of the Hamiltonian change. For the sake of simplicity we consider a perturbation with only off-diagonal elements. Since the overall Hamiltonian must be Hermitian we may simply write the new Hamiltonian as H' = H + P = [E1, W; W*, E2],
where P = [0, W; W*, 0] is the perturbation with zero diagonal terms. The fact that P is Hermitian fixes its off-diagonal components. The modified eigenstates can be found by diagonalising the modified Hamiltonian. It turns out that the new eigenvalues E+ and E− are E± = (E1 + E2)/2 ± sqrt(((E1 − E2)/2)² + |W|²).
If a graph is plotted varying the energy difference E1 − E2 along the horizontal axis and E+ or E− along the vertical, we find two branches of a hyperbola (as shown in the figure). The curve asymptotically approaches the original unperturbed energy levels. Analyzing the curves, it becomes evident that even if the original states were degenerate (i.e. E1 = E2) the new energy states are no longer equal; the two levels are now split by 2|W|. However, if W is set to zero we find E+ = E− at E1 = E2, and the levels cross. Thus, with the effect of the perturbation, these level crossings are avoided.
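A short numerical sketch of these two branches (NumPy; the values of W are illustrative assumptions):

import numpy as np

def branches(detuning, W):
    """E+ and E- of [[E1, W], [W*, E2]] with (E1 + E2)/2 set to zero."""
    gap = np.sqrt((detuning / 2.0) ** 2 + abs(W) ** 2)
    return gap, -gap

detuning = np.linspace(-2.0, 2.0, 201)  # sweep E1 - E2 through zero
for W in (0.0, 0.3):
    E_plus, E_minus = branches(detuning, W)
    # Minimum splitting is 2|W|, reached at zero detuning:
    print(W, (E_plus - E_minus).min())  # prints 0.0, then ~0.6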
Quantum resonance
The immediate impact of avoided level crossing in a degenerate two-state system is the emergence of a lowered energy eigenstate. Effective lowering of energy always corresponds to increasing stability (see: energy minimization). Bond resonance in organic molecules exemplifies the occurrence of such avoided crossings. To describe these cases we may note that the non-diagonal elements in an erstwhile diagonalised Hamiltonian not only modify the energy eigenvalues but also superpose the old eigenstates into the new ones. These effects are more prominent if the original Hamiltonian had degeneracy. This superposition of eigenstates to attain more stability is precisely the phenomenon of chemical bond resonance.
Our earlier treatment started by denoting the eigenvectors |\psi_1\rangle and |\psi_2\rangle as the matrix representations of the eigenstates |1\rangle and |2\rangle of a two-state system. Using bra–ket notation the matrix elements of H' are actually the terms

H'_{ij} = \langle i|H'|j\rangle, \qquad i, j \in \{1, 2\},

with

H'_{11} = \langle 1|H'|1\rangle = E_1, \qquad H'_{22} = \langle 2|H'|2\rangle = E_2,

where E_1 = E_2 = E due to the degeneracy of the unperturbed Hamiltonian, and the off-diagonal perturbations are H'_{12} = W and H'_{21} = W^*.

The new eigenstates |\psi_+\rangle and |\psi_-\rangle can be found by solving the eigenvalue equations H'|\psi_+\rangle = E_+|\psi_+\rangle and H'|\psi_-\rangle = E_-|\psi_-\rangle. From simple calculations it can be shown that

|\psi_+\rangle = \frac{1}{\sqrt{2}}\left(e^{i\phi/2}|1\rangle + e^{-i\phi/2}|2\rangle\right)

and

|\psi_-\rangle = \frac{1}{\sqrt{2}}\left(e^{i\phi/2}|1\rangle - e^{-i\phi/2}|2\rangle\right),

where E_\pm = E \pm |W| and \phi = \arg(W).

It is evident that both of the new eigenstates are superpositions of the original degenerate eigenstates, and one of the eigenvalues (here E_-) is less than the original unperturbed eigenenergy. So the corresponding stable system will naturally mix the former unperturbed eigenstates to minimize its energy. In the example of benzene the experimental evidence of probable bond structures gives rise to two different eigenstates, |1\rangle and |2\rangle. The symmetry of these two structures mandates that E_1 = E_2 = E.
However, it turns out that the two-state Hamiltonian of benzene is not diagonal. The off-diagonal elements result in a lowering of energy, and the benzene molecule stabilizes in a structure which is a superposition of these symmetric ones, with energy E − |W|. For any general two-state system an avoided level crossing repels the eigenstates |\psi_+\rangle and |\psi_-\rangle such that it requires more energy for the system to achieve the higher-energy configuration.
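As a hedged numerical illustration of the degenerate case (the energy values below are arbitrary, not benzene-specific parameters), diagonalizing a symmetric two-state Hamiltonian with a real negative coupling shows the split into a stabilized equal-weight superposition and a destabilized one.

```python
import numpy as np

E, W = -10.0, -1.5          # arbitrary illustrative values (energy units)
H = np.array([[E, W],
              [W, E]])
vals, vecs = np.linalg.eigh(H)
print(vals)                 # [E - |W|, E + |W|]  ->  [-11.5, -8.5]
print(vecs[:, 0])           # ground state: equal-weight superposition of |1> and |2>
```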
Resonances in avoided crossing
In molecules, the nonadiabatic couplings between two adiabatic potentials build the AC region. Because they are not in the bound-state region of the adiabatic potentials, the rovibronic resonances in the AC region of two coupled potentials are very special and usually do not play important roles in scattering processes. Exemplified by particle scattering, resonances in the AC region have been comprehensively investigated. The effects of resonances in the AC region on the scattering cross sections strongly depend on the nonadiabatic couplings of the system: they can be very significant, appearing as sharp peaks, or inconspicuous, buried in the background. More importantly, a simple quantity proposed by Zhu and Nakamura to classify the coupling strength of nonadiabatic interactions can be applied to quantitatively estimate the importance of resonances in the AC region.
General avoided crossing theorem
The above illustration of avoided crossing is, however, a very specific case. From a generalised viewpoint the phenomenon of avoided crossing is actually controlled by the parameters behind the perturbation. For the most general perturbation affecting a two-dimensional subspace of the Hamiltonian H, we may write the effective Hamiltonian matrix in that subspace as

H' = \begin{pmatrix} E_1 + W_{11} & W_{12} \\ W_{12} & E_2 + W_{22} \end{pmatrix}.

Here the elements of the state vectors were chosen to be real so that all the matrix elements become real. Now the eigenvalues of the system for this subspace are given by

E_\pm = \frac{H'_{11} + H'_{22}}{2} \pm \sqrt{\left(\frac{H'_{11} - H'_{22}}{2}\right)^{2} + (H'_{12})^{2}}.

The terms under the square root are squared real numbers, so for these two levels to cross we simultaneously require

H'_{11} = H'_{22} \quad\text{and}\quad H'_{12} = 0.
Now if the perturbation has k parameters \lambda_1, \ldots, \lambda_k, we may in general vary these numbers to satisfy these two equations.

If we choose fixed values for \lambda_1 to \lambda_{k-1}, then both of the equations above have one single free parameter \lambda_k. In general it is not possible to find one value of \lambda_k such that both equations are satisfied. However, if we allow another parameter to be free, both of these two equations will now be controlled by the same two parameters

H'_{11}(\lambda_{k-1}, \lambda_k) = H'_{22}(\lambda_{k-1}, \lambda_k) \quad\text{and}\quad H'_{12}(\lambda_{k-1}, \lambda_k) = 0,

and generally there will be values of these two parameters for which both equations are simultaneously satisfied. So with k distinct parameters, k − 2 of them can always be chosen arbitrarily, and we can still find two parameters \lambda_{k-1}, \lambda_k such that there is a crossing of energy eigenvalues. In other words, the values of E_+ and E_- are the same for k − 2 freely varying coordinates (while the remaining two coordinates are fixed by the condition equations). Geometrically, the two eigenvalue equations describe surfaces in the (k + 1)-dimensional space spanned by (\lambda_1, \ldots, \lambda_k, E).

Since their intersection is parametrized by k − 2 coordinates, we may formally state that for k continuous real parameters controlling the perturbed Hamiltonian, the levels (or surfaces) can only cross on a manifold of dimension k − 2. However, the symmetry of the Hamiltonian has a role to play in the dimensionality. If the original Hamiltonian has states of different symmetry, the off-diagonal term H'_{12} vanishes automatically, which removes the condition H'_{12} = 0. From similar arguments as posed above, it is then straightforward that for such an asymmetrical Hamiltonian, the intersection of energy surfaces takes place on a manifold of dimension k − 1.
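A small numerical check of this parameter-counting argument (the two-parameter family of real symmetric matrices below is illustrative, not drawn from any particular system): the two crossing conditions become two equations in two unknowns, whose simultaneous solution is an isolated point.

```python
import numpy as np

# Illustrative two-parameter family of real symmetric 2x2 matrices.
def H(x, y):
    return np.array([[x + 0.3 * y, 0.5 * x - y],
                     [0.5 * x - y, -x + 0.1]])

# Crossing requires H11 == H22 and H12 == 0 simultaneously:
#   x + 0.3*y = -x + 0.1   and   0.5*x - y = 0
# Two linear equations in two unknowns -> a single isolated crossing point.
A = np.array([[2.0, 0.3], [0.5, -1.0]])
b = np.array([0.1, 0.0])
x0, y0 = np.linalg.solve(A, b)
gap = np.diff(np.linalg.eigvalsh(H(x0, y0)))[0]
print(x0, y0, gap)   # the gap closes (gap ~ 0) only at this isolated point
```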
In polyatomic molecules
In an N-atomic polyatomic molecule there are 3N-6 vibrational coordinates (3N-5 for a linear molecule) that enter into the electronic Hamiltonian as parameters. For a diatomic molecule there is only one such coordinate, the bond length r. Thus, due to the avoided crossing theorem, in a diatomic molecule we cannot have level crossings between electronic states of the same symmetry. However, for a polyatomic molecule there is more than one geometry parameter in the electronic Hamiltonian and level crossings between electronic states of the same symmetry are not avoided.
See also
Geometric phase
Christopher Longuet-Higgins
Conical intersection
Vibronic coupling
Adiabatic theorem
Bond hardening
Bond softening
Landau–Zener formula
Level repulsion
References
Sources
Quantum mechanics
Quantum chemistry | Avoided crossing | [
"Physics",
"Chemistry"
] | 1,923 | [
"Quantum chemistry",
"Theoretical physics",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
2,174,032 | https://en.wikipedia.org/wiki/Custom%20hardware%20attack | In cryptography, a custom hardware attack uses specifically designed application-specific integrated circuits (ASIC) to decipher encrypted messages.
Mounting a cryptographic brute force attack requires a large number of similar computations: typically trying one key, checking if the resulting decryption gives a meaningful answer, and then trying the next key if it does not. Computers can perform these calculations at a rate of millions per second, and thousands of computers can be harnessed together in a distributed computing network. But the number of computations required on average grows exponentially with the size of the key, and for many problems standard computers are not fast enough. On the other hand, many cryptographic algorithms lend themselves to fast implementation in hardware, i.e. networks of logic circuits, also known as gates. Integrated circuits (ICs) are constructed of these gates and often can execute cryptographic algorithms hundreds of times faster than a general purpose computer.
Each IC can contain large numbers of gates (hundreds of millions in 2005). Thus, the same decryption circuit, or cell, can be replicated thousands of times on one IC. The communications requirements for these ICs are very simple. Each must be initially loaded with a starting point in the key space and, in some situations, with a comparison test value (see known plaintext attack). Output consists of a signal that the IC has found an answer and the successful key.
Since ICs lend themselves to mass production, thousands or even millions of ICs can be applied to a single problem. The ICs themselves can be mounted in printed circuit boards. A standard board design can be used for different problems since the communication requirements for the chips are the same. Wafer-scale integration is another possibility. The primary limitations on this method are the cost of chip design, IC fabrication, floor space, electric power and thermal dissipation.
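A toy software sketch of the cell arrangement described above (the cipher and the "meaningful answer" test are hypothetical stand-ins, not a real algorithm): each cell is seeded with its own starting point in the key space, runs the same try-and-test loop, and reports the successful key.

```python
# Toy sketch of splitting a brute-force key search across identical "cells".
# toy_decrypt and looks_meaningful are stand-ins for a real cipher and a
# plaintext test; a hardware cell would implement these as logic gates.
def toy_decrypt(ciphertext: bytes, key: int) -> bytes:
    return bytes(b ^ ((key >> (8 * (i % 4))) & 0xFF) for i, b in enumerate(ciphertext))

def looks_meaningful(plaintext: bytes) -> bool:
    return plaintext.startswith(b"ATTACK")           # known-plaintext comparison value

def search_cell(ciphertext: bytes, start: int, count: int):
    for key in range(start, start + count):          # each cell scans its own slice
        if looks_meaningful(toy_decrypt(ciphertext, key)):
            return key                                # "answer found" signal plus the key
    return None

KEY_BITS, CELLS = 20, 8                               # tiny toy key space
ciphertext = toy_decrypt(b"ATTACK AT DAWN", key=0xBCDEF)   # XOR is its own inverse
slice_size = (1 << KEY_BITS) // CELLS
hits = [search_cell(ciphertext, c * slice_size, slice_size) for c in range(CELLS)]
print([hex(k) for k in hits if k is not None])        # ['0xbcdef']
```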
History
The earliest custom hardware attack may have been the Bombe used to recover Enigma machine keys in World War II. In 1998, a custom hardware attack was mounted against the Data Encryption Standard cipher by the Electronic Frontier Foundation. Their "Deep Crack" machine cost U.S. $250,000 to build and decrypted the DES Challenge II-2 test message after 56 hours of work. The only other confirmed DES cracker was the COPACOBANA machine (Cost-Optimized PArallel COde Breaker) built in 2006. Unlike Deep Crack, COPACOBANA consists of commercially available FPGAs (reconfigurable logic gates). COPACOBANA costs about $10,000 to build and will recover a DES key in under 6.4 days on average. The cost decrease by roughly a factor of 25 over the EFF machine is an impressive example of the continuous improvement of digital hardware. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008, their COPACOBANA RIVYERA reduced the time to break DES to the current record of less than one day, using 128 Spartan-3 5000's. It is generally believed that large government code breaking organizations, such as the U.S. National Security Agency, make extensive use of custom hardware attacks, but no examples have been declassified or leaked .
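A back-of-the-envelope check of the COPACOBANA figure quoted above: searching half of the 2^56 DES key space in about 6.4 days implies a machine-wide rate on the order of 6×10^10 keys per second.

```python
avg_keys = 2 ** 56 / 2                       # expected number of DES trials (half the key space)
seconds = 6.4 * 24 * 3600                    # "under 6.4 days on average"
print(f"{avg_keys / seconds:.2e} keys/s")    # roughly 6.5e+10 keys per second
```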
See also
TWIRL
TWINKLE
References
External links
COPACOBANA, Cost Optimized Code Breaker and Analyzer
Cryptographic attacks
Hardware acceleration | Custom hardware attack | [
"Technology"
] | 719 | [
"Cryptographic attacks",
"Hardware acceleration",
"Computer security exploits",
"Computer systems"
] |
2,174,079 | https://en.wikipedia.org/wiki/Chen%E2%80%93Ho%20encoding | Chen–Ho encoding is a memory-efficient alternate system of binary encoding for decimal digits.
The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD.
The encoding reduces the storage requirements of two decimal digits (100 states) from 8 to 7 bits, and those of three decimal digits (1000 states) from 12 to 10 bits using only simple Boolean transformations avoiding any complex arithmetic operations like a base conversion.
History
In what appears to have been a multiple discovery, some of the concepts behind what later became known as Chen–Ho encoding were independently developed by Theodore M. Hertz in 1969 and by Tien Chi Chen (1928–) in 1971.
Hertz of Rockwell filed a patent for his encoding in 1969, which was granted in 1971.
Chen first discussed his ideas with Irving Tze Ho (1921–2003) in 1971. Chen and Ho were both working for IBM at the time, albeit in different locations. Chen also consulted with Frank Chin Tung to verify the results of his theories independently. IBM filed a patent in their name in 1973, which was granted in 1974. At least by 1973, Hertz's earlier work must have been known to them, as the patent cites his patent as prior art.
With input from Joseph D. Rutledge and John C. McPherson, the final version of the Chen–Ho encoding was circulated inside IBM in 1974 and published in 1975 in the journal Communications of the ACM. This version included several refinements, primarily related to the application of the encoding system. It constitutes a Huffman-like prefix code.
The encoding was referred to as Chen and Ho's scheme in 1975, Chen's encoding in 1982 and became known as Chen–Ho encoding or Chen–Ho algorithm since 2000. After having filed a patent for it in 2001, Michael F. Cowlishaw published a further refinement of Chen–Ho encoding known as densely packed decimal (DPD) encoding in IEE Proceedings – Computers and Digital Techniques in 2002. DPD has subsequently been adopted as the decimal encoding used in the IEEE 754-2008 and ISO/IEC/IEEE 60559:2011 floating-point standards.
Application
Chen noted that the digits zero through seven were simply encoded using three binary digits of the corresponding octal group. He also postulated that one could use a flag to identify a different encoding for the digits eight and nine, which would be encoded using a single bit.
In practice, a series of Boolean transformations are applied to the stream of input bits, compressing BCD encoded digits from 12 bits per three digits to 10 bits per three digits. Reversed transformations are used to decode the resulting coded stream to BCD. Equivalent results can also be achieved by the use of a look-up table.
Chen–Ho encoding is limited to encoding sets of three decimal digits into groups of 10 bits (so called declets). Of the 1024 states possible by using 10 bits, it leaves only 24 states unused (with don't care bits typically set to 0 on write and ignored on read). With only 2.34% wastage it gives a 20% more efficient encoding than BCD with one digit in 4 bits.
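A quick check of the figures quoted above (this verifies only the storage arithmetic, not the Chen–Ho bit transformation itself, which is defined by the tables below):

```python
declet_states, needed = 2 ** 10, 10 ** 3
print(f"unused codes: {declet_states - needed}")                     # 24
print(f"wastage: {(declet_states - needed) / declet_states:.2%}")    # 2.34%
print(f"bits per digit: BCD = {12 / 3}, declet = {10 / 3:.2f}")      # 4.0 vs 3.33
```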
Both Hertz and Chen also proposed similar, but less efficient, encoding schemes to compress sets of two decimal digits (requiring 8 bits in BCD) into groups of 7 bits.
Larger sets of decimal digits could be divided into three- and two-digit groups.
The patents also discuss the possibility of adapting the scheme to digits encoded in decimal codes other than 8-4-2-1 BCD, such as Excess-3, Excess-6, Jump-at-2, Jump-at-8, Gray, Glixon, O'Brien type-I and Gray–Stibitz code. The same principles could also be applied to other bases.
In 1973, some form of Chen–Ho encoding appears to have been utilized in the address conversion hardware of the optional IBM 7070/7074 emulation feature for the IBM System/370 Model 165 and 370 Model 168 computers.
One prominent application uses a 128-bit register to store 33 decimal digits with a three digit exponent, effectively not less than what could be achieved using binary encoding (whereas BCD encoding would need 144 bits to store the same number of digits).
Encodings for two decimal digits
Hertz encoding
This encoding is not parity-preserving.
Early Chen–Ho encoding, method A
This encoding is not parity-preserving.
Early Chen–Ho encoding, method B
This encoding is not parity-preserving.
Patented and final Chen–Ho encoding
Assuming certain values for the don't-care bits (e.g. 0), this encoding is parity-preserving.
Encodings for three decimal digits
Hertz encoding
This encoding is not parity-preserving.
Early Chen–Ho encoding
This encoding is not parity-preserving.
Patented Chen–Ho encoding
This encoding is not parity-preserving.
Final Chen–Ho encoding
This encoding is not parity-preserving.
Storage efficiency
See also
Binary-coded decimal (BCD)
Densely packed decimal (DPD)
DEC RADIX 50 / MOD40
IBM SQUOZE
Packed BCD
Unicode transformation format (UTF) (similar encoding scheme)
Length-limited Huffman code
Notes
References
Further reading
(60 pages) , (40 pages) and (11 pages) (NB. Three expired patents cited in both, the and s.)
Binary arithmetic | Chen–Ho encoding | [
"Mathematics"
] | 1,173 | [
"Arithmetic",
"Binary arithmetic"
] |
2,174,083 | https://en.wikipedia.org/wiki/Power-system%20automation | Power-system automation is the act of automatically controlling the power system via instrumentation and control devices. Substation automation refers to using data from Intelligent electronic devices (IED), control and automation capabilities within the substation, and control commands from remote users to control power-system devices.
Since full substation automation relies on substation integration, the terms are often used interchangeably. Power-system automation includes processes associated with generation and delivery of power. Monitoring and control of power delivery systems in the substation and on the pole reduce the occurrence of outages and shorten the duration of outages that do occur. The IEDs, communications protocols, and communications methods, work together as a system to perform power-system automation.
The term “power system” describes the collection of devices that make up the physical systems that generate, transmit, and distribute power. The term “instrumentation and control (I&C) system” refers to the collection of devices that monitor, control, and protect the power system. Many power-system automation functions are monitored by SCADA.
Automation tasks
Power-system automation is composed of several tasks.
Data acquisition Data acquisition refers to acquiring, or collecting, data. This data is collected in the form of measured analog current or voltage values or the open or closed status of contact points. Acquired data can be used locally within the device collecting it, sent to another device in a substation, or sent from the substation to one or several databases for use by operators, engineers, planners, and administration.
Supervision Computer processes and personnel supervise, or monitor, the conditions and status of the power system using this acquired data. Operators and engineers monitor the information remotely on computer displays and graphical wall displays or locally, at the device, on front-panel displays and laptop computers.
Control Control refers to sending command messages to a device to operate the I&C and power-system devices. Traditional supervisory control and data acquisition (SCADA) systems rely on operators to supervise the system and initiate commands from an operator console on the master computer. Field personnel can also control devices using front-panel push buttons or a laptop computer.
In addition, another task is power-system integration, which is the act of communicating data to, from, or among IEDs in the I&C system and remote users. Substation integration refers to combining data from the IED's local to a substation so that there is a single point of contact in the substation for all of the I&C data.
Power-system automation processes rely on data acquisition, power-system supervision, and power-system control all working together in a coordinated, automatic fashion. The commands are generated automatically and then transmitted in the same fashion as operator-initiated commands.
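As an illustration of how these tasks fit together, the sketch below mimics one acquisition–supervision–control cycle; all function names, tag names, and the threshold are hypothetical stand-ins rather than any vendor's interface.

```python
# Illustrative automation cycle: acquire -> supervise -> control.
OVERCURRENT_LIMIT_A = 600.0   # hypothetical protection threshold

def read_analog(tag: str) -> float:
    # Data acquisition: in a real system, a measured value from an instrument
    # transformer delivered via an RTU or IED.
    return 0.0  # placeholder

def trip_breaker(breaker_id: str) -> None:
    # Control: a command message sent to the switchgear.
    print(f"trip command sent to {breaker_id}")

def automation_cycle() -> None:
    current = read_analog("feeder_3/phase_A_current")   # acquisition
    if current > OVERCURRENT_LIMIT_A:                   # supervision
        trip_breaker("feeder_3/CB1")                    # automatic control action

automation_cycle()
```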
Hardware structure of the power-system automation
Data acquisition system
The instrument transformers with protective relays are used to sense the power-system voltage and current. They are physically connected to power-system apparatus and convert the actual power-system signals. The transducers convert the analog output of an instrument transformer from one magnitude to another or from one value type to another, such as from an ac current to dc voltage. Also the input data is taken from the auxiliary contacts of switch gears and power-system control equipment.
Main processing instrumentation and control (I&C) device
The I&C devices built using microprocessors are commonly referred to as intelligent electronic devices (IEDs). Microprocessors are single chip computers that allow the devices into which they are built to process data, accept commands, and communicate information like a computer. Automatic processes can be run in the IEDs. Some IEDs used in power-system automation are:
Remote Terminal Unit (RTU) A remote terminal unit is an IED that can be installed in a remote location, and acts as a termination point for field contacts. A dedicated pair of copper conductors is used to sense every contact and transducer value. These conductors originate at the power-system device, are installed in trenches or overhead cable trays, and are then terminated on panels within the RTU. The RTU can transfer collected data to other devices and receive data and control commands from other devices. User programmable RTUs are referred to as “smart RTUs.”
Meter A meter is an IED that is used to create accurate measurements of power-system current, voltage, and power values. Metering values such as demand and peak are saved within the meter to create historical information about the activity of the power system.
Digital fault recorder A digital fault recorder (DFR) is an IED that records information about power-system disturbances. It is capable of storing data in a digital format when triggered by conditions detected on the power system. Harmonics, frequency, and voltage are examples of data captured by DFRs.
Programmable logic controller (PLC) A Programmable Logic Controller can be programmed to perform logical control. As with the RTU, a dedicated pair of copper conductors for each contact and transducer value is terminated on panels within the PLC. It acts as a workhorse, carrying out the control commands issued by its master device.
Protective relay A protective relay is an IED designed to sense power-system disturbances and automatically perform control actions on the I&C system and the power system to protect personnel and equipment. The relay has local termination so that the copper conductors for each contact do not have to be routed to a central termination panel associated with RTU.
Controlling (output) devices
Load tap changer (LTC) Load tap changers are devices used to change the tap position on transformers. These devices work automatically or can be controlled via another local IED or from a remote operator or process.
Recloser controller Recloser controllers remotely control the operation of automated reclosers and switches. These devices monitor and store power-system conditions and determine when to perform control actions. They also accept commands from a remote operator or process.
Communications devices
Communications processor A communications processor is a substation controller that incorporates the functions of many other I&C devices into one IED. It has many communications ports to support multiple simultaneous communications links. The communications processor performs data acquisition and control of the other substation IEDs and also concentrates the data it acquires for transmission to one or many masters inside and outside the substation.
Applications
Overcurrent protection
All lines and all electrical equipment must be protected against prolonged overcurrent. If the cause of the overcurrent is nearby, that current is automatically interrupted immediately. If the cause of the overcurrent is outside the local area, a backup provision automatically disconnects all affected circuits after a suitable time delay.
Note that disconnection can, unfortunately, have a cascade effect, leading to overcurrent in other circuits that then also must therefore disconnect automatically.
Also note that generators that suddenly have lost their load because of such a protection operation will have to shut down automatically immediately, and it may take many hours to restore a proper balance between demand and supply in the system, partly because there must be proper synchronization before any two parts of the system can be reconnected.
Reclosing operations of circuit breakers usually are attempted automatically, and often are successful during thunderstorms, for example.
Supervisory control and data acquisition
A supervisory control and data acquisition system (SCADA) transmits and receives commands or data from process instruments and equipment. Power system elements ranging from pole-mounted switches to entire power plants can be controlled remotely over long distance communication links. Remote switching, telemetering of grids (showing voltage, current, power, direction, consumption in kWh, etc.), even automatic synchronization is used in some power systems.
Optical fibers
Power utility companies protect high voltage lines by monitoring them constantly. This supervision requires the transmission of information between the power substations in order to ensure correct operation while controlling every alarm and failure. Legacy telecom networks were interconnected with metallic wires, but the substation environment is characterized by a high level of electromagnetic fields that may disturb copper wires.
Authorities use a tele-protection scheme to enable substations to communicate with one another to selectively isolate faults on high voltage lines, transformers, reactors and other important elements of the electrical plants. This functionality requires the continuous exchange of critical data in order to assure correct operation. In order to guarantee this operation, the telecom network should always be in perfect condition in terms of availability, performance, quality and delay.
Initially these networks were made of metallic conductive media, however the vulnerability of the 56–64 kbit/s channels to electromagnetic interference, signal ground loops, and ground potential rise made them too unreliable for the power industry. Strong electromagnetic fields caused by the high voltages and currents in power lines occur regularly in electric substations.
Moreover, during fault conditions electromagnetic perturbations may rise significantly and disturb those communications channels based on copper wires. The reliability of the communications link interconnecting the protection relays is critical and therefore must be resistant to effects encountered in high voltage areas, such as high frequency induction and ground potential rise.
Consequently, the power industry moved to optical fibers to interconnect the different items installed in substations. Fiber optics need not be grounded and are immune to the interferences caused by electrical noise, eliminating many of the errors commonly seen with electrical connections. The use of fully optical links from power relays to multiplexers as described by IEEE C37.94 became standard.
A more sophisticated architecture for the protection scheme emphasizes the notion of fault tolerant networks. Instead of using a direct relay connection and dedicated fibers, redundant connections make the protection process more reliable by increasing the availability of critical data interchanges.
C37.94
IEEE C37.94, full title IEEE Standard for N Times 64 Kilobit Per Second Optical Fiber Interfaces Between Teleprotection and Multiplexer Equipment, is an IEEE standard, published in 2002, that defines the rules to interconnect tele-protection and multiplexer devices of power utility companies. The standard defines a data frame format for optical interconnection, and references standards for the physical connector for multi-mode optical fiber. Furthermore, it defines behavior of connected equipment on failure of the link, and the timing and optical signal characteristics.
Teleprotection systems must isolate faults very quickly to prevent damage to the network and power outages. The IEEE committee defined C37.94 as a programmable n x 64 kbit/s (n=1...12) multimode optical fiber interface to provide transparent communications between teleprotection relays and multiplexers for distances of up to 2 km. To reach longer distances, the power industry later adopted a single mode optical fiber interface as well.
The standard defines the protection and communications equipment inside a substation using optical fibers, the method for clock recovery, the jitter tolerances allowed in the signals, the physical connection method, and the actions the protection equipment must follow when any kind of network anomalies and faults occur. C37.94 was already implemented by many protection relay manufacturers such as ABB, SEL, RFL, and RAD; and tester manufacturers such as Net Research (NetProbe 2000), ALBEDO and VEEX. Teleprotection equipment once offered a choice of transmission interfaces, such as the IEEE C37.94 compliant optical fiber interface for transmission over fiber pairs, and G.703, 64 kbit/s co-directional and E1 interfaces.
References
See also
Automatic generation control
Smart grid
Smart meter
International Council on Large Electric Systems (CIGRE)
SCADA
Electric power
Smart grid
Electric power transmission
Electrical engineering | Power-system automation | [
"Physics",
"Engineering"
] | 2,376 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
2,174,184 | https://en.wikipedia.org/wiki/Piecewise%20linear%20manifold | In mathematics, a piecewise linear manifold (PL manifold) is a topological manifold together with a piecewise linear structure on it. Such a structure can be defined by means of an atlas, such that one can pass from chart to chart in it by piecewise linear functions. This is slightly stronger than the topological notion of a triangulation.
An isomorphism of PL manifolds is called a PL homeomorphism.
Relation to other categories of manifolds
PL, or more precisely PDIFF, sits between DIFF (the category of smooth manifolds) and TOP (the category of topological manifolds): it is categorically "better behaved" than DIFF — for example, the Generalized Poincaré conjecture is true in PL (with the possible exception of dimension 4, where it is equivalent to DIFF), but is false generally in DIFF — but is "worse behaved" than TOP, as elaborated in surgery theory.
Smooth manifolds
Smooth manifolds have canonical PL structures — they are uniquely triangulizable, by Whitehead's theorem on triangulation — but PL manifolds do not always have smooth structures — they are not always smoothable. This relation can be elaborated by introducing the category PDIFF, which contains both DIFF and PL, and is equivalent to PL.
One way in which PL is better behaved than DIFF is that one can take cones in PL, but not in DIFF — the cone point is acceptable in PL.
A consequence is that the Generalized Poincaré conjecture is true in PL for dimensions greater than four — the proof is to take a homotopy sphere, remove two balls, apply the h-cobordism theorem to conclude that this is a cylinder, and then attach cones to recover a sphere. This last step works in PL but not in DIFF, giving rise to exotic spheres.
Topological manifolds
Not every topological manifold admits a PL structure, and of those that do, the PL structure need not be unique—it can have infinitely many. This is elaborated at Hauptvermutung.
The obstruction to placing a PL structure on a topological manifold is the Kirby–Siebenmann class. To be precise, the Kirby-Siebenmann class is the obstruction to placing a PL-structure on M x R and in dimensions n > 4, the KS class vanishes if and only if M has at least one PL-structure.
Real algebraic sets
An A-structure on a PL manifold is a structure which gives an inductive way of resolving the PL manifold to a smooth manifold. Compact PL manifolds admit A-structures. Compact PL manifolds are homeomorphic to real-algebraic sets. Put another way, A-category sits over the PL-category as a richer category with no obstruction to lifting, that is BA → BPL is a product fibration with BA = BPL × PL/A, and PL manifolds are real algebraic sets because A-manifolds are real algebraic sets.
Combinatorial manifolds and digital manifolds
A combinatorial manifold is a kind of manifold which is a discretization of a manifold. It usually means a piecewise linear manifold made from simplicial complexes.
A digital manifold is a special kind of combinatorial manifold which is defined in digital space. See digital topology.
See also
Simplicial manifold
Notes
References
Structures on manifolds
Geometric topology
Manifolds | Piecewise linear manifold | [
"Mathematics"
] | 692 | [
"Space (mathematics)",
"Geometric topology",
"Topological spaces",
"Topology",
"Manifolds"
] |
2,174,240 | https://en.wikipedia.org/wiki/Betahistine | Betahistine, sold under the brand name Serc among others, is an anti-vertigo medication. It is commonly prescribed for balance disorders or to alleviate vertigo symptoms. It was first registered in Europe in 1970 for the treatment of Ménière's disease, but current evidence does not support its efficacy in treating it.
Medical uses
Betahistine was once believed to have some positive effects in the treatment of Ménière's disease and vertigo, but more recent evidence casts doubt on its efficacy. Studies of the use of betahistine have shown a reduction in symptoms of vertigo and, to a lesser extent, tinnitus, but conclusive evidence is lacking at present.
Oral betahistine has been approved for the treatment of Ménière's disease and vestibular vertigo in more than 80 countries worldwide, and has been reportedly prescribed for more than 130 million patients. However, betahistine has not been approved for marketing in the United States for the past few decades, and there is disagreement about its efficacy.
The Cochrane Library concluded in 2001 that "Most trials suggested a reduction of vertigo with betahistine and some suggested a reduction in tinnitus but all these effects may have been caused by bias in the methods. One trial with good methods showed no effect of betahistine on tinnitus compared with placebo in 35 patients. None of the trials showed any effect of betahistine on hearing loss. No serious adverse effects were found with betahistine."
An intranasal formulation of betahistine dihydrochloride received orphan drug designation from the US Food and Drug Administration (FDA) for the treatment of obesity associated with Prader–Willi syndrome, a rare genetic disorder.
Betahistine is also undergoing clinical trials for the treatment of attention deficit hyperactivity disorder (ADHD).
Contraindications
Betahistine is contraindicated for patients with pheochromocytoma. Patients with bronchial asthma or a history of peptic ulcer need to be closely monitored.
Adverse effects
Patients taking betahistine may experience the following adverse effects:
Headache
Low level of gastric adverse effects
Nausea can be an adverse effect, but patients are often already experiencing nausea owing to vertigo, so it goes largely unnoticed.
Patients taking betahistine may experience hypersensitivity and allergic reactions. In the November 2006 issue of "Drug Safety", Dr. Sabine Jeck-Thole and Dr. Wolfgang Wagner reported that betahistine may cause allergic and skin-related adverse effects. These include rashes in several areas of the body; itching and urticaria (hives); and swelling of the face, tongue, and mouth. Other hypersensitivity reactions reported include tingling, numbness, burning sensations, shortness of breath, and laboured breathing. The study authors suggested that hypersensitivity reactions may be a direct result of betahistine's role in increasing histamine concentrations throughout the body. Hypersensitivity reactions quickly subside after betahistine has been discontinued.
Digestive
Betahistine may also cause several digestive-related adverse effects. The package insert for Serc, a trade name for betahistine, states that patients may experience several gastrointestinal side effects. These may include nausea, upset stomach, vomiting, diarrhea, dry mouth, and stomach cramping. These symptoms are usually not serious and subside between doses. Patients experiencing chronic digestive problems may lower their dose to the minimum effective and may mitigate the effects by taking betahistine with meals. Additional digestive problems may require that patients consult their physician in order to find a possible suitable alternative.
Others
People taking betahistine may experience several other adverse effects ranging from mild to serious. The package insert for Serc states that patients may experience nervous-system side effects, including headache. Some nervous system events may also partly be attributable to the underlying condition, rather than the medication used to treat it. Jeck-Thole and Wagner also reported that patients may experience headache and liver problems, including increased liver enzymes and bile-flow disturbances. Any adverse effects that persist or outweigh the relief of symptoms of the original condition may warrant that the patient consult their physician to adjust or change the medication.
Pharmacology
Pharmacodynamics
Betahistine is a weak antagonist or inverse agonist at histamine H₃ receptors and a weak partial agonist at histamine H₁ receptors.
Betahistine primarily acts on histamine H₁ receptors located on blood vessels in the inner ear, leading to vasodilation and increased vascular permeability. These effects help to alleviate endolymphatic hydrops, a key factor in Ménière's disease.
Additionally, betahistine interacts with histamine H₃ receptors as a weak antagonist or inverse agonist. By modulating H₃ receptors, betahistine increases the release of various neurotransmitters, including histamine, acetylcholine, norepinephrine, serotonin, and GABA from nerve endings. This mechanism contributes to improved blood flow in the inner ear and modulation of vestibular compensation.
Betahistine's effects on neurotransmitter release, particularly serotonin, may also play a role in its modulation of vestibular nuclei activity in the brainstem. These combined actions help alleviate symptoms associated with vestibular disorders.
Pharmacokinetics
Betahistine comes in both a tablet form and as an oral solution, and is taken orally. It is rapidly and completely absorbed. The mean plasma elimination half-life is 3 to 4 hours, and excretion is virtually complete in the urine within 24 hours. Plasma protein binding is very low. Betahistine is converted to aminoethylpyridine and hydroxyethylpyridine and excreted in the urine as pyridylacetic acid. There is some evidence that one of these metabolites, aminoethylpyridine, may be active and have effects similar to those of betahistine on ampullar receptors.
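A simple one-compartment, first-order elimination estimate (an assumption made here for illustration, covering only the half-life quoted above) is consistent with these figures: with a 3 to 4 hour half-life, roughly 0.4% to 1.6% of the absorbed dose remains in plasma after 24 hours.

```python
import math

for half_life_h in (3.0, 4.0):
    k = math.log(2) / half_life_h                 # first-order elimination constant
    remaining = math.exp(-k * 24)                 # fraction left after 24 hours
    print(f"t1/2 = {half_life_h} h -> {remaining:.2%} remaining after 24 h")
```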
Chemistry
Betahistine chemically is 2-[2-(methylamino)ethyl]pyridine, and is formulated as the dihydrochloride salt. Its chemical structure closely resembles those of phenethylamine and histamine.
Society and culture
Brand names
Betahistine is marketed under a number of brand names, including Veserc, Serc, Hiserk, Betaserc, and Vergo.
Availability
Betahistine is widely used and available in over 115 countries worldwide, including in the United Kingdom.
United States
Betahistine, marketed as Serc, received initial approval from the US Food and Drug Administration (FDA) in November 1966 for the treatment of Ménière's disease. This approval was based on a single clinical study conducted by Joseph Elia and published in the Journal of the American Medical Association (JAMA) in April of that year. However, concerns soon arose regarding the study's methodology and the strength of its findings, with public criticism appearing in publications such as the Medical Letter on Drugs and Therapeutics. This prompted an FDA investigation, culminating in the agency obtaining Elia's original study data in April 1967. Subsequent review of the data revealed inadequacies, leading the FDA to issue a notice of intent to withdraw approval in 1968.
Instead of immediate withdrawal, the FDA engaged in discussions with Unimed, the manufacturer, regarding the design of a new clinical trial. This decision not to immediately remove betahistine from the market drew congressional scrutiny, particularly from Representative Lawrence Fountain, who cited the Food, Drug, and Cosmetic Act's mandate for withdrawal when substantial evidence of efficacy is lacking. Internal dissent within the FDA regarding the original approval and its reliance on a single study further complicated the situation. The controversy unfolded against the backdrop of the 1962 Kefauver–Harris Amendment, which had strengthened requirements for demonstrating drug efficacy. Ultimately, the FDA terminated betahistine's new drug application on December 21, 1972, following a lawsuit filed by Consumers Union. Unimed's attempted legal challenge to maintain the drug's market presence was also unsuccessful, with the US Court of Appeals for the Second Circuit upholding the FDA's withdrawal. Betahistine remains unapproved by the FDA, although it is available through compounding pharmacies.
See also
2-Pyridylethylamine
References
Amines
H3 receptor antagonists
Histamine agonists
2-Pyridyl compounds
Vasodilators | Betahistine | [
"Chemistry"
] | 1,766 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
2,174,251 | https://en.wikipedia.org/wiki/Mechanically%20powered%20flashlight | A mechanically powered flashlight (UK: mechanically powered torch) is a flashlight that is powered by electricity generated by the muscle power of the user, so it does not need replacement of batteries, or recharging from an electrical source. There are several types which use different operating mechanisms. They use different motions to generate the required power; such as squeezing a handle, winding a crank, or shaking the flashlight itself. These flashlights can also be distinguished by the technique used to store the energy: a spring, a flywheel, a battery or a capacitor.
Since they are always ready for use, mechanically powered flashlights are often kept as emergency lights in case of power outages or other emergencies. They are also kept at vacation homes, cabins, and other remote locations because they are not limited by battery shelf life like ordinary flashlights. They are considered a green technology, because the disposable batteries used by ordinary flashlights are wasteful in terms of resources used for the amount of energy produced, and also contain heavy metals and toxic chemicals which end up in the environment.
Dyno torch
A dyno torch, dynamo torch, or squeeze flashlight is a flashlight or pocket torch which generates energy via a flywheel. The user repeatedly squeezes a handle to spin a flywheel inside the flashlight, attached to a small generator/dynamo, supplying electric current to an incandescent bulb or light-emitting diode. The flashlight must be pumped continuously during use, with the flywheel turning the generator between squeezes to keep the light going continuously. Because electrical power is produced only when the handle is squeezed, a switch is unnecessary. Dyno lights were issued to German Wehrmacht soldiers during World War II. They were popular in Europe during the war because the electrical power supply to homes was unreliable. In addition to "squeeze flashlight", American soldiers often referred to such lights as "squeezy flashlight" or even "squeegee flashlight".
A version using a pull-cord was used in World War I.
The photo shows the internal mechanism: the L-shaped handle has a gear rack, which spins the white step-up gear, which in turn spins the flywheel on which is mounted both a centrifugal clutch (to allow freewheeling after the lever stops its travel and then returns) and a dark grey magnet, seen on the lower left. The magnet induces an electric current as it spins around the red copper winding, seen on the lower right. The current from the copper winding flows through the filament of an incandescent light bulb (not shown), giving off light. An L-shaped spring returns the handle to its original position after each engagement.
Shake-type design
The linear induction, Faraday flashlight, or "shake flashlight" is another type of mechanically powered flashlight. It has been sold in the US beginning with direct marketing campaigns in 2002.
This design contains a linear electrical generator which charges a supercapacitor which functions similarly to a rechargeable battery when the flashlight is shaken lengthwise. The battery or capacitor powers a white LED lamp. The linear generator consists of a sliding rare-earth magnet which moves back and forth through the center of a solenoid (a coil of copper wire) when it is shaken. A current is induced in the loops of wire by Faraday's law of induction each time the magnet slides through, which charges the capacitor through a rectifier and other circuitry.
The best designs use a supercapacitor instead of a rechargeable battery, since these have a longer working life than a battery. This, along with the long-life light-emitting diode which does not burn out like an incandescent bulb, gives the flashlight a long lifetime, making it a useful emergency light. A disadvantage of many current models is that the supercapacitor cannot store much energy in comparison to a lithium-ion cell, limiting the operating time per charge. In most designs, vigorously shaking the light for about 30 seconds may provide up to 5 minutes of light, though the advertised time omits the reduced output of the LED after 2 or 3 minutes. Shaking the unit for 10 to 15 seconds every 2 or 3 minutes as necessary permits the device to be used continuously. It is often viewed as a toy, or an emergency backup for other flashlights.
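A rough energy-budget estimate is consistent with these runtimes; the capacitance, voltage, and LED power below are assumed illustrative values, and converter losses are ignored.

```python
# Rough energy budget for a shake light (all values assumed for illustration).
capacitance_F = 1.0          # small supercapacitor
voltage_V = 5.0              # charged terminal voltage
led_power_W = 0.03           # dim white LED drawing about 30 mW

stored_J = 0.5 * capacitance_F * voltage_V ** 2          # E = 1/2 C V^2
runtime_min = stored_J / led_power_W / 60
print(f"{stored_J:.1f} J stored -> about {runtime_min:.0f} min of light")
```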
Fraudulent counterfeit versions of these flashlights have been sold, most of which incorporate hidden coin-sized non-rechargeable lithium batteries. The expensive supercapacitor is omitted from the internal components. In some of these fake designs, the "magnet" is not a magnet or the coil is not connected, and no electricity is generated when the device is shaken. These fraudulent flashlights eventually become useless, since their internal batteries cannot be recharged or replaced, and the case is often permanently glued shut.
Crank-powered design
Another common type is the windup or crank-powered flashlight, with the light powered by a battery which is recharged by a generator turned by a hand crank on the flashlight. One minute of cranking typically provides about 30 to 60 minutes of light. It has the advantage that it doesn't have to be pumped continually during use like the dyno torch or some shake flashlights. However it may be less reliable as an emergency light, because the rechargeable battery it contains eventually wears out. The lithium-ion cells used are typically rated for around 500 charges.
In an alternative "Clockwork Torch" design, produced by Freeplay Energy, the energy is mechanically stored in a flat spiral wound mainspring, rather than a battery. The owner winds the spring up by turning the crank. Then when the light is turned on (by releasing a mechanical brake), the spring unwinds, turning a generator to provide power to run the light. The purpose of this design, originally invented for use in the developing world, was to improve its reliability and useful lifetime by avoiding or reducing reliance on a battery. By 2012 the original design was no longer made, but updated smaller hand-cranked models using LEDs were still available.
Other functions
Some mechanically powered flashlights include additional functions and features beyond just a source of light. Models sold as emergency lights have additional functions intended to be used in emergencies, such as flashing red or yellow lights for roadside emergencies, sirens, and radios such as AM/FM, weather, or shortwave radios. They may also include alternative means of charging the battery, such as an AC adaptor, solar cells, or cords that plug into a cigarette lighter socket in a car.
Crank powered flashlights often have radios and other features. One popular feature is a 5-volt USB charging port for recharging cell phones when an outlet is not available. The quality and long-term reliability of these devices vary over a wide range, from high-reliability mil-spec emergency equipment down to one-time-use non-repairable disposables.
"Steel mills"
The first mechanically powered portable illumination was the "steel mill", used in coal mining during the 1800s. These lamps consisted of a steel disk, rotated at high speed by a crank mechanism. Pressing a flint against the disk produced a shower of sparks and dim illumination. These mills were only used in coal mines, where a risk of explosive firedamp gas made candle lighting unsafe. Caution was required to observe the sparks, so as not to generate very hot sparks that could ignite firedamp. These mills were troublesome to use and were often worked by a boy, whose only task was to provide light for a group of miners. One of the first of these mills was the 18th century Spedding mill, the Spedding family having a long association as the agents for the Lowther family of Westmorland and the Whitehaven collieries.
Steel mills went out of favor after the introduction of the much less cumbersome Davy and Geordie lamps from 1815. The mill idea was revived in 1946, based on the developed technology of cigarette lighters and ferrocerium flints. A spring-wound lamp with eight flints was suggested for emergency signalling at sea.
See also
Human-powered equipment
GravityLight, a mechanically powered and gravity-powered lamp
Solar-powered flashlight
References
Flashlights
Human power | Mechanically powered flashlight | [
"Physics"
] | 1,693 | [
"Power (physics)",
"Physical quantities",
"Human power"
] |
2,174,269 | https://en.wikipedia.org/wiki/Reduced%20product | In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct.
Let {Si | i ∈ I} be a nonempty family of structures of the same signature σ indexed by a set I, and let U be a proper filter on I. The domain of the reduced product is the quotient of the Cartesian product

\prod_{i \in I} S_i

by a certain equivalence relation ~: two elements (a_i) and (b_i) of the Cartesian product are equivalent if

\{\, i \in I : a_i = b_i \,\} \in U.
If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the direct product. If U is an ultrafilter, the reduced product is an ultraproduct.
Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by

R\big((a^1_i)/{\sim}, \ldots, (a^n_i)/{\sim}\big) \iff \{\, i \in I : R^{S_i}(a^1_i, \ldots, a^n_i) \,\} \in U.
For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as (a + b)i = ai + bi and multiplication by a scalar c as (ca)i = c ai.
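A finite sketch of these definitions (illustrative only; on a finite index set every proper filter is principal, so the filter below is generated by J = {0, 1}):

```python
from itertools import combinations

I = (0, 1, 2)
J = frozenset({0, 1})                                   # generator of a principal filter
subsets = [frozenset(c) for r in range(len(I) + 1) for c in combinations(I, r)]
U = [S for S in subsets if J <= S]                      # proper filter: all supersets of J

def equivalent(a, b):
    """(a_i) ~ (b_i) iff the agreement set {i : a_i = b_i} belongs to U."""
    return frozenset(i for i in I if a[i] == b[i]) in U

add = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))   # pointwise operation in Z_2

print(equivalent((0, 1, 0), (0, 1, 1)))   # True  : the tuples agree on J = {0, 1}
print(equivalent((0, 1, 0), (1, 1, 0)))   # False : they disagree at index 0
print(add((0, 1, 0), (1, 1, 1)))          # (1, 0, 1): operations act coordinatewise
```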
References
, Chapter 6.
Model theory | Reduced product | [
"Mathematics"
] | 230 | [
"Mathematical logic stubs",
"Mathematical logic",
"Model theory"
] |
2,174,332 | https://en.wikipedia.org/wiki/Loop%20space | In topology, a branch of mathematics, the loop space ΩX of a pointed topological space X is the space of (based) loops in X, i.e. continuous pointed maps from the pointed circle S1 to X, equipped with the compact-open topology. Two loops can be multiplied by concatenation. With this operation, the loop space is an A∞-space. That is, the multiplication is homotopy-coherently associative.
The set of path components of ΩX, i.e. the set of based-homotopy equivalence classes of based loops in X, is a group, the fundamental group π1(X).
The iterated loop spaces of X are formed by applying Ω a number of times.
There is an analogous construction for topological spaces without basepoint. The free loop space of a topological space X is the space of maps from the circle S1 to X with the compact-open topology. The free loop space of X is often denoted by LX.
As a functor, the free loop space construction is right adjoint to cartesian product with the circle, while the loop space construction is right adjoint to the reduced suspension. This adjunction accounts for much of the importance of loop spaces in stable homotopy theory. (A related phenomenon in computer science is currying, where the cartesian product is adjoint to the hom functor.) Informally this is referred to as Eckmann–Hilton duality.
Eckmann–Hilton duality
The loop space is dual to the suspension of the same space; this duality is sometimes called Eckmann–Hilton duality. The basic observation is that

[\Sigma A, X] \cong [A, \Omega X],

where [A, B] is the set of homotopy classes of based maps A \to B, \Sigma A is the suspension of A, and \cong denotes the natural homeomorphism. This homeomorphism is essentially that of currying, modulo the quotients needed to convert the products to reduced products.
In general, [A, X] does not have a group structure for arbitrary spaces A and X. However, it can be shown that [\Sigma A, X] and [A, \Omega X] do have natural group structures when A and X are pointed, and the aforementioned isomorphism is of those groups. Thus, setting A = S^{k-1} (the sphere) gives the relationship

\pi_k(X) \cong \pi_{k-1}(\Omega X).

This follows since the homotopy group is defined as \pi_k(X) = [S^k, X] and the spheres can be obtained via suspensions of each other, i.e. S^k = \Sigma S^{k-1}.
See also
Bott periodicity
Eilenberg–MacLane space
Free loop
Fundamental group
Gray's conjecture
List of topologies
Loop group
Path (topology)
Quasigroup
Spectrum (topology)
Path space (algebraic topology)
References
Topology
Homotopy theory
Topological spaces | Loop space | [
"Physics",
"Mathematics"
] | 534 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
2,174,453 | https://en.wikipedia.org/wiki/Lotus%20effect | The lotus effect refers to self-cleaning properties that are a result of ultrahydrophobicity as exhibited by the leaves of Nelumbo, the lotus flower. Dirt particles are picked up by water droplets due to the micro- and nanoscopic architecture on the surface, which minimizes the droplet's adhesion to that surface. Ultrahydrophobicity and self-cleaning properties are also found in other plants, such as Tropaeolum (nasturtium), Opuntia (prickly pear), Alchemilla, cane, and also on the wings of certain insects.
The phenomenon of ultrahydrophobicity was first studied by Dettre and Johnson in 1964 using rough hydrophobic surfaces. Their work developed a theoretical model based on experiments with glass beads coated with paraffin or PTFE telomer. The self-cleaning property of ultrahydrophobic micro-nanostructured surfaces was studied by Wilhelm Barthlott and Ehler in 1977, who described such self-cleaning and ultrahydrophobic properties for the first time as the "lotus effect"; perfluoroalkyl and perfluoropolyether ultrahydrophobic materials were developed by Brown in 1986 for handling chemical and biological fluids. Other biotechnical applications have emerged since the 1990s.
Functional principle
The high surface tension of water causes droplets to assume a nearly spherical shape, since a sphere has minimal surface area, and this shape therefore minimizes the solid-liquid surface energy. On contact of liquid with a surface, adhesion forces result in wetting of the surface. Either complete or incomplete wetting may occur depending on the structure of the surface and the fluid tension of the droplet.
The cause of self-cleaning properties is the hydrophobic water-repellent double structure of the surface. This enables the contact area and the adhesion force between surface and droplet to be significantly reduced, resulting in a self-cleaning process.
This hierarchical double structure is formed out of a characteristic epidermis (its outermost layer called the cuticle) and the covering waxes. The epidermis of the lotus plant possesses papillae 10 μm to 20 μm in height and 10 μm to 15 μm in width on which the so-called epicuticular waxes are imposed. These superimposed waxes are hydrophobic and form the second layer of the double structure. This system regenerates. This biochemical property is responsible for the functioning of the water repellency of the surface.
The hydrophobicity of a surface can be measured by its contact angle. The higher the contact angle the higher the hydrophobicity of a surface. Surfaces with a contact angle < 90° are referred to as hydrophilic and those with an angle >90° as hydrophobic. Some plants show contact angles up to 160° and are called ultrahydrophobic, meaning that only 2–3% of the surface of a droplet (of typical size) is in contact. Plants with a double structured surface like the lotus can reach a contact angle of 170°, whereby the droplet's contact area is only 0.6%. All this leads to a self-cleaning effect.
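A simple spherical-cap estimate (a geometric idealization, not a measurement) roughly reproduces these figures: treating the droplet as a spherical cap with contact angle θ, the flat contact disc is a few percent of the total droplet surface at 160° and under 1% at 170°.

```python
import math

def contact_fraction(theta_deg: float) -> float:
    """Contact disc area / total surface area of a spherical-cap droplet."""
    t = math.radians(theta_deg)
    base = math.pi * math.sin(t) ** 2          # contact disc, radius R*sin(theta)
    cap = 2 * math.pi * (1 - math.cos(t))      # curved, air-exposed surface
    return base / (base + cap)                 # the droplet radius R cancels

for theta in (90, 160, 170):
    print(f"{theta:3d} deg -> {contact_fraction(theta):.1%}")
```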
Dirt particles with an extremely reduced contact area are picked up by water droplets and are thus easily cleaned off the surface. If a water droplet rolls across such a contaminated surface the adhesion between the dirt particle, irrespective of its chemistry, and the droplet is higher than between the particle and the surface. This cleaning effect has been demonstrated on common materials such as stainless steel when a superhydrophobic surface is produced. As this self-cleaning effect is based on the high surface tension of water it does not work with organic solvents. Therefore, the hydrophobicity of a surface is no protection against graffiti.
This effect is of a great importance for plants as a protection against pathogens like fungi or algae growth, and also for animals like butterflies, dragonflies and other insects not able to cleanse all their body parts.
Another positive effect of self-cleaning is the prevention of contamination of the area of a plant surface exposed to light resulting in reduced photosynthesis.
Technical application
When it was discovered that the self-cleaning qualities of ultrahydrophobic surfaces come from physical-chemical properties at the microscopic to nanoscopic scale rather than from the specific chemical properties of the leaf surface, the possibility arose of using this effect in manmade surfaces, by mimicking nature in a general way rather than a specific one.
Some nanotechnologists have developed treatments, coatings, paints, roof tiles, fabrics and other surfaces that can stay dry and clean themselves by replicating in a technical manner the self-cleaning properties of plants, such as the lotus plant. This can usually be achieved using special fluorochemical or silicone treatments on structured surfaces or with compositions containing micro-scale particulates.
In addition to chemical surface treatments, which can be removed over time, metals have been sculpted with femtosecond pulse lasers to produce the lotus effect. The materials are uniformly black at any angle, which combined with the self-cleaning properties might produce very low maintenance solar thermal energy collectors, while the high durability of the metals could be used for self-cleaning latrines to reduce disease transmission.
Further applications have been marketed, such as the self-cleaning glass installed in the sensors of traffic-control units on German autobahns, developed by a cooperation partner (Ferro GmbH).
The Swiss companies HeiQ and Schoeller Textil have developed stain-resistant textiles under the brand names "HeiQ Eco Dry" and "NanoSphere" respectively. In October 2005, tests by the Hohenstein Research Institute showed that clothes treated with NanoSphere technology allowed tomato sauce, coffee and red wine to be washed away easily, even after several wash cycles. Another possible application is thus self-cleaning awnings, tarpaulins and sails, which otherwise become dirty quickly and are difficult to clean.
Superhydrophobic coatings applied to microwave antennas can significantly reduce rain fade and the buildup of ice and snow. Products advertised as "easy to clean" are often mistakenly credited with the self-cleaning properties of hydrophobic or ultrahydrophobic surfaces. Patterned ultrahydrophobic surfaces also show promise for "lab-on-a-chip" microfluidic devices and can greatly improve surface-based bioanalysis.
Superhydrophobic or hydrophobic properties have also been used for dew harvesting, funneling water to a basin for use in irrigation. The Groasis Waterboxx has a lid with a microscopic pyramidal structure, based on these ultrahydrophobic properties, that funnels condensation and rainwater into a basin for release to a growing plant's roots.
Research history
Although the self-cleaning phenomenon of the lotus was possibly known in Asia long before (a reference to the lotus effect appears in the Bhagavad Gita), its mechanism was explained only in the early 1970s, after the introduction of the scanning electron microscope. Studies were performed on the leaves of Tropaeolum and lotus (Nelumbo). Similarly, a recent study revealed honeycomb-like micro-structures on the taro leaf that make it superhydrophobic; the contact angle measured on the taro leaf in that study was around 148°.
See also
Biomimetics
Petal effect
Salvinia effect
References
External links
Project Group Lotus Effect – Nees Institute for Biodiversity of Plants, Rheinische Friedrich-Wilhelms-Universität Bonn
Scientific American article: "Self-Cleaning Materials: Lotus Leaf-Inspired Nanotechnology"
Nanotechnology
Surface science | Lotus effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,578 | [
"Nanotechnology",
"Condensed matter physics",
"Surface science",
"Materials science"
] |
2,174,514 | https://en.wikipedia.org/wiki/Sylvester%20matrix | In mathematics, a Sylvester matrix is a matrix associated to two univariate polynomials with coefficients in a field or a commutative ring. The entries of the Sylvester matrix of two polynomials are coefficients of the polynomials. The determinant of the Sylvester matrix of two polynomials is their resultant, which is zero when the two polynomials have a common root (in case of coefficients in a field) or a non-constant common divisor (in case of coefficients in an integral domain).
Sylvester matrices are named after James Joseph Sylvester.
Definition
Formally, let p and q be two nonzero polynomials, respectively of degree m and n. Thus:
$$p(z) = p_0 + p_1 z + p_2 z^2 + \cdots + p_m z^m, \qquad q(z) = q_0 + q_1 z + q_2 z^2 + \cdots + q_n z^n.$$
The Sylvester matrix associated to p and q is then the $(n+m) \times (n+m)$ matrix constructed as follows:
if n > 0, the first row is:
$$\begin{pmatrix} p_m & p_{m-1} & \cdots & p_1 & p_0 & 0 & \cdots & 0 \end{pmatrix},$$
the second row is the first row, shifted one column to the right; the first element of the row is zero.
the following n − 2 rows are obtained the same way, shifting the coefficients one column to the right each time and setting the other entries in the row to be 0.
if m > 0, the (n + 1)th row is:
$$\begin{pmatrix} q_n & q_{n-1} & \cdots & q_1 & q_0 & 0 & \cdots & 0 \end{pmatrix},$$
the following rows are obtained the same way as before.
Thus, if m = 4 and n = 3, the matrix is:
$$S_{p,q} = \begin{pmatrix}
p_4 & p_3 & p_2 & p_1 & p_0 & 0 & 0 \\
0 & p_4 & p_3 & p_2 & p_1 & p_0 & 0 \\
0 & 0 & p_4 & p_3 & p_2 & p_1 & p_0 \\
q_3 & q_2 & q_1 & q_0 & 0 & 0 & 0 \\
0 & q_3 & q_2 & q_1 & q_0 & 0 & 0 \\
0 & 0 & q_3 & q_2 & q_1 & q_0 & 0 \\
0 & 0 & 0 & q_3 & q_2 & q_1 & q_0
\end{pmatrix}.$$
If one of the degrees is zero (that is, the corresponding polynomial is a nonzero constant polynomial), then there are zero rows consisting of coefficients of the other polynomial, and the Sylvester matrix is a diagonal matrix whose dimension is the degree of the non-constant polynomial, with all diagonal coefficients equal to the constant polynomial. If m = n = 0, then the Sylvester matrix is the empty matrix with zero rows and zero columns.
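As a concrete illustration of the construction above, here is a minimal Python/NumPy sketch; the helper name `sylvester` and the descending-coefficient convention are our own choices, not a standard API. The determinant of the matrix is the resultant, which vanishes exactly when the two polynomials share a root.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix from descending coefficient lists.

    p = [p_m, ..., p_0] has degree m; q = [q_n, ..., q_0] has degree n.
    The first n rows hold shifted copies of p, the next m rows hold
    shifted copies of q, giving an (m + n) x (m + n) matrix.
    """
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

# p = x^2 - 1 and q = x - 1 share the root x = 1, so the resultant is 0
S = sylvester([1, 0, -1], [1, -1])
print(np.linalg.det(S))  # ~0.0
```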
A variant
The Sylvester matrix defined above appears in an 1840 paper of Sylvester. In a paper of 1853, Sylvester introduced the following matrix, which is, up to a permutation of the rows, the Sylvester matrix of p and q, both considered as having degree max(m, n).
This is thus a $2\max(m,n) \times 2\max(m,n)$ matrix containing $\max(m,n)$ pairs of rows. Assuming $m \ge n$, it is obtained as follows:
the first pair is:
$$\begin{pmatrix} p_m & p_{m-1} & \cdots & p_0 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & q_n & q_{n-1} & \cdots & q_0 & 0 & \cdots & 0 \end{pmatrix},$$
where the row of q-coefficients begins with $m - n$ zeros, since q is considered as having degree $m$;
the second pair is the first pair, shifted one column to the right; the first elements in the two rows are zero.
the remaining pairs of rows are obtained the same way as above.
Thus, if m = 4 and n = 3, the matrix is:
$$\begin{pmatrix}
p_4 & p_3 & p_2 & p_1 & p_0 & 0 & 0 & 0 \\
0 & q_3 & q_2 & q_1 & q_0 & 0 & 0 & 0 \\
0 & p_4 & p_3 & p_2 & p_1 & p_0 & 0 & 0 \\
0 & 0 & q_3 & q_2 & q_1 & q_0 & 0 & 0 \\
0 & 0 & p_4 & p_3 & p_2 & p_1 & p_0 & 0 \\
0 & 0 & 0 & q_3 & q_2 & q_1 & q_0 & 0 \\
0 & 0 & 0 & p_4 & p_3 & p_2 & p_1 & p_0 \\
0 & 0 & 0 & 0 & q_3 & q_2 & q_1 & q_0
\end{pmatrix}.$$
The determinant of the 1853 matrix is, up to sign, the product of the determinant of the Sylvester matrix (which is called the resultant of p and q) by $p_m^{\,m-n}$ (still supposing $m \ge n$).
Applications
These matrices are used in commutative algebra, e.g. to test if two polynomials have a (non-constant) common factor. In such a case, the determinant of the associated Sylvester matrix (which is called the resultant of the two polynomials) equals zero. The converse is also true.
The solutions of the simultaneous linear equations
$$S_{p,q}^{\mathrm{T}} \cdot \begin{pmatrix} x \\ y \end{pmatrix} = 0,$$
where $x$ is a vector of size $n$ and $y$ has size $m$, comprise the coefficient vectors of those and only those pairs of polynomials (of degrees $n-1$ and $m-1$, respectively) which fulfill
$$x \cdot p + y \cdot q = 0,$$
where polynomial multiplication and addition is used. This means the kernel of the transposed Sylvester matrix gives all solutions of the Bézout equation $x \cdot p + y \cdot q = 0$ where $\deg x < n$ and $\deg y < m$.
Consequently, the rank of the Sylvester matrix determines the degree of the greatest common divisor of p and q:
$$\deg\bigl(\gcd(p, q)\bigr) = m + n - \operatorname{rank}\, S_{p,q}.$$
Moreover, the coefficients of this greatest common divisor may be expressed as determinants of submatrices of the Sylvester matrix (see Subresultant).
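Continuing the sketch above (reusing the hypothetical `sylvester` helper defined earlier), the rank formula can be checked numerically; here gcd(x² − 1, x − 1) = x − 1 has degree 1.

```python
import numpy as np  # reuses the sylvester() helper from the earlier sketch

# m = 2 and n = 1, and gcd(x^2 - 1, x - 1) = x - 1 has degree 1
S = sylvester([1, 0, -1], [1, -1])
print(2 + 1 - np.linalg.matrix_rank(S))  # 1, the degree of the gcd
```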
See also
Transfer matrix
Bézout matrix
References
Matrices
Polynomials | Sylvester matrix | [
"Mathematics"
] | 726 | [
"Matrices (mathematics)",
"Polynomials",
"Mathematical objects",
"Algebra"
] |
2,174,531 | https://en.wikipedia.org/wiki/Hashiwokakero | Hashiwokakero (橋をかけろ Hashi o kakero; lit. "build bridges!") is a type of logic puzzle published by Nikoli. It has also been published in English under the name Bridges or Chopsticks (based on a mistranslation: the hashi of the title, 橋, means bridge; hashi written with another character, 箸, means chopsticks). It has also appeared in The Times under the name Hashi. In France, Denmark, the Netherlands, and Belgium it is published under the name Ai-Ki-Ai.
Rules
Hashiwokakero is played on a rectangular grid with no standard size, although the grid itself is not usually drawn. Some cells start out with (usually encircled) numbers from 1 to 8 inclusive; these are the "islands". The rest of the cells are empty.
The goal is to connect all of the islands by drawing a series of bridges between the islands. The bridges must follow certain criteria:
They must begin and end at distinct islands, travelling a straight line in between.
They must not cross any other bridges or islands.
They may only run orthogonally (i.e. they may not run diagonally).
At most two bridges connect a pair of islands.
The number of bridges connected to each island must match the number on that island.
The bridges must connect the islands into a single connected group.
Solution methods
Solving a Hashiwokakero puzzle is a matter of procedural force: having determined where a bridge must be placed, placing it there can eliminate other possible places for bridges, forcing the placement of another bridge, and so on.
An island showing '3' in a corner, '5' along the outside edge, or '7' anywhere must have at least one bridge radiating from it in each valid direction: if one direction had no bridge, then even with two bridges in every other direction, not enough would have been placed. A '4' in a corner, '6' along the border, or '8' anywhere must have two bridges in each direction. This generalizes as placed bridges obstruct routes: for example, a '3' that can only be reached vertically must have at least one bridge each up and down (see the sketch below).
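The minimum-bridge rule in the preceding paragraph can be stated compactly: an island whose clue is k and which has d open directions, each carrying at most two bridges, is forced to place at least k − 2(d − 1) bridges in every open direction. A small Python sketch of this deduction (our own illustration, not taken from any published solver):

```python
def forced_bridges(clue, open_dirs):
    """Minimum bridges forced in each open direction of an island.

    Each direction holds at most 2 bridges, so even if every other
    open direction is filled to capacity, this one must still carry
    at least clue - 2 * (open_dirs - 1) bridges.
    """
    return max(0, clue - 2 * (open_dirs - 1))

print(forced_bridges(3, 2))  # '3' in a corner: 1 bridge forced each way
print(forced_bridges(7, 4))  # '7' anywhere: 1 bridge forced each way
print(forced_bridges(8, 4))  # '8' anywhere: 2 bridges forced each way
```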
It is common practice to cross off or fill in islands whose bridge quota has been reached. In addition to reducing mistakes, this can also help locate potential "short circuits": keeping in mind that all islands must be connected by one network of bridges, a bridge that would create a closed network that no further bridges could be added to can only be permitted if it immediately yields the solution to the complete puzzle. The simplest example of this is two islands showing '1' aligned with each other; unless they are the only two islands in the puzzle, they cannot be connected by a bridge, as that would complete a network that cannot be added to, and would therefore force those two islands to be unreachable by any others.
Any bridge that would completely isolate a group of islands from another group would not be permitted, as one would then have two groups of islands that could not connect. This deduction, however, is not very commonly seen in Hashiwokakero puzzles.
Determining whether a Hashiwokakero puzzle has a solution is NP-complete, by a reduction from finding Hamiltonian cycles in integer-coordinate unit-distance graphs. A solution using integer linear programming is included in the MathProg examples shipped with GLPK, and a library of puzzles with up to 400 islands, together with integer linear programming results, has also been reported (a sketch of such a model follows).
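As a sketch of how such an integer-programming model looks, the following uses the third-party PuLP package rather than GLPK's MathProg; the tiny four-island instance and all names are our own invention. The connectivity requirement is omitted here, since in practice it is enforced by adding cut constraints whenever a candidate solution splits into disconnected groups.

```python
from pulp import LpProblem, LpVariable, lpSum, value

# four islands on the corners of a square, each needing 2 bridges;
# the candidate bridge runs are the four sides of the square
islands = {"A": 2, "B": 2, "C": 2, "D": 2}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

prob = LpProblem("hashi")
x = {e: LpVariable(f"x_{e[0]}{e[1]}", lowBound=0, upBound=2, cat="Integer")
     for e in edges}

# degree constraints: the bridges at each island must sum to its clue
for i, clue in islands.items():
    prob += lpSum(x[e] for e in edges if i in e) == clue

# crossing constraints would forbid bridges on intersecting runs;
# no runs intersect in this tiny instance, so none are needed
prob.solve()
print({e: int(value(x[e])) for e in edges})  # e.g. one bridge per side
```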
History
Hashiwokakero first appeared in Puzzle Communication Nikoli in issue #31 (September 1990), although an earlier form of the puzzle appeared in issue #28 (December 1989).
See also
List of Nikoli puzzle types
References
External links
Nikoli's English page on Hashiwokakero
Logic puzzles
Japanese entertainment terms
NP-complete problems
Japanese board games | Hashiwokakero | [
"Mathematics"
] | 821 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |