Columns: id (int64, 580 to 79M) · url (string, lengths 31 to 175) · text (string, lengths 9 to 245k) · source (string, lengths 1 to 109) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k). Rows follow in that field order.
43,730,488
https://en.wikipedia.org/wiki/Gliese%20180
Gliese 180 (often shortened to GJ 180) is a small red dwarf star in the equatorial constellation of Eridanus. It is invisible to the naked eye with an apparent visual magnitude of 10.9. The star is located at a distance of 39 light years from the Sun based on parallax, and is drifting closer with a radial velocity of −14.6 km/s. It has a high proper motion, traversing the sky at the rate of  per year. The stellar classification of GJ 180 is catalogued as M2V or M3V, depending on the study, which indicates this is a dim red dwarf – an M-type main-sequence star that is generating energy by core hydrogen fusion. Reiners and associates (2012) do not consider it to be an active star. It is about five billion years old and is spinning with a projected rotational velocity of ~3 km/s, giving it a rotation period of about 65 days. The star has 43% of the Sun's mass and 42% of the radius of the Sun. It is radiating just 2.4% of the luminosity of the Sun from its photosphere at an effective temperature of 3,634 K. Planetary system Gliese 180 is known to have at least two exoplanets, designated Gliese 180 b and Gliese 180 d, and possibly a third, Gliese 180 c; all are super-Earths or mini-Neptunes. Planets 'b' and 'c' were initially reported in 2014, and a follow-up study in 2020 confirmed planet 'b' and found a new planet 'd', but did not find the previously claimed planet 'c'. According to the 2014 study, planets 'b' and 'c' have an orbital period ratio of 7:5, which suggests a mean motion resonance that is stabilizing the orbits. The habitable zone of this star, by the criteria of Kopparapu and associates (2013), ranges from out to , which thus includes planet 'c'. According to the Planetary Habitability Laboratory (PHL) in Puerto Rico, both the b and c worlds in the system may be classifiable as potentially habitable planets. Planets Gliese 180 b and Gliese 180 c have minimum masses of 6.4 and 8.3 Earth masses, respectively. However, Dr Mikko Tuomi, of the UK's University of Hertfordshire, whose team identified the planets, disagreed, stating: "The PHL adds some sort of an 'extended HZ', which I, frankly, do not know how it's calculated, but that adds some areas of potential habitability to the inner and outer edges of the HZ as we have defined it. They included the inner companion of the GJ 180 system (planet b) that we consider too hot to be potentially habitable." However, as of 2022, the PHL lists only planets c and d, not b, as potentially habitable. See also List of exoplanets discovered in 2014 (Gliese 180 b & c) List of exoplanets discovered in 2020 (Gliese 180 d) References M-type main-sequence stars Planetary systems with two confirmed planets Eridanus (constellation) J04534995-1746235 0180 022762
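The figures quoted above can be sanity-checked against the Stefan–Boltzmann relation, L/L☉ = (R/R☉)²(T_eff/T☉)⁴. A minimal sketch in Python, assuming only the article's values plus T☉ ≈ 5772 K (the solar effective temperature, which the article does not state):

```python
# Cross-check of GJ 180's quoted luminosity from its quoted radius and
# effective temperature via the Stefan-Boltzmann law.
T_SUN = 5772.0  # solar effective temperature in kelvin (assumed constant)

def luminosity_ratio(r_over_rsun: float, t_eff_k: float) -> float:
    """L/L_sun = (R/R_sun)**2 * (T_eff/T_sun)**4."""
    return r_over_rsun**2 * (t_eff_k / T_SUN) ** 4

print(f"L/L_sun ~ {luminosity_ratio(0.42, 3634.0):.3f}")  # ~ 0.028
# The article quotes 2.4%; the small gap is expected, since published
# radius, temperature, and luminosity are fitted independently rather
# than forced to satisfy this relation exactly.
```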
Gliese 180
Astronomy
693
4,329,257
https://en.wikipedia.org/wiki/Stock%20tank
See Stock tank oil for the oil-industry definition. A stock tank is used to provide drinking water for animals such as cattle or horses. Stock tanks can range in size from 100 liters to over 5500 liters (30 to 1500 gallons) and typically are made of galvanized steel. These tanks are filled by a pump, windpump, creek, or spring; by runoff water from rain or melting snow; or by water hauled to them in a truck. In some parts of Texas, ranchers refer to ponds and watering holes as stock tanks. Trick tank A trick tank is a type of stock tank. It collects precipitation, holds the water in a covered tank to minimize evaporation and maintain adequate water quality, and dispenses water on demand into a basin from which animals can drink. Dispensing may be regulated by a mechanical float device similar to a ballcock in the tank of a flush toilet. Trick tanks are manufactured in several styles, including inverted umbrella and apron. They are heavy and often are used in remote wilderness locations, to which they may require delivery via helicopter. To provide water to wild animals, not livestock, fencing may be built to surround a trick tank. Fences serve to exclude cattle and sheep. Trick tanks are widely used in the southwest United States, where periodic droughts may cause population crashes in game animals unless water supplies are provided. Other uses Stock tanks are increasingly used as "rustic" backyard above-ground pools, or "stock tank pools", by retrofitting a filter pump and adding chlorine or stabilized hydrogen peroxide to keep the water clean throughout the summer. The water will need to be drained periodically and can be reused to water plants if hydrogen peroxide is used. Metal stock tanks have also been used as a cheap alternative for a hot tub. Various heating methods are possible, including use of a propane heater or submersible wood stove. Another use of a stock tank is at a fair or other event, where they are filled with ice to keep canned and bottled beverages, or even perishables such as watermelons, cold. A cowboy church commonly uses a stock tank for water baptisms. Stock tanks can also make perfect indoor habitats for aquatic pets, such as fish, turtles or frogs. Tackle shops are known to store minnows for fishing in them. References Livestock Water supply Water conservation tools
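The float-regulated dispensing described for trick tanks can be made concrete with a toy control loop. A minimal sketch, not from the article; all thresholds, rates, and demand values are illustrative assumptions:

```python
# Toy model of a trick tank's float valve: the valve opens when the basin
# level drops below a low mark and shuts at a high mark, like a ballcock
# in a toilet cistern. All numbers are illustrative.
LOW_MARK, HIGH_MARK = 2.0, 10.0   # basin depth thresholds, cm
FILL_RATE = 1.5                   # cm refilled per step while valve is open

def step(level: float, drunk: float, valve_open: bool) -> tuple[float, bool]:
    """Advance one time step: animals drink, then the float valve reacts."""
    level = max(0.0, level - drunk)
    if level <= LOW_MARK:
        valve_open = True          # float drops, valve opens
    elif level >= HIGH_MARK:
        valve_open = False         # float rises, valve shuts
    if valve_open:
        level = min(HIGH_MARK, level + FILL_RATE)
    return level, valve_open

level, valve = 10.0, False
for demand in [0.0, 3.0, 4.0, 2.5, 0.5, 0.0]:
    level, valve = step(level, demand, valve)
    print(f"level={level:4.1f} cm  valve={'open' if valve else 'shut'}")
```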
Stock tank
Chemistry,Engineering,Environmental_science
482
48,326,008
https://en.wikipedia.org/wiki/Fort%20de%20Feyzin
The Fort de Feyzin is a fort built between 1875 and 1877 in Feyzin. It is one of the forts of the second belt of forts around Lyon and, more generally, of the Séré de Rivières fort system. This belt of forts included the forts of Bron, Vancia, Feyzin and Mont Verdun. It currently houses a riding stable managed by the UCPA on behalf of the city of Feyzin. Characteristics Located 230 meters above sea level, it covers a wide area: 22,000 square meters built on 26 wooded hectares, located near the center of Feyzin-le-haut and the Trois cerisiers recreational center. History The fort was designed to defend Lyon to the south. It also ensured protection of , the RN7, Solaize, Saint-Symphorien-d'Ozon and . Throughout its history, the Fort de Feyzin has essentially served as a garrison for the army and as a gendarmerie. The town became the owner of the fort in July 2003 and now holds tours of the military road near the caponier, the postern stairs and the ditches, and the entrance building and rolling bridge over a ditch, all of which have been completely renovated, as well as a nature walk. Thanks to the patronage of the Total S.A. foundation, the entrance building of the fort has been restored. The fort today The fort of Feyzin is open to the public at the fort bal(l)ade (a one-day event in early summer, held since 2006) and for European Heritage Days. The fort is the subject of a development program including the creation of a leisure center oriented toward an equestrian center, activities such as archery and orienteering, the renovation of facilities, and the development of training facilities (fire, police, humanitarian associations ...). After the renovation of the entrance pavilion in 2008, the equestrian center opened its doors on 27 July 2013. The old stables used by the army were rehabilitated to accommodate from 20 to 30 ponies; a covered riding arena and an open-air ring are also present. This equipment is managed and run by the UCPA on behalf of the town of Feyzin. The Bioforce Institute also uses the fort for its training. See also Ceintures de Lyon References Bibliography External links Fortiffsere.fr Fortiff.be Feyzin.passe-simple.over-blog.com Total sponsorship Foundation Development Project of the fort of Feyzin Feyzin Equestrian Centre Séré de Rivières system Fortifications of Lyon
Fort de Feyzin
Engineering
515
22,901,829
https://en.wikipedia.org/wiki/Marches%20Energy%20Agency
Marches Energy Agency (MEA) is an energy agency in the United Kingdom, operating on a not-for-profit basis. The agency was formed by Shropshire County Council in 1995 to promote the use of sustainable energy in the area. Richard Davies was the director from 1998 to 2014, having previously worked as a chemical engineer. Much of its work is conducted in partnership with local authorities, and focuses on helping communities cut their carbon emissions, especially in rural areas. Areas of operation Although MEA initially operated on the English side of the Welsh Marches, it has since expanded its work through service level agreements with Staffordshire Moorlands District Council, the entire Shropshire Council area, and in 2009 to Nottinghamshire and Derbyshire through an agreement with the Local Authority Energy Partnership. Awards In 2009 MEA won an Ashden Award for its work to create Low Carbon Communities. See also Low Carbon Communities References External links MEA homepage Renewable energy organizations Charities based in Shropshire
Marches Energy Agency
Engineering
190
42,013,806
https://en.wikipedia.org/wiki/Littlewood%27s%20Tauberian%20theorem
In mathematics, Littlewood's Tauberian theorem is a strengthening of Tauber's theorem, introduced by John Edensor Littlewood in 1911. Statement Littlewood showed the following: If a_n = O(1/n), and as x ↑ 1 we have ∑ a_n x^n → s, then ∑ a_n converges to s. Hardy and Littlewood later showed that the hypothesis on a_n could be weakened to the "one-sided" condition a_n ≥ −C/n for some constant C. However, in some sense the condition is optimal: Littlewood showed that if c_n is any unbounded sequence then there is a series with |a_n| ≤ |c_n|/n which is divergent but Abel summable. History Littlewood later described his discovery of the proof of his Tauberian theorem. Alfred Tauber's original theorem was similar to Littlewood's, but with the stronger hypothesis that a_n = o(1/n). Hardy had proved a similar theorem for Cesàro summation with the weaker hypothesis a_n = O(1/n), and suggested to Littlewood that the same weaker hypothesis might also be enough for Tauber's theorem. In spite of the fact that the hypothesis in Littlewood's theorem seems only slightly weaker than the hypothesis in Tauber's theorem, Littlewood's proof was far harder than Tauber's, though Jovan Karamata later found an easier proof. Littlewood's theorem follows from the later Hardy–Littlewood Tauberian theorem, which is in turn a special case of Wiener's Tauberian theorem, which itself is a special case of various abstract Tauberian theorems about Banach algebras. References Tauberian theorems
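As a worked illustration (a standard example from the literature, not stated in the article), Grandi's series shows why some condition on a_n is needed: it is Abel summable yet divergent, and it violates both the O(1/n) hypothesis and the one-sided bound.

```latex
\[
  \sum_{n=0}^{\infty} (-1)^n x^n = \frac{1}{1+x} \longrightarrow \frac{1}{2}
  \quad \text{as } x \uparrow 1,
  \qquad \text{yet} \qquad
  \sum_{n=0}^{\infty} (-1)^n \ \text{diverges.}
\]
% Here a_n = (-1)^n is not O(1/n), and no fixed C gives a_n >= -C/n for
% all n; so neither Littlewood's theorem nor the Hardy-Littlewood
% one-sided refinement applies, consistent with Littlewood's optimality
% result quoted above.
```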
Littlewood's Tauberian theorem
Mathematics
331
33,820,485
https://en.wikipedia.org/wiki/Omphalotus%20mexicanus
Omphalotus mexicanus is a gilled basidiomycete mushroom in the family Marasmiaceae. Found in Mexico, it was described as new to science in 1984. Fruit bodies contain the toxic compounds illudin S and illudin M. Found in the highlands of Mexico and Central America, its fruiting bodies are an unusual dark blue tinted with yellow. References External links mexicanus Fungi described in 1984 Fungi of North America Taxa named by Gastón Guzmán Fungus species
Omphalotus mexicanus
Biology
98
275,564
https://en.wikipedia.org/wiki/Norman%20Borlaug
Norman Ernest Borlaug (March 25, 1914 – September 12, 2009) was an American agronomist who led initiatives worldwide that contributed to the extensive increases in agricultural production termed the Green Revolution. Borlaug was awarded multiple honors for his work, including the Nobel Peace Prize, the Presidential Medal of Freedom and the Congressional Gold Medal, one of only seven people to have received all three awards. Borlaug received his B.S. in forestry in 1937 and his PhD in plant pathology and genetics from the University of Minnesota in 1942. He took up an agricultural research position with CIMMYT in Mexico, where he developed semi-dwarf, high-yield, disease-resistant wheat varieties. During the mid-20th century, Borlaug led the introduction of these high-yielding varieties combined with modern agricultural production techniques to Mexico, Pakistan, and India. As a result, Mexico became a net exporter of wheat by 1963. Between 1965 and 1970, wheat yields nearly doubled in Pakistan and India, greatly improving the food security in those nations. Borlaug is often called "the father of the Green Revolution", and is credited with saving over a billion people worldwide from starvation. According to Jan Douglas, executive assistant to the president of the World Food Prize Foundation, the source of this number is Gregg Easterbrook's 1997 article "Forgotten Benefactor of Humanity." The article states that the "form of agriculture that Borlaug preaches may have prevented a billion deaths." Dennis T. Avery also estimated the number of lives saved by Borlaug's efforts to be one billion. In 2009, Josette Sheeran, then the Executive Director of the World Food Programme, stated that Borlaug "saved more lives than any man in human history". He was awarded the 1970 Nobel Peace Prize in recognition of his contributions to world peace through increasing food supply. Later in his life, he helped apply these methods of increasing food production in Asia and Africa. He was also an accomplished wrestler in college and a pioneer of wrestling in the United States, being inducted into the National Wrestling Hall of Fame for his contributions. Early life, education, and family Borlaug was the great-grandchild of Norwegian immigrants to the United States. Ole Olson Dybevig and Solveig Thomasdatter Rinde, of Feios, a small village in Vik kommune, Sogn og Fjordane, Norway, emigrated to Dane County, Wisconsin, in 1854. The family eventually moved to the small Norwegian-American community of Saude, near Cresco, Iowa. There they were members of Saude Lutheran Church, where Norman was baptized and confirmed. Borlaug was born to Henry Oliver (1889–1971) and Clara (Vaala) Borlaug (1888–1972) on his grandparents' farm in Saude in 1914, the first of four children. His three sisters were Palma Lillian (Behrens; 1916–2004), Charlotte (Culbert; 1919–2012) and Helen (born and died 1921). From age seven to nineteen, he worked on the family farm west of Protivin, fishing, hunting, and raising corn, oats, timothy-grass, cattle, pigs and chickens. He attended the one-teacher, one-room New Oregon #8 rural school in Howard County through eighth grade. Today, the school building, built in 1865, is owned by the Norman Borlaug Heritage Foundation as part of "Project Borlaug Legacy". At Cresco High School, Borlaug was a member of the football, baseball and wrestling teams; his wrestling coach, Dave Bartelma, continually encouraged him to "give 105%". 
Borlaug attributed his decision to leave the farm and pursue further education to his grandfather's urgent encouragement to learn: Nels Olson Borlaug (1859–1935) once told him, "you're wiser to fill your head now if you want to fill your belly later on." When Borlaug applied for admission to the University of Minnesota in 1933, he failed its entrance exam, but was accepted at the school's newly created two-year General College. After two quarters, he transferred to the College of Agriculture's forestry program. As a member of the University of Minnesota men's wrestling team, Borlaug reached the Big Ten semifinals, and promoted the sport to Minnesota high schools in exhibition matches all around the state: Wrestling taught me some valuable lessons ... I always figured I could hold my own against the best in the world. It made me tough. Many times, I drew on that strength. It's an inappropriate crutch perhaps, but that's the way I'm made. To finance his studies, Borlaug put his education on hold periodically to earn some income, as he did in 1935 as a leader in the Civilian Conservation Corps, working with the unemployed on federal projects. Many of the people who worked for him were starving. He later recalled, "I saw how food changed them. All of this left scars on me". From 1935 to 1938, before and after receiving his Bachelor of Science in forestry in 1937, Borlaug worked for the United States Forest Service at stations in Massachusetts and Idaho. He spent one summer on the Middle Fork of Idaho's Salmon River, the most isolated piece of wilderness in the nation at that time. In the last months of his undergraduate education, Borlaug attended a Sigma Xi lecture by Elvin Charles Stakman, a professor and soon-to-be head of the plant pathology group at the University of Minnesota. The event was pivotal for Borlaug's future. Stakman, in his speech entitled "These Shifty Little Enemies that Destroy our Food Crops", discussed the manifestation of the plant disease rust, a parasitic fungus that feeds on phytonutrients in wheat, oats, and barley crops. Stakman had discovered that special plant breeding methods produced plants resistant to rust. His research greatly interested Borlaug, and when Borlaug's job at the Forest Service was eliminated because of budget cuts, he asked Stakman if he should go into forest pathology. Stakman advised him to focus on plant pathology instead. He subsequently enrolled at the university to study plant pathology under Stakman. Borlaug earned a Master of Science degree in 1940, and a Ph.D. in plant pathology and genetics in 1942. Borlaug was a member of the Alpha Gamma Rho fraternity. While in college, he met his future wife, Margaret Gibson, as he waited tables at a coffee shop in the university's Dinkytown, where the two worked. They were married in 1937 and had three children: Norma Jean "Jeanie" Laube; Scotty, who died from spina bifida soon after birth; and William. They had five grandchildren and six great-grandchildren. On March 8, 2007, Margaret Borlaug died at the age of 95 following a fall. They had been married for 69 years. Borlaug resided in Dallas the last years of his life, although his global humanitarian efforts left him with only a few weeks of the year to spend there. Career From 1942 to 1944, Borlaug was employed as a microbiologist at DuPont in Wilmington, Delaware. It was planned that he would lead research on industrial and agricultural bactericides, fungicides, and preservatives. 
However, following the December 7, 1941, attack on Pearl Harbor, Borlaug tried to enlist in the military, but was rejected under wartime labor regulations; his lab was converted to conduct research for the United States armed forces. One of his first projects was to develop glue that could withstand the warm salt water of the South Pacific. The Imperial Japanese Navy had gained control of the island of Guadalcanal, and patrolled the sky and sea by day. The only way for U.S. forces to supply the troops stranded on the island was to approach at night by speedboat, and jettison boxes of canned food and other supplies into the surf to wash ashore. The problem was that the glue holding these containers together disintegrated in saltwater. Within weeks, Borlaug and his colleagues had developed an adhesive that resisted corrosion, allowing food and supplies to reach the stranded Marines. Other tasks included work with camouflage, canteen disinfectants, DDT to control malaria, and insulation for small electronics. In 1940, the Avila Camacho administration took office in Mexico. The administration's primary goal for Mexican agriculture was augmenting the nation's industrialization and economic growth. U.S. Vice President-Elect Henry Wallace, who was instrumental in persuading the Rockefeller Foundation to work with the Mexican government in agricultural development, saw Avila Camacho's ambitions as beneficial to U.S. economic and military interests. The Rockefeller Foundation contacted E.C. Stakman and two other leading agronomists. They developed a proposal for a new organization, the Office of Special Studies, as part of the Mexican Government, but directed by the Rockefeller Foundation. It was to be staffed with both Mexican and US scientists, focusing on soil development, maize and wheat production, and plant pathology. Stakman chose Dr. Jacob George "Dutch" Harrar as project leader. Harrar immediately set out to hire Borlaug as head of the newly established Cooperative Wheat Research and Production Program in Mexico; Borlaug declined, choosing to finish his war service at DuPont. In July 1944, after rejecting DuPont's offer to double his salary, and temporarily leaving behind his pregnant wife and 14-month-old daughter, he flew to Mexico City to head the new program as a geneticist and plant pathologist. In 1964, Borlaug was made the director of the International Wheat Improvement Program at El Batán, Texcoco, on the eastern fringes of Mexico City, as part of the newly established Consultative Group on International Agricultural Research's International Maize and Wheat Improvement Center (Centro Internacional de Mejoramiento de Maíz y Trigo, or CIMMYT). Funding for this autonomous international research training institute, developed from the Cooperative Wheat Research Production Program, was undertaken jointly by the Ford and Rockefeller Foundations and the Mexican government. Besides his work in genetic resistance against crop loss, Borlaug felt that pesticides including DDT had more benefits than drawbacks for humanity and advocated publicly for their continued use. He continued to support pesticide use despite the severe public criticism he received for it. Borlaug mostly admired the work and personality of Rachel Carson but lamented what he saw as Silent Spring's inaccurate portrayal of the effects of DDT. Borlaug retired officially from the position in 1979, but remained a CIMMYT senior consultant. 
In addition to taking up charitable and educational roles, he continued to be involved in plant research at CIMMYT with wheat, triticale, barley, maize, and high-altitude sorghum. In 1981, Borlaug became a founding member of the World Cultural Council. In 1984, Borlaug began teaching and conducting research at Texas A&M University. Eventually he was given the title Distinguished Professor of International Agriculture at the university and made holder of the Eugene Butler Endowed Chair in Agricultural Biotechnology. He advocated for agricultural biotechnology as he had for pesticides in earlier decades: publicly, knowledgeably, and always despite heavy criticism. Borlaug served on the faculty of the University of Minnesota, University of Iowa, Cornell University, and Texas A&M University. Borlaug remained at A&M until his death in September 2009. Wheat research in Mexico The Cooperative Wheat Research Production Program, a joint venture by the Rockefeller Foundation and the Mexican Ministry of Agriculture, involved research in genetics, plant breeding, plant pathology, entomology, agronomy, soil science, and cereal technology. The goal of the project was to boost wheat production in Mexico, which at the time was importing a large portion of its grain. Plant pathologist George Harrar recruited and assembled the wheat research team in late 1944. The four other members were soil scientist William Colwell; maize breeder Edward Wellhausen; potato breeder John Niederhauser; and Norman Borlaug, all from the United States. During the sixteen years Borlaug remained with the project, he bred a series of remarkably successful high-yield, disease-resistant, semi-dwarf wheats. Borlaug said that his first few years in Mexico were difficult. He lacked trained scientists and equipment. Local farmers were hostile towards the wheat program because of serious crop losses from 1939 to 1941 due to stem rust. "It often appeared to me that I had made a dreadful mistake in accepting the position in Mexico," he wrote in the epilogue to his book, Norman Borlaug on World Hunger. He spent the first ten years breeding wheat cultivars resistant to disease, including rust. In that time, his group made 6,000 individual crossings of wheat. Double harvest season Initially, Borlaug's work had been concentrated in the central highlands, in the village of Chapingo near Texcoco, where the problems with rust and poor soil were most prevalent. The work there did not meet the program's aims. He realized that he could speed up breeding by taking advantage of the country's two growing seasons. In the summer he would breed wheat in the central highlands as usual, then immediately take the seeds north to the Valle del Yaqui research station near Ciudad Obregón, Sonora. The difference in altitudes and temperatures would allow more crops to be grown each year. Borlaug's boss, George Harrar, was against this expansion. Besides the extra costs of doubling the work, Borlaug's plan went against a then-held principle of agronomy that has since been disproved. It was believed that to store energy for germination before being planted, seeds needed a rest period after harvesting. When Harrar vetoed his plan, Borlaug resigned. Elvin Stakman, who was visiting the project, calmed the situation, talking Borlaug into withdrawing his resignation and Harrar into allowing the double wheat season. As of 1945, wheat would then be bred at locations 700 miles (1000 km) apart, 10 degrees apart in latitude, and 8,500 feet (2600 m) apart in altitude. 
This was called "shuttle breeding". As an unexpected benefit of the double wheat season, the new breeds did not have problems with photoperiodism. Normally, wheat varieties cannot adapt to new environments, due to the changing periods of sunlight. Borlaug later recalled, "As it worked out, in the north, we were planting when the days were getting shorter, at low elevation and high temperature. Then we'd take the seed from the best plants south and plant it at high elevation, when days were getting longer and there was lots of rain. Soon we had varieties that fit the whole range of conditions. That wasn't supposed to happen by the books". This meant that the project would not need to start separate breeding programs for each geographic region of the planet. Disease resistance through varieties of wheat Because purebred (genotypically identical) plant varieties often only have one or a few major genes for disease resistance, and plant diseases such as rust are continuously producing new races that can overcome a pure line's resistance, multiline varieties were developed. Multiline varieties are mixtures of several phenotypically similar pure lines which each have different genes for disease resistance. By having similar heights, flowering and maturity dates, seed colors, and agronomic characteristics, they remain compatible with each other, and do not reduce yields when grown together in the field. In 1953, Borlaug extended this technique by suggesting that several pure lines with different resistance genes should be developed through backcross methods using one recurrent parent. Backcrossing involves crossing a hybrid and subsequent generations with a recurrent parent. As a result, the genotype of the backcrossed progeny becomes increasingly similar to that of the recurrent parent (the sketch after this passage quantifies how quickly). Borlaug's method would allow the various different disease-resistant genes from several donor parents to be transferred into a single recurrent parent. To make sure each line has different resistance genes, each donor parent is used in a separate backcross program. Between five and ten of these lines may then be mixed depending upon the races of pathogen present in the region. As this process is repeated, some lines will become susceptible to the pathogen. These lines can easily be replaced with new resistant lines. As new sources of resistance become available, new lines are developed. In this way, the loss of crops is kept to a minimum, because only one or a few lines become susceptible to a pathogen within a given season, and all other crops are unaffected by the disease. Because the disease would spread more slowly than if the entire population were susceptible, this also reduces the damage to susceptible lines. There is still the possibility that a new race of pathogen will develop to which all lines are susceptible, however. Dwarfing Dwarfing is an important agronomic quality for wheat; dwarf plants produce thick stems. The cultivars Borlaug worked with had tall, thin stalks. Taller wheat grasses better compete for sunlight but tend to collapse under the weight of the extra grain—a trait called lodging—from the rapid growth spurts induced by nitrogen fertilizer Borlaug used in the poor soil. To prevent this, he bred wheat to favor shorter, stronger stalks that could better support larger seed heads. 
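The rate at which backcrossing recovers the recurrent parent's genome is standard quantitative-genetics arithmetic: the F1 carries 1/2 of it, and each backcross halves the remaining donor share, giving an expected fraction of 1 − (1/2)^(t+1) after t backcrosses. A minimal sketch; these numbers are illustrative, not from the article:

```python
# Expected recurrent-parent genome fraction after t backcross generations,
# assuming no selection on background markers: 1 - (1/2)**(t + 1).
def recurrent_fraction(t: int) -> float:
    """F1 is t = 0; each backcross halves the remaining donor genome."""
    return 1.0 - 0.5 ** (t + 1)

for t in range(7):
    print(f"after {t} backcrosses: {recurrent_fraction(t):.4f}")
# By five backcrosses the line is ~98.4% recurrent parent while selection
# retains the donor's resistance gene, which is why a handful of backcross
# generations per donor suffices to build each line of a multiline variety.
```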
In 1953, he acquired seeds of a Japanese dwarf wheat variety called Norin 10, developed by the agronomist Gonjiro Inazuka in Iwate Prefecture, including some that had been crossed with a high-yielding American cultivar called Brevor 14 by Orville Vogel. Norin 10/Brevor 14 is semi-dwarf (one-half to two-thirds the height of standard varieties) and produces more stalks and thus more heads of grain per plant. Also, larger amounts of assimilate were partitioned into the actual grains, further increasing the yield. Borlaug crossbred the semi-dwarf Norin 10/Brevor 14 cultivar with his disease-resistant cultivars to produce wheat varieties that were adapted to tropical and sub-tropical climates. Borlaug's new semi-dwarf, disease-resistant varieties, called Pitic 62 and Penjamo 62, changed the potential yield of spring wheat dramatically. By 1963, 95% of Mexico's wheat crops used the semi-dwarf varieties developed by Borlaug. That year, the harvest was six times larger than in 1944, the year Borlaug arrived in Mexico. Mexico had become fully self-sufficient in wheat production, and a net exporter of wheat. Four other high-yield varieties were also released, in 1964: Lerma Rojo 64, Siete Cerros, Sonora 64, and Super X. Expansion to South Asia: the Green Revolution From 1961 to 1962, Borlaug's dwarf spring wheat strains were sent for multilocation testing in the International Wheat Rust Nursery, organized by the U.S. Department of Agriculture. In March 1962, a few of these strains were grown in the fields of the Indian Agricultural Research Institute in Pusa, New Delhi, India. In May 1962, M. S. Swaminathan, a member of IARI's wheat program, asked Dr B. P. Pal, director of IARI, to arrange a visit by Borlaug to India and to obtain a wide range of dwarf wheat seed possessing the Norin 10 dwarfing genes. The letter was forwarded to the Indian Ministry of Agriculture headed by Shri C. Subramaniam, which arranged with the Rockefeller Foundation for Borlaug's visit. In March 1963, the Rockefeller Foundation and the Mexican government sent Borlaug and Dr Robert Glenn Anderson to India to continue his work. He supplied 100 kg (220 lb) of seed from each of the four most promising strains and 630 promising selections in advanced generations to the IARI in October 1963, and test plots were subsequently planted at Delhi, Ludhiana, Pant Nagar, Kanpur, Pune and Indore. Anderson stayed as head of the Rockefeller Foundation Wheat Program in New Delhi until 1975. During the mid-1960s the Indian subcontinent was at war and experienced minor famine and starvation, which were limited partially by the U.S. shipping a fifth of its wheat production to India in 1966 and 1967. The Indian and Pakistani bureaucracies and the region's cultural opposition to new agricultural techniques initially prevented Borlaug from fulfilling his desire to immediately plant the new wheat strains there. In 1965, as a response to food shortages, Borlaug imported 550 tons of seeds for the government. Biologist Paul R. Ehrlich wrote in his 1968 bestseller The Population Bomb, "The battle to feed all of humanity is over ... In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now." Ehrlich said, "I have yet to meet anyone familiar with the situation who thinks India will be self-sufficient in food by 1971," and "India couldn't possibly feed two hundred million more people by 1980." 
In 1965, after extensive testing, Borlaug's team, under Anderson, began its effort by importing about 450 tons of Lerma Rojo and Sonora 64 semi-dwarf seed varieties: 250 tons went to Pakistan and 200 to India. They encountered many obstacles. Their first shipment of wheat was held up in Mexican customs and so it could not be shipped from the port at Guaymas in time for proper planting. Instead, it was sent via a 30-truck convoy from Mexico to the U.S. port in Los Angeles, encountering delays at the Mexico–United States border. Once the convoy entered the U.S., it had to take a detour, as the U.S. National Guard had closed the freeway due to the Watts riots in Los Angeles. When the seeds reached Los Angeles, a Mexican bank refused to honor the Pakistani treasury's payment of US$100,000, because the check contained three misspelled words. Still, the seed was loaded onto a freighter destined for Bombay, India, and Karachi, Pakistan. Twelve hours into the freighter's voyage, war broke out between India and Pakistan over the Kashmir region. Borlaug received a telegram from the Pakistani minister of agriculture, Malik Khuda Bakhsh Bucha: "I'm sorry to hear you are having trouble with my check, but I've got troubles, too. Bombs are falling on my front lawn. Be patient, the money is in the bank..." These delays prevented Borlaug's group from conducting the germination tests needed to determine seed quality and proper seeding levels. They started planting immediately and often worked in sight of artillery flashes. A week later, Borlaug discovered that his seeds were germinating at less than half the normal rate. It later turned out that the seeds had been damaged in a Mexican warehouse by over-fumigation with a pesticide. He immediately ordered all locations to double their seeding rates. The initial yields of Borlaug's crops were higher than any ever harvested in South Asia. The countries subsequently committed to importing large quantities of both the Lerma Rojo 64 and Sonora 64 varieties. In 1966, India imported 18,000 tons—the largest purchase and import of any seed in the world at that time. In 1967, Pakistan imported 42,000 tons, and Turkey 21,000 tons. Pakistan's import, planted on 1.5 million acres (6,100 km2), produced enough wheat to seed the entire nation's wheatland the following year. By 1968, when Ehrlich's book was released, William Gaud of the United States Agency for International Development was calling Borlaug's work a "Green Revolution". High yields led to a shortage of various utilities—labor to harvest the crops, bullock carts to haul it to the threshing floor, jute bags, trucks, rail cars, and grain storage facilities. Some local governments were forced to close school buildings temporarily to use them for grain storage. In Pakistan, wheat yields nearly doubled, from 4.6 million tons in 1965 to 7.3 million tons in 1970; Pakistan was self-sufficient in wheat production by 1968. Yields were over 21 million tons by 2000. In India, yields increased from 12.3 million tons in 1965 to 20.1 million tons in 1970. By 1974, India was self-sufficient in the production of all cereals. By 2000, India was harvesting a record 76.4 million tons (2.81 billion bushels) of wheat. Since the 1960s, food production in both nations has increased faster than the rate of population growth. India's use of high-yield farming has prevented an estimated 100 million acres (400,000 km2) of virgin land from being converted into farmland—an area about the size of California, or 13.6% of the total area of India. 
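The land-sparing arithmetic behind estimates like the Indian figure above is simple: the cropland a fixed harvest needs scales inversely with yield. A minimal Python sketch, with illustrative world-cereal numbers chosen to roughly match Borlaug's 2000 Oslo figures quoted later in this article (about 600 million ha actually cropped versus roughly 2.4 billion ha at 1950 yields); the tonnage and yield values are assumptions for illustration, not data from the article:

```python
# Land required for a harvest falls as 1/yield, so the land "spared" by a
# yield gain is harvest/yield_old - harvest/yield_new. Numbers illustrative.
def land_needed_mha(harvest_mt: float, yield_t_per_ha: float) -> float:
    """Cropland in million ha for a harvest in million tonnes."""
    return harvest_mt / yield_t_per_ha

harvest_1999 = 1860.0                 # ~world cereal harvest, Mt (assumption)
yield_1950, yield_1999 = 0.78, 3.1    # t/ha then and now (assumptions)

spared = (land_needed_mha(harvest_1999, yield_1950)
          - land_needed_mha(harvest_1999, yield_1999))
print(f"land spared ~ {spared:,.0f} million ha")  # ~1,785, i.e. "nearly 1.8 billion ha"
```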
The use of these wheat varieties has also had a substantial effect on production in six Latin American countries, six countries in the Near and Middle East, and several others in Africa. Borlaug's work with wheat contributed to the development of high-yield semi-dwarf indica and japonica rice cultivars at the International Rice Research Institute and China's Hunan Rice Research Institute. Borlaug's colleagues at the Consultative Group on International Agricultural Research also developed and introduced a high-yield variety of rice throughout most of Asia. Land devoted to the semi-dwarf wheat and rice varieties in Asia expanded from 200 acres (0.8 km2) in 1965 to over 40 million acres (160,000 km2) in 1970. In 1970, this land accounted for over 10% of the more productive cereal land in Asia. Nobel Peace Prize For his contributions to the world food supply, Borlaug was awarded the Nobel Peace Prize in 1970. Norwegian officials notified his wife in Mexico City at 4:00 a.m., but Borlaug had already left for the test fields in the Toluca valley, about 40 miles (65 km) west of Mexico City. A chauffeur took her to the fields to inform her husband. According to his daughter, Jeanie Laube, "My mom said, 'You won the Nobel Peace Prize,' and he said, 'No, I haven't', ... It took some convincing ... He thought the whole thing was a hoax". He was awarded the prize on December 10. In his Nobel Lecture the following day, he speculated on his award: "When the Nobel Peace Prize Committee designated me the recipient of the 1970 award for my contribution to the 'green revolution', they were in effect, I believe, selecting an individual to symbolize the vital role of agriculture and food production in a world that is hungry, both for bread and for peace". His speech repeatedly presented improvements in food production within a sober understanding of the context of population. "The green revolution has won a temporary success in man's war against hunger and deprivation; it has given man a breathing space. If fully implemented, the revolution can provide sufficient food for sustenance during the next three decades. But the frightening power of human reproduction must also be curbed; otherwise, the success of the green revolution will be ephemeral only. Most people still fail to comprehend the magnitude and menace of the 'Population Monster' ... Since man is potentially a rational being, however, I am confident that within the next two decades he will recognize the self-destructive course he steers along the road of irresponsible population growth ..." Borlaug hypothesis Borlaug continually advocated increasing crop yields as a means to curb deforestation. The large role he played in both increasing crop yields and promoting this view has led to this methodology being called by agricultural economists the "Borlaug hypothesis", namely that increasing the productivity of agriculture on the best farmland can help control deforestation by reducing the demand for new farmland. According to this view, assuming that global food demand is on the rise, restricting crop usage to traditional low-yield methods would also require at least one of the following: the world population to decrease, either voluntarily or as a result of mass starvations; or the conversion of forest land into crop land. It is thus argued that high-yield techniques are ultimately saving ecosystems from destruction. On a global scale, this view holds strictly true ceteris paribus, if deforestation only occurs to increase land for agriculture. 
But other land uses exist, such as urban areas, pasture, or fallow, so further research is necessary to ascertain what land has been converted for what purposes, to determine how true this view remains. Increased profits from high-yield production may also induce cropland expansion in any case, although as world food needs decrease, this expansion may decrease as well. Borlaug expressed the idea now known as the "Borlaug hypothesis" in a speech given in Oslo, Norway, in 2000, upon the occasion of the 30th anniversary of his acceptance of the Nobel Peace Prize: "Had the global cereal yields of 1950 still prevailed in 1999, we would have needed nearly 1.8 billion ha of additional land of the same quality – instead of the 600 million that was used – to equal the current global harvest." Criticisms and his view of critics Borlaug's name is nearly synonymous with the Green Revolution, against which many criticisms have been mounted over the decades. Throughout his years of research, Borlaug's programs often faced opposition from nonscientists who considered genetic crossbreeding unnatural and criticized it on those grounds. These farming techniques, in addition to increasing yields, often reaped large profits for U.S. agribusiness and agrochemical corporations and were criticized by one author in 2003 as widening social inequality in the countries owing to uneven food distribution while forcing a capitalist agenda of U.S. corporations onto countries that had undergone land reform. Other concerns include the crossing of genetic barriers; the inability of a single crop to fulfill all nutritional requirements; the decreased biodiversity from planting few varieties; the environmental and economic effects of inorganic fertilizer and pesticides; the side effects of large amounts of herbicides sprayed on fields of herbicide-resistant crops; and the destruction of wilderness caused by the construction of roads in populated third-world areas. Borlaug refuted or dismissed most claims of his critics but did take certain concerns seriously. He stated that his work has been "a change in the right direction, but it has not transformed the world into a Utopia". Of environmental lobbyists opposing crop yield improvements, he stated, "some of the environmental lobbyists of the Western nations are the salt of the earth, but many of them are elitists. They've never experienced the physical sensation of hunger. They do their lobbying from comfortable office suites in Washington or Brussels. If they lived just one month amid the misery of the developing world, as I have for fifty years, they'd be crying out for tractors and fertilizer and irrigation canals and be outraged that fashionable elitists back home were trying to deny them these things." Borlaug cautioned, "There are no miracles in agricultural production. Nor is there such a thing as a miracle variety of wheat, rice, or maize which can serve as an elixir to cure all ills of a stagnant, traditional agriculture." The journalist John Vidal, writing in The Guardian, commented that the plaudits and honors heaped on Borlaug present him as a "saint or even the god of American farmers", but that the technology was far from perfect. The Green Revolution promised to end hunger and poverty, and to benefit rural societies everywhere. 
Instead, its long-term effects included what the Indian environmentalist Vandana Shiva has called "rural impoverishment, increased debt, social inequality and the displacement of vast numbers of peasant farmers". Vidal further cites the political commentator Alexander Cockburn, who wrote that Borlaug was "aside from Kissinger, probably the biggest killer of all to have got the peace prize", given that his wheat "led to the death of peasants by the million". Later roles Following his retirement, Borlaug continued to participate in teaching, research and activism. He spent much of the year based at CIMMYT in Mexico, conducting research, and four months of the year serving at Texas A&M University, where he had been a distinguished professor of international agriculture since 1984. From 1994 to 2003, Borlaug served on the International Fertilizer Development Center board of directors. In 1999, the university's Board of Regents named its US$16 million Center for Southern Crop Improvement in honor of Borlaug. He worked in the building's Heep Center, and taught one semester each year. Production in Africa In the early 1980s, environmental groups that were opposed to Borlaug's methods campaigned against his planned expansion of efforts into Africa. They prompted the Rockefeller and Ford Foundations and the World Bank to stop funding most of his African agriculture projects. Western European governments were persuaded to stop supplying fertilizer to Africa. According to David Seckler, former Director General of the International Water Management Institute, "the environmental community in the 1980s went crazy pressuring the donor countries and the big foundations not to support ideas like inorganic fertilizers for Africa." In 1984, during the Ethiopian famine, Ryoichi Sasakawa, the chairman of the Japan Shipbuilding Industry Foundation (now the Nippon Foundation), contacted the semi-retired Borlaug, wondering why the methods used in Asia were not extended to Africa, and hoping Borlaug could help. He convinced Borlaug to help with this new effort, and Borlaug assisted in creating the Sasakawa Africa Association (SAA) to coordinate the project. The SAA is a research and extension organization that aims to increase food production in African countries that are struggling with food shortages. "I assumed we'd do a few years of research first," Borlaug later recalled, "but after I saw the terrible circumstances there, I said, 'Let's just start growing'." Soon, Borlaug and the SAA had projects in seven countries. Yields of maize in developed African countries tripled. Yields of wheat, sorghum, cassava, and cowpeas also increased in these countries. At present (more than ten years after Borlaug's death in 2009), program activities are under way in Benin, Burkina Faso, Ethiopia, Ghana, Guinea, Mali, Malawi, Mozambique, Nigeria, Tanzania, and Uganda, all of which suffered from repeated famines in previous decades. From 1986 to 2009, Borlaug was the president of the SAA. In 1986, a joint venture between The Carter Center and the SAA, called Sasakawa-Global 2000 (SG 2000), was launched. The program focuses on food, population and agricultural policy. Since then, more than 8 million small-scale farmers in 15 African countries have been trained in SAA farming techniques, which have helped them to double or triple grain production. 
Those elements that allowed Borlaug's projects to succeed in India and Pakistan, such as well-organized market economies, transportation, and irrigation systems, are severely lacking throughout much of Africa, posing additional obstacles to increasing yields and reducing the ongoing threat of food shortages. Because of these challenges, Borlaug's initial projects were restricted to relatively developed regions of the continent. Despite these setbacks, Borlaug found encouragement. Visiting Ethiopia in 1994 after a major famine, Jimmy Carter won Prime Minister Meles Zenawi's support for a campaign seeking to aid farmers, using the fertilizer diammonium phosphate and Borlaug's methods. The following season, Ethiopia recorded the largest harvests of major crops in history, with a 32% increase in production, and a 15% increase in average yield over the previous season. For Borlaug, the rapid increase in yields suggested that there was still hope for higher food production throughout sub-Saharan Africa, despite lingering questions about population sustainability and the absence of long-term studies in Africa. World Food Prize The World Food Prize is an international award recognizing the achievements of individuals who have advanced human development by improving the quality, quantity or availability of food in the world. The prize was created in 1986 by Norman Borlaug, as a way to recognize personal accomplishments, and as a means of education by using the Prize to establish role models for others. The first prize was given to Borlaug's former colleague, M. S. Swaminathan, in 1987, for his work in India. The next year, Swaminathan used the US$250,000 prize to start the MS Swaminathan Research Foundation for research on sustainable development. Global stem rust and the Borlaug Global Rust Initiative In 2005, Borlaug, with his former graduate student Ronnie Coffman, convened an international expert panel in Kenya on the emerging threat of Ug99 in east Africa. The working group produced a report, "Sounding the Alarm on Global Stem Rust", and their work led to the formation of the Global Rust Initiative. In 2008, with funding from the Bill & Melinda Gates Foundation, the organization was re-named the Borlaug Global Rust Initiative. Future of global farming and food supply The limited potential for land expansion for cultivation worried Borlaug, who, in March 2005, stated that "we will have to double the world food supply by 2050." With 85% of future growth in food production having to come from lands already in use, he recommended a multidisciplinary research focus to further increase yields, mainly through increased crop immunity to large-scale diseases, such as the rust fungus, which affects all cereals but rice. His dream was to "transfer rice immunity to cereals such as wheat, maize, sorghum and barley, and transfer bread-wheat proteins (gliadin and glutenin) to other cereals, especially rice and maize". Borlaug believed that genetically modified organisms (GMO) were the only way to increase food production as the world runs out of unused arable land. GMOs were not inherently dangerous "because we've been genetically modifying plants and animals for a long time. Long before we called it science, people were selecting the best breeds." 
In a review of Borlaug's 2000 publication entitled Ending World Hunger: The Promise of Biotechnology and the Threat of Antiscience Zealotry, the authors argued that Borlaug's warnings were still true in 2010: According to Borlaug, "Africa, the former Soviet republics, and the cerrado are the last frontiers. After they are in use, the world will have no additional sizable blocks of arable land left to put into production, unless you are willing to level whole forests, which you should not do. So, future food-production increases will have to come from higher yields. And though I have no doubt yields will keep going up, whether they can go up enough to feed the population monster is another matter. Unless progress with agricultural yields remains very strong, the next century will experience sheer human misery that, on a numerical scale, will exceed the worst of everything that has come before". Besides increasing the worldwide food supply, early in his career Borlaug stated that taking steps to decrease the rate of population growth will also be necessary to prevent food shortages. In his Nobel Lecture of 1970, Borlaug stated, "Most people still fail to comprehend the magnitude and menace of the 'Population Monster' ... If it continues to increase at the estimated present rate of two percent a year, the world population will reach 6.5 billion by the year 2000. Currently, with each second, or tick of the clock, about 2.2 additional people are added to the world population. The rhythm of increase will accelerate to 2.7, 3.3, and 4.0 for each tick of the clock by 1980, 1990, and 2000, respectively, unless man becomes more realistic and preoccupied about this impending doom. The tick-tock of the clock will continually grow louder and more menacing each decade. Where will it all end?" However, some observers have suggested that by the 1990s Borlaug had changed his position on population control. They point to a quote from the year 2000 in which he stated: "I now say that the world has the technology—either available or well advanced in the research pipeline—to feed on a sustainable basis a population of 10 billion people. The more pertinent question today is whether farmers and ranchers will be permitted to use this new technology? While the affluent nations can certainly afford to adopt ultra low-risk positions, and pay more for food produced by the so-called 'organic' methods, the one billion chronically undernourished people of the low income, food-deficit nations cannot." Nevertheless, Borlaug remained on the advisory board of Population Media Center, an organization working to stabilize world population, until his death. Death Borlaug died of lymphoma at the age of 95, on September 12, 2009, in his Dallas home. Borlaug's children released a statement saying, "We would like his life to be a model for making a difference in the lives of others and to bring about efforts to end human misery for all mankind." The Prime Minister of India Manmohan Singh and President of India Pratibha Patil paid tribute to Borlaug saying, "Borlaug's life and achievement are testimony to the far-reaching contribution that one man's towering intellect, persistence and scientific vision can make to human peace and progress." The United Nations' Food and Agriculture Organization (FAO) described Borlaug as "a towering scientist whose work rivals that of the 20th century's other great scientific benefactors of humankind" and Kofi Annan, former Secretary-General of the United Nations, said, "As we celebrate Dr. 
Borlaug's long and remarkable life, we also celebrate the long and productive lives that his achievements have made possible for so many millions of people around the world... we will continue to be inspired by his enduring devotion to the poor, needy and vulnerable of our world." Honors and awards In 1968, Borlaug received what he considered an especially satisfying tribute when the people of Ciudad Obregón, where some of his earliest experiments were undertaken, named a street after him. Also in that year, he became a member of the U.S. National Academy of Sciences. In 1970, he was given an honorary doctorate by the Agricultural University of Norway. In 1970, he was awarded the Nobel Peace Prize by the Norwegian Nobel Committee "for his contributions to the 'green revolution' that was having such an impact on food production particularly in Asia and in Latin America." In 1970, he was awarded the Order of the Aztec Eagle by the Mexican government. In 1971, he was named a Distinguished Fellow of the National Academy of Agronomy and Veterinary Medicine of Argentina. In 1971, he received the American Academy of Achievement's Golden Plate Award. In 1974, he was awarded a Peace Medal (in the form of a dove, carrying a wheat ear in its beak) by Haryana Agricultural University, Hisar, India. In 1975, he was named a Distinguished Fellow of the Iowa Academy of Science. In 1980, he received the S. Roger Horchow Award for Greatest Public Service by a Private Citizen, an award given out annually by Jefferson Awards. In 1980, he was elected an honorary member of the Hungarian Academy of Sciences. In 1984, his name was placed in the National Agricultural Hall of Fame at the national center in Bonner Springs, Kansas. Also that year, he was recognized by the Governors Conference on Agriculture Innovations in Little Rock, Arkansas, for sustained service to humanity through outstanding contributions in plant breeding. Also in 1984, he received the Henry G. Bennet Distinguished Service Award at commencement ceremonies at Oklahoma State University. In 1986, Borlaug was inducted into the Scandinavian-American Hall of Fame during Norsk Høstfest. Borlaug was elected a Foreign Member of the Royal Society (ForMemRS) in 1987. Borlaug had a long history of involvement with the Council for Agricultural Science and Technology (CAST). Borlaug's remarks as the invited speaker at the organization's conference in 1973 became its first published paper. Borlaug received the organization's Charles A. Black Award in 2005 for his contributions to public policy and the public understanding of science. In 2010, CAST changed the name of the Charles A. Black Award (1986–2009) to the Borlaug CAST Communication Award. On August 19, 2013, his statue was unveiled inside the ICAR's NASC Complex at New Delhi, India. On March 25, 2014, a statue of Borlaug at the United States Capitol was unveiled in a ceremony on the 100th anniversary of his birth. This statue replaced the statue of James Harlan as one of the two statues given to the National Statuary Hall Collection by the state of Iowa. Borlaug received the 1977 U.S. Presidential Medal of Freedom, the 2002 Public Welfare Medal from the National Academy of Sciences, the 2002 Rotary International Award for World Understanding and Peace, and the 2004 National Medal of Science. 
As of January 2004, Borlaug had received 49 honorary degrees from as many universities, in 18 countries, the most recent from Dartmouth College on June 12, 2005, and was a foreign or honorary member of 22 international Academies of Sciences. In Iowa and Minnesota, "World Food Day", October 16, is referred to as "Norman Borlaug World Food Prize Day". Throughout the United States, it is referred to as "World Food Prize Day". In 2006, the Government of India conferred on him its second highest civilian award: the Padma Vibhushan. He was awarded the Danforth Award for Plant Science by the Donald Danforth Plant Science Center, St. Louis, Missouri, in recognition of his lifelong commitment to increasing global agricultural production through plant science. The stained-glass World Peace Window at St. Mark's Episcopal Cathedral in Minneapolis, Minnesota, depicts "peace makers" of the 20th century, including Norman Borlaug. In August 2006, Dr. Leon Hesser published The Man Who Fed the World: Nobel Peace Prize Laureate Norman Borlaug and His Battle to End World Hunger, an account of Borlaug's life and work. On August 4, the book received the 2006 Print of Peace award, as part of International Read For Peace Week. Borlaug is also the subject of the documentary film The Man Who Tried to Feed the World, which first aired on American Experience on April 21, 2020. On September 27, 2006, the United States Senate by unanimous consent passed the Congressional Tribute to Dr. Norman E. Borlaug Act of 2006. The act authorizes that Borlaug be awarded America's highest civilian award, the Congressional Gold Medal. On December 6, 2006, the House of Representatives passed the measure by voice vote. President George W. Bush signed the bill into law on December 14, 2006, and it became Public Law Number 109–395. According to the act, "the number of lives Dr. Borlaug has saved [is] more than a billion people". The act authorizes the Secretary of the Treasury to strike and sell duplicates of the medal in bronze. He was presented with the medal on July 17, 2007. Borlaug was a foreign fellow of the Bangladesh Academy of Sciences. The Borlaug Dialogue (Norman E. Borlaug International Symposium) is named in his honor. Books The Green Revolution, Peace, and Humanity. 1970. Nobel Lecture, Norwegian Nobel Institute in Oslo, Norway. December 11, 1970. Wheat in the Third World. 1982. Authors: Haldore Hanson, Norman E. Borlaug, and R. Glenn Anderson. Boulder, Colorado: Westview Press. Land Use, Food, Energy and Recreation. 1983. Aspen Institute for Humanistic Studies. Feeding a Human Population That Increasingly Crowds a Fragile Planet. 1994. Mexico City. Norman Borlaug on World Hunger. 1997. Edited by Anwar Dil. San Diego/Islamabad/Lahore: Bookservice International. 499 pages. The Green Revolution Revisited and the Road Ahead. 2000. Anniversary Nobel Lecture, Norwegian Nobel Institute in Oslo, Norway. September 8, 2000. "Ending World Hunger. The Promise of Biotechnology and the Threat of Antiscience Zealotry". 2000. Plant Physiology, October 2000, Vol. 124, pp. 487–90. Feeding a World of 10 Billion People: The TVA/IFDC Legacy. International Fertilizer Development Center, 2003. Prospects for World Agriculture in the Twenty-First Century. 2004. Norman E. Borlaug, Christopher R. Dowswell. Published in: Sustainable Agriculture and the International Rice-Wheat System. Foreword to The Frankenfood Myth: How Protest and Politics Threaten the Biotech Revolution. 2004. Henry I. Miller, Gregory Conko. 
References Further reading External links Norman E. Borlaug papers, University Archives, University of Minnesota – Twin Cities The Pioneer of the Green Revolution 1914 births 2009 deaths American agronomists American Nobel laureates Deaths from lymphoma in Texas Civilian Conservation Corps people Congressional Gold Medal recipients Development specialists Fellows of Bangladesh Academy of Sciences Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society of Edinburgh Founding members of the World Cultural Council Foreign members of the Royal Society Recipients of the Great Cross of the National Order of Scientific Merit (Brazil) Members of the Brazilian Academy of Sciences Members of the Hungarian Academy of Sciences Members of the United States National Academy of Sciences Members of the Polish Academy of Sciences Members of the Royal Swedish Academy of Agriculture and Forestry Mexican Academy of Sciences National Medal of Science laureates Nobel Peace Prize laureates Members of the Norwegian Academy of Science and Letters American people of Norwegian descent Scientists from Dallas People from Cresco, Iowa Scientists from Minnesota Presidential Medal of Freedom recipients Recipients of the Padma Vibhushan in science & engineering Rockefeller Foundation people TWAS fellows University of Minnesota College of Food, Agricultural and Natural Resource Sciences alumni Vannevar Bush Award recipients Fellows of Pakistan Academy of Sciences Foreign fellows of the Indian National Science Academy Foreign members of the Chinese Academy of Engineering American Lutherans Scientists from Iowa 20th-century American biologists 21st-century American scientists 21st-century American biologists Plant breeding
Norman Borlaug
Chemistry
10,587
325,829
https://en.wikipedia.org/wiki/Green%20Man
The Green Man, also known as a foliate head, is a motif in architecture and art, of a face made of, or completely surrounded by, foliage, which normally spreads out from the centre of the face. Apart from a purely decorative function, the Green Man is primarily interpreted as a symbol of rebirth, representing the cycle of new growth that occurs every spring. The Green Man motif has many variations. Branches or vines may sprout from the mouth, nostrils, or other parts of the face, and these shoots may bear flowers or fruit. Found in many cultures from many ages around the world, the Green Man is often related to natural vegetation deities. Often used as decorative architectural ornaments, where they are a form of mascaron or ornamental head, Green Men are frequently found in architectural sculpture on both secular and ecclesiastical buildings in the Western tradition. In churches in England, the image was used to illustrate a popular sermon describing the mystical origins of the cross of Jesus. "Green Man" type foliate heads first appeared in England during the early 12th century deriving from those of France, and were especially popular in the Gothic architecture of the 13th to 15th centuries. The idea that the Green Man motif represents a pagan mythological figure, as proposed by Lady Raglan in 1939, despite its popularity with the lay public, is not supported by evidence. Types Usually referred to in art history as foliate heads or foliate masks, representations of the Green Man take many forms, but most just show a "mask" or frontal depiction of a face, which in architecture is usually in relief. The simplest depict a man's face peering out of dense foliage. Some may have leaves for hair, perhaps with a leafy beard. Often leaves or leafy shoots are shown growing from his open mouth and sometimes even from the nose and eyes as well. In the most abstract examples, the carving at first glance appears to be merely stylised foliage, with the facial element only becoming apparent on closer examination. The face is almost always male; green women are rare. Lady Raglan coined the term "Green Man" for this type of architectural feature in her 1939 article The Green Man in Church Architecture in The Folklore Journal. It is thought that her interest stemmed from carvings at St. Jerome's Church in Llangwm, Monmouthshire. The Green Man appears in many forms, with the three most common types categorized as: the Foliate Head: completely covered in green leaves the Disgorging Head: spews vegetation from its mouth the Bloodsucker Head: sprouts vegetation from all facial orifices (e.g. tear ducts, nostrils, mouth, and ears) History In terms of formalism, art historians see a connection with the masks in Iron Age Celtic art, where faces emerge from stylized vegetal ornament in the "Plastic style" metalwork of La Tène art. Since there are so few survivals, and almost none in wood, the lack of a continuous series of examples is not a fatal objection to such a continuity. The Oxford Dictionary of English Folklore suggests that they ultimately have their origins in late Roman art from leaf masks used to represent gods and mythological figures. A character superficially similar to the Green Man, in the form of a partly foliate mask surrounded by Bacchic figures, appears at the centre of the 4th-century silver salver in the Mildenhall Treasure, found at a Roman villa site in Suffolk, England; the mask is generally agreed to represent Neptune or Oceanus and the foliation is of seaweed. 
In his lectures at Gresham College, historian and professor Ronald Hutton traces the Green Man to India, stating "the component parts of Lady Raglan's construct of the Green Man were dismantled. The medieval foliate heads were studied by Kathleen Basford in 1978 and Mercia MacDermott in 2003. They were revealed to have been a motif originally developed in India, which travelled through the medieval Arab empire to Christian Europe. There it became a decoration for monks’ manuscripts, from which it spread to churches." A late 4th-century example of a Green Man disgorging vegetation from his mouth is at St. Abre, in St. Hilaire-le-Grand, France. 11th century Romanesque Templar churches in Jerusalem have Romanesque foliate heads. Harding tentatively suggested that the symbol may have originated in Asia Minor and been brought to Europe by travelling stone carvers. The tradition of the Green Man carved into Christian churches is found across Europe, including examples such as the Seven Green Men of Nicosia carved into the facade of the thirteenth century St Nicholas Church in Cyprus. The motif fitted very easily into the developing use of vegetal architectural sculpture in Romanesque and Gothic architecture in Europe. Later foliate heads in churches may have reflected the legends around Seth, the son of Adam, according to which he plants seeds in his dead father's mouth as he lies in his grave. The tree that grew from them became the tree of the true cross of the crucifixion. This tale was in The Golden Legend of Jacobus de Voragine, a very popular thirteenth century compilation of Christian religious stories, from which the subjects of church sermons were often taken, especially after 1483, when William Caxton printed an English translation of the Golden Legend. According to Stephen Miller, author of "The Green Man in Medieval England: Christian Shoots from Pagan Roots" (2022), "It is a Christian/Judaic-derived motif relating to the legends and medieval hagiographies of the Quest of Seth – the three twigs/seeds/kernels planted below the tongue of post-fall Adam by his son Seth (provided by the angel of mercy responsible for guarding Eden) shoot forth, bringing new life to humankind". This notion was first proposed by James Coulter (2006). From the Renaissance onward, elaborate variations on the Green Man theme, often with animal heads rather than human faces, appear in many media other than carvings (including manuscripts, metalwork, bookplates, and stained glass). They seem to have been used for purely decorative effect rather than reflecting any deeply held belief. Modern history In Britain, the image of the Green Man enjoyed a revival in the 19th century, becoming popular with architects during the Gothic revival and the Arts and Crafts era, when it appeared as a decorative motif in and on many buildings, both religious and secular. American architects took up the motif around the same time. Many variations can be found in Neo-gothic Victorian architecture. He was popular amongst Australian stonemasons and can be found on many secular and sacred buildings, including an example on Broadway, Sydney. In 1887 a Swiss engraver, Numa Guyot, created a bookplate depicting a Green Man in exquisite detail. In April 2023, a Green Man's head was depicted on the invitation for the Coronation of Charles III and Camilla, designed by heraldic artist and manuscript illuminator Andrew Jamieson.
According to the official royal website: "Central to the design is the motif of the Green Man, an ancient figure from British folklore, symbolic of spring and rebirth, to celebrate the new reign. The shape of the Green Man, crowned in natural foliage, is formed of leaves of oak, ivy, and hawthorn, and the emblematic flowers of the United Kingdom", which alluded to "the nature worshipper in King Charles" but polarized the public. Indeed, as the medieval art historian Cassandra Harrington pointed out, although vegetal figures were abundant throughout the medieval and early modern period, the foliate head motif is not ‘an ancient figure from British folklore’, as the Royal Household has proclaimed, but a European import. In folklore Citations Sources cited Sandars, Nancy K., Prehistoric Art in Europe, Penguin (Pelican, now Yale, History of Art), 1968 (nb 1st edn.) Further reading Amis, Kingsley. The Green Man, Vintage, London (2004) (Novel) Anderson, William. Green Man: The Archetype of our Oneness with the Earth, HarperCollins (1990) Basford, Kathleen. The Green Man, D.S. Brewer (2004) (The first monograph on the subject, now reprinted in paperback) Beer, Robert. The Encyclopedia of Tibetan Symbols and Motifs, Shambhala (1999) Cheetham, Tom. Green Man, Earth Angel: The Prophetic Tradition and the Battle for the Soul of the World, SUNY Press (2004) Doel, Fran and Doel, Geoff. The Green Man in Britain, Tempus Publishing Ltd (May 2001) Harding, Mike. A Little Book of the Green Man, Aurum Press, London (1998) Hicks, Clive. The Green Man: A Field Guide, Compass Books (August 2000) MacDermott, Mercia. Explore Green Men, Explore Books, Heart of Albion Press (September 2003) Matthews, John. The Quest for the Green Man, Godsfield Press Ltd (May 2004) Neasham, Mary. The Spirit of the Green Man, Green Magic (December 2003) Varner, Gary R. The Mythic Forest, the Green Man and the Spirit of Nature, Algora Publishing (March 4, 2006) The name of the Green Man Research paper by Brandon S Centerwall from Folklore magazine External links Greenman Encyclopedia Wiki A site with a comprehensive listing of locations of Green Men in the UK Christmas characters Church architecture Cornish folklore English folklore Fairies Iconography Life-death-rebirth gods Medieval legends Mythological human hybrids Romanesque art Scottish folklore Supernatural legends Visual motifs
Green Man
Mathematics
1,964
46,312,132
https://en.wikipedia.org/wiki/Aleurina%20ferruginea
Aleurina ferruginea is a fungus species in the family Pyronemataceae. References Pyronemataceae Fungi described in 1888 Fungus species
Aleurina ferruginea
Biology
35
32,711,871
https://en.wikipedia.org/wiki/Microsoft%20Intune
Microsoft Intune (formerly Microsoft Endpoint Manager and Windows Intune) is a Microsoft cloud-based unified endpoint management service for both corporate and BYOD devices. It extends some of the "on-premises" functionality of Microsoft Configuration Manager to the Microsoft Azure cloud. Distribution No on-premises infrastructure is required for clients to use Intune, and management is accomplished using a web-based portal. Distribution is through a subscription system in which a fixed monthly cost is incurred per user. It is also possible to use Endpoint Manager in co-management with Microsoft Configuration Manager. It is included in the Microsoft Enterprise Mobility + Security (EMS) suite and Microsoft Office 365 Enterprise E5, which were both succeeded by Microsoft 365 in July 2017. Microsoft 365 Business Premium licenses also include Intune and EMS. Microsoft Intune is a cloud-based endpoint management solution. It manages user access and simplifies app and device management across an organization's many devices, including mobile devices, desktop computers, and virtual endpoints. As organizations move to support hybrid and remote workforces, they face the challenge of managing the devices that access organizational resources. Staff and students must be able to collaborate and to access and use these resources safely, while managers must protect organizational data, manage end-user access, and support users wherever they work. Function Intune supports the Android, iOS, Linux, macOS, and Windows operating systems. Administration is done via a web browser. The administration console allows Intune to invoke remote tasks such as malware scans. Since version 2.0, installation of software packages in .exe, .msi and .msp format is supported. Installations are encrypted and compressed on Microsoft Azure Storage. Software installation can begin upon login. It can record and administer volume, retail and OEM licenses, as well as licenses administered by third parties. Upgrades to newer versions of the Intune software are also controlled. Information about inventory is recorded automatically. Managed computers can be grouped together when problems occur. Intune notifies support staff, as well as an external dealer, via e-mail. Intune plans Since March 2023, Microsoft Intune has been available in three versions: Intune Plan 1, Intune Plan 2 and Intune Suite. Plan 2 and the Suite do not themselves include Plan 1. Microsoft Intune Plan 1 is included with subscriptions to Microsoft 365 E3, E5, F1, F3, Enterprise Mobility + Security E3 and E5, and Business Premium plans. Reception Der Standard praised the application, saying "the cloud service Intune promises to be a simple PC management tool via a web console. The interface provides a quick overview of the state of enterprise systems." German PC World positively evaluated "usability", saying that it "kept the interface simple." Business Computing World criticized the program, saying "Although Windows Intune worked well in our tests and did everything expected of it, we didn't find it all that easy to get to grips with", blaming the unintuitive "deceptively simple" management interface. ITespresso rated it "good", but noted connection issues with the remote assistance feature and that changes to firewall settings could take upwards of a full day to push out to clients. History Microsoft Intune was originally introduced as Windows Intune in April 2010. Microsoft announced plans to extend the service to other platforms and rename it to Microsoft Intune on 8 October 2014.
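Beyond the web portal, Intune's managed-device inventory is also exposed through the Microsoft Graph API and can be scripted. The following is a minimal sketch, not an official sample: it assumes an Azure AD app registration that has been granted the DeviceManagementManagedDevices.Read.All application permission, and the tenant ID, client ID, and secret shown are placeholders.

```python
# Minimal sketch: list Intune-managed devices via Microsoft Graph.
# Assumes an Azure AD app registration with the
# DeviceManagementManagedDevices.Read.All application permission.
# The IDs and secret below are placeholders, not real credentials.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder
CLIENT_SECRET = "<client-secret>"                    # placeholder

# Acquire an app-only token for Microsoft Graph.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

# Query the managed-device inventory (first page only in this sketch).
resp = requests.get(
    "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
    headers={"Authorization": f"Bearer {result['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
for device in resp.json().get("value", []):
    print(device["deviceName"], device["operatingSystem"], device["complianceState"])
```

Paging through further results (via the @odata.nextLink property) and error handling for a failed token request are omitted for brevity.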
Sources External links Microsoft cloud services Mobile device management software System administration Software distribution Network management
Microsoft Intune
Technology,Engineering
712
8,591,819
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Pyxis
This is the list of notable stars in the constellation Pyxis, sorted by decreasing brightness. See also Lists of stars by constellation References List Pyxis
List of stars in Pyxis
Astronomy
33
28,530,071
https://en.wikipedia.org/wiki/Cortinarius%20varius
Cortinarius varius, also known as the contrary webcap, is a basidiomycete mushroom of the genus Cortinarius. The mushroom has orangish-yellow caps that reach up to in diameter, and thick club-shaped stems up to long. Taxonomy The species was first described as Agaricus varius by Jacob Christian Schäffer in 1774. It was given its current name by Elias Magnus Fries in 1838. It is commonly known as the "contrary webcap". Description The cap is in diameter, initially spherical to convex, then flattened or depressed, at first with a thin, involute margin, bearing fragments of veil when young. The cap surface is sticky and smooth, orangish-yellow, with a light ochre tint, and yellower at the edge than in the middle, where the color is more rusty yellow. The gills are crowded closely together, usually somewhat emarginate (notched), thin and not very broad (5–8 mm). They are initially rich cornflower blue, then lilac, and finally ochre-cinnamon, with a slightly scalloped edge. The stem is solid, thick in the lower part like a club. It is usually quite short when young, then often elongated, high and wide, up to or more in the swollen part. Depending on the maturity of the mushroom, the surface of the stem can range from covered with tufts of fine hairs pressed against the surface, to fibrillose, to almost smooth. The stem color is white with a slight blue to lilac tinge at the top that later disappears, slightly yellowish-cream below, changing to completely pale yellowish-ochre when old. The cortina (a cobweb-like partial veil made of silky fibrils) is white, but later becomes cinnamon when the mushroom drops its spores. The flesh is firm, finely and compactly fleshy, white in the cap, later with a faint yellow tinge, and undulatingly fibrillose in the stem, also with a faint yellowish tinge. The odor is "pleasant", and the taste is also pleasant and mild. It has been described variously as inedible, or edible, and has been used for pickling. The spores are light rusty-brown, ellipsoid to almond-shaped, measuring 10–15 by 6.5–7.5 μm with a distinct oblique apiculus. The flesh turns a chrome yellow color when chemically tested with a dilute solution of potassium hydroxide or ammonia. Cortinarius varius is closely related to Cortinarius variosimilis, a species that occurs in North America, but which has a paler cap, paler gills, and shorter spores. Distribution and habitat The fruit bodies of Cortinarius varius grow in groups in coniferous forests, also in glades and at the edge of woods, from the end of summer until late in the autumn, when the frosts set in. In some places it is a common species, in others quite rare. It prefers calcareous soils. See also List of Cortinarius species References varius Fungi described in 1783 Fungi of Europe Taxa named by Jacob Christian Schäffer Fungus species
Cortinarius varius
Biology
671
21,249,745
https://en.wikipedia.org/wiki/Healthcare%20in%20Estonia
Healthcare in Estonia is supervised by the Ministry of Social Affairs and funded by general taxation through the National Health Service. The service is administered by the Estonian Health Insurance Fund (EHIF). An insured person must be either a permanent resident or a legal resident who pays the social tax. All health care providers in Estonia are required to submit the health information of their patients to the digital health information system. Estonia's health care system is based on compulsory, solidarity-funded insurance and on universal access to services provided by private service providers. All providers of health services are autonomous businesses governed by private law. The sole purchaser and payer is the Estonian Health Insurance Fund (Eesti Haigekassa), which pays all contracted providers. The majority of general practitioners work for themselves, privately owned businesses, or local governments. In Estonia, the majority of hospitals are either foundations created by the government, municipalities, or other public organizations, or limited businesses owned by the local government. The Estonian Health Insurance Fund will also pay for necessary treatments received in a private hospital if the hospital has a contract with the Fund; if no such contract exists, private medical care is not reimbursed. Estonian Health Insurance Fund (EHIF) The Estonian healthcare system is funded through mandatory contributions made through a payroll tax. It accounts for almost two-thirds of all healthcare expenditure in the country. The Estonian Health Insurance Fund (EHIF) is an independent body that acts as the sole purchaser of medical care. It operates through four regional branches, each covering two to six counties, which collect and disburse funds, contract service providers, and provide pharmaceuticals and other health programs. The health insurance system covers about 95% of the population. Contributions are proportional to employment and salaries, but non-contributing citizens represent almost half of the insured people. The Ministry of Social Affairs covers uninsured persons and ambulance services. Hospitals Electronic health record Estonia is a pioneer in the use of electronic health records: when general practice was moved out of hospitals in 1998, the existing records stayed in the hospitals, so GPs had to start their own system. Madis Tiik established an electronic record system, though it was officially illegal until 2002. He was a founding member of the eHealth Foundation and became its chief executive. There is now a central record system which is available to all healthcare professionals and can be viewed by the patient. Some tasks are automated, so that doctors do not have to certify that people are fit to drive; the application automatically checks their medical history. Estonia was the first country in the world to implement a nationwide EHR system, registering virtually all residents' medical history from birth to death. It was launched on 17 December 2008. Estonia used its existing digital public service software known as X-Road to create the EHR network. Estonia's system was overseen by the Ministry of Social Affairs until the creation of the Estonian e-Health Foundation. Since its implementation, 95% of health data has been digitized. Citizens who participate in the program are given an individual card, similar to a national identification card, that is used to access their records. The cost of the system was €7.50 per person at the time of its creation.
Costs can stay low due to Estonia's small population. According to the National Audit Office, however, the system is still too small to support proper diagnosis or to track national statistics. Along with e-Health records, Estonia has also created an e-Prescription service. It allows doctors to create an electronic prescription that is added to a patient's health card and accessed at a pharmacy, where the patient receives the prescribed medicine. Now 97% of prescriptions in Estonia are digital. Child support Upon the birth of a child, the Estonian government grants one of the parents 100% of their former salary for 18 months, plus a one-time payment of 320 Euros per child. After 18 months, the parent has the right to resume her/his former position. In addition, the parent and child receive free healthcare. Parents who did not work before the birth (unemployed, students, etc.) receive 278 Euros a month; the top salary is capped at 2,157 Euros a month. These measures, which have been in force since 2005, have not been proven to have had a major positive effect on the birth rate in Estonia, which had already been increasing since 2001. Those policy measures concentrate on the first 18 months of the child's life. After 18 months, the monthly state support for a child goes down to 60 Euros a month (for the first two children) and 100 Euros (from the third child on), plus free healthcare. There are many exceptions and added bonuses to the rule. For example, the child of a single parent receives an additional 19.18 Euros of child support. The child of an army member receives 300 Euros a month, and children in foster families receive 240 Euros a month. Despite considerable variation and fluctuation in the support to families with children, the majority of Estonian families do not face great hardships, and the State of The World's Mothers 2011 report ranked Estonia as the 18th best country in the world to be a mother, ahead of countries like Canada and the United States. The 2014 report ranked neighboring Finland as the best country in which to be a mother. According to the CIA World Factbook, Estonia has the 166th lowest maternal death rate in the world. References External links World Health Organization (WHO): Estonia Healthcare in Estonia Electronic health records
Healthcare in Estonia
Technology
1,103
21,837,608
https://en.wikipedia.org/wiki/Normal%20p-complement
In group theory, a branch of mathematics, a normal p-complement of a finite group for a prime p is a normal subgroup of order coprime to p and index a power of p. In other words the group is a semidirect product of the normal p-complement and any Sylow p-subgroup. A group is called p-nilpotent if it has a normal p-complement. Cayley normal 2-complement theorem Cayley showed that if the Sylow 2-subgroup of a group G is cyclic then the group has a normal 2-complement, which shows that the Sylow 2-subgroup of a simple group of even order cannot be cyclic. Burnside normal p-complement theorem Burnside showed that if a Sylow p-subgroup of a group G is in the center of its normalizer then G has a normal p-complement. This implies that if p is the smallest prime dividing the order of a group G and the Sylow p-subgroup is cyclic, then G has a normal p-complement. Frobenius normal p-complement theorem The Frobenius normal p-complement theorem is a strengthening of the Burnside normal p-complement theorem, which states that if the normalizer of every non-trivial subgroup of a Sylow p-subgroup of G has a normal p-complement, then so does G. More precisely, the following conditions are equivalent: G has a normal p-complement The normalizer of every non-trivial p-subgroup has a normal p-complement For every p-subgroup Q, the group NG(Q)/CG(Q) is a p-group. Thompson normal p-complement theorem The Frobenius normal p-complement theorem shows that if every normalizer of a non-trivial subgroup of a Sylow p-subgroup has a normal p-complement then so does G. For applications it is often useful to have a stronger version where instead of using all non-trivial subgroups of a Sylow p-subgroup, one uses only the non-trivial characteristic subgroups. For odd primes p Thompson found such a strengthened criterion: in fact he did not need all characteristic subgroups, but only two special ones. Thompson showed that if p is an odd prime and the groups N(J(P)) and C(Z(P)) both have normal p-complements for a Sylow p-subgroup P of G, then G has a normal p-complement. In particular if the normalizer of every nontrivial characteristic subgroup of P has a normal p-complement, then so does G. This consequence is sufficient for many applications. The result fails for p = 2 as the simple group PSL2(F7) of order 168 is a counterexample. A weaker version of this theorem was also given. Glauberman normal p-complement theorem Thompson's normal p-complement theorem used conditions on two particular characteristic subgroups of a Sylow p-subgroup. Glauberman improved this further by showing that one only needs to use one characteristic subgroup: the center of the Thompson subgroup. Glauberman used his ZJ theorem to prove a normal p-complement theorem: if p is an odd prime and the normalizer of Z(J(P)) has a normal p-complement, for P a Sylow p-subgroup of G, then so does G. Here Z stands for the center of a group and J for the Thompson subgroup. The result fails for p = 2 as the simple group PSL2(F7) of order 168 is a counterexample. References Reprinted by Dover 1955 Finite groups
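A minimal worked example of the definition, via Cayley's criterion applied to the symmetric group S3 with p = 2:

```latex
% Worked example: S_3 is 2-nilpotent.
% Its Sylow 2-subgroup is cyclic, so Cayley's criterion applies.
\[
  G = S_3, \qquad |G| = 6 = 2 \cdot 3, \qquad
  P = \langle (1\,2) \rangle \cong C_2 \quad \text{(a cyclic Sylow 2-subgroup)}.
\]
% The normal 2-complement has order coprime to 2 (here 3)
% and index a power of 2 (here 2):
\[
  N = A_3 = \langle (1\,2\,3) \rangle, \qquad |N| = 3, \qquad [G : N] = 2,
  \qquad G = N \rtimes P.
\]
% By contrast, the simple group PSL_2(F_7) of order 168 = 2^3 \cdot 21
% has no normal subgroup of order 21 (it is simple),
% so it has no normal 2-complement.
```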
Normal p-complement
Mathematics
677
8,322,150
https://en.wikipedia.org/wiki/Acemannan
Acemannan is a D-isomer mucopolysaccharide found in aloe vera leaves. This compound has potential immunostimulant, antiviral, antineoplastic, and gastrointestinal properties. Chemical structure and properties Acemannan's monomer is mannoacetate linked by β-1,4-glycosidic bonds. This polymer is hydrophilic. Immunostimulant properties Acemannan has been demonstrated to induce macrophages to secrete interferon (IFN), tumor necrosis factor-α (TNF-α) and interleukin-1 (IL-1); therefore, it might help to prevent or abrogate viral infection. These three cytokines are known to cause inflammation, and interferon is released in response to viral infections. In vitro studies have shown acemannan to inhibit HIV replication; however, in vivo studies have been inconclusive. Acemannan is currently being used for treatment and clinical management of fibrosarcoma in dogs and cats. Administration of acemannan has been shown to increase tumor necrosis and prolong host survival; the animals have demonstrated lymphoid infiltration and encapsulation. The compound has been found to have an LD50 of >80 mg/kg and an LC50 of >5,000 mg/kg IV. References Aloe Adjuvants Botanical drugs Polysaccharides
Acemannan
Chemistry
298
6,940,601
https://en.wikipedia.org/wiki/Tiger%20trout
The tiger trout (Salmo trutta × Salvelinus fontinalis) is a sterile, intergeneric hybrid of the brown trout (Salmo trutta) and the brook trout (Salvelinus fontinalis). Pronounced vermiculations in the fish's patterning gave rise to its name, evoking the stripes of a tiger. Tiger trout are a rare anomaly in the wild, as the parent species are relatively unrelated, being members of different genera and possessing mismatched numbers of chromosomes. However, specialized hatchery rearing techniques are able to produce tiger trout reliably enough to meet the demands of stocking programs. Natural occurrence Prior to the 19th century, naturally occurring tiger trout were an impossibility, as the native range of brown trout in Eurasia and brook trout in North America do not overlap and the species could therefore never have encountered one another in the wild. When the widespread stocking of non-native gamefish began in the 1800s, brown trout and brook trout began establishing wild populations alongside each other in some places and the opportunity for hybridization in the wild arose. Instances of stream-born tiger trout were recorded in the United States at least as early as 1944 and, despite being exceptionally rare, they've been documented numerous times during the 20th and 21st centuries. Tiger trout result exclusively from the fertilization of brown trout eggs with brook trout milt, as brook trout eggs are generally too small to be successfully fertilized by brown trout milt. Tigers are known as intergeneric hybrids as the two parent species share only a relatively distant relationship, belonging to different genera within the Salmon family. In fact, brook trout and brown trout have non-matching numbers of chromosomes, with the former possessing 84 and the latter 80. Consequently, even in cases in which brown trout eggs are fertilized by brook trout in the wild, most of these eggs develop improperly and fail to yield any young. Hatchery rearing Tiger trout can be produced reliably in hatcheries and they have been incorporated into stocking programs in the United States at least as early as the 1960s. Hatchery productivity is enhanced by heat shocking the fertilized hybrid eggs, causing the creation of an extra set of chromosomes which increases survival rates from 5% to 85%. Tiger trout have been reported to grow faster than natural species, though this assessment is not universal. They are also known to be highly piscivorous and are consequently a useful control against rough fish populations. This, along with their desirability as novel gamefish, means tigers have continued to be popular with many fish stocking programs. US states with tiger trout stocking programs include Arizona, Arkansas, Colorado, Connecticut, Idaho, Washington, West Virginia, Wyoming, Utah, Virginia, Oregon, Massachusetts and Pennsylvania. See also Splake References Salmonidae Fish hybrids Salmo Salvelinus Intergeneric hybrids
Tiger trout
Biology
592
9,281
https://en.wikipedia.org/wiki/Evolutionary%20linguistics
Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics. A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals. For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or populations of mind-viruses. There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience. History 1863–1945: social Darwinism Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher who was inspired by Charles Darwin's On the Origin of Species. At the time there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species. A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of Nature journal in 1870. Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls. Darwinists considered the concept of language creation as unscientific. August Schleicher and his friend Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space. Similar ideas became later advocated by politicians who wanted to appeal to working class voters, not least by the national socialists who subsequently included the concept of struggle for living space in their agenda. Highly influential until the end of World War II, social Darwinism was eventually banished from human sciences, leading to a strict separation of natural and sociocultural studies. This gave rise to the dominance of structural linguistics in Europe. 
There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in the human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centrepoint of humanistic thinking. From 1959 onwards: genetic determinism In the United States, however, structuralism was fended off by advocates of behavioural psychology, whose approach to linguistics was nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power following Spring 1968 at MIT. Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s. The turn of the century saw a new academic funding policy in which interdisciplinary research became favoured, effectively directing research funds to the biological humanities. The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit. Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome, proposing a similar explanation for other human faculties such as ethics. Steven Pinker, by contrast, argued in 1990 that they are the outcome of evolutionary adaptations. From 1976 onwards: Neo-Darwinism At the same time as the Chomskyan paradigm of biological determinism was defeating humanism, it was losing its own clout within sociobiology. It was likewise reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics, a derivative of Richard Dawkins's memetics, which conceives of linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials) and 'functional' (adaptational) linguistics (not to be confused with functional linguistics), to confront both Chomsky and the humanists. The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics.
He argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics. Approaches Evolutionary linguistics is often divided into functionalism and formalism, concepts which are not to be confused with functionalism and formalism in the humanistic sense. Functional evolutionary linguistics considers languages as adaptations to the human mind. The formalist view regards them as crystallised or non-adaptational. Functionalism (adaptationism) The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated. It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning which is a 'metaphorical' version of image-based reasoning. Language is not considered as a separate area of cognition, but as coinciding with general cognitive capacities, such as perception, attention, motor skills, and spatial and visual processing. It is argued to function according to the same principles as these. It is thought that the brain links action schemes to form–meaning pairs which are called constructions. Cognitive linguistic approaches to syntax are called cognitive and construction grammar. Also deriving from memetics and other cultural replicator theories, these can study the natural or social selection and adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units. The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given. What correspond to replicators or mind-viruses in memetics are called linguemes in Croft's theory of Utterance Selection (TUS), and likewise linguemes or constructions in construction grammar and usage-based linguistics; and metaphors, frames or schemas in cognitive and construction grammar. The reference to memetics has been largely replaced with that of a Complex Adaptive System. In current linguistics, this term covers a wide range of evolutionary notions while maintaining the Neo-Darwinian concepts of replication and replicator population. Functional evolutionary linguistics is not to be confused with functional humanistic linguistics. Formalism (structuralism) Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th century advances in crystallography, Schleicher argued that different types of languages are like plants, animals and crystals. The idea of linguistic structures as frozen drops was revived in tagmemics, an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused by the Creation. In modern biolinguistics, the X-bar tree is argued to be like natural systems such as ferromagnetic droplets and botanic forms. Generative grammar considers syntactic structures similar to snowflakes. It is hypothesised that such patterns are caused by a mutation in humans. The formal–structural evolutionary aspect of linguistics is not to be confused with structural linguistics. Evidence There was some hope of a breakthrough with the discovery of the FOXP2 gene. There is little support, however, for the idea that FOXP2 is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.
The idea that people have a language instinct is disputed. Memetics is sometimes discredited as pseudoscience, and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience. All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes. Criticism Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics. Ferdinand de Saussure commented critically on 19th century evolutionary linguistics. Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development. Esa Itkonen nonetheless deems the revival of Darwinism a hopeless enterprise. Itkonen also points out that the principles of natural selection are not applicable, because language innovation and acceptance have the same source, namely the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not their genotype. See also Biolinguistics Evolutionary psychology of language FOXP2 Origin of language Historical linguistics Phylogenetic tree Universal Darwinism References Further reading External links Agent-Based Models of Language Evolution ARTI Artificial Intelligence Laboratory, Vrije Universiteit Brussel Cognitive Neuroscience Laboratory Computerized comparative linguistics Fluid Construction Grammar Language Evolution and Computation Bibliography Language Evolution and Computation Research Unit, University of Edinburgh Evolution of language Sociobiology
Evolutionary linguistics
Biology
2,236
448,951
https://en.wikipedia.org/wiki/List%20of%20trolleybus%20systems
This is a list of cities where trolleybuses operate, or operated in the past, as part of the public transport system. The original list has been divided to improve user-friendliness and to reduce article size. Separate lists—separate articles in Wikipedia—have been made for the following countries: Americas Brazil Canada United States Europe (Note: countries not listed here are included in this article; see Contents table below) France Germany Italy Russia Spain Switzerland Ukraine United Kingdom This page also provides references that are applicable to all parts of the complete list. Bold typeface for a location city indicates an existing trolleybus system, currently in operation (temporary suspensions not counted), or a new system currently under construction. Africa Algeria Egypt Morocco South Africa Tunisia Americas Argentina Brazil Canada Chile Colombia Cuba Note: Tests began 18 September 1949 along tramway lines using "all-service vehicles" (dual-mode buses) purchased secondhand from Newark, New Jersey, US. The tests did not involve building new overhead or converting the existing supply, because Havana's tramway already had twin-wire overhead. Regular service was not operated. Ecuador Mexico Note: The Mexico City trolleybus system was long thought to have opened in April 1952, but is now known to have opened more than a year earlier, in March 1951. Prior to that, there was an experimental line, for testing without passengers, in 1947 or 1948. Peru Trinidad and Tobago United States Uruguay Venezuela Asia Afghanistan Armenia Azerbaijan China Georgia India Note: In Kolkata (Calcutta), trial operation with a single trolleybus on a short test line took place in 1977. Iran Japan Notes for the two tunnel trolleybus lines: The 6.1 km Kanden Tunnel Trolleybus line operated almost entirely in tunnel, through a mountain, and connected Ōgizawa Station with Kurobe Dam, for tourists and hikers. The transport service continues to operate, but no longer uses trolleybuses. Ōgizawa Station is in Ōmachi city, Nagano Prefecture. The affiliated 3.6 km Tateyama Tunnel Trolleybus line, similarly, operates entirely in tunnel and connects Daikanbo with Murodō. The line is located in Tateyama town, Toyama Prefecture. Both lines are part of the Tateyama Kurobe Alpine Route. This passes through Chūbu-Sangaku National Park (also known in English as "Japan Alps National Park"). Kazakhstan Note: A Russian-language source states that the system planned for the city of Turkistan was never realised. Kyrgyzstan Malaysia Mongolia Myanmar Nepal North Korea Philippines Saudi Arabia Singapore Sri Lanka Tajikistan Turkey Turkmenistan Uzbekistan Note: A Russian-language source states that systems were under construction in the following locations: Angren Chirchiq Guliston Qarshi Kokand Navoiy Termez Yangiabad Vietnam Europe Austria Goods (freight) line (trolleytruck): Belarus Note: Plans were announced in 2001 for new systems in: Baranovichi Barysaw Lida Molodechno Novopolotsk Orsha Pinsk Polotsk Soligorsk (Trolleybus Magazine) Belgium (by province) Antwerp (Antwerpen) Brussels Note: The Brussels-Capital Region is not a province. Neither does it belong to one, nor does it contain any. East Flanders (Oost-Vlaanderen) Liège Bosnia-Herzegovina Bulgaria Croatia Czech Republic Denmark Estonia Finland France Germany Greece Hungary Italy Latvia Lithuania Moldova Netherlands Gelderland Groningen South Holland (Zuid-Holland) Note for Rotterdam: Trolleybus overhead installed in the Maas tunnel in 1941, on instructions from German military authorities.
Not used. Norway Poland Portugal Romania The first trolleybus system in Romania opened in Chernivtsi on 1 February 1939. Today, the city is part of Ukraine. Russia Serbia Slovakia Slovenia Spain Sweden Switzerland Turkey See Asia section of list, above. Although trolleybuses served the European part of Istanbul, the country's three other trolleybus systems (and a fourth currently under construction) were or are all located in the Asian part of Turkey. Ukraine United Kingdom Oceania Australia New South Wales Queensland South Australia Tasmania Western Australia New Zealand United States (territories only, in Oceania) Hawaii (Territory of) Note: The trolleybus system existed only in the period before Hawaii became a U.S. state. For convenience, it is also included in List of trolleybus systems in the United States. See also List of bus operating companies List of bus rapid transit systems Trolleybus usage by country List of town tramway systems List of tram and light rail transit systems Lists of urban rail transit systems References Books Gregoris, Paolo; Rizzoli, Francesco; and Serra, Claudio. 2003. "Giro d'Italia in filobus". Cortona: Editore Calosci. Jones, David. Australian Trolleybuses. Wellington: City Tramway Publications. Mackinger, Gunter. 1979. "Obus in Österreich". (Eisenbahn-Sammelheft Nr. 16.) Wien: Verlag Slezak. Millar, Sean. 1986. "Trolleybuses in New Zealand". Auckland: Millar Publishing. Murray, Alan. 2000. "World Trolleybus Encyclopaedia". Yateley, Hampshire, UK: Trolleybooks. Pabst, Martin. 1989. "Tram & Trolley in Africa". Krefeld: Röhr Verlag GMBH. Peschkes, Robert. "World Gazetteer of Tram, Trolleybus, and Rapid Transit Systems." Part One, Latin America. 1980. Exeter, UK: Quail Map Company. Part Two, Asia+USSR / Africa / Australia. 1987. London: Rapid Transit Publications. Part Three, Europe. 1993. London: Rapid Transit Publications. Part Four, North America. 1998. London: Rapid Transit Publications. Sebree, Mac, and Paul Ward. 1974. "The Trolley Coach in North America" (Interurbans Special 59). Los Angeles: Interurbans. Stock, Werner. 1987. "Obus-Anlagen in Deutschland". Bielefeld: Hermann Busch Verlag. "Straßenbahnatlas ehem. Sowjetunion" / "Tramway Atlas of the former USSR". 1996. Berlin: Arbeitsgemeinschaft Blickpunkt Straßenbahn, in conjunction with Light Rail Transit Association, London. "Straßenbahnatlas Rumänien" (compiled by Andreas Günther, Sergei Tarkhov and Christian Blank). 2004. Berlin: Arbeitsgemeinschaft Blickpunkt Straßenbahn. Tarkhov, Sergei. 2000. "Empire of the Trolleybus: Vol 1 - Russia". London: Rapid Transit Publications. 吉川文夫 (Yoshikawa, Fumio). 1995. 日本のトロリーバス (Nippon no "trolleybus"). Tokyo: kk Denkisha-kenkyûkai. Periodicals "Trolleybus Magazine" (ISSN 0266-7452). National Trolleybus Association (UK). Bimonthly. Tarkhov, Sergei and Dmitriy Merzlov. "North Korean Surprises - Part 3". (Trolleybus Magazine No. 246, November–December 2002). External links All Time List of North American Trolleybus Systems (David Wyatt) Bibliography of the Electric Trolleybus (Richard DeArmond) (Elektrotransport v gorodakh byvshego SSSR, Dmitry Zinoviev) World tram and trolleybus systems (рус., en.) Latin American Trolleybus Installations (Allen Morrison) The Tramways of Cuba (Allen Morrison) TrolleyMotion Progetto Città Elettriche (Italy) Tram.nu Atlas (Bruse LF Persson) UK Trolleybus Systems & Museums (Bruce Lake) Wires of Faded Glory (Richard A.
Bílek) World Trolleybus List - Systems Closed Tom's North American Trolley bus Pix TRANSIRA Association (Romanian Trolleybuses) Trolleybus in Europe (public-transport.net) World Map of Trolleybus systems in operation (TransPhoto) Trolleybus systems
List of trolleybus systems
Physics
1,682
12,461,906
https://en.wikipedia.org/wiki/C3H5
The molecular formula C3H5 (molar mass: 41.07 g/mol, exact mass: 41.0391 u) may refer to: Allyl group Cyclopropyl group
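The quoted masses can be reproduced arithmetically; the following minimal sketch sums the tabulated atomic weights (for the average molar mass) and principal-isotope masses (for the exact, monoisotopic mass):

```python
# Minimal sketch: recompute the average (molar) and exact (monoisotopic)
# mass of C3H5. Atomic weights (12.011, 1.008 g/mol) and isotope masses
# (12.0 exactly for 12C, 1.007825 u for 1H) are the usual tabulated values.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}    # g/mol, element averages
ISOTOPE_MASS = {"C": 12.0, "H": 1.007825}    # u, most abundant isotopes

def total_mass(counts, table):
    """Sum element masses weighted by their counts in the formula."""
    return sum(table[element] * n for element, n in counts.items())

c3h5 = {"C": 3, "H": 5}
print(round(total_mass(c3h5, ATOMIC_WEIGHT), 2))  # 41.07 (g/mol)
print(round(total_mass(c3h5, ISOTOPE_MASS), 4))   # 41.0391 (u)
```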
C3H5
Chemistry
57
24,005,535
https://en.wikipedia.org/wiki/C29H42O6
The molecular formula C29H42O6 may refer to: Hydrocortisone cypionate, a synthetic glucocorticoid corticosteroid and a corticosteroid ester Kendomycin, an anticancer macrolide Molecular formulas
C29H42O6
Physics,Chemistry
72
628,811
https://en.wikipedia.org/wiki/Chlorella
Chlorella is a genus of about thirteen species of single-celled or colonial green algae of the division Chlorophyta. The cells are spherical in shape, about 2 to 10 μm in diameter, and are without flagella. Their chloroplasts contain the green photosynthetic pigments chlorophyll-a and -b. In ideal conditions cells of Chlorella multiply rapidly, requiring only carbon dioxide, water, sunlight, and a small amount of minerals to reproduce. The name Chlorella is taken from the Greek χλώρος, chlōros/khlōros, meaning green, and the Latin diminutive suffix -ella, meaning small. German biochemist and cell physiologist Otto Heinrich Warburg, awarded the Nobel Prize in Physiology or Medicine in 1931 for his research on cell respiration, also studied photosynthesis in Chlorella. In 1961, Melvin Calvin of the University of California received the Nobel Prize in Chemistry for his research on the pathways of carbon dioxide assimilation in plants using Chlorella. Chlorella has been considered as a source of food and energy because its photosynthetic efficiency can reach 8%, which exceeds that of other highly efficient crops such as sugar cane. Description Chlorella consists of small, rounded cells which are spherical, subspherical, or ellipsoidal, and may be surrounded by a layer of mucilage. The cells contain a single chloroplast which is parietal (lying against the inner side of the cell membrane), with a single pyrenoid that is surrounded by grains of starch. Reproduction occurs by the formation of autospores; zoospores or gametes are not known to be produced in Chlorella. Autospores are released by a tear in the cell wall. The daughter cell may remain attached to the parent cell wall, thereby forming colonies of cells. Taxonomy Chlorella was first described by Martinus Beijerinck in 1890. Since then, over a hundred taxa have been described within the genus. However, biochemical and genomic data have revealed that many of these species are not closely related to each other, with some even being placed in a separate class, the Chlorophyceae. In other words, the "green ball" form of Chlorella appears to be a product of convergent evolution and not a natural taxon. Identifying Chlorella-like algae based on morphological features alone is generally not possible. Some strains of "Chlorella" used for food are incorrectly identified, or correspond to genera that were classified out of true Chlorella. For example, Heterochlorella luteoviridis is typically known as Chlorella luteoviridis, a name that is no longer considered valid. As a food source When first harvested, Chlorella was suggested as an inexpensive protein supplement to the human diet. According to the American Cancer Society, "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans". Under certain growing conditions, Chlorella yields oils that are high in polyunsaturated fats—Chlorella minutissima has yielded eicosapentaenoic acid at 39.9% of total lipids. History Following global fears of an uncontrollable human population boom during the late 1940s and the early 1950s, Chlorella was seen as a new and promising primary food source and as a possible solution to the then-current world hunger crisis. Many people during this time thought hunger would be an overwhelming problem and saw Chlorella as a way to end this crisis by providing large amounts of high-quality food for a relatively low cost.
Many institutions began to research the algae, including the Carnegie Institution, the Rockefeller Foundation, the NIH, UC Berkeley, the Atomic Energy Commission, and Stanford University. Following World War II, many Europeans were starving, and many Malthusians attributed this not only to the war, but also to the inability of the world to produce enough food to support the increasing population. According to a 1946 FAO report, the world would need to produce 25 to 35% more food in 1960 than in 1939 to keep up with the increasing population, while health improvements would require a 90 to 100% increase. Because meat was costly and energy-intensive to produce, protein shortages were also an issue. Increasing cultivated area alone would go only so far in providing adequate nutrition to the population. The USDA calculated that, to feed the U.S. population by 1975, it would have to add 200 million acres (800,000 km2) of land, but only 45 million were available. One way to combat national food shortages was to increase the land available to farmers, yet the American frontier and its farmland had long since been given over to expansion and urban life. Hopes rested solely on new agricultural techniques and technologies. Because of these circumstances, an alternative solution was needed. To cope with the upcoming postwar population boom in the United States and elsewhere, researchers decided to tap into the unexploited resources of the sea. Initial testing by the Stanford Research Institute showed Chlorella (when growing in warm, sunny, shallow conditions) could convert 20% of solar energy into a plant that, when dried, contains 50% protein. In addition, Chlorella contains fat and vitamins. The plant's photosynthetic efficiency allows it to yield more protein per unit area than any other plant—one scientist predicted 10,000 tons of protein a year could be produced with just 20 workers staffing a 1000-acre (4-km2) Chlorella farm. The pilot research performed at Stanford and elsewhere led to immense press coverage, yet did not lead to large-scale algae production. Chlorella seemed like a viable option because of the technological advances in agriculture at the time and the widespread acclaim it received from experts and scientists who studied it. Algae researchers had even hoped to add a neutralized Chlorella powder to conventional food products, as a way to fortify them with vitamins and minerals. When the preliminary laboratory results were published, the scientific community at first backed the possibilities of Chlorella. Science News Letter praised the optimistic results in an article entitled "Algae to Feed the Starving". John Burlew, the editor of the Carnegie Institution of Washington book Algal Culture-from Laboratory to Pilot Plant, stated, "the algae culture may fill a very real need", which Science News Letter turned into "future populations of the world will be kept from starving by the production of improved or educated algae related to the green scum on ponds". The cover of the magazine also featured Arthur D. Little's Cambridge laboratory, which was a supposed future food factory. A few years later, the magazine published an article entitled "Tomorrow's Dinner", which stated, "There is no doubt in the mind of scientists that the farms of the future will actually be factories." Science Digest also reported, "common pond scum would soon become the world's most important agricultural crop."
However, in the decades since those claims were made, algae have not been cultivated on that large of a scale. Current status Since the growing world food problem of the 1940s was solved by better crop efficiency and other advances in traditional agriculture, Chlorella has not seen the kind of public and scientific interest that it had in the 1940s. Chlorella has only a niche market for companies promoting it as a dietary supplement. Production difficulties The experimental research was carried out in laboratories, rather than in the field, and scientists discovered that Chlorella would be much more difficult to produce than previously thought. To be practical, the algae grown would have to be placed either in artificial light or in shade to produce at its maximum photosynthetic efficiency. In addition, for the Chlorella to be as productive as the world would require, it would have to be grown in carbonated water, which would have added millions to the production cost. A sophisticated process, and additional cost, was required to harvest the crop and for Chlorella to be a viable food source, its cell walls would have to be pulverized. The plant could reach its nutritional potential only in highly modified artificial situations. Another problem was developing sufficiently palatable food products from Chlorella. Although the production of Chlorella looked promising and involved creative technology, it has not to date been cultivated on the scale some had predicted. It has not been sold on the scale of Spirulina, soybean products, or whole grains. Costs have remained high, and Chlorella has for the most part been sold as a health food, for cosmetics, or as animal feed. After a decade of experimentation, studies showed that following exposure to sunlight, Chlorella captured just 2.5% of the solar energy, not much better than conventional crops. Chlorella, too, was found by scientists in the 1960s to be impossible for humans and other animals to digest in its natural state due to the tough cell walls encapsulating the nutrients, which presented further problems for its use in American food production. Use in carbon dioxide reduction and oxygen production In 1965, the Russian CELSS experiment BIOS-3 determined that 8 m2 of exposed Chlorella could remove carbon dioxide and replace oxygen within the sealed environment for a single human. The algae were grown in vats underneath artificial light. Dietary supplement Chlorella is consumed as a dietary supplement. Some manufacturers of Chlorella products have falsely asserted that it has health benefits, including an ability to treat cancer, for which the American Cancer Society stated "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans". The United States Food and Drug Administration has issued warning letters to supplement companies for falsely advertising health benefits of consuming chlorella products, such as one company in October 2020. There is some support from animal studies of chlorella's ability to detoxify insecticides. Chlorella protothecoides accelerated the detoxification of rats poisoned with chlordecone, a persistent insecticide, decreasing the half-life of the toxin from 40 to 19 days. The ingested algae passed through the gastrointestinal tract unharmed, interrupted the enteric recirculation of the persistent insecticide, and subsequently eliminated the bound chlordecone with the feces. 
Health concerns A 2002 study showed that Chlorella cell walls contain lipopolysaccharides, endotoxins found in Gram-negative bacteria that affect the immune system and may cause inflammation. However, more recent studies have found that the lipopolysaccharides in organisms other than Gram-negative bacteria, for example in cyanobacteria, are considerably different from the lipopolysaccharides in Gram-negative bacteria. See also Calvin cycle List of ineffective cancer treatments Quorn: food made from mycoprotein Soyuz 28, a 1978 space mission which included experiments on Chlorella Spirulina (dietary supplement) Chlorellosis, a disease caused by infection with Chlorella. References Trebouxiophyceae genera Trebouxiophyceae Edible algae Dietary supplements Algaculture Alternative cancer treatments
Chlorella
Biology
2,311
851,269
https://en.wikipedia.org/wiki/Unit%20record%20equipment
Starting at the end of the nineteenth century, well before the advent of electronic computers, data processing was performed using electromechanical machines collectively referred to as unit record equipment, electric accounting machines (EAM) or tabulating machines. Unit record machines came to be as ubiquitous in industry and government in the first two-thirds of the twentieth century as computers became in the last third. They allowed large volume, sophisticated data-processing tasks to be accomplished before electronic computers were invented and while they were still in their infancy. This data processing was accomplished by processing punched cards through various unit record machines in a carefully choreographed progression. This progression, or flow, from machine to machine was often planned and documented with detailed flowcharts that used standardized symbols for documents and the various machine functions. All but the earliest machines had high-speed mechanical feeders to process cards at rates from around 100 to 2,000 per minute, sensing punched holes with mechanical, electrical, or, later, optical sensors. The operation of many machines was directed by the use of a removable plugboard, control panel, or connection box. Initially all machines were manual or electromechanical. The first use of an electronic component was in 1937 when a photocell was used in a Social Security bill-feed machine. Electronic components were used on other machines beginning in the late 1940s. The term unit record equipment also refers to peripheral equipment attached to computers that reads or writes unit records, e.g., card readers, card punches, printers, MICR readers. IBM was the largest supplier of unit record equipment and this article largely reflects IBM practice and terminology. History Beginnings In the 1880s Herman Hollerith was the first to record data on a medium that could then be read by a machine. Prior uses of machine readable media had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms and mechanized musical instruments. "After some initial trials with paper tape, he settled on punched cards [...]". To process these punched cards, sometimes referred to as "Hollerith cards", he invented the keypunch, sorter, and tabulator unit record machines. These inventions were the foundation of the data processing industry. The tabulator used electromechanical relays to increment mechanical counters. Hollerith's method was used in the 1890 census. The company he founded in 1896, the Tabulating Machine Company (TMC), was one of four companies that in 1911 were amalgamated in the forming of a fifth company, the Computing-Tabulating-Recording Company, later renamed IBM. Following the 1900 census a permanent Census bureau was formed. The bureau's contract disputes with Hollerith led to the formation of the Census Machine Shop where James Powers and others developed new machines for part of the 1910 census processing. Powers left the Census Bureau in 1911, with rights to patents for the machines he developed, and formed the Powers Accounting Machine Company. In 1927 Powers' company was acquired by Remington Rand. In 1919 Fredrik Rosing Bull, after examining Hollerith's machines, began developing unit record machines for his employer. Bull's patents were sold in 1931, constituting the basis for Groupe Bull. 
These companies, and others, manufactured and marketed a variety of general-purpose unit record machines for creating, sorting, and tabulating punched cards, even after the development of computers in the 1950s. Punched card technology had quickly developed into a powerful tool for business data-processing. Timeline 1884: Herman Hollerith files a patent application titled "Art of Compiling Statistics"; granted on January 8, 1889. 1886: First use of tabulating machine in Baltimore's Department of Health. 1887: Hollerith files a patent application for an integrating tabulator (granted in 1890). 1889: First recorded use of integrating tabulator in the Office of the Surgeon General of the Army. 1890-1895: U.S. Census, Superintendents Robert P. Porter 1889-1893 and Carroll D. Wright 1893–1897, tabulations are done using equipment supplied by Hollerith. 1896: The Tabulating Machine Company founded by Hollerith, trade name for products is Hollerith 1901: Hollerith Automatic Horizontal Sorter 1904: Porter, having returned to England, forms The Tabulator Limited (UK) to market Hollerith's machines. 1905: Hollerith reincorporates the Tabulating Machine Company as The Tabulating Machine Company 1906: Hollerith Type 1 Tabulator, the first tabulator with an automatic card feed and control panel. 1909: The Tabulator Limited renamed as British Tabulating Machine Company (BTM). 1910: Tabulators built by the Census Machine Shop print results. 1910: Willy Heidinger, an acquaintance of Hollerith, licenses Hollerith's The Tabulating Machine Company patents, creating Dehomag in Germany. 1911: Computing-Tabulating-Recording Company (CTR), a holding company, formed by the amalgamation of The Tabulating Machine Company and three other companies. 1911: James Powers forms Powers Tabulating Machine Company, later renamed Powers Accounting Machine Company. Powers had been employed by the Census Bureau to work on tabulating machine development and was given the right to patent his inventions there. The machines he developed sensed card punches mechanically, as opposed to Hollerith's electric sensing. 1912: The first Powers horizontal sorting machine. 1914: Thomas J. Watson hired by CTR. 1914: The Tabulating Machine Company produces 2 million punched cards per day. 1914: The first Powers printing tabulator. 1915 Powers Tabulating Machine Company establishes European operations through the Accounting and Tabulating Machine Company of Great Britain Limited. 1919: Fredrik Rosing Bull, after studying Hollerith's machines, constructs a prototype 'ordering, recording and adding machine' (tabulator) of his own design. About a dozen machines were produced during the following several years for his employer. 1920s: Early in this decade punched cards began use as bank checks. 1920: BTM begins manufacturing its own machines, rather than simply marketing Hollerith equipment. 1920: The Tabulating Machine Company's first printing tabulator, the Hollerith Type 3. 1921: Powers-Samas develops the first commercial alphabetic punched card representation. 1922: Powers develops an alphabetic printer. 1923: Powers develops a tabulator that accumulates and prints both sub and grand totals (rolling totals). 1923: CTR acquires 90% ownership of Dehomag, thus acquiring patents developed by them. 1924: Computing-Tabulating-Recording Company (CTR) renamed International Business Machines (IBM). There would be no IBM-labeled products until 1933. 
1925: The Tabulating Machine Company's first horizontal card sorter, the Hollerith Type 80, processes 400 cards/min. 1927: Remington Typewriter Company and Rand Kardex combine to form Remington Rand. Within a year, Remington Rand acquires the Powers Accounting Machine Company. 1928: The Tabulating Machine Company's first tabulator that could subtract, the Hollerith Type IV tabulator. The Tabulating Machine Company begins its collaboration with Benjamin Wood, Wallace John Eckert and the Statistical Bureau at Columbia University. The Tabulating Machine Company's 80-column card introduced. Comrie uses punched card machines to calculate the motions of the moon. This project, in which 20,000,000 holes are punched into 500,000 cards continues into 1929. It is the first use of punched cards in a purely scientific application. 1929 The Accounting and Tabulating Machine Company of Great Britain Limited renamed Powers-Samas Accounting Machine Limited (Samas, full name Societe Anonyme des Machines a Statistiques, had been the Power's sales agency in France, formed in 1922). The informal reference "Acc and Tab" would persist. 1930: The Remington Rand 90 column card, offering "more storage capacity [and] alphabetic capability" 1931: H.W.Egli - BULL founded to capitalize on the punched card technology patents of Fredrik Rosing Bull. The Tabulator model T30 is introduced. 1931: The Tabulating Machine Company's first punched card machine that could multiply, the 600 Multiplying Punch. Their first alphabetical accounting machine - although not a complete alphabet, the Alphabetic Tabulator Model B was quickly followed by the full alphabet ATC. 1931: The term "Super Computing Machine" is used by the New York World newspaper to describe the Columbia Difference Tabulator, a one-of-a-kind special purpose tabulator-based machine made for the Columbia Statistical Bureau, a machine so massive it was nicknamed "Packard". The Packard attracted users from across the country: "the Carnegie Foundation, Yale, Pittsburgh, Chicago, Ohio State, Harvard, California and Princeton." 1933: Compagnie des Machines Bull is the new name of the reorganized H.W. Egli - Bull. 1933: The Tabulating Machine Company name disappears as subsidiary companies are merged into IBM. The Hollerith trade name is replaced by IBM. IBM introduces removable control panels. 1933: Dehomag's BK tabulator (developed independently of IBM) announced. 1934: IBM renames its Tabulators as Electric Accounting Machines. 1935: BTM Rolling Total Tabulator introduced. 1937: Leslie Comrie establishes the Scientific Computing Service Limited - the first for-profit calculating agency. 1937: The first collator, the IBM 077 Collator The first use of an electronic component in an IBM product was a photocell in a Social Security bill-feed machine. By 1937 IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day. 1938: Powers-Samas multiplying punch introduced. 1941 Introduction of Bull Type A unit record machines based on 80 column card. 1943: "IBM had about 10,000 tabulators on rental [...] 601 multipliers numbered about 2000 [...] keypunch[es] 24,500". 1946: The first IBM punched card machine that could divide, the IBM 602, was introduced. Unreliable, it "was upgraded to the 602-A (a '602 that worked') [...] by 1948". The IBM 603 Electronic Multiplier was introduced, "the first electronic calculator ever placed into production.". 1948: The IBM 604 Electronic Punch. 
"No other calculator of comparable size or cost could match its capability". 1949: The IBM 024 Card Punch, 026 Printing Card Punch, 082 Sorter, 403 Accounting machine, 407 Accounting machine, and Card Programmed Calculator (CPC) introduced. 1952: Bull Gamma 3 introduced. An electronic calculator with delay-line memory, programmed by a connection panel, that was connected to a tabulator or card reader-punch. The Gamma 3 had greater capacity, greater speed, and lower rentals than competitive products. 1952: Remington Rand 409 Calculator (aka. UNIVAC 60, UNIVC 120) introduced. 1952: Underwood Corp acquires the American assets of Powers-Samas. By the 1950s punched cards and unit record machines had become ubiquitous in academia, industry and government. The warning often printed on cards that were to be individually handled, "Do not fold, spindle or mutilate", coined by Charles A. Philips, became a motto for the post-World War II era (even though many people had no idea what spindle meant). With the development of computers punched cards found new uses as their principal input media. Punched cards were used not only for data, but for a new application - computer programs, see: Computer programming in the punched card era. Unit record machines therefore remained in computer installations in a supporting role for keypunching, reproducing card decks, and printing. 1955: IBM produces 72.5 million punched cards per day. 1957: The IBM 608, a transistor version of the 1948 IBM 604. First commercial all-transistor calculator. 1958: The "Series 50", basic accounting machines, was announced. These were modified machines, with reduced speed and/or function, offered for rental at reduced rates. The name "Series 50" relates to a similar marketing effort, the "Model 50", seen in the IBM 1940 product booklet. An alternate report identifies the modified machines as "Type 5050" introduced in 1959 and notes that Remington-Rand introduced similar products. 1959: BTM is merged with rival Powers-Samas to form International Computers and Tabulators(ICT). 1959: The IBM 1401, internally known in IBM for a time as "SPACE" for "Stored Program Accounting and Calculating Equipment" and developed in part as a response to the Bull Gamma 3, outperforms three IBM 407s and a 604, while having a much lower rental. That functionality combined with the availability of tape drives, accelerated the decline in unit record equipment usage. 1960: The IBM 609 Calculator, an improved 608 with core memory. This will be IBMs last punched card calculator. Many organizations were loath to alter systems that were working, so production unit record installations remained in operation long after computers offered faster and more cost effective solutions. Cost or availability of equipment was another factor; for example in 1965 an IBM 1620 computer did not have a printer as standard equipment, so it was normal in such installations to punch output onto cards and then print these cards on an IBM 407 accounting machine. Specialized uses of punched cards such as toll collection, microform aperture cards, and punched card voting kept unit record equipment in use into the twenty-first century. 1968: International Computers and Tabulators (ICT) is merged with English Electric Computers, forming International Computers Limited (ICL). 1969: The IBM System/3, renting for less than $1,000 a month, the ancestor of IBM's midrange computer product line, aka. 
minicomputers, was aimed at new customers and organizations that still used IBM 1400 series computers or unit record equipment. It featured a new, smaller, punched card with a 96 column format. Instead of the rectangular punches in the classic IBM card, the new cards had tiny (1 mm), circular holes much like paper tape. By July 1974 more than 25,000 System/3s had been installed. 1971: The IBM 129 Card Data Recorder (keypunch and auxiliary on-line card reader/punch) is the last, or among the last, 80-column card unit record product announcements (other than card readers and card punches attached to computers). 1975: Cardamation founded, a U.S. company that supplied punched card equipment and supplies until 2011. Endings 1976: The IBM 407 Accounting Machine was withdrawn from marketing. 1978: IBM's Rochester plant made its last shipment of the IBM 082, 084, 085, 087, 514, and 548 machines. The System/3 was succeeded by the System/38. 1980: The last reconditioning of an IBM 519 Document Originating Punch. 1984: The IBM 029 Card Punch, announced in 1964, was withdrawn from marketing. IBM closed its last punch card manufacturing plant. 2010: A group from the Computer History Museum reported that an IBM 402 Accounting Machine and related punched card equipment was still in operation at a filter manufacturing company in Conroe, Texas. The punched card system was still in use as of 2013. 2011: The owner of Cardamation, Robert G. Swartz, dies, and the company, perhaps the last supplier of punch card equipment, ceases operation. 2015: Punched cards for time clocks and some other applications were still available; one supplier was the California Tab Card Company. As of 2018, the web site was no longer in service. Punched cards The basic unit of data storage was the punched card. The IBM 80-column card was introduced in 1928. The Remington Rand card, with 45 columns in each of two tiers (thus 90 columns), followed in 1930. Powers-Samas punched cards included one with 130 columns. Columns on different punch cards vary from 5 to 12 punch positions. The method used to store data on punched cards is vendor specific. In general each column represents a single digit, letter or special character. Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a field. An employee number might occupy 5 columns; hourly pay rate, 3 columns; hours worked in a given week, 2 columns; department number, 3 columns; project charge code, 6 columns; and so on. Keypunching Original data were usually punched into cards by workers, often women, known as keypunch operators, under the control of a program card (called a drum card because it was installed on a rotating drum in the machine), which could automatically skip or duplicate predefined card columns, enforce numeric-only entry, and, later, right-justify a number entered. Their work was often checked by a second operator using a verifier machine, also under the control of a drum card. The verifier operator re-keyed the source data and the machine compared what was keyed to what had been punched on the original card. Sorting An activity in many unit record shops was sorting card decks into the order necessary for the next processing step. Sorters, like the IBM 80 series Card Sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings. The 13th pocket was for blanks and rejects.
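The single-column pass just described maps naturally onto a short program, and repeated passes over successive columns amount to a least-significant-digit radix sort, as the following paragraphs explain. The Haskell sketch below is illustrative only: the Card type, the reduction of a card to a short numeric field, and all function names are assumptions, and the zone-punch pockets of a real sorter are collapsed into a single blank/reject pocket.

import Data.Char (isDigit, digitToInt)

type Card = String  -- an 80-column card reduced here to a short numeric field

-- One pass: distribute cards into pockets 0-9 plus a blank/reject pocket,
-- preserving the order in which the cards were fed (the pass is stable).
sortPass :: Int -> [Card] -> [[Card]]
sortPass col cards = [ [c | c <- cards, pocket c == p] | p <- [0 .. 10] ]
  where
    pocket c
      | col < length c, isDigit (c !! col) = digitToInt (c !! col)
      | otherwise                          = 10  -- blank or reject pocket

-- Restacking the pockets in order after each pass, working from the rightmost
-- column to the leftmost, leaves the deck in ascending order: an LSD radix sort.
radixSort :: Int -> [Card] -> [Card]
radixSort width cards = foldl step cards (reverse [0 .. width - 1])
  where step deck col = concat (sortPass col deck)

main :: IO ()
main = print (radixSort 5 ["90210", "10001", "60614", "02139", "73301"])
-- prints ["02139","10001","60614","73301","90210"]

Restacking the pockets in order after each pass preserves the relative order of cards within a pocket, which is exactly the stability property the radix sort depends on.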
Cards were sorted on one card column at a time; sorting on, for example, a five digit zip code required that the card deck be processed five times. Sorting an input card deck into ascending sequence on a multiple column field, such as an employee number, was done by a radix sort, bucket sort, or a combination of the two methods. Sorters were also used to separate decks of interspersed master and detail cards, either by a significant hole punch or by the cards' corner-cut. More advanced functionality was available in the IBM 101 Electronic Statistical Machine, which could sort, count, accumulate totals, print summaries, and send calculated results (counts and totals) to an attached IBM 524 Duplicating Summary Punch. Tabulating Reports and summary data were generated by accounting or tabulating machines. The original tabulators only counted the presence of a hole at a location on a card. Simple logic, like ANDs and ORs, could be done using relays. Later tabulators, such as those in IBM's 300 series, directed by a control panel, could do both addition and subtraction of selected fields to one or more counters and print each card on its own line. At some signal, say a following card with a different customer number, totals could be printed for the just completed customer number. Tabulators became complex: the IBM 405 contained 55,000 parts (2,400 different) and 75 miles of wire; a Remington Rand machine circa 1941 contained 40,000 parts. Calculating In 1931, IBM introduced the model 600 multiplying punch. The ability to divide became commercially available after World War II. The earliest of these calculating punches were electromechanical. Later models employed vacuum tube logic. Electronic modules developed for these units were used in early computers, such as the IBM 650. The Bull Gamma 3 calculator could be attached to tabulating machines, unlike the stand-alone IBM calculators. Card punching Card punching operations included: Gang punching - producing a large number of identically punched cards, for example for inventory tickets. Reproducing - reproducing a card deck in its entirety or just selected fields. A payroll master card deck might be reproduced at the end of a pay period with the hours worked and net pay fields blank and ready for the next pay period's data. Programs in the form of card decks were reproduced for backup. Summary punching - punching new cards with details and totals from an attached tabulating machine. Mark sense reading - detecting electrographic lead pencil marks on ovals printed on the card and punching the corresponding data values into the card. Singularly or in combination, these operations were provided in a variety of machines. The IBM 519 Document-Originating Machine could perform all of the above operations. The IBM 549 Ticket Converter read data from Kimball tags, copying that data to punched cards. With the development of computers, punched cards were also produced by computer output devices. Collating IBM collators had two input hoppers and four output pockets. These machines could merge or match card decks based on the control panel's wiring. The Remington Rand Interfiling Reproducing Punch Type 310-1 was designed to merge two separate files into a single file. It could also punch additional information into those cards and select desired cards. Collators performed operations comparable to a database join. Interpreting An interpreter prints characters on a punched card equivalent to the values of all or selected columns.
The columns to be printed can be selected and even reordered, based on the machine's control panel wiring. Later models could print on one of several rows on the card. Unlike keypunches, which print values directly above each column, interpreters generally use a font that is a little wider than a column and can only print up to 60 characters per row. Typical models include the IBM 550 Numeric Interpreter, the IBM 557 Alphabetic Interpreter, and the Remington Rand Type 312 Alphabetic Interpreter. Filing Batches of punched cards were often stored in tub files, where individual cards could be pulled to meet the requirements of a particular application. Transmission of punched card data Electrical transmission of punched card data was invented in the early 1930s. The device was called an Electrical Remote Control of Office Machines and was assigned to IBM. Inventors were Joseph C. Bolt of Boston & Curt I. Johnson of Worcester, Mass., assignors to the Tabulating Machine Co., Endicott, NY. The Distance Control Device received a US patent on August 9, 1932. Letters from IBM mention a filing in Canada on 9/15/1931. Processing punched tape The IBM 046 Tape-to-Card Punch and the IBM 047 Tape-to-Card Printing Punch (which was almost identical, but with the addition of a printing mechanism) read data from punched paper tape and punched that data into cards. The IBM 063 Card-Controlled Tape Punch read punched cards, punching that data into paper tape. Control panel wiring and Connection boxes The operation of Hollerith/BTM/IBM/Bull tabulators and many other types of unit record equipment was directed by a control panel. Operation of Powers-Samas/Remington Rand unit record equipment was directed by a connection box. Control panels had a rectangular array of holes called hubs which were organized into groups. Wires with metal ferrules at each end were placed in the hubs to make connections. The output from some card column positions might be connected to a tabulating machine's counter, for example. A shop would typically have separate control panels for each task a machine was used for. Paper handling equipment For many applications, the volume of fan-fold paper produced by tabulators required other machines, not considered to be unit record machines, to ease paper handling. A decollator separated multi-part fan-fold paper into individual stacks of one-part fan-fold and removed the carbon paper. A burster separated one-part fan-fold paper into individual sheets. For some uses it was desirable to remove the tractor-feed holes on either side of the fan-fold paper. In these cases the form's edge strips were perforated and the burster removed them as well. See also British Tabulating Machine Company Fredrik Rosing Bull Gustav Tauschek IBM Electromatic Table Printing Machine IBM 632 Accounting Machine IBM 6400 Series Leslie Comrie List of IBM products Powers Accounting Machine Company Powers-Samas Remington Rand List of UNIVAC products Wallace John Eckert Notes and references Further reading Note: Most IBM form numbers end with an edition number, a hyphen followed by one or two digits. For Hollerith and Hollerith's early machines see: Herman Hollerith#Further reading Histories Reprinted by Arno Press, 1976, from the best available copy. Some text is illegible. includes Hollerith (1889) reprint Punched card applications – With 42 contributors and articles ranging from Analysis of College Test Results to Uses of the Automatic Multiplying Punch, this book provides an extensive view of unit record equipment use over a wide range of applications. For details of this book see The Baehne Book. The appendix has IBM and Powers provided product detail sheets, with photo and text, for many machines. There is a 1954 edition, Ann F. Beach, et al., similar title and a 1956 edition, Joyce Alsop. Describes several punched card applications. Note: ISBN is for a reprint ed. The machines Unabridged edition of "Data Processing Tech 3 &2", aka. "Rate Training manual NAVPERS 10264-B", 3rd revised ed. 1970 Chapter 3 Punched Card Equipment describes American machines with some details of their logical organization and examples of control panel wiring. The four main systems in current use - Powers-Samas, Hollerith, Findex, and Paramount - are examined and the fundamental principles of each are fully explained. An accessible book of recollections (sometimes with errors), with photographs and descriptions of many unit record machines. The ISBN is for an earlier (2006), printed, edition. This elementary introduction to punched card systems is unusual because unlike most others, it not only deals with the IBM systems but also illustrates the card formats and equipment offered by Remington Rand and Underwood Samas. Erwin Tomash Library IBM (1936) Machine Methods of Accounting, 360 p. Includes a 12-page 1936 IBM-written history of IBM and descriptions of many machines. A simplified description of common IBM machines and their uses. With descriptions, photos and rental prices. The IBM Operators Guide, 22-8485, was an earlier edition of this book. Has extensive descriptions of unit record machine construction. Ken Shirriff's blog Inside card sorters: 1920s data processing with punched cards and relays. External links Columbia University Computing History IBM Tabulators and Accounting Machines IBM Calculators IBM Card Interpreters IBM Reproducing and Summary Punches IBM Collators Columbia University Computing History: L.J. Comrie From that site Comrie was the first to turn punched-card equipment to scientific use History of Bull Extracted and translated from Science et Vie Micro magazine, No. 74, July–August, 1990: The very international history of a French giant Musée virtuel de Bull et de l'informatique Française: Information Technology Industry TimeLine From that site The present TimeLine page differs from similar pages available on the Internet because it is focused more on the industry than on "inventions". It was originally designed to show the place of the European and more specifically the French computer industry facing its world-wide competition. Most of published time-line charts either consider that everything had an American origin or they show their country patriotism (French, Italian, Russian or British) or their company patriotism. Musée virtuel de Bull et de l'informatique Française (Virtual Museum of French computers) Systems Catalog Early office museum IBM Archives IBM Punch Card Systems in the U.S. Army IBM early Card Reader and 1949 electronic Calculator video of unit record equipment in museum Working Tabulating machines and punched card equipment in technikum29 Computermuseum (nr. Frankfurt/Main, Germany) Punched card
includes Hollerith (1889) reprint Punched card applications – With 42 contributors and articles ranging from Analysis of College Test Results to Uses of the Automatic Multiplying Punch this is book provides an extensive view of unit record equipment use over a wide range of applications. For details of this book see The Baehne Book.. The appendix has IBM and Powers provided product detail sheets, with photo and text, for many machines. (source: ) There is a 1954 edition, Ann F. Beach, et al., similar title and a 1956 edition, Joyce Alsop. Describes several punched card applications. Note: ISBN is for a reprint ed. The machines Unabridged edition of "Data Processing Tech 3 &2", aka. "Rate Training manual NAVPERS 10264-B", 3rd revised ed. 1970 Chapter 3 Punched Card Equipment describes American machines with some details of their logical organization and examples of control panel wiring. The four main systems in current use - Powers-Samas, Hollerith, Findex, and Paramount - are examined and the fundamentals principles of each are fully explained. An accessible book of recollections (sometimes with errors), with photographs and descriptions of many unit record machines. The ISBN is for an earlier (2006), printed, edition. This elementary introduction to punched card systems is unusual because unlike most others, it not only deals with the IBM systems but also illustrates the card formats and equipment offered by Remington Rand and Underwood Samas. Erwin Tomash Library IBM (1936) Machine Methods of Accounting, 360 p. Includes a 12-page 1936 IBM-written history of IBM and descriptions of many machines. A simplified description of common IBM machines and their uses. With descriptions, photos and rental prices. The IBM Operators Guide, 22-8485 was an earlier edition of this book Has extensive descriptions of unit record machine construction. Ken Shirriff's blog Inside card sorters: 1920s data processing with punched cards and relays. External links Columbia University Computing History IBM Tabulators and Accounting Machines IBM Calculators IBM Card Interpreters IBM Reproducing and Summary Punches IBM Collators Columbia University Computing History: L.J. Comrie From that site Comrie was the first to turn punched-card equipment to scientific use History of Bull Extracted and translated from Science et Vie Micro magazine, No. 74, July–August, 1990: The very international history of a French giant Musée virtuel de Bull et de l'informatique Française: Information Technology Industry TimeLine From that site The present TimeLine page differs from similar pages available on the Internet because it is focused more on the industry than on "inventions". It was originally designed to show the place of the European and more specifically the French computer industry facing its world-wide competition. Most of published time-line charts either consider that everything had an American origin or they show their country patriotism (French, Italian, Russian or British) or their company patriotism. Musée virtuel de Bull et de l'informatique Française (Virtual Museum of French computers) Systems Catalog Early office museum IBM Archives IBM Punch Card Systems in the U.S. Army IBM early Card Reader and 1949 electronic Calculator video of unit record equipment in museum Working Tabulating machines and punched card equipment in technikum29 Computermuseum (nr. Frankfurt/Main, Germany) Punched card ja:タビュレーティングマシン
Unit record equipment
Technology
5,740
3,412,018
https://en.wikipedia.org/wiki/Coupled%20human%E2%80%93environment%20system
A coupled human–environment system (known also as a coupled human and natural system, or CHANS) characterizes the dynamical two-way interactions between human systems (e.g., economic, social) and natural (e.g., hydrologic, atmospheric, biological, geological) systems. This coupling expresses the idea that the evolution of humans and environmental systems may no longer be treated as individual isolated systems. As CHANS research is relatively new, it has not yet matured into a coherent field. Some research programs draw from, and build on, the perspectives developed in trans-disciplinary fields such as human ecology, ecological anthropology, environmental geography, economics, as well as others. In contrast, other research programs, such as Critical Zone science, aim to develop a more quantitative theoretic framework focusing on the development of analytical and numerical models, by building on theoretical advances in complex adaptive systems, complexity economics, dynamical systems theory, and the earth sciences. To some extent, all CHANS programs recognize the need to move beyond traditional research methods developed in the social and natural sciences, as these are not sufficient to quantify the highly nonlinear dynamics often present in CHANS. Some research into CHANS emulates the more traditional research programs that tended to separate the social from the ecological sciences. History The phrase "coupled human–environment systems" appears in the earlier literature (dating back to 1999) noting that social and natural systems are inseparable. "In 2007 a formal standing program in Dynamics of Coupled Natural and Human Systems was created by the U.S. National Science Foundation." Research into CHANS is increasing in frequency in scientific literature concerning the sustainability and conservation of ecosystems and society. Funding by the National Science Foundation to study "Dynamics of Coupled Natural and Human Systems" occurred from 2001-2005 as a part of a "special competition" within the "Biocomplexity in the environment" program, and in 2007 gained formal standing. Bibliography W.C. Clark, B. L. Turner, R. W. Kates, J. Richards, J. T. Mathews, and W. Meyer, eds. The Earth as Transformed by Human Action. (Cambridge, UK: Cambridge University Press, 1990). Eric Sheppard and Robert B. McMaster, eds. Scale and Geographic Inquiry: Nature, Society, and Method (see especially "Crossing the Divide: Linking Global and Local Scales in Human–Environment Systems" by William E. Easterling and Colin Polsky) (Blackwell Publishing, January 1, 2004) See also Human ecology Conservation medicine Deep ecology Environmental factor – examples of coupled systems References External links CHANS-Net is an international network for researchers studying or people interested in the topic of Coupled Human and Natural Systems. The organization facilitates communication and collaboration. Biocomplexity in the Environment, a 2006 priority area of the National Science Foundation. Human geography Human ecology Environmental social science concepts
Coupled human–environment system
Environmental_science
589
4,937,777
https://en.wikipedia.org/wiki/Polythiazyl
Polythiazyl (polymeric sulfur nitride), (SN)x, is an electrically conductive, gold- or bronze-colored polymer with metallic luster. It was the first conductive inorganic polymer discovered and was also found to be a superconductor at very low temperatures (below 0.26 K). It is a fibrous solid, described as "lustrous golden on the faces and dark blue-black", depending on the orientation of the sample. It is air stable and insoluble in all solvents. History The compound was first reported as early as 1910 by F.P. Burt, who obtained it by heating tetrasulfur tetranitride in vacuum over silver wool. The compound was the first compound with only non-metallic elements in which superconductivity could be demonstrated. However, the relatively low transition temperature of about 0.3 K makes a practical application unlikely. Properties Polythiazyl is a metallic-golden and shiny, crystalline but fibrous material. The polymer is mostly inert to oxygen and water, but decomposes in air to a grey powder. At temperatures above 240 °C explosive decomposition can occur. The compound also explodes on impact. Explosion generally proceeds via decomposition to the elements. Polythiazyl shows an anisotropic electrical conductivity: along the fibres, that is, along the S–N chains, the material is electrically conductive, while perpendicular to them it acts as an insulator. The one-dimensional conductivity is based on the bonding conditions in the S–N chain, where each sulfur atom provides two π electrons and each nitrogen atom provides one π electron to form two-center 3π electron bonding units. Two polymorphic crystal forms have been observed in the compound. The monoclinic form I obtained from the synthesis can be converted into an orthorhombic form II by mechanical treatment such as grinding. Structure and bonding The material is a polymer containing trivalent nitrogen and divalent and tetravalent sulfur. The S and N atoms on adjacent chains align. Several resonance structures can be written. The structure of the crystalline compound was resolved by X-ray diffraction. This showed alternating S–N bond lengths of 159 pm and 163 pm, S–N–S bond angles of 120° and N–S–N bond angles of 106°. Synthesis Polythiazyl is synthesized by the polymerization of the cyclic formal dimer disulfur dinitride (S2N2), which is in turn synthesized from the formal tetramer tetrasulfur tetranitride (S4N4) in the presence of hot silver wool. The reaction begins when silver abstracts sulfur from S4N4 to produce silver sulfide (Ag2S), the actual catalyst; the resulting gaseous S2N2 is then isolated through sublimation onto a cold surface:

S4N4 + 8 Ag → 4 Ag2S + 2 N2
S4N4 (low-pressure gas at 250-300 °C; Ag2S catalyst) → 2 S2N2 (gas) → 2 S2N2 (stable solid at 77 K)

When warmed to room temperature, the additional heat induces spontaneous polymerization:

S2N2 (0 °C) → (SN)x

Uses Due to its electrical conductivity, polythiazyl is used in LEDs, transistors, battery cathodes, and solar cells. Literature King, R.S.P.: Novel chemistry and applications of polythiazyl, Doctoral Thesis Loughborough University 2009, pdf-Download References Inorganic polymers Nitrides Superconductors Sulfur–nitrogen compounds Conductive polymers
Polythiazyl
Chemistry,Materials_science
728
5,190,349
https://en.wikipedia.org/wiki/Word%20error%20rate
Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system. The WER metric typically ranges from 0 to 1, where 0 indicates that the compared pieces of text are exactly identical, and 1 (or larger) indicates that they are completely different with no similarity. For example, a WER of 0.8 means that there is an 80% error rate for the compared sentences. The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort. The problem of differing sequence lengths is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. The issue has also been examined through an empirical power law that relates perplexity to word error rate. Word error rate can then be computed as:

WER = (S + D + I) / N = (S + D + I) / (S + D + C)

where S is the number of substitutions, D is the number of deletions, I is the number of insertions, C is the number of correct words, and N is the number of words in the reference (N = S + D + C). The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and hypothesis "This _ wikipedia", we call it a deletion. Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, namely if the number of insertions I is larger than the number of correct words C. When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead:

WAcc = 1 − WER = (N − S − D − I) / N

Since the WER can be larger than 1.0, the word accuracy can be smaller than 0.0. Experiments It is commonly believed that a lower word error rate shows superior accuracy in recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown that, if people were trained under a metric "that matches the optimization objective for understanding" (Wang, Acero and Chelba, 2003), they would show a higher accuracy in understanding of language than other people who demonstrated a lower word error rate, showing that true understanding of spoken language relies on more than just high word recognition accuracy. Other metrics One problem with using a generic formula such as the one above, however, is that no account is taken of the effect that different types of error may have on the likelihood of successful outcome, e.g. some errors may be more disruptive than others and some may be corrected more easily than others. These factors are likely to be specific to the syntax being tested. A further problem is that, even with the best alignment, the formula cannot distinguish a substitution error from a combined deletion plus insertion error.
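Since each substitution, deletion and insertion contributes one unit to the word-level Levenshtein distance, the numerator S + D + I of the formula above is exactly that distance, so a usable WER score can be computed without recovering the alignment itself. The following Haskell sketch illustrates this under stated assumptions: whitespace tokenization, uniform edit costs, a non-empty reference, and invented function names. Scorers that must also report the S/D/I breakdown additionally backtrace the alignment.

import Data.List (foldl')

-- Word-level Levenshtein distance, keeping only one row of the usual matrix.
editDistance :: Eq a => [a] -> [a] -> Int
editDistance ref hyp = last (foldl' nextRow firstRow ref)
  where
    firstRow = [0 .. length hyp]  -- distances from an empty reference prefix
    nextRow prev r = scanl step (head prev + 1) (zip3 prev (tail prev) hyp)
      where
        step left (diag, up, h) =
          minimum [ up + 1                            -- deletion (reference word dropped)
                  , left + 1                          -- insertion (hypothesis word added)
                  , diag + if r == h then 0 else 1 ]  -- substitution, or a match

-- WER = (S + D + I) / N; the numerator is just the word-level edit distance.
wer :: String -> String -> Double
wer reference hypothesis = fromIntegral d / fromIntegral (length r)
  where
    r = words reference  -- assumes a non-empty reference (N > 0)
    d = editDistance r (words hypothesis)

main :: IO ()
main = print (wer "this is wikipedia" "this wikipedia")
-- one deletion over a three-word reference: 0.3333...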
Hunt (1990) has proposed the use of a weighted measure of performance accuracy where errors of substitution are weighted at unity but errors of deletion and insertion are both weighted only at 0.5, thus:

WER = (S + 0.5 D + 0.5 I) / N

There is some debate, however, as to whether Hunt's formula may properly be used to assess the performance of a single system, as it was developed as a means of comparing more fairly competing candidate systems. A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured. Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been "mis-pronounced", i.e. whether the fault lies with the user or with the recogniser. This may be particularly relevant in a system which is designed to cope with non-native speakers of a given language or with strong regional accents. The pace at which words should be spoken during the measurement process is also a source of variability between subjects, as is the need for subjects to rest or take a breath. All such factors may need to be controlled in some way. For text dictation it is generally agreed that performance accuracy at a rate below 95% is not acceptable, but this again may be syntax and/or domain specific, e.g. whether there is time pressure on users to complete the task, whether there are alternative methods of completion, and so on. The term "Single Word Error Rate" is sometimes used to refer to the percentage of incorrect recognitions for each different word in the system vocabulary. Edit distance The word error rate may also be referred to as the length normalized edit distance. The normalized edit distance between X and Y, d(X, Y), is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (the length of P). See also BLEU F-Measure METEOR NIST (metric) ROUGE (metric) References Notes Other sources McCowan et al. 2005: On the Use of Information Retrieval Measures for Speech Recognition Evaluation Hunt, M.J., 1990: Figures of Merit for Assessing Connected Word Recognisers (Speech Communication, 9, 1990, pp 239-336) Zechner, K., Waibel, A. Minimizing Word Error Rate in Textual Summaries of Spoken Language Speech recognition Machine translation Evaluation of machine translation Rates
Word error rate
Technology
1,230
406,418
https://en.wikipedia.org/wiki/Levenshtein%20distance
In information theory, linguistics, and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after Soviet mathematician Vladimir Levenshtein, who defined the metric in 1965. Levenshtein distance may also be referred to as edit distance, although that term may also denote a larger family of distance metrics known collectively as edit distance. It is closely related to pairwise string alignments. Definition The Levenshtein distance between two strings a and b (of length |a| and |b| respectively) is given by lev(a, b), where

lev(a, b) = |a|   if |b| = 0,
lev(a, b) = |b|   if |a| = 0,
lev(a, b) = lev(tail(a), tail(b))   if head(a) = head(b),
lev(a, b) = 1 + min(lev(tail(a), b), lev(a, tail(b)), lev(tail(a), tail(b)))   otherwise,

where the tail of some string x is a string of all but the first character of x (i.e. tail(x0 x1 ... xn) = x1 x2 ... xn), and head(x) is the first character of x (i.e. head(x0 x1 ... xn) = x0). Either the notation x[n] or xn is used to refer to the nth character of the string x, counting from 0, thus head(x) = x0 = x[0]. The first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to replacement. This definition corresponds directly to the naive recursive implementation. Example For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following 3 edits change one into the other, and there is no way to do it with fewer than 3 edits: kitten → sitten (substitution of "s" for "k"), sitten → sittin (substitution of "i" for "e"), sittin → sitting (insertion of "g" at the end). A simple example of a deletion can be seen with "uninformed" and "uniformed" which have a distance of 1: uninformed → uniformed (deletion of "n"). Upper and lower bounds The Levenshtein distance has several simple upper and lower bounds. These include: It is at least the absolute value of the difference of the sizes of the two strings. It is at most the length of the longer string. It is zero if and only if the strings are equal. If the strings have the same size, the Hamming distance is an upper bound on the Levenshtein distance. The Hamming distance is the number of positions at which the corresponding symbols in the two strings are different. The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality). An example where the Levenshtein distance between two strings of the same length is strictly less than the Hamming distance is given by the pair "flaw" and "lawn". Here the Levenshtein distance equals 2 (delete "f" from the front; insert "n" at the end). The Hamming distance is 4. Applications In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural-language translation based on translation memory. The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.
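As a concrete illustration of the spell-checking use case above, the sketch below ranks a small dictionary by distance to a misspelled input and keeps candidates within a chosen threshold. The dictionary, threshold and function names are invented for illustration; lev simply restates the recursive definition from the Definition section (a fuller Haskell implementation appears later in this article), which is acceptable here because the compared strings are short.

import Data.List (sortOn)

-- Naive recursive Levenshtein distance, fine for short dictionary words.
lev :: String -> String -> Int
lev [] t = length t
lev s [] = length s
lev (a : s') (b : t')
  | a == b    = lev s' t'
  | otherwise = 1 + minimum [lev (a : s') t', lev s' (b : t'), lev s' t']

-- Suggest dictionary words within maxDist edits of the input, closest first.
suggest :: Int -> [String] -> String -> [(String, Int)]
suggest maxDist dictionary input =
  sortOn snd [ (w, d) | w <- dictionary, let d = lev input w, d <= maxDist ]

main :: IO ()
main = print (suggest 2 ["kitten", "sitting", "mitten", "flaw", "lawn"] "mittens")
-- prints [("mitten",1),("kitten",2)]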
In linguistics, the Levenshtein distance is used as a metric to quantify the linguistic distance, or how different two languages are from one another. It is related to mutual intelligibility: the higher the linguistic distance, the lower the mutual intelligibility, and the lower the linguistic distance, the higher the mutual intelligibility. Relationship with other edit distance metrics There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance, the Damerau–Levenshtein distance allows the transposition of two adjacent characters alongside insertion, deletion, substitution; the longest common subsequence (LCS) distance allows only insertion and deletion, not substitution; the Hamming distance allows only substitution, hence, it only applies to strings of the same length; the Jaro distance allows only transposition. Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied. Computation Recursive This is a straightforward, but inefficient, recursive Haskell implementation of a lDistance function that takes two strings, s and t, and returns the Levenshtein distance between them:

lDistance :: Eq a => [a] -> [a] -> Int
lDistance [] t = length t -- If s is empty, the distance is the number of characters in t
lDistance s [] = length s -- If t is empty, the distance is the number of characters in s
lDistance (a : s') (b : t')
  | a == b = lDistance s' t' -- If the first characters are the same, they can be ignored
  | otherwise =
      1 + minimum -- Otherwise try all three possible actions and select the best one
        [ lDistance (a : s') t'  -- Character is inserted (b inserted)
        , lDistance s' (b : t')  -- Character is deleted (a deleted)
        , lDistance s' t'        -- Character is replaced (a replaced with b)
        ]

This implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times. A more efficient method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible suffixes might be stored in an array d, where d[i][j] is the distance between the last i characters of string s and the last j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is in the table in the last row and column, representing the distance between all of the characters in s and all the characters in t. Iterative with full matrix This section uses 1-based strings rather than 0-based strings. If m is a matrix, m[i, j] is the entry in the ith row and the jth column of the matrix, with the first row having index 0 and the first column having index 0. Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed. This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.
This is a straightforward pseudocode implementation for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:

function LevenshteinDistance(char s[1..m], char t[1..n]):
  // for all i and j, d[i,j] will hold the Levenshtein distance between
  // the first i characters of s and the first j characters of t
  declare int d[0..m, 0..n]
  set each element in d to zero
  // source prefixes can be transformed into empty string by
  // dropping all characters
  for i from 1 to m:
    d[i, 0] := i
  // target prefixes can be reached from empty source prefix
  // by inserting every character
  for j from 1 to n:
    d[0, j] := j
  for j from 1 to n:
    for i from 1 to m:
      if s[i] = t[j]:
        substitutionCost := 0
      else:
        substitutionCost := 1
      d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                         d[i, j-1] + 1,                   // insertion
                         d[i-1, j-1] + substitutionCost)  // substitution
  return d[m, n]

An example of the resulting matrix for "kitten" and "sitting":

      k  i  t  t  e  n
   0  1  2  3  4  5  6
s  1  1  2  3  4  5  6
i  2  2  1  2  3  4  5
t  3  3  2  1  2  3  4
t  4  4  3  2  1  2  3
i  5  5  4  3  2  2  3
n  6  6  5  4  3  3  2
g  7  7  6  5  4  4  3

The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i, j] operations. At the end, the bottom-right element of the array contains the answer. Iterative with two matrix rows It turns out that only two rows of the table (the previous row and the current row being calculated) are needed for the construction, if one does not want to reconstruct the edited input strings. The Levenshtein distance may be calculated iteratively using the following algorithm:

function LevenshteinDistance(char s[0..m-1], char t[0..n-1]):
  // create two work vectors of integer distances
  declare int v0[n + 1]
  declare int v1[n + 1]
  // initialize v0 (the previous row of distances)
  // this row is A[0][i]: edit distance from an empty s to t;
  // that distance is the number of characters to append to s to make t.
  for i from 0 to n:
    v0[i] = i
  for i from 0 to m - 1:
    // calculate v1 (current row distances) from the previous row v0
    // first element of v1 is A[i + 1][0]
    // edit distance is delete (i + 1) chars from s to match empty t
    v1[0] = i + 1
    // use formula to fill in the rest of the row
    for j from 0 to n - 1:
      // calculating costs for A[i + 1][j + 1]
      deletionCost := v0[j + 1] + 1
      insertionCost := v1[j] + 1
      if s[i] = t[j]:
        substitutionCost := v0[j]
      else:
        substitutionCost := v0[j] + 1
      v1[j + 1] := minimum(deletionCost, insertionCost, substitutionCost)
    // copy v1 (current row) to v0 (previous row) for next iteration
    // since data in v1 is always invalidated, a swap without copy could be more efficient
    swap v0 with v1
  // after the last swap, the results of v1 are now in v0
  return v0[n]

Hirschberg's algorithm combines this method with divide and conquer. It can compute the optimal edit sequence, and not just the edit distance, in the same asymptotic time and space bounds. Automata Levenshtein automata efficiently determine whether a string has an edit distance lower than a given constant from a given string. Approximation The Levenshtein distance between two strings of length n can be approximated to within a factor (log n)^O(1/ε), where ε > 0 is a free parameter to be tuned, in time O(n^(1 + ε)).
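Returning to the iterative algorithms, the two-row version translates directly into the same Haskell style as the recursive implementation earlier in the article. In the sketch below the function name is invented; the pseudocode's two working rows appear as the accumulator of a fold (v0) and the row built by scanl (v1).

twoRowDistance :: Eq a => [a] -> [a] -> Int
twoRowDistance s t = last (foldl nextRow v0init s)
  where
    v0init = [0 .. length t]  -- edit distance from an empty s to each prefix of t
    nextRow v0 c = v1
      where
        -- first element of v1: delete every character of s consumed so far, plus c
        v1 = scanl step (head v0 + 1) (zip3 v0 (tail v0) t)
        step left (diag, up, tc) =
          minimum [ up + 1                             -- deletion
                  , left + 1                           -- insertion
                  , diag + if c == tc then 0 else 1 ]  -- substitution, or a match

main :: IO ()
main = print (twoRowDistance "kitten" "sitting")  -- prints 3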
Computational complexity It has been shown that the Levenshtein distance of two strings of length n cannot be computed in time O(n^(2 − ε)) for any ε greater than zero unless the strong exponential time hypothesis is false. See also agrep Damerau–Levenshtein distance diff Dynamic time warping Euclidean distance Homology of sequences in genetics Hamming distance Hunt–Szymanski algorithm Jaccard index Jaro–Winkler distance Locality-sensitive hashing Longest common subsequence problem Lucene (an open source search engine that implements edit distance) Manhattan distance Metric space MinHash Optimal matching algorithm Numerical taxonomy Sørensen similarity index References External links Rosetta Code implementations of Levenshtein distance String metrics Dynamic programming Articles with example pseudocode Quantitative linguistics
Levenshtein distance
Technology
2,843
12,734,316
https://en.wikipedia.org/wiki/Graf%20Zeppelin-class%20aircraft%20carrier
The Graf Zeppelin-class aircraft carriers were four German Kriegsmarine aircraft carriers planned in the mid-1930s by Grand Admiral Erich Raeder as part of the Plan Z rearmament program after Germany and Great Britain signed the Anglo-German Naval Agreement. They were planned after a thorough study of Japanese carrier designs. German naval architects ran into difficulties due to lack of experience in building such vessels, the realities of carrier operations in the North Sea and the lack of overall clarity in the ships' mission objectives. This lack of clarity led to features, such as cruiser-type guns for commerce raiding and defense against British cruisers, that were either eliminated from or not included in American and Japanese carrier designs. American and Japanese carriers, designed along the lines of task-force defense, used supporting cruisers for surface firepower, which allowed flight operations to continue without disruption and reduced the chances of exposure to risks that surface action would have entailed. A combination of political infighting between the Kriegsmarine and the Luftwaffe, disputes within the ranks of the Kriegsmarine itself and Adolf Hitler's waning interest all conspired against the carriers. A shortage of workers and materials slowed construction still further and, in 1939, Raeder reduced the number of ships from four to two. Even so, the Luftwaffe trained its first unit of pilots for carrier service and readied it for flight operations. With the advent of World War II, priorities shifted to U-boat construction; one carrier, Flugzeugträger B, was broken up on the slipway while work on the other, Flugzeugträger A (christened Graf Zeppelin), was continued tentatively but suspended in 1940. The air unit scheduled for her was disbanded at that time. The role of aircraft in the Battle of Taranto, the pursuit of the German battleship Bismarck, the attack on Pearl Harbor and the Battle of Midway demonstrated conclusively the usefulness of aircraft carriers in modern naval warfare. With Hitler's authorization, work resumed on the remaining carrier. Progress was again delayed, this time by the demand for newer planes specifically designed for carrier use and the need for modernizing the ship in light of wartime developments. Hitler's disenchantment with the performance of the Kriegsmarine's surface units led to a final stoppage of work. The ship was captured by the Soviet Union at the end of the war and sunk as a target ship in 1947. Design and construction After 1933, the Kriegsmarine began to examine the possibility of building an aircraft carrier. Wilhelm Hadeler had been Assistant to the Professor of Naval Construction at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin) for nine years when he was appointed to draft preliminary designs for an aircraft carrier in April 1934. Hadeler's first design was a ship that could carry 50 aircraft and steam at . The Anglo-German Naval Agreement, signed on 18 June 1935, allowed Germany to construct aircraft carriers with total displacement up to 38,500 tons, though Germany was limited to 35% of total British tonnage in any category of warship. The Kriegsmarine then decided to scale back Hadeler's design to , which would permit the construction of two ships within the 35% limit. The design staff decided that the new carrier would need to be able to defend itself against surface combatants, which necessitated armor protection to the standard of a heavy cruiser.
A battery of sixteen guns was deemed sufficient to defend the ship from destroyers. In 1935, Adolf Hitler announced that Germany would construct aircraft carriers to strengthen the Kriegsmarine. A Luftwaffe officer, a naval officer, and a constructor visited Japan in the autumn of 1935 to obtain flight deck equipment blueprints and to inspect the Japanese aircraft carrier Akagi. The Germans also unsuccessfully attempted to examine a British carrier. The keel of Graf Zeppelin was laid down on 28 December 1936, on the slipway that had recently held the battleship Gneisenau. The ship was built by the Deutsche Werke shipyard in Kiel. Two years later, Großadmiral (Grand Admiral) Erich Raeder presented an ambitious shipbuilding program called Plan Z, which would build up the Kriegsmarine to a point where it could challenge the British Royal Navy in the North Sea. Under Plan Z, the navy would have four carriers by 1945 as part of a balanced force; the pair of Graf Zeppelin-class ships were the first two in the plan. Hitler approved the construction program on 1 March 1939. In 1938, a second carrier, ordered under the provisional name "B", was laid down at the Germaniawerft dockyard in Kiel. Graf Zeppelin was launched on 8 December 1938.
Design
Hull
The Graf Zeppelin class's hull was divided into 19 watertight compartments, the standard division for all capital ships in the Kriegsmarine. Their belt armor was to be thickest over the machinery spaces and aft magazines, thinner over the forward magazines, and tapered down further at the bows. Stern armor was retained to protect the steering gear. Inboard of the main armor belt was an anti-torpedo bulkhead. Horizontal armor protection against aerial bombs and plunging shellfire started with the flight deck, which acted as the main strength deck. The flight deck armor was of uniform thickness except for those areas around the elevator shafts and funnel uptakes, where it was increased in order to give the elevators the necessary structural strength and the critical uptakes greater splinter protection. Beneath the lower hangar was the main armored deck (or tween deck), whose armor was thicker over the magazines than over the machinery spaces. Along its peripheries, it formed a 45-degree slope where it joined the lower portion of the waterline belt armor. The Graf Zeppelin's original length-to-beam ratio was 9.26:1, resulting in a slender silhouette. However, in May 1942, the accumulating top-weight of recent design changes required the addition of deep bulges to each side of Graf Zeppelin's hull, decreasing that ratio to 8.33:1 and giving her the widest beam of any carrier designed prior to 1942. The bulges served mainly to improve Graf Zeppelin's stability, but they also gave her an added degree of anti-torpedo protection and increased her operating range, because selected compartments were designed to store approximately 1,500 additional tons of fuel oil.
Graf Zeppelin's straight-stemmed prow was rebuilt in early 1940 with the addition of a more sharply angled "Atlantic prow", intended to improve overall seakeeping. This added to her overall length.
Machinery
The Graf Zeppelin class's power plant was to consist of 16 La Mont high-pressure boilers, similar to those used in the heavy cruisers. Their four sets of geared turbines, connected to four shafts, were to drive the carrier. With a maximum bunkerage capacity of 5,000 tons of fuel oil (prior to the addition of bulges in 1942), a radius of action was calculated for Graf Zeppelin.
However, wartime experience on ships with similar power plants showed such estimates were highly inaccurate, and actual operational ranges tended to be much lower. Two Voith-Schneider cycloidal propeller-rudders were to be installed in the forward bow of the ship along the center-line. These were intended to assist in berthing the ship in harbor and in negotiating narrow waterways such as the Kiel Canal where, due to the carrier's high freeboard and difficulty in maneuvering at low speeds, gusting winds might push the ship into the canal sides. In an emergency, the units could have been used to steer the ship at low speeds and, if the ship's main engines were rendered inoperable, could propel the vessel slowly in calm seas. When not in use, they were to be retracted into their vertical shafts and protected by water-tight covers.
Flight deck and hangars
Flight deck
Graf Zeppelin's steel flight deck, overlaid with wooden planking, had a slight round-down right aft and overhung the main superstructure but not the stern, being supported by steel girders. At the bow, the carriers were to have an open forecastle, and the leading edge of the flight deck was uneven (mainly due to the blunt ends of the catapult tracks), but it did not appear likely that this would have caused any undue air turbulence. Careful wind-tunnel studies using models confirmed this, but the tests also revealed that the long, low island structure would generate a vortex over the flight deck when the ship yawed to port. This was considered to be an acceptable hazard when conducting air operations.
Hangars
The Graf Zeppelin class's upper and lower hangars were long and narrow, with unarmored sides and ends. Workshops, stores and crew quarters were located outboard of the hangars, a design feature similar to that of British carriers. The upper hangar had greater vertical clearance, while the lower hangar had less headroom due to the ceiling braces. Total usable hangar space provided stowage for 43 aircraft: 20 Fieseler Fi 167 torpedo bombers (18 in the lower hangar, two in the upper hangar), 13 Junkers Ju 87C dive bombers in the upper hangar, and 10 Messerschmitt Bf 109T fighters in the upper hangar.
Elevators
The Graf Zeppelin class had three electrically operated elevators positioned along the flight deck's center-line: one near the bow, abreast the forward end of the island; one amidships; and one aft. They were octagonal in shape and were designed to transfer aircraft weighing up to 5.5 tons between decks (Breyer, p. 54).
Launch catapults
Two Deutsche Werke compressed-air-driven telescoping catapults were installed at the forward end of the flight deck for power-assisted launches. They were designed to accelerate fighters and bombers to flying speed. A dual set of rails led back from the catapults to the forward and midship elevators. In the hangars, aircraft were to be hoisted by crane - a method also proposed for the Essex-class carriers of the United States Navy, but rejected as too time-consuming - onto collapsible launch trolleys. The aircraft/trolley combination would then be lifted to flight deck level on the elevator and trundled along the rails to the catapult start points. When the catapults were triggered, a burst of compressed air would propel moveable slideways within the catapult track wells forward.
As each plane lifted off, its launch trolley would reach the end of the slideway but remain locked in place until the tow attachment cables were released. Once the slideways were retracted back into the catapult track wells and the tow cables unhooked, the launch trolleys would be manually pushed forward onto recovery platforms, lowered to the forecastle on "B" deck, then rolled back into the upper hangar for re-use via a secondary set of rails. When not in use, the catapult tracks were to be covered with sheet metal fairings to protect them from harsh weather. Eighteen aircraft could theoretically have been launched, at a rate of one every 30 seconds, before the catapult air reservoirs were exhausted. It would then have taken 50 minutes to recharge the reservoirs. The two large cylinders holding the compressed air were housed in insulated compartments located between the two catapult tracks, below flight deck level but above the main armored deck. This positioning afforded them only light protection from potential battle damage. The insulated compartments were to be electrically heated in order to prevent ice from forming on the cylinder piping and control equipment as the compressed air was vented during launches. It was intended from the outset that all of Graf Zeppelin's aircraft would normally launch via catapult. Rolling take-offs would be performed only in an emergency or if the catapults were inoperable due to battle damage or mechanical failure. Whether this practice would have been strictly adhered to, or later modified based on actual air trials and combat experience, is open to question, especially given the limited capacity of the air reservoirs and the long recharging times necessary between launches. One advantage of such a system, however, was that the Graf Zeppelins could have launched their aircraft without needing to turn the ship into the wind, and under conditions where the prevailing winds were too light to provide enough lift for the heavier aircraft. They could also have launched and landed aircraft simultaneously. To facilitate rapid catapult launches and eliminate the necessity of time-consuming engine warm-ups, up to eight aircraft were to be kept in readiness on the German carriers' hangar decks by the use of steam pre-heaters. These would keep the aircraft engines at operational temperature. In addition, engine oil was to be kept warm in separate holding tanks, then added via hand-pumps to the aircraft engines shortly before launch. Once the aircraft were raised to flight deck level via the elevators, aircraft oil temperature could be maintained, if need be, through the use of electric pre-heaters plugged into power points on the flight deck. Otherwise, the aircraft could have been immediately catapult-launched, as their engines would already have been at or near normal operating temperature.
Arresting gear
Four arrester wires were positioned at the after end of the flight deck, with two more emergency wires located afore and abaft of the amidships elevator. Original drawings show four additional wires fore and aft of the forward lift, possibly intended to allow recovery of aircraft over the bows, but these may have been deleted from the ship's final configuration. To assist with night landings, the arrester wires were to be illuminated with neon lights.
Wind barriers
Two high, slitted steel wind barriers were installed afore the midships and forward elevators.
These were designed to reduce wind velocity over the flight deck for some distance behind them. When not in use, they could be lowered flush with the deck to allow aircraft to pass over them.
Island
Graf Zeppelin's starboard-side island housed the command and navigating bridges and the charthouse. It also served as a platform for three searchlights, four domed stabilized fire-control directors and a large vertical funnel. To compensate for the weight of the island, the carrier's flight deck and hangars were offset to port from her longitudinal axis. Design additions proposed in 1942 included a tall fighter-director tower, air search radar antennas and a curved cap for her funnel, the latter intended to keep smoke and exhaust gases away from the armored fighter-director cabin.
Armament
The Graf Zeppelins were to be armed with separate high- and low-angle guns for anti-aircraft (AA) and anti-ship defense, at a time when most other major navies were switching to dual-purpose AA weapons and relying on escort ships to protect their carriers from surface threats. The primary anti-shipping armament consisted of sixteen SK C/28 guns paired in eight armored casemates. These were mounted, two each, at the four corners of the carriers' upper hangar deck, positions that raised the possibility the guns would be washed out in heavy seas, especially those in the forward casemates. Chief Engineer Hadeler had originally planned for only eight such weapons on the carriers, four on each side in single mountings. However, the Naval Armaments Office misinterpreted his proposal to save space by pairing them and instead doubled the number of guns to 16, resulting in a need for increased ammunition stowage and more electrically operated hoists to service them. Later in Graf Zeppelin's construction, some consideration was given to deleting these guns and replacing them with SK C/33 guns mounted on sponsons just below flight deck level. But the structural modifications needed to accommodate such a change were judged too difficult and time-consuming, requiring major changes to the ship's design, and the matter was shelved. Primary AA protection came from 12 guns, paired in six turrets positioned three afore and three aft of the carrier's island. Potential blast damage to planes sited on the flight deck when these guns fired to port was an unavoidable risk and would have limited any flight activity during an engagement. The Graf Zeppelin class's secondary AA defenses consisted of 11 twin SK C/30 gun mounts located on sponsons along the flight deck edges: four on the starboard side, six to port and one mounted on the ship's forecastle. In addition, seven MG C/30 guns were installed on single-mount platforms on either side of the carrier: four to port and three to starboard. These guns were later changed to Flakvierling mountings.
Flight testing at Travemünde
In 1937, with Graf Zeppelin's launch scheduled for the end of the following year, the Luftwaffe's experimental test facility at Travemünde (Erprobungsstelle See or E-Stelle See) on the Baltic coast - one of the four such Erprobungsstelle facilities of the Third Reich, with the headquarters at Rechlin - began a lengthy program of testing prototype carrier aircraft. This included performing simulated carrier landings and take-offs and training future carrier pilots. The runway was painted with a contoured outline of Graf Zeppelin's flight deck, and simulated deck landings were then conducted over an arresting cable strung width-wise across the airstrip.
The cable was attached to an electromechanical braking device manufactured by DEMAG (Deutsche Maschinenfabrik A.G. Duisburg). Testing began in March 1938 using the Heinkel He 50, Arado Ar 195 and Ar 197. Later, a stronger braking winch was supplied by Atlas-Werke of Bremen, and this allowed heavier aircraft, such as the Fieseler Fi 167 and Junkers Ju 87, to be tested. After some initial problems, Luftwaffe pilots performed 1,500 successful braked landings out of 1,800 attempted. Launches were practiced using a long, barge-mounted pneumatic catapult moored in the Trave River estuary. The Heinkel-designed catapult, built by Deutsche Werke Kiel (DWK), could accelerate aircraft to launching speed, depending on wind conditions. Test planes were first hoisted by crane onto collapsible launch carriages in the same manner as intended on Graf Zeppelin. The catapult test program began in April 1940 and, by early May, 36 launches had been conducted, all carefully documented and filmed for later study: 17 by Arado Ar 197s, 15 by modified Junkers Ju 87Bs and four using a modified Messerschmitt Bf 109D. Further testing followed, and by June Luftwaffe officials were fully satisfied with the catapult system's performance.
Aircraft
The expected role of the Graf Zeppelin class was that of a seagoing scouting platform, and the initial planned air group reflected that emphasis: 20 Fieseler Fi 167 biplanes for scouting and torpedo attack, 10 Messerschmitt Bf 109 fighters, and 13 Junkers Ju 87 dive bombers. This was later changed to 30 Bf 109 fighters and 12 Ju 87 dive-bombers as carrier doctrine in Japan, Great Britain and the United States shifted away from purely reconnaissance duties toward offensive combat missions. In late 1938, the Technische Amt (Technical Office) of the Reichsluftfahrtministerium (Reich Air Ministry, RLM) requested that Messerschmitt's Augsburg design bureau draw up plans for a carrier-borne version of the Bf 109E fighter, to be designated Bf 109T (the "T" standing for Träger, or carrier). By December 1940, the RLM decided to complete only seven carrier-equipped Bf 109T-1s and to finish the remainder as land-based T-2s, since work on Graf Zeppelin had ceased back in April and there appeared to be little likelihood she would be commissioned any time soon. When work on Graf Zeppelin ceased, the T-2s were deployed to Norway. At the end of 1941, when interest in completing Graf Zeppelin revived, the surviving Bf 109 T-2s were withdrawn from front-line service in order to again prepare them for possible carrier duty. Seven T-2s were rebuilt to T-1 standards and handed over to the Kriegsmarine on 19 May 1942. By December, a total of 48 Bf 109T-2s had been converted back into T-1s. 46 of these were stationed at Pillau in East Prussia and reserved for use aboard the carrier. By February 1943, all work on Graf Zeppelin had ceased, and the aircraft were returned to Luftwaffe service in April. When work on Graf Zeppelin was suspended in May 1940, the 12 completed Fi 167s were organized into Erprobungsstaffel 167 for the purpose of conducting further operational trials. By the time work on the carrier resumed two years later, in May 1942, the Fi 167 was no longer considered adequate for its intended role, and the Technische Amt decided to replace it with a modified torpedo-carrying version of the Junkers Ju 87D.
Ten Ju 87C-0 pre-production aircraft were built and sent to the testing facilities at Rechlin and Travemünde, where they underwent extensive service trials, including catapult launches and simulated deck landings. Of the 170 Ju 87C-1s ordered, only a few were completed; the suspension of work on Graf Zeppelin in May 1940 resulted in cancellation of the entire order. Existing aircraft and those airframes in process were eventually converted back into Ju 87B-2s. Work on developing a torpedo-carrying version of the Ju 87D for anti-shipping sorties in the Mediterranean had already commenced in early 1942 when the possibility again arose that Graf Zeppelin might be completed. As the Fieseler Fi 167 was now considered obsolete, the Technische Amt requested that Junkers modify the Ju 87D-4 into a carrier-borne torpedo-bomber and reconnaissance plane, to be designated Ju 87E-1. But when all further work on Graf Zeppelin was halted for good in February 1943, the entire order was canceled. None of the Ju 87Ds converted to carry a torpedo were used operationally. By May 1942, when work was ordered resumed on Graf Zeppelin, the older Bf 109T carrier-borne fighter was considered obsolete. By September 1942, detailed plans for the new fighter, the Me 155, were completed. When it became apparent that Graf Zeppelin would not be commissioned for at least another two years, Messerschmitt was unofficially told to shelve the projected fighter design. No prototype of the carrier-borne version of the plane was ever constructed. On 1 August 1938, four months prior to Graf Zeppelin's launch date, the Luftwaffe formed its first carrier-based air unit, designated Trägergruppe I/186, on Rugia Island near Burg. It was composed of three squadrons (Staffeln) and was intended to serve aboard both carriers when completed. By October, shipyard construction delays had resulted in disbandment of the air group, as it was considered too large and costly to maintain given the uncertainty over when the two vessels would be ready for sea trials. Instead, on 1 November that same year, a single fighter squadron (Trägerjagdstaffel), 6./186, was created and placed under the command of Cpt. Heinrich Seeliger. Later, a dive bomber squadron, 4./186, equipped with Ju 87Bs under Cpt. Blattner, was added. In July 1939, a second fighter squadron, 5./186, was formed under Oberleutnant Gerhard Kadow and partly staffed with pilots culled from 6./186. By August, the three squadrons were reorganised into Trägergruppe II/186 under the command of Major Walter Hagen, in anticipation that Graf Zeppelin would be ready for service trials by the summer of 1940.
Ships in class
Construction of the Kriegsmarine's two aircraft carriers had been fitful from the start, due to a shortage of welders and delays in obtaining materials.
Graf Zeppelin
Work started on Graf Zeppelin, ordered as Flugzeugträger A, in 1936. She was laid down on 28 December that year, and launched on 8 December 1938. She was still incomplete in April 1940, when a changed strategic situation led to work on her being suspended. By early 1942, the usefulness of aircraft carriers in modern naval warfare had been amply demonstrated, and on 13 May 1942, with Hitler's authorization, the German Naval Supreme Command ordered work resumed on the carrier. Progress was delayed by technical problems, such as the demand for newer planes specifically designed for carrier use and the need for modernization.
The German naval staff hoped all these changes could be accomplished by April 1943, with the carrier's first sea trials taking place that August. By late January 1943, Hitler had become so disenchanted with the Kriegsmarine, especially with what he perceived as the poor performance of its surface fleet, that he ordered all of its larger ships taken out of service and scrapped. As of 2 February 1943, construction on the carrier ended for good.
Graf Zeppelin languished for the next two years in various Baltic ports. On 25 April 1945, she was scuttled at Stettin (now Szczecin, Poland), ahead of the advancing Red Army. The ship was subsequently raised by the Soviets, used for target practice, and sunk in 1947. Her wreck was discovered in 2006 by Polish researchers in the Baltic off Władysławowo, at the head of the Hel Peninsula.
Flugzeugträger B
The contract to build the ship was awarded to the Friedrich Krupp Germaniawerft in Kiel in 1938, with a planned launch date of 1 July 1940. Work on Flugzeugträger B began in 1938 but was halted on 19 September 1939 because, now that Germany was at war with Great Britain and France, priority had shifted to U-boat construction. The hull, completed only up to the armored deck, sat rusting on its slipway until 28 February 1940, when Admiral Raeder ordered her broken up and scrapped. Scrapping was completed four months later. The Kriegsmarine never named a vessel before it was launched, so the ship was only given the designation "B" ("A" was Graf Zeppelin's designation before launch). Had it been completed, the aircraft carrier could have been named Peter Strasser, in honor of Peter Strasser, the World War I leader of the naval airships.
Flugzeugträger C and D
In 1937, the Kriegsmarine planned two additional aircraft carriers of the Graf Zeppelin class, with the official designations C and D. Both of these carriers were planned to be operational by 1943. However, by the end of 1938, this plan was changed so that these two carriers, and any further units, would be built as smaller carriers.
See also
List of aircraft carriers
List of aircraft carriers of Germany
List of naval ship classes of Germany
List of ship classes of the Second World War
Footnotes
Notes
References
Further reading
External links
"Graf Zeppelin Rediscovered – Hitler's Showpiece Aircraft Carrier Found." Spiegel Online International article dated 27-7-2006. Retrieved 20-9-2010.
A video with photos of the unfinished Graf Zeppelin
Aircraft carrier classes
Proposed aircraft carriers
Proposed ships of Germany
Graf Zeppelin-class aircraft carrier
Engineering
5,588
152,643
https://en.wikipedia.org/wiki/Hand%20fan
A handheld fan, or simply hand fan, is a broad, flat surface that is waved back and forth to create an airflow. Generally, purpose-made handheld fans are folding fans, which are shaped like a sector of a circle and made of a thin material (such as paper or feathers) mounted on slats that revolve around a pivot so that the fan can be closed when not in use. Hand fans were used before mechanical fans were invented.
Fans work by exploiting basic thermodynamics. On human skin, the airflow from a hand fan increases the evaporation rate of sweat, lowering body temperature due to the latent heat of evaporation of water; as a rough worked figure, evaporating one gram of sweat carries away about 2.4 kJ of heat. The airflow also increases heat convection by displacing the warmer air that body heat produces around the skin, which has an additional cooling effect, provided that the ambient air temperature is lower than the skin temperature (typically about 33 °C).
Next to the folding fan, the rigid hand screen fan was also a highly decorative and desired object among the higher social classes. Hand screen fans serve a different purpose from the lighter, easier-to-carry hand fans: they were mostly used to shield a lady's face against the glare of the sun or of a fire.
History
Africa
Hand fans originated about 4,000 years ago in Egypt. Egyptians viewed them as sacred objects, and the tomb of Tutankhamun contained two elaborate hand fans.
Ancient Europe
Archaeological ruins and ancient texts show that the hand fan was used in ancient Greece at least from the 4th century BC. Fans were also used to keep flies away (like a fly-flapper); this kind of fan was less stiff and was named μυιοσόβη. Another use for a fan was to fan a flame, e.g. in cookery or at the altar. Christian Europe's earliest known fan was the flabellum (ceremonial fan), which dates from the 6th century. It was used during services to drive insects away from the consecrated bread and wine. Its use died out in western Europe, but continues in the Eastern Orthodox and Ethiopian Churches.
East Asia
China
There were many kinds of fans in ancient China. The Chinese character for "fan" (扇) is etymologically composed of the characters for "door" (戶) and "feather" (羽). Historically, fans have played an important part in the life of the Chinese people. The Chinese have used hand-held fans to cool themselves on hot days since ancient times; the fans are also seen as an embodiment of the wisdom of Chinese culture and art. They were also used for ceremonial and ritual purposes and as a sartorial accessory. They were carriers of Chinese traditional arts and literature and were representative of their user's personal aesthetic sense and social status. Specific concepts of status and gender were associated with types of fans in Chinese history, but generally folding fans were reserved for males while rigid fans were for females. In ancient China, fans came in various shapes and forms (such as leaf, oval or half-moon shapes), and were made of different materials such as silk, bamboo, and feathers. So far, the earliest fans that have been found date to the Spring and Autumn and Warring States periods. It was suggested by the Cultural Relics Archaeology Institute of Hubei Province that these fans were made of either bamboo or feathers and were often used as burial objects in the State of Chu. The oldest existing Chinese fans are a pair of woven bamboo, wood or paper side-mounted fans from the 2nd century BC.
The Chinese form of the feather fan was a row of feathers mounted in the end of a handle. The art of fan making eventually progressed to the point that, by the Jin dynasty, fans could come in different shapes and could be made of different materials. The selling of hexagonal fans is also recorded in the Book of Jin. In later centuries, Chinese poems and four-word idioms were used to decorate fans, using Chinese calligraphy pens. The Chinese dancing fan was developed in the 7th century. The most ancient ritual Chinese fan is believed to have been invented by Emperor Shun; it is characterized by a long handle and a door-like shape. This type of fan was used for ceremonial purposes. While its shape evolved throughout the millennia, it remained a symbol of imperial power and authority, and it continued to be used until the fall of the Qing dynasty. Round silk fans, called tuanshan (團扇) and also known as "fans of reunion", are a type of rigid fan. These fans were mostly used by women in the Tang dynasty and were later introduced into Japan. These round fans remained mainstream even after the growing popularity of the folding fan. Round fans with Chinese paintings and calligraphy became very popular in the Song dynasty, during which famous artists were often commissioned to paint fans. Lacquer fans were also one of the unique handcrafts of the Song dynasty. Chinese brides also used a type of moon-shaped round fan in traditional Chinese weddings. The associated ceremonial rite was an important part of a Chinese wedding: the bride would hold the fan in front of her face to hide her shyness, to remain mysterious, and as a way to exorcise evil spirits. After all the other wedding ceremonies were completed, and after the groom had impressed the bride, the bride would reveal her face to the groom by removing the fan. Another popular type of Chinese fan was the palm-leaf fan, made from the leaves and stalks of the Chinese fan palm (Livistona chinensis). The folding fan, invented in Japan, was introduced to the Chinese in the 10th century. In 988 AD, folding fans were first brought to China as tribute by a Japanese monk during the Northern Song dynasty; these folding fans, referred to as "Japanese fans" by the Chinese, became very fashionable in China by the Southern Song dynasty. While the folding fans gained popularity, the traditional silk round fans continued to remain mainstream in the Song dynasty. The folding fan later became very fashionable in the Ming dynasty; initially, however, folding fans were met with resistance because they were believed to be intended for lower-class people and servants. The Chinese also innovated on the design of the folding fan, for example by creating the 'broken fan'.
Foreign export
From the late 18th century until 1845, trade between America and China flourished. During this period, Chinese fans reached the peak of their popularity in America; popular fans among American women included fans made of palm leaf, feather, and paper, with the palm-leaf fan appearing to have been the most popular type. The custom of using fans among the American middle class and by ladies was attributed to this Chinese influence.
Japan
In ancient Japan, hand fans, such as oval and silk fans, were greatly influenced by Chinese fans.
The earliest visual depiction of fans in Japan dates back to the 6th century AD, with burial tomb paintings showing drawings of fans. The folding fan was invented in Japan, with dates ranging from the 6th to 9th centuries; an early form was a court fan named after the court women's dress it accompanied. According to the History of Song, a Japanese monk offered folding fans (twenty of one type and two of another) to the emperor of China in 988. Later, in the 11th century, Korean envoys brought along Korean folding fans, which were of Japanese origin, as gifts to the Chinese court. The popularity of folding fans was such that sumptuary laws were passed during the Heian period which restricted the decoration of both wooden and paper folding fans. The earliest fans in Japan were made by tying thin strips of hinoki (Japanese cypress) together with thread; the number of strips of wood differed according to the person's rank. Later, in the 16th century, Portuguese traders introduced the folding fan to the West, and soon both men and women throughout the continent adopted it. Folding fans are used today by Shinto priests in formal costume and in the formal costume of the Japanese court (they can be seen used by the Emperor and Empress during enthronement and marriage) and are brightly painted with long tassels. Printed fan leaves and painted fans are done on a paper ground. The paper was originally handmade and displayed characteristic watermarks; machine-made paper fans, introduced in the 19th century, are smoother, with an even texture. Even today, geisha and maiko use folding fans in their fan dances. Japanese fans are made of paper on a bamboo frame, usually with a design painted on them. In addition to folding fans, non-bending fans (uchiwa) are popular and commonplace. The fan is primarily used for fanning oneself in hot weather. The fan subsequently spread to other parts of Asia, including Burma, Thailand, Cambodia and Sri Lanka, and such fans are still used by Buddhist monks as ceremonial fans. Fans were also used in the military as a way of sending signals on the field of battle, though fans were mainly used for social and court activities. In Japan, fans were variously used by warriors as a form of weapon, by actors and dancers for performances, and by children as a toy. Traditionally, the rigid fan (also called the fixed fan) was the most popular form in China, although the folding fan came into popularity during the Ming dynasty, between the years of 1368 and 1644, and many beautiful examples of these folding fans still remain. The Japanese dancing fan has ten sticks and a thick paper mount showing the family crest, and Japanese painters made a large variety of designs and patterns. The slats, of ivory, bone, mica, mother of pearl, sandalwood, or tortoise shell, were carved and covered with paper or fabric. Folding fans have "montures", which are the sticks and guards, and the leaves were usually painted by craftsmen. Social significance was attached to the fan in the Far East as well, and the management of the fan became a highly regarded feminine art. Fans were even used as a weapon – the iron fan, called tessen in Japanese. See also the gunbai, a military leader's fan in old Japan, a type of Japanese war fan used in the modern day as the umpire's fan in sumo wrestling.
Korea
Every Dano (the fifth day of the fifth lunar month), when the heat began, there was a custom in which the king distributed hand fans to his vassals.
A vassal who received a hand fan from the king would do an ink-and-wash painting on it and hand out white fans to his elders and to people to whom he was indebted, which made the practice of exchanging hand fans widely popular. These cultural factors also contributed to the creation of various types of hand fan in Korea.
Vietnam
The hand fan is an integral part of Vietnamese culture. According to the Vân Đài Loại Ngữ, a book written by Lê Quý Đôn, in the old times Vietnamese people used hand fans made from bird feathers, as well as a type of fan made from the leaves of the taraw palm tree. Folding fans only started appearing in Vietnam in the 10th century. The Christian missionary Christoforo Borri recorded in 1621 that both Vietnamese men and women frequently held hand fans as part of their daily garments. Many villages in Vietnam have long-standing traditions of making exquisite hand fans, such as Canh Hoạch village and Đào Xá village, with fan-making dating back to the early 19th century. Simple handheld fans are commonly found in the Vietnamese countryside and are popular among farmers and working people. The simplest design is cut directly from dried Areca leaf stems, then pressed flat; it appears in "Thằng Bờm", a well-known Vietnamese folk song. Another simple design is made by sewing a half-moon-shaped Maclurochloa leaf onto a straight bamboo stick.
Re-introduction in Europe
Hand fans were absent from Europe during the High Middle Ages until they were reintroduced in the 13th and 14th centuries. Fans from the Middle East were brought back by Crusaders and by refugees from the Byzantine Empire. In the 15th and early 16th centuries, Chinese folding fans were introduced in Europe, and fans later played an important role in the social circles of Europe in the 18th century. Portuguese traders first opened up the sea route to China in the 15th century and reached Japan in the mid-16th century, and they appear to have been the first to introduce Oriental (Chinese and Japanese) fans into Europe, which led to the fans' popularity and to increased imports of Oriental fans. The fan became especially popular in Spain, where flamenco dancers used the fan and extended its use to the nobility. European fan-makers have since introduced more modern designs and have enabled the hand fan to work with modern fashion.
17th century
In the 17th century the folding fan, and its attendant semiotic culture, were introduced from China and Japan. By the end of the 17th century there were enormous imports of Chinese folding fans into Europe due to their popularity and, to a lesser extent, Japanese folding fans were also reaching Europe by that period. These fans are particularly well displayed in the portraits of the high-born women of the era. Queen Elizabeth I of England can be seen to carry both folding fans decorated with pom poms on their guardsticks and the older style of rigid fan, usually decorated with feathers and jewels. These rigid-style fans often hung from the skirts of ladies, but of the fans of this era it is only the more exotic folding ones which have survived. Those folding fans of the 15th century found in museums today have either leather leaves with cut-out designs forming a lace-like pattern or a more rigid leaf with inlays of more exotic materials like mica.
One of the characteristics of these fans is the rather crude bone or ivory sticks, and the way the leather leaves are often slotted onto the sticks rather than glued, as with later folding fans. Fans made entirely of decorated sticks, without a fan "leaf", were known as brisé fans, a form which originated in China. However, despite the relatively crude methods of construction, folding fans were in this era high-status, exotic items on par with elaborate gloves as gifts to royalty. In the 17th century the rigid fan, which was seen in portraits of the previous century, fell out of favour as folding fans gained dominance in Europe. Fans started to display well-painted leaves, often with a religious or classical subject. The reverse side of these early fans also started to display elaborate flower designs. The sticks are often plain ivory or tortoiseshell, sometimes inlaid with gold or silver piqué work. The way the sticks sit close to each other, often with little or no space between them, is one of the distinguishing characteristics of fans of this era. In 1685 the Edict of Nantes was revoked in France. This caused large-scale emigration of many fan craftsmen from France to the surrounding Protestant countries (such as England). This dispersion of skill is reflected in the growing quality of many fans from these non-French countries after this date.
18th century
In the 18th century, fans reached a high degree of artistry and were being made throughout Europe, often by specialized craftsmen working either on leaves or on sticks. Folding fans of silk or parchment were decorated and painted by artists. Fans were also imported from China by the East India Companies at this time. Around the middle of the 18th century, inventors started designing mechanical fans. Wind-up fans (similar to wind-up clocks) were popular in the 18th century.
19th century
In the 19th century in the West, European fashion caused fan decoration and size to vary. It has been said that in the courts of England, Spain and elsewhere, fans were used in a more or less secret, unspoken code of messages; these fan languages were supposedly a way to cope with restrictive social etiquette. However, modern research has shown that this was a marketing ploy developed in the 19th century – one that has kept its appeal remarkably well over the succeeding centuries. It is still used for marketing by fan makers like Cussons & Sons & Co. Ltd, who produced a series of advertisements in 1954 showing "the language of the fan" with fans supplied by the well-known French fan maker Duvelleroy. The rigid or screen fan also became fashionable during the 18th and 19th centuries. Screen fans never reached the same level of popularity as the easy-to-carry folding fans, which became almost an integral part of women's dress; the screen fan was mainly used inside the house. In 18th- and 19th-century paintings of interiors one sometimes sees one lying on a chimney mantel. They were mainly used to protect a woman's face against the glare and heat of the fire, to avoid getting ruddy cheeks. Not least, they served to keep the heat from spoiling carefully applied make-up, which in those days was often wax-based. Until the 20th century houses were heated by open fires in chimneys or by stoves, and the lack of insulation made many a house very draughty and cold during winter. Therefore, any social or family gathering would take place in close proximity to the fireplace.
The design of the screen fan is a fixed handle, most often made of exquisitely turned (painted or gilded) wood, fixed to a flat screen. The screen could be made of silk stretched on a frame, of thin wood, of leather or of papier-mâché. The surface is often exquisitely painted with scenes ranging from flowers and birds of paradise to religious scenes. At the end of the 19th century screen fans disappeared, when the need for them ceased to exist. During the 19th century, firms like the Birmingham-based Jennens and Bettridge produced many papier-mâché fans.
Modern day
Modern-day hand fans are less popular than in the past, but are still used by many.
Drag subculture
A large group that continues to use folding hand fans for cultural and fashion purposes are drag queens. Stemming from ideas of imitating and appropriating cultural notions of excess, wealth, status and elegance, large folding hand fans are used to punctuate speech, as part of performances, or as accessories to an outfit. Fans may have phrases taken from the lexicon of drag and LGBTQ+ culture written on them, and may be decorated in other ways, such as with the addition of sequins or tassels. Folding fans are often used to emphasize a point in a person's speech, rather than expressly for fanning oneself. A person might harshly snap open the fan when "throwing shade" at (comically insulting) another person, creating a loud snapping noise that punctuates the insult. Drag dance numbers also utilise larger hand fans to add flair and as props, used to emphasise movements in the dance. The popular drag comedy webshow UNHhhh has used folding fans as a point of humour, with the sound made by a folding fan unfolding coined onomatopoeically as a "thworp" by the editors.
Categories
Hand fans have three general categories:
Fixed (or rigid, flat) fans: circular fans, palm-leaf fans, straw fans, feather fans
Folding fans: silk folding fans, paper folding fans, sandalwood fans
Modern powered mechanical hand fans: hand fans which appear as mini mechanical rotating fans with blades. These are usually axial fans, and often use blades made from a soft material for safety. They are usually battery operated, but can be hand cranked as well.
Gallery
See also
Church fan – fans used in churches in the United States
Use in fashion
Abaniko
Use in dance
Korean fan dance
Cariñosa – national dance of the Philippines
Singkil – traditional Maranao dance from the Philippines
Pagapir – a traditional fan dance in Mindanao, Philippines
Traditional fan dance originating from Japan
Use as weapons
Princess Iron Fan
Japanese war fan
Korean fighting fan
Use in comedy
Use in politics
Islami Andolan Bangladesh – an Islamic political party in Bangladesh that uses a hand fan as its electoral symbol
Museums
Musée de l'Éventail (Paris)
The Fan Museum in Greenwich (Greenwich, London)
The Hand Fan Museum in Healdsburg, California
References
Sources
Nussbaum, Louis Frédéric and Käthe Roth. (2005). Japan Encyclopedia. Cambridge: Harvard University Press. OCLC 48943301.
Books
Alexander, Helene. The Fan Museum. Third Millennium Publishing, 2001.
Alexander, Helene & Hovinga-Van Eijsden, Fransje. A Touch of Dutch: Fans from the Royal House of Orange-Nassau. The Fan Museum, February 2008.
Armstrong, Nancy. Book of Fans. Smithmark Publishing, 1984.
Armstrong, Nancy. Fans. Souvenir Press, 1984.
Bennett, Anna G. Unfolding Beauty: The Art of the Fan. The Collection of Esther Oldham and the Museum of Fine Arts, Boston. Thames and Hudson, 1988.
Bennett, Anna G. & Berson, Ruth. Fans in Fashion. Charles E. Tuttle Co. Inc & The Fine Arts Museums of San Francisco, 1981.
Biger, Pierre-Henri. Sens et sujets de l'éventail européen de Louis XIV à Louis-Philippe. History of Art thesis, Rennes 2 University, 2015. (https://tel.archives-ouvertes.fr/tel-01220297)
Checcoli, Anna. Il ventaglio e i suoi segreti. Tassinari, 2009.
Checcoli, Anna. Ventagli Cinesi Giapponesi ed Orientali. Tassinari, 2009.
Cowen, Pamela. A Fanfare for the Sun King: Unfolding Fans for Louis XIV. Third Millennium Publishing, September 2003.
Das, Justin. Pankha: Traditional Crafted Hand Fans of the Indian Subcontinent from the Collection of Justin Das. The Fan Museum, Greenwich, 2004.
Faulkner, Rupert. Hiroshige Fan Prints. V&A Publications, 2001.
Fendel, Cynthia. Novelty Hand Fans: Fashionable Functional Fun Accessories of the Past. Hand Fan Productions, 2006.
Fendel, Cynthia. Celluloid Hand Fans. Hand Fan Productions, 2001.
Gitter, Kurt A. Japanese Fan Paintings from Western Collections. New Orleans Museum of Art, 1985.
Hart, Avril & Taylor, Emma. Fans (V&A Fashion Accessories Series). V&A Publications.
Hutt, Julia & Alexander, Helene. Ogi: A History of the Japanese Fan. Art Media Resources, bilingual edition, February 1, 1992.
Irons, Neville John. Fans of Imperial China. Kaiserreich Kunst Ltd, 1982.
Letourmy-Bordier, Georgina & Le Guen, Sylvain. L'éventail, matières d'excellence: La nature sublimée par les mains de l'artisan. Musée de la Nacre et de la Tabletterie, September 2015.
Mayor, Susan. A Collectors Guide to Fans. Charles Letts, 1990.
Mayor, Susan. The Letts Guide to Collecting Fans. Charles Letts, 1991.
North, Audrey. Australia's Fan Heritage. Boolarong Publications, 1985.
Qian, Gonglin. Chinese Fans: Artistry and Aesthetics (Arts of China, #2). Long River Press, August 31, 2004.
Rhead, G. Wooliscroft. The History of the Fan. Kegan Paul, 1910.
Roberts, Jane. Unfolding Pictures: Fans in the Royal Collection. Royal Collection, January 30, 2006.
Tam, C.S. Fan Paintings by Late Ch'ing Shanghai Masters. Urban Council for an exhibition in the Hong Kong Museum of Art, 1977.
Vannotti, Franco. Peinture Chinoise de la Dynastie Ts'ing (1644–1912). Collections Baur, Genève, 1974.
External links
A visual guide to Victorian fan language, photos by Fabio and Gabrielle Arciniegas
mm's fan collection, with monographs on love symbols on fans, celluloid fans, George Barbier and more
Hand fan collection Anna Checcoli
All About Hand Fans with Cynthia Fendel
Hand Fan Museum
The Fan Circle International
Tessen warrior fan
The Fan Museum in Greenwich, London
Fan Association of North America
Fans in the Staten Island Historical Society Online Collections Database
La Place de l'Eventail (French website dedicated to the European hand fan; most pages in English)
Galerie Le Curieux, Paris
Fans in the 16th and 17th Centuries
Variety of hand-held fans in different colours and styles
Maison Sylvain Le Guen - contemporary hand fans by Sylvain Le Guen
Allhandfans - site entirely dedicated to the hand fan
Museu Tèxtil i d'Indumentària in Barcelona
Articles containing video clips
Ancient Egyptian technology
Ancient Greek technology
Chinese culture
Chinese inventions
Cooling technology
Greek inventions
Ventilation fans
Fashion accessories
Hand tools
Culture of Japan
Japanese inventions
Hand fan
Engineering
5,183
44,559,400
https://en.wikipedia.org/wiki/Tylopilus%20hondurensis
Tylopilus hondurensis is a bolete fungus in the family Boletaceae. Found in Honduras, where it grows under Pinus oocarpa, it was described as new to science in 1983.
References
External links
hondurensis
Fungi described in 1983
Fungi of Central America
Fungus species
Tylopilus hondurensis
Biology
62
24,517,589
https://en.wikipedia.org/wiki/Luidia%2C%20Inc.
Luidia, Inc. produced portable interactive whiteboard technology for classrooms and conference rooms. Its eBeam hardware and software products work with computers and digital projectors to turn an existing whiteboard or other writing surface into an interactive whiteboard. The company's eBeam products allow text, images and video to be projected onto display surfaces, where an interactive stylus or marker can be used to add notes, access menus, manipulate images and create diagrams and drawings.
Technology
Luidia's eBeam technology uses infrared and ultrasound receivers to track the location of a transmitter-equipped pen, called a stylus, or a standard dry-erase marker in a transmitter-equipped sleeve.
Company history
Luidia's eBeam technology was originally developed and patented by engineers at Electronics for Imaging Inc. (Nasdaq: EFII), a Foster City, California developer of digital print server technology. Luidia was spun off from EFI in July 2003 with venture funding from Globespan Capital Partners and Silicom Ventures. In 2007, Luidia was selected by the Mexican government to install eBeam-enabled interactive boards in public seventh-grade classrooms in Mexico as part of the government's Enciclomedia program. In 2007 and 2008, Luidia was recognized by Deloitte LLP in the accounting firm's Silicon Valley "Technology Fast 50" program, which recognizes fast-growing companies in the San Francisco Bay Area. In January 2021, the company's main sites and web documentation started returning 404 errors, although its shop remained online. In 2022, Luidia sent out notice that it would be shutting down in July of that year.
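The positioning method behind the infrared-and-ultrasound tracking described above is not specified here; one common scheme in such trackers is time of flight, in which the infrared flash marks the moment of emission and the ultrasound arrival times at two receivers give two ranges that intersect at the pen. A minimal sketch of that idea in Python, with all names and numbers assumed for illustration rather than taken from eBeam itself:

    import math

    def pen_position(d1, d2, baseline):
        # Receivers sit at (0, 0) and (baseline, 0) along the board edge.
        # d1 and d2 are pen-to-receiver distances from ultrasound time of
        # flight; the infrared pulse supplies the common start time.
        x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
        y_squared = d1**2 - x**2
        if y_squared < 0:
            raise ValueError("ranges are inconsistent with the baseline")
        return x, math.sqrt(y_squared)  # take the board-side intersection

    # Flight times of 1.0 ms and 1.2 ms at ~343 m/s (speed of sound in air),
    # with receivers 0.4 m apart.
    speed_of_sound = 343.0
    print(pen_position(speed_of_sound * 0.0010, speed_of_sound * 0.0012, 0.4))

References
Display technology
Educational technology companies of the United States
Computer companies established in 2003
2003 establishments in California
American companies established in 2003
Computer companies disestablished in 2022
2022 disestablishments in California
American companies disestablished in 2022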
Luidia, Inc.
Engineering
384
74,593,188
https://en.wikipedia.org/wiki/Smoldyn
Smoldyn is an open-source software application for cell-scale biochemical simulations. It uses particle-based simulation, meaning that it simulates each molecule of interest individually, in order to capture natural stochasticity and to yield nanometer-scale spatial resolution. Simulated molecules diffuse, react, are confined by surfaces, and bind to membranes in similar manners as in real biochemical systems.
History
Smoldyn was initially released in 2003 as a simulator that represented chemical reactions between diffusing particles in rectilinear volumes. Further development added support for surfaces, multiscale simulation, molecules with excluded volume, rule-based modeling, and C/C++ and Python APIs. Smoldyn development has been funded by a postdoctoral NSF grant awarded to Steve Andrews, a US DOE contract awarded to Adam Arkin, a grant from the Computational Research Laboratories (Pune, India) awarded to Upinder Bhalla, a MITRE contract and several NIH grants awarded to Roger Brent, and a Simons Foundation grant awarded to Steve Andrews.
Development team
Smoldyn has been developed primarily by Steve Andrews, over the course of multiple research and teaching positions. Other contributors have included Nathan Addy, Martin Robinson, and Dilawar Singh.
Features
Smoldyn is primarily a tool for biophysics and systems biology research. It focuses on spatial scales between nanometers and microns. The following feature descriptions are drawn from the Smoldyn documentation.
Model definition: Models are entered as text files that describe the system. This includes: lists of molecule species, their diffusion coefficients, and their chemical reactions; lists of surfaces and their interactions with molecules; initial molecule and surface locations; and actions that a "virtual experimenter" carries out during the simulation.
Real-time graphics: Smoldyn displays the simulated system in a graphics window as the simulation runs.
Simulated behaviors: Smoldyn's simulated behaviors focus on molecular diffusion, interactions with surfaces, and interactions of molecules with each other. This enables simulation of: molecular diffusion and drift, chemical reactions, excluded-volume interactions, macromolecular crowding, allosteric interactions, surface adsorption and desorption, partial transmission through surfaces, on-surface diffusion, and long-range intermolecular forces.
Accuracy: Smoldyn development has focused strongly on quantitative accuracy. Tests have been run and published showing that diffusion, chemical reactions, surface interactions, excluded-volume interactions, and on-surface diffusion simulate with high quantitative accuracy, typically with substantially less than 1% error.
Rule-based modeling: Smoldyn supports two types of rule-based modeling. It reads the BNGL language, which it parses with the BioNetGen software. It also supports a method based on wildcard characters.
Multi-scale simulation: Because particle-based simulation is computationally intensive, Smoldyn also supports simulation using a spatial version of the Gillespie algorithm. These algorithms are linked together to enable both to be used in a single simulation.
C/C++ and Python APIs: All of Smoldyn's functions can be accessed through either a C/C++ or a Python API.
GPU acceleration
Smoldyn has been refactored twice to run on GPUs, each time offering approximately 200-fold speed improvements. However, neither version supports the full range of features available in the CPU version, and neither is currently supported.
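As a rough illustration of the Python API mentioned above, the following sketch sets up two species diffusing in a box and reacting to form a product. Method names and signatures follow the examples in the Smoldyn documentation, but they are quoted from memory here and should be checked against the installed version:

    import smoldyn  # assumed package name of the official Python bindings

    # Simulation volume: a 100 x 100 x 100 box (arbitrary length units).
    sim = smoldyn.Simulation(low=[0, 0, 0], high=[100, 100, 100])

    # Two diffusing reactants and their product, with diffusion coefficients.
    a = sim.addSpecies("A", difc=1.0)
    b = sim.addSpecies("B", difc=1.0)
    c = sim.addSpecies("C", difc=0.5)

    # Bimolecular reaction A + B -> C at the given rate constant.
    sim.addReaction("fwd", subs=[a, b], prds=[c], rate=0.01)

    # Place 500 molecules of each reactant uniformly in the volume.
    a.addToSolution(500)
    b.addToSolution(500)

    # Run for 10 time units with a 0.01 time step.
    sim.run(stop=10, dt=0.01)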
See also
List of systems biology modeling software
References
External links
Systems biology
Free simulation software
Smoldyn
Biology
714
1,582,494
https://en.wikipedia.org/wiki/Data%20access
Data access is a generic term referring to a process which has both an IT-specific meaning and other connotations involving access rights in a broader legal and/or political sense. In the former sense, it typically refers to software and activities related to storing, retrieving, or acting on data housed in a database or other repository.
Details
Two fundamental types of data access exist:
sequential access (as in magnetic tape, for example)
random access (as in indexed media)
Data access crucially involves authorization to access different data repositories. Data access can help distinguish the abilities of administrators and users. For example, administrators may have the ability to remove, edit and add data, while general users may not even have "read" rights if they lack access to particular information. Historically, each repository (including each different database, file system, etc.) might require the use of different methods and languages, and many of these repositories stored their content in different and incompatible formats. Over the years, standardized languages, methods, and formats have been developed to serve as interfaces between the often proprietary, and always idiosyncratic, specific languages and methods. Such standards include SQL (1974- ), ODBC (ca 1990- ), JDBC, XQJ, ADO.NET, XML, XQuery, XPath (1999- ), and Web Services. Some of these standards enable translation of data from unstructured formats (such as HTML or free-text files) to structured formats (such as XML or SQL). Structures such as connection strings and DBURLs attempt to standardize methods of connecting to databases.
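To make the role of such standards concrete, here is a minimal sketch using Python's built-in sqlite3 module; the SQL statements are the standardized part, while only the connection step is engine-specific. It also contrasts indexed (random) access with a sequential scan:

    import sqlite3

    # Connect to a repository (here, an in-memory SQLite database).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    conn.commit()

    # Random access: look up one row through the primary-key index.
    row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
    print(row[0])

    # Sequential access: scan every row in storage order.
    for record in conn.execute("SELECT id, name FROM users"):
        print(record)
    conn.close()

See also
Right of access to personal data
Data access object
Data access layer
References
Data management
Data analysis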
Data access
Technology
354
22,446,288
https://en.wikipedia.org/wiki/Hebeloma%20alpinum
Hebeloma alpinum is a species of mushroom in the family Hymenogastraceae. It was originally described from Switzerland by Favre as variety alpina of Hebeloma crustuliniforme; G. Bruchet raised it to species status in 1970.
See also
List of Hebeloma species
References
alpinum
Fungi described in 1955
Fungi of Europe
Fungus species
Hebeloma alpinum
Biology
79
57,001,960
https://en.wikipedia.org/wiki/Potassium%20amyl%20xanthate
Potassium amyl xanthate (/pəˈtæsiəm ˌæmɪl ˈzænθeɪt/) is an organosulfur compound with the chemical formula CH3(CH2)4OCS2K. It is a pale yellow powder with a pungent odor that is soluble in water. It is widely used in the mining industry for the separation of ores using the flotation process.
Production and properties
As is typical for xanthates, potassium amyl xanthate is prepared by reacting n-amyl alcohol with carbon disulfide and potassium hydroxide:
CH3(CH2)4OH + CS2 + KOH → CH3(CH2)4OCS2K + H2O
Potassium amyl xanthate is a pale yellow powder. Its solutions are relatively stable between pH 8 and 13, with maximum stability at pH 10.
Related compounds
Sodium amyl xanthate is used in the separation of nickel and copper from their ores.
Safety
The LD50 is 90-148 mg/kg (oral, rat). It is a biodegradable compound.
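Since the preparation above consumes one mole of each reagent per mole of product, reagent requirements follow directly from molar masses. A small worked sketch in Python, using standard atomic weights (figures rounded):

    # Standard atomic weights (g/mol).
    WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999, "S": 32.06, "K": 39.098}

    def molar_mass(composition):
        return sum(WEIGHTS[element] * n for element, n in composition.items())

    product = molar_mass({"C": 6, "H": 11, "O": 1, "S": 2, "K": 1})  # CH3(CH2)4OCS2K
    pentanol = molar_mass({"C": 5, "H": 12, "O": 1})                 # CH3(CH2)4OH
    cs2 = molar_mass({"C": 1, "S": 2})
    koh = molar_mass({"K": 1, "O": 1, "H": 1})

    print(f"product: {product:.2f} g/mol")  # ~202.4 g/mol
    # With 1:1:1 stoichiometry, reagent mass needed per kilogram of product:
    for name, mass in [("n-amyl alcohol", pentanol), ("CS2", cs2), ("KOH", koh)]:
        print(f"{name}: {1000 * mass / product:.0f} g per kg of product")

References
Salts
Organosulfur compounds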
Potassium amyl xanthate
Chemistry
244
34,656,547
https://en.wikipedia.org/wiki/Staggered%20extension%20process
The staggered extension process (also referred to as StEP) is a common technique used in biotechnology and molecular biology to create new, mutated genes that combine qualities of one or more initial genes. The technique itself is a modified polymerase chain reaction with very short (approximately 10 seconds) cycles. In these cycles the elongation of DNA is very brief, adding only a few hundred base pairs, and the partially synthesized fragments anneal with complementary fragments of other strands. In this way, mutations of the initial genes are shuffled, and in the end genes with new combinations of mutations are amplified. The StEP protocol has been found to be useful as a method of directed evolution for the discovery of enzymes useful to industry. References Genetics Molecular biology techniques
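The template-switching idea can be sketched with a toy simulation; it is purely illustrative, and the parent sequences, step size, and random template choice are hypothetical simplifications of the actual biochemistry.

import random

# Toy model of StEP: a growing product is extended a short distance per
# cycle, reannealing to a randomly chosen parent template each time.
TEMPLATES = [
    "AAAAAAAAAAAAAAAAAAAA",  # hypothetical parent gene 1
    "CCCCCCCCCCCCCCCCCCCC",  # hypothetical parent gene 2 (same length)
]
STEP = 4  # bases copied per abbreviated extension cycle

def step_product(templates, step, rng=random):
    length = len(templates[0])
    product = ""
    while len(product) < length:
        template = rng.choice(templates)        # reanneal to a random parent
        end = min(len(product) + step, length)
        product += template[len(product):end]   # copy a short stretch
    return product

random.seed(0)
print(step_product(TEMPLATES, STEP))  # a chimeric sequence mixing both parents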
Staggered extension process
Chemistry,Biology
144
11,293,812
https://en.wikipedia.org/wiki/Hamburg/ESO%20Survey
The Hamburg/ESO Survey is an astrometric star catalogue published by the University of Hamburg. The catalog contains stars between magnitudes 13 and 18 covering the Southern extragalactic sky. The stated goals of the catalog are to: compile samples of high-redshift (1.5 < z < 3.2), bright quasi-stellar objects suited for high-resolution spectroscopy (e.g., for the ESO-VLT); provide targets for ultraviolet spectroscopy with HST; discover new gravitationally lensed systems; construct large flux-limited and bias-free samples of bright low-redshift QSOs and Seyferts for host galaxy studies; determine the local luminosity function of QSOs; and study the evolution of the most luminous part of the QSO population. References External links Astronomical catalogues of stars University of Hamburg
Hamburg/ESO Survey
Astronomy
170
10,896
https://en.wikipedia.org/wiki/Felix%20Bloch
Felix Bloch (; ; 23 October 1905 – 10 September 1983) was a Swiss-American physicist and Nobel physics laureate who worked mainly in the U.S. He and Edward Mills Purcell were awarded the 1952 Nobel Prize for Physics for "their development of new ways and methods for nuclear magnetic precision measurements." In 1954–1955, he served for one year as the first director-general of CERN. Felix Bloch made fundamental theoretical contributions to the understanding of ferromagnetism and electron behavior in crystal lattices. He is also considered one of the developers of nuclear magnetic resonance. Biography Early life, education, and family Bloch was born in Zürich, Switzerland to Jewish parents Gustav and Agnes Bloch. Gustav Bloch, his father, was financially unable to attend University and worked as a wholesale grain dealer in Zürich. Gustav moved to Zürich from Moravia in 1890 to become a Swiss citizen. Their first child was a girl born in 1902 while Felix was born three years later. Bloch entered public elementary school at the age of six and is said to have been teased, in part because he "spoke Swiss German with a somewhat different accent than most members of the class". He received support from his older sister during much of this time, but she died at the age of twelve, devastating Felix, who is said to have lived a "depressed and isolated life" in the following years. Bloch learned to play the piano by the age of eight and was drawn to arithmetic for its "clarity and beauty". Bloch graduated from elementary school at twelve and enrolled in the Cantonal Gymnasium in Zürich for secondary school in 1918. He was placed on a six-year curriculum here to prepare him for University. He continued his curriculum through 1924, even through his study of engineering and physics in other schools, though it was limited to mathematics and languages after the first three years. After these first three years at the Gymnasium, at age fifteen Bloch began to study at the Eidgenössische Technische Hochschule (ETHZ), also in Zürich. Although he initially studied engineering he soon changed to physics. During this time he attended lectures and seminars given by Peter Debye and Hermann Weyl at ETH Zürich and Erwin Schrödinger at the neighboring University of Zürich. A fellow student in these seminars was John von Neumann. Bloch graduated in 1927, and was encouraged by Debye to go to Leipzig to study with Werner Heisenberg. Bloch became Heisenberg's first graduate student, and gained his doctorate in 1928. His doctoral thesis established the quantum theory of solids, using waves to describe electrons in periodic lattices. On March 14, 1940, Bloch married Lore Clara Misch (1911–1996), a fellow physicist working on X-ray crystallography, whom he had met at an American Physical Society meeting. They had four children, twins George Jacob Bloch and Daniel Arthur Bloch (born January 15, 1941), son Frank Samuel Bloch (born January 16, 1945), and daughter Ruth Hedy Bloch (born September 15, 1949). 
Career Bloch remained in European academia, working on superconductivity with Wolfgang Pauli in Zürich; with Hans Kramers and Adriaan Fokker in Holland; with Heisenberg on ferromagnetism, where he developed a description of boundaries between magnetic domains, now known as "Bloch walls", and theoretically proposed a concept of spin waves, excitations of magnetic structure; with Niels Bohr in Copenhagen, where he worked on a theoretical description of the stopping of charged particles traveling through matter; and with Enrico Fermi in Rome. In 1932, Bloch returned to Leipzig to assume a position as "Privatdozent" (lecturer). In 1933, immediately after Hitler came to power, he left Germany because he was Jewish, returning to Zürich, before traveling to Paris to lecture at the Institut Henri Poincaré. In 1934, the chairman of Stanford Physics invited Bloch to join the faculty. Bloch accepted the offer and emigrated to the United States. In the fall of 1938, Bloch began working with the 37-inch cyclotron at the University of California, Berkeley to determine the magnetic moment of the neutron. Bloch went on to become the first professor of theoretical physics at Stanford. In 1939, he became a naturalized citizen of the United States. During WWII, Bloch briefly worked on the atomic bomb project at Los Alamos. Disliking the military atmosphere of the laboratory and uninterested in the theoretical work there, Bloch left to join the radar project at Harvard University. After the war, he concentrated on investigations into nuclear induction and nuclear magnetic resonance, which are the underlying principles of MRI. In 1946 he proposed the Bloch equations, which determine the time evolution of nuclear magnetization (shown below). He was elected to the United States National Academy of Sciences in 1948. Along with Edward Purcell, Bloch was awarded the 1952 Nobel Prize in Physics for his work on nuclear magnetic induction. When CERN was being set up in the early 1950s, its founders were searching for someone of stature and international prestige to head the fledgling international laboratory, and in 1954 Professor Bloch became CERN's first director-general, at the time when construction was getting under way on the present Meyrin site and plans for the first machines were being drawn up. After leaving CERN, he returned to Stanford University, where, in 1961, he was made Max Stein Professor of Physics. In 1964, he was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences. He was also a member of the American Academy of Arts and Sciences and the American Philosophical Society. Bloch died in Zürich in 1983. See also List of Jewish Nobel laureates List of things named after Felix Bloch Footnotes References Further reading Bloch, F.; Staub, H. "Fission Spectrum", Los Alamos National Laboratory (LANL) (through predecessor agency Los Alamos Scientific Lab), United States Department of Energy (through predecessor agency the US Atomic Energy Commission), (August 18, 1943). External links Oral History interview transcript with Felix Bloch on 14 May 1964, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Thomas S.
Kuhn in Palo Alto, California Oral History interview transcript with Felix Bloch on 15 August 1968, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Charles Weiner at Stanford University Oral History interview transcript with Felix Bloch 15 December 1981, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Lillian Hoddeson at Stanford University Felix Bloch Papers, 1931–1987 (33 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries National Academy of Sciences Biographical Memoir Felix Bloch Papers 1905 births 1983 deaths Nobel laureates in Physics Swiss Nobel laureates American Nobel laureates 20th-century American physicists American people of Swiss-Jewish descent Naturalized citizens of the United States People associated with CERN ETH Zurich alumni Experimental physicists Harvard University people Jewish American scientists Jewish American physicists Leipzig University alumni Manhattan Project people Members of the Royal Netherlands Academy of Arts and Sciences Members of the United States National Academy of Sciences Fellows of the American Physical Society Recipients of the Pour le Mérite (civil class) Stanford University Department of Physics faculty 20th-century Swiss Jews Swiss physicists Swiss emigrants to the United States Nuclear magnetic resonance Scientists from Zurich Members of the American Philosophical Society Presidents of the American Physical Society
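For reference, the Bloch equations mentioned above take their standard textbook form (stated here for orientation, not drawn from this entry): dMx/dt = γ(M × B)x − Mx/T2; dMy/dt = γ(M × B)y − My/T2; dMz/dt = γ(M × B)z − (Mz − M0)/T1, where M is the nuclear magnetization, B the applied magnetic field, γ the gyromagnetic ratio, M0 the equilibrium magnetization, and T1 and T2 the longitudinal and transverse relaxation times.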
Felix Bloch
Physics,Chemistry
1,542
49,083,688
https://en.wikipedia.org/wiki/Romanesque%20Revival%20architecture%20in%20the%20United%20Kingdom
Romanesque Revival, Norman Revival or Neo-Norman styles of building in the United Kingdom were inspired by the Romanesque architecture of the 11th and 12th centuries AD. In the United Kingdom it started to appear as an architectural style in the 18th century but reached its greatest popularity in the mid to latter years of the 19th century. The style can be viewed as a strand of Gothic Revival architecture and part of the Historicist or Historismus styles of architecture that became popular in both Europe and Britain during the 19th century. Early examples of the style in Germany of the 1820s and 1830s are referred to as Rundbogenstil or round-arched style. In Britain the style was introduced by architects and their patrons who had been on tours in Europe, and it appears that the German and British styles of Romanesque developed largely independently. Initially in Britain the style was used for church building, but as the 19th century progressed it was adapted for public buildings, museums, schools and commercial buildings, but rarely for domestic buildings. By the start of the 20th century it had gone out of fashion and only occasionally were examples of the style built. Origins The development of the Norman revival style or Neo-Norman took place over a long time in the British Isles, starting with Inigo Jones's re-fenestration of the White Tower of the Tower of London in 1637–38 and work at Windsor Castle by Hugh May for Charles II, but this was little more than restoration work. More surprising is the west door of Kenilworth Church, inserted in the tower in 1570, probably at the time of a visit of Queen Elizabeth. This appears to have been a historic arch, sourced possibly from an unknown monastic building. Another early example of Romanesque revival is the south porch of North Scarle Church in Lincolnshire. Pevsner suggests that it might be Elizabethan, but the late 17th century seems more likely, as the oak door seems to be original and probably of that date. In the 18th century the use of round-arched windows was thought of as being Saxon rather than Norman, and examples of buildings with round-arched windows include Shirburn Castle in Oxfordshire, Wentworth in Yorkshire and Enmore Castle in Somerset. At Allerton Mauleverer in Yorkshire, St Martin's Church was re-built for Richard Arundell in 1745. The church has been described as proto-Neo-Norman. While it gives the impression of a Romanesque revival church, with a massive crossing tower and round-headed windows, it also has windows with Gothic tracery and a hammerbeam nave roof. The architect is thought to have been John Vardy, surveyor to the Office of Works. A further example of church building in the Romanesque revival style took place in 1792, when Elisa Wingfield commissioned plans from Samuel Pepys Cockerell for the conservation and rebuilding of St Peter's Church, Tickencote in Rutland. The church, which contained much notable Romanesque decoration and an elaborate chancel arch, appears to have been close to collapse. Cockerell encased the chancel, keeping the arch in position, but the outer walls were completely re-built and the exterior ornamentation of arcades and round-headed windows was replaced in new stonework. A new tower over a porch was built on the south side, which gave added stability to the older structure. The nave was completely re-built using some new Romanesque motifs and copying others, such as the engaged columns, from those on the exterior of the chancel.
This style of Romanesque revival architecture is very similar to the style that emerges in the 1840s under architects such as Thomas Penson and Benjamin Ferrey. Tickencote could be considered to mark the start of the Romanesque revival in church architecture. In Scotland the style started to emerge with the Duke of Argyll's castle at Inverary, started in 1744, and castles by Robert Adam at Culzean (1771), Oxenfoord (1780–2), Dalquharran (1782–5), and Seton Palace (1792). In England James Wyatt used round-arched windows at Sandleford Priory, Berkshire in 1780–89, and the Duke of Norfolk started to rebuild Arundel Castle, while Eastnor Castle in Herefordshire was built by Robert Smirke between 1812 and 1820. At Eastnor Smirke combines a rugged Romanesque with more subtle Gothic window tracery. The elements of Eastnor are further developed about 1840 at Devizes Castle, where the Bath architect Henry Goodridge combines the castellated Gothic style of Eastnor with a Romanesque entrance arch and Romanesque windows. Development of archaeologically correct Romanesque It was at this point that the Norman Revival became a recognisable architectural style. In 1817 Thomas Rickman published his An Attempt to Discriminate the Styles of English Architecture from the Conquest to the Reformation. It was now realised that round-arch architecture in the British Isles was largely Romanesque, and it came to be described as Norman rather than Saxon. This distinction was finally recognised when Rickman's article in Archaeologia (1832–33) was published by the Society of Antiquaries. The start of an archaeologically correct Norman Revival can be recognised in the architecture of Thomas Hopper. His first attempt at this style was at Gosford Castle in Armagh in Ireland, but far more successful was his Penrhyn Castle near Bangor in North Wales. This was built for the Pennant family between 1820 and 1837. An example of Romanesque revival architecture being used for follies came in 1832, when George Proctor added a neo-Norman gatehouse, summerhouse and curtain wall to the site of a motte-and-bailey castle at Benington Lordship in Hertfordshire. The arches and decorative features of the gatehouse and summerhouse were in Pulhamite, a form of cast stone that was manufactured by James Pulham and Son. The Neo-Norman style did not catch on for domestic buildings, though many country houses and mock castles were built in the Castle Gothic or castellated style during the Victorian period, which were mixed Gothic styles. Church architecture and the Romanesque Revival However, the Norman Revival did catch on for church architecture. It was Thomas Penson, a Welsh architect who would have been familiar with Hopper's work at Penrhyn, who developed Romanesque Revival church architecture. Penson was influenced by French and Belgian Romanesque architecture, and particularly the earlier Romanesque phase of German Brick Gothic. At St David's, Newtown, 1843–47, and St Agatha's, Llanymynech, 1845, he copies the tower of St. Salvator's Cathedral, Bruges. Other examples of Romanesque revival by Penson are Christ Church, Welshpool, 1839–1844, and the porch to Llangedwyn Church. He was an innovator in the use of terracotta to produce decorative Romanesque mouldings, saving on the expense of stonework. Penson's last church in the Romanesque Revival style was Rhosllannerchrugog, Wrexham, of 1852. Sara Losh and Wreay Church A most remarkable example of Romanesque church building was St Mary's Church at Wreay, near Carlisle.
This was designed by Sara Losh and built between 1841 and 1842. Losh had travelled widely on the Continent, particularly in Italy, and drew inspiration from Romanesque sources. Losh is known to have read Thomas Hope's Historical Study of Architecture (1815), which uses the term Lombardic for the style brought into Italy from the early Christian churches of Constantinople, and she described St Mary's as being an unpolished mode of building that most approximates to early Saxon or modified Lombard. This statement suggests that at the time of its design Losh was not aware of Thomas Rickman's re-classification of Saxon and Norman architecture. While the appearance and layout of the church may be considered Romanesque, her free interpretation of the decoration on the woodwork and the arched stone windows and doors anticipates the styles of the Arts and Crafts movement, while the arcading of the apse certainly has a Byzantine feel to it. Sara Losh gallery Other Early Victorian architects working in the Romanesque revival style The early years of the 1840s saw a considerable upsurge of interest in the developing Romanesque revival style by some of the leading architects of the period. During the 19th century the architecture selected for Anglican churches depended on the churchmanship of particular congregations. Whereas high church and Anglo-Catholic congregations, influenced by the Oxford Movement, built in the Gothic Revival style, low church and Broad church congregations of the period often built in the Romanesque Revival style. The architects specialising in Romanesque revival took their designs from different local styles of the European Romanesque. Edmund Sharpe Another architect who popularised the Romanesque revival style was Edmund Sharpe, who set up his practice at Lancaster in 1835. At Cambridge he had been a great friend of William Whewell and presumably of the polymath and architectural historian Professor Robert Willis. He was awarded a travelling scholarship to study the early architecture of Germany and Southern France and supplied Thomas Rickman with information. However, Sharpe, despite being an architectural historian of some note, built churches that were much freer interpretations of the Romanesque and German Brick Gothic, and might be considered less archaeologically correct. Three of Sharpe's earliest churches – St Saviour, Bamber Bridge (1836–37); St Mark, Witton (1836–38); and St Paul, Farington, near Leyland (1839–40) – were in the Romanesque style, which he chose because "no style can be worked so cheap as the Romanesque". They turned out to be little more than rectangular 'preaching boxes'… with no frills and little ornamentation; and many of them were later enlarged. The only subsequent churches in which Sharpe used Romanesque elements were St Mary's Church, Conistone, in Wharfedale (1846), and St Paul, Scotforth, in south Lancaster (1874–76). Sharpe's final essay in the Romanesque Revival style, St Paul's Church, Scotforth, was described by Nikolaus Pevsner as a strange building and an anachronism, almost beyond belief. Sharpe had retired from his architectural practice in 1851. He then pursued a career in railway engineering. In 1874, when he was aged 68, he returned to architecture and designed this church, which was opened in 1876. The church used terracotta in a similar fashion to his earliest churches, and it can only be assumed that its anachronistic appearance arose because he had reused a design prepared at least 20 years earlier.
Romanesque by Sharpe Benjamin Ferrey The work of Benjamin Ferrey can be considered similar to that of Thomas Penson, based on the English and French Romanesque traditions. An early example is his church of St Nicholas at East Grafton in Wiltshire, built in 1842–44. Here the chancel terminates in an apse and the west front has overlapping arcading and chevron decoration over the entrance door. The church has a central crossing tower and arcaded windows on the west front, and is approached through a stone Romanesque lych gate. A more important work by Ferrey is the Church of St James the Great at Morpeth, Northumberland. The church, which is supposed to be modelled on the Cathedral of Monreale in Sicily, was built between 1844 and 1846. The unusual arched gate is acceptably Sicilian in inspiration and is similar to the portico at the cathedral at Monreale, but the rest of the composition has much more in common with French and Belgian Romanesque. A further church in this style is Christ Church in Melplash in Dorset of 1845–46. This has a heavy crossing tower and in many respects resembles the Church of St James the Great at Morpeth. Romanesque by Ferrey William Perkin of Leeds St Michael and All Angels, Barton le Street, North Yorkshire, was very extensively rebuilt in 1869–71 by William Perkin and his sons of Leeds. Unlike Tickencote, there is no record of the church that was replaced. The building contains an extensive and impressive range of genuine Romanesque work, but these elements seem to have been largely repositioned. The pews and some other furnishings are in the Romanesque revival style. The rebuilding was financed by Hugo Francis Meynell-Ingram. Barton le Street Gallery Romanesque gallery Chapel architecture In the 1860s, Romanesque became a popular style for Dissenting chapels. Possibly this was intended to give the impression that they were churches similar to Anglican churches, and they are often referred to as churches rather than chapels. This form of architecture was often chosen by the Baptists. It is an adapted and debased form of Italianate Romanesque, as seen at the Potters Bar Old Baptist Church in Hertfordshire in 1859. The Baptist Church in Brown Street, Salisbury, built of bright red brick in the 1880s, is similar; it is noticeably more archaeologically correct in its use of arched windows and the entrance door. The Mint Lane Baptist Chapel in Lincoln, of 1870 by the Lincoln architects Drury and Mortimer, is in a debased Italianate Romanesque revival style but has a surprising tower in the castellated Gothic style. Synagogues The Romanesque revival was also a style of architecture that appealed to Jewish communities, and there are examples of synagogues in this style. One of these is the synagogue built adjacent to the docks at Grimsby in Lincolnshire. Later Romanesque Revival churches in Scotland The earliest example of Romanesque revival architecture in Scotland is the West Kirk, Sandgate, Ayr, by the architect William Gale for the Presbyterian Free Church, built in 1844–45. The columns used for the windows are very similar to those used by Edmund Sharpe at Scotforth church. In Scotland the Presbyterians occasionally built in the Romanesque revival style, but only later in the 19th century.
St Conan's Kirk, Lochawe, Argyll and Bute, is an extraordinary early 20th century church on the shore of Loch Awe. Built by Walter Douglas Campbell, brother of the 1st Lord Blythswood, it was started in 1881 but not completed until 1930. The kirk is largely Romanesque in style, but is also mixed with other completely unrelated styles. Another church is Cranshaws in Berwickshire. Here the 1739 church was rebuilt in 1899 by the architect George Fortune in the Romanesque Revival style. Sir Alfred Waterhouse and the development of Romanesque architecture Interest in Romanesque Revival architecture was renewed by Sir Alfred Waterhouse's Natural History Museum in Kensington, which was built between 1873 and 1881. It was built in shades of buff-coloured terracotta and opened the way for the Romanesque style to be used for other buildings, apart from churches. Waterhouse tended to mix architectural styles, often using decorative Romanesque arches to provide impressive entrances for his buildings, and popularised the deep red terracotta produced by manufacturers in the Wrexham and Ruabon area of North-East Wales. This included parts of the Prudential Insurance building in London and the entrance to Strangeways Prison. Buildings by Waterhouse Later 19th and 20th century Waterhouse inspired other architects to build using terracotta. The material was used by William Watkins, a Lincoln architect, who very successfully employed a deep red Ruabon terracotta for the Lincoln Christ's Hospital Girls School of 1893. An earlier building by Watkins, of 1873, was originally built as a warehouse at 42 Silver Street, Lincoln. This uses artificial stone for Romanesque columns and arches to embellish the frontage. St Aidan's Church, Leeds St Aidan's Church, Leeds is a massive basilica church built in the tradition of Waterhouse, using red terracotta brickwork. The design was won by competition in 1889 and the church was built between 1891 and 1894 by the Newcastle architects RJ Johnson and A Crawford Hick. The style is a hybrid of Italian, French and German Romanesque, and the corbel table or moulded stringcourse below the eaves was based on that of Lund Cathedral in Sweden. The inside is sumptuously decorated, with mosaic decoration by Sir Frank Brangwyn in the apse and a multi-coloured marble font with Romanesque arcading. The interior columns of the basilica have ornamental capitals in the Byzantine style. 20th century In the 20th century the use of the Romanesque revival style in church architecture appears to be restricted to brick-built churches, and often these churches have similarities with churches built in the Byzantine revival style, which became more popular in the early years of the 20th century. Churches in the Romanesque revival style include All Saints, Bute Avenue, Petersham, Richmond-upon-Thames. This Romanesque revival church was designed by J Kelly and completed in 1908. An even later example of the style is St Francis, Linden Road, Bournville, consecrated in 1925. This church was designed by Harvey and Wicks, the architects of the Bournville Estate. References Sources and further reading External links Architectural history British architectural styles Revival architectural styles Victorian architectural styles
Romanesque Revival architecture in the United Kingdom
Engineering
3,412
39,114,680
https://en.wikipedia.org/wiki/Material%20criticality
Material criticality is the determination of which of the materials flowing through an industry or economy are most important to the production process. It is a sub-category within the field of material flow analysis (MFA), which is a method to quantitatively analyze the flows of materials used for industrial production in an industry or economy. MFA is a useful tool to assess what impacts materials used in the industrial process have and how efficiently a given process uses them. Material criticality evaluation criteria consist of three dimensions: supply risk, vulnerability to supply restriction, and environmental implications. Supply risk comprises several components, and changes based on short or long-term temporal outlooks. Vulnerability to supply restriction is dependent on the organizational level (global, national, and corporate). This methodology was developed from a United States National Research Council model, and is intended to help stakeholders make strategic decisions about the materials used in their production process. In the globalized economy, scarcity of essential materials in the industrial supply chain is a growing concern. As a result, nations and other large institutions are increasingly analyzing a material's criticality and seek to minimize any risk, restriction, or environmental impact associated with the material. Supply risk Supply risk is one of three dimensions that determine a material's criticality. Supply risk can be evaluated for the medium term (5–10 years, typically most appropriate for corporations and governments) and the long term (multiple decades, usually considered by long-range planners, futurists, and sustainability scholars). Supply risk consists of three components: Geological, Technological, and Economic; Social and Regulatory; Geopolitical. The first component focuses on the availability of the material's supply, and the last two focus on how access to that supply could be restricted. The components are assessed on a 0-100 scale for both medium and long-term risk, with higher values indicating higher risk. The aggregated scores yield a material's supply risk (a toy numerical illustration appears at the end of this article). Geological, Technological, and Economic The geological, technological and economic components of supply risk concern the most basic questions about a material's availability: geologically, how much of the material is there; technologically, is it feasible to obtain; and economically, is it practical to do so. This component comprises two indicators of equal weight. The first looks at the relative abundance of the material, resulting in a "depletion time", or relatively how much of the material has not been consumed. The second is the percentage of a given material that is extracted as a companion or trace material, i.e., as a by-product of another material. This is used to understand the depletion rates of materials obtained as by-products of extraction. Quoting Graedel et al., "One should not regard the result as how long it will be until we run out, but rather as a useful relative indicator of the contemporary balance between supply and demand for the metal in question." In practice, geological, technological, economic, political and other aspects of criticality are interconnected. For example, new exploration technologies can alter geological availability, and shortages can lead to higher prices, which can in turn promote technological innovation. Social and Regulatory The social and regulatory components of a material's supply risk can impede or expedite the development of mineral resources.
Regulations can hinder the reliability of mineral resource supply. Social perceptions of the negative environmental and socioeconomic effects on communities typically fuel these regulations. Material criticality employs the policy potential index (PPI) and human development index (HDI) indicators to quantify the social and regulatory components of supply risk evaluation. Geopolitical The geopolitical component of a material's supply risk takes into account how governmental decisions and stability can significantly impact a material's accessibility. For example, politically unstable and war-torn nations pose a greater risk of supply restriction than developed, peaceful nations. Material concentration, geographic location, security, socio-economic distress, and political stability are all analyzed to determine how much the geopolitical component should factor into a material's supply risk. Metal scarcity Metals are among the most important materials to the industrialized world; everything from infrastructure to personal electronic devices heavily relies on metals for production. As a result, global supply is being increasingly monitored and examined. For example, a recent study analyzed the varying levels of risk to copper supplies around the world. Another study found that increasing metal scarcity could alter typical industrial behavior. It also noted that metals heavily concentrated in certain geographic areas, such as strontium in China or the platinum group in South Africa and Russia, pose greater risk for supply disruptions. Since the late 1990s China has had a near monopoly on a variety of rare-earth metals commonly used in everyday products. Much to the surprise of the international trade community, China began restricting exports of these metals in 2009. The U.S. and World Trade Organization immediately protested; however, China has not changed its stance. This is a clear example of a geopolitically based supply risk. To combat this supply disruption other countries, such as Japan, are attempting new and innovative methods of mining these rare-earth metals. Vulnerability to Supply Restriction Vulnerability to Supply Restriction (VSR) is an index that indicates how likely a particular element is to be restricted due to usage and availability. The importance of a particular element at the social, economic and political level can be evaluated at three organizational levels: corporate, national and global. In total, VSR comprises eight indicator categories for the corporate and national levels, and four for the global level. VSR is important in evaluating each significant end-use application of a material separately. The current approach recognizes that indicators may be common to all levels or specific to one or two. The three organizational levels use an adjusted 0-100 scale with four bins, each with a range of 25 points. Quantifying the VSR is based on a material's importance and substitutability, and an ability to innovate can be included at some organizational levels. Global VSR at the global level is focused on the intrinsic value of a material to the society of a country or countries and the degree to which substitution is possible. It is not a short-term evaluation, and none of its indicators are evaluated as such. The global level's matrix does not include as many categories as the corporate and national level VSR evaluations do. It is evaluated only by importance and substitutability. 1) Importance This consists of an indicator labeled percentage of population utilizing.
2) Substitutability This comprises substitute performance, substitute availability, and the environmental impact ratio. National Vulnerability to supply restriction at the national level looks at the importance of an element through domestic industries and the country's population. It can be evaluated on either a short- or long-term basis, and can be regarded as intermediate in time horizon. 1) Importance Composed of two indicators: national economic importance and percentage of population utilizing the element. 2) Substitutability Indicators are the same as at the corporate level, except that the price ratio is now labeled the net import price ratio. 3) Susceptibility This is no longer labeled "ability to innovate" as it was at the corporate level. It is now "susceptibility", and its indicator is no longer corporate innovation. The focus is now (1) net import reliance and (2) the global innovation index. Corporate At the corporate level, VSR is used to find the importance of an element with regard to (1) the corporation's current product lines and (2) its future product lines, each with economic considerations, as well as (3) its ability to innovate. The corporate level is used to reinforce the belief that innovative corporations adapt more quickly to supply restriction. Emphasis is placed on economic considerations, and sets of varied scenarios are developed so that an estimate of how product lines might evolve is available. 1) Importance Two indicators: national economic importance and percentage of population utilizing. 2) Substitutability Substitutability evaluates (1) substitute performance, (2) substitute availability, (3) the environmental impact ratio, and (4) the price ratio. This evaluates the possible implications of an alternative material or metal in case the one at hand has a larger environmental impact or is in short supply. 3) Ability to Innovate A corporation that uses natural resources is dependent upon that resource, and a disruption in its supply can impact revenues and market share. A competitor's ability to find a substitute or a more efficient means of extraction could overtake a corporation. Toyota vs Ford and Lithium Lithium is used in the electric car batteries of both Toyota and Ford. Lithium is an energy critical element (ECE) and a non-renewable resource. About 100 times more lithium is necessary in an electric car battery than in a standard laptop battery. As society tries to lessen fossil fuel usage through the use of electric vehicles, lithium will be subjected to increased demand. At the corporate level, lithium must be evaluated in terms of its importance to the company and the extent to which it can be replaced in the company's products. The batteries currently most used in both Ford and Toyota electric cars are lithium-ion batteries. Ford Motor Company's senior manager of energy storage research stated, "There are foreseen limits of lithium ion technology," a remark made in coordination with a graph projecting diminishing returns by 2017. According to Toyota's environmental technology corporate strategy, "As Toyota anticipates the widespread use of electric vehicles in the future, we have begun research in developing next-generation secondary batteries with performance that greatly exceeds that of lithium-ion batteries." At the national level, lithium-producing countries must consider their national lithium policies. The major lithium-producing countries include Bolivia, Chile, Argentina, Afghanistan, and Tibet.
The high demand for lithium could bring large revenues into these resource-rich nations: a ton of lithium can sell for anywhere between $4500 and $5200, and the purer lithium that is used in batteries sells at the upper end of that interval. Bolivia's current reserve is estimated to be around 100 million tons. By comparison, the current market value of a ton of zinc is roughly $2670. Finally, at the global level, highly developed countries are the ones extracting resources and bringing industry into poorer countries. In terms of the population utilizing lithium, a relatively large number of people use it, with technology encompassing a large percentage of our interactions and activities in the world. With some villages in Africa operating more cell phones than bathrooms, it is reasonable to estimate that a large percentage of the world uses lithium, and to predict that its usage will increase as industrialization and technological dependency grow. In terms of Toyota and Ford's lithium usage, it is important to note that as of 2005, global zinc production could supply enough zinc-air batteries to power 1 billion electric vehicles, while lithium reserves could only power ten million lithium-ion powered vehicles. Environmental implication The burden that various materials impose on the environment is considered in material criticality. There are numerous negative effects that materials can have on the environment due to their toxicity, the amounts of energy and water used in processing, and their emissions into the air, water and land. The purpose of including an evaluation of environmental implications is to transfer information on potential impacts of using a specific material to product designers, government officials, and nongovernmental agencies. The environmental implication evaluation can use data from a source like the Ecoinvent Database. The ecoinvent database provides a single score for the negative impact on human health and ecosystems on a scale from 0 to 100. The scope of the score is cradle-to-gate. Environmental implications can also be reflected in social attitudes that may pose a barrier to the development of resources in the form of objections to extraction. These objections may arise from a fear of how a new extraction site could negatively impact the surrounding communities and ecosystems. This barrier can affect the reliability and security of resources. Improved technology and infrastructure in the recycling, re-use, and more efficient use of materials could mitigate some of the negative environmental impacts associated with them. This could also improve the reliability and security of resources. An example of environmental implications is the ban on lead (Pb) in many products. Once government officials and product designers became aware of the dangers of lead, government and company policies started prohibiting its use. Criticality focus Material criticality is a relatively new field of research. As global industrial activity continues to increase, a wide array of stakeholders are paying more attention to material criticality in order to assess how production processes may be impacted and made more efficient. British Petroleum, the United States Department of Energy, and the European Union have all established review procedures to determine material criticality and how it affects their behavior. Additionally, there has been a growing body of academic study in this field, led by Thomas Graedel of Yale.
Material criticality is likely to remain an essential factor in the industrial production process for the foreseeable future. See also References External links Ecoinvent Database Industrial ecology
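The toy sketch referred to above illustrates the kind of aggregation the methodology describes; the indicator names, weights, and values here are hypothetical and are not drawn from the published criticality studies.

def aggregate(scores, weights=None):
    """Combine 0-100 indicator scores into one 0-100 score."""
    weights = weights or [1 / len(scores)] * len(scores)  # default: equal weight
    assert all(0 <= s <= 100 for s in scores)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical indicator values for one material, medium-term outlook.
geological = aggregate([40, 65])         # depletion time, companion-metal fraction
social_regulatory = aggregate([55, 30])  # PPI-based, HDI-based
geopolitical = aggregate([70])
print(f"supply risk: {aggregate([geological, social_regulatory, geopolitical]):.0f}/100")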
Material criticality
Chemistry,Engineering
2,627
10,977,794
https://en.wikipedia.org/wiki/Technetium-99
Technetium-99 (99Tc) is an isotope of technetium which decays with a half-life of 211,000 years to stable ruthenium-99, emitting beta particles, but no gamma rays. It is the most significant long-lived fission product of uranium fission, producing the largest fraction of the total long-lived radiation emissions of nuclear waste. Technetium-99 has a fission product yield of 6.0507% for thermal neutron fission of uranium-235. The metastable technetium-99m (99mTc) is a short-lived (half-life about 6 hours) nuclear isomer used in nuclear medicine, produced from molybdenum-99. It decays by isomeric transition to technetium-99, a desirable characteristic, since the very long half-life and type of decay of technetium-99 imposes little further radiation burden on the body. Radiation The weak beta emission is stopped by the walls of laboratory glassware. Soft X-rays are emitted when the beta particles are stopped, but as long as the body is kept more than 30 cm away these should pose no problem. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. Role in nuclear waste Due to its high fission yield, relatively long half-life, and mobility in the environment, technetium-99 is one of the more significant components of nuclear waste. Measured in becquerels per amount of spent fuel, it is the dominant producer of radiation in the period from about 10⁴ to 10⁶ years after the creation of the nuclear waste. The next shortest-lived fission product is samarium-151 with a half-life of 90 years, though a number of actinides produced by neutron capture have half-lives in the intermediate range. Releases An estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment up to 1994 by atmospheric nuclear tests. The amount of technetium-99 from civilian nuclear power released into the environment up to 1986 is estimated to be on the order of 1000 TBq (about 1600 kg), primarily by outdated methods of nuclear fuel reprocessing; most of this was discharged into the sea. In recent years, reprocessing methods have improved to reduce emissions, but the primary release of technetium-99 into the environment is by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995–1999 into the Irish Sea. From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year. In the environment The long half-life of technetium-99 and its ability to form an anionic species make it (along with 129I) a major concern when considering long-term disposal of high-level radioactive waste. Many of the processes designed to remove fission products from medium-active process streams in reprocessing plants are designed to remove cationic species like caesium (e.g., 137Cs, 134Cs) and strontium (e.g., 90Sr). Hence the pertechnetate escapes through these treatment processes. Current disposal options favor burial in geologically stable rock. The primary danger with such a course is that the waste is likely to come into contact with water, which could leach radioactive contamination into the environment. The natural cation-exchange capacity of soils tends to immobilize plutonium, uranium, and caesium cations. However, the anion-exchange capacity is usually much smaller, so minerals are less likely to adsorb the pertechnetate and iodide anions, leaving them mobile in the soil. For this reason, the environmental chemistry of technetium is an active area of research.
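As a quick check on the timescales quoted above, the following sketch computes the fraction of an initial technetium-99 inventory remaining after a given time from the standard exponential-decay law; the only input taken from the article is the 211,000-year half-life.

import math

HALF_LIFE_YEARS = 211_000  # technetium-99

def fraction_remaining(t_years):
    """Exponential decay: N(t)/N0 = exp(-ln(2) * t / half-life)."""
    return math.exp(-math.log(2) * t_years / HALF_LIFE_YEARS)

for t in (1e4, 1e5, 1e6):
    print(f"after {t:.0e} years: {fraction_remaining(t):.3f} remaining")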
Separation of technetium-99 Several methods have been proposed for technetium-99 separation, including crystallization, liquid-liquid extraction, molecular recognition methods, volatilization, and others. In 2012 the crystalline compound Notre Dame Thorium Borate-1 (NDTB-1) was presented by researchers at the University of Notre Dame. It can be tailored to safely absorb radioactive ions from nuclear waste streams. Once captured, the radioactive ions can then be exchanged for higher-charged species of a similar size, recycling the material for re-use. In laboratory tests, the NDTB-1 crystals removed approximately 96 percent of technetium-99. Transmutation of technetium to stable ruthenium-100 An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. This transmutation process bombards technetium-99 (as a metal target) with neutrons, forming the short-lived technetium-100 (half-life 16 seconds), which decays by beta decay to stable ruthenium-100. Given the relatively high market value of ruthenium and the particularly undesirable properties of technetium, this type of nuclear transmutation appears particularly promising. See also Isotopes of technetium List of elements facing shortage Technetium-99m References Fission products Isotopes of technetium Radiochemistry Radiopharmaceuticals
Technetium-99
Chemistry
1,088
35,173,784
https://en.wikipedia.org/wiki/Eichler%E2%80%93Shimura%20isomorphism
In mathematics, Eichler cohomology (also called parabolic cohomology or cuspidal cohomology) is a cohomology theory for Fuchsian groups, introduced by Eichler, that is a variation of group cohomology analogous to the image of the cohomology with compact support in the ordinary cohomology group. The Eichler–Shimura isomorphism, introduced by Eichler for complex cohomology and by Shimura for real cohomology, is an isomorphism between an Eichler cohomology group and a space of cusp forms. There are several variations of the Eichler–Shimura isomorphism, because one can use either real or complex coefficients, and can also use either Eichler cohomology or ordinary group cohomology. There is also a variation of the Eichler–Shimura isomorphisms using l-adic cohomology instead of real cohomology, which relates the coefficients of cusp forms to eigenvalues of Frobenius acting on these groups. Deligne used this to reduce the Ramanujan conjecture to the Weil conjectures, which he later proved. Eichler cohomology If G is a Fuchsian group and M is a representation of it, then the Eichler cohomology group H̃¹(G, M) is defined to be the kernel of the map from H¹(G, M) to ∏c H¹(Gc, M), where the product is over the cusps c of a fundamental domain of G, and Gc is the subgroup fixing the cusp c. The Eichler–Shimura isomorphism is an isomorphism between the space of cusp forms on G of weight n + 2 and the first Eichler cohomology of the group G with coefficients in the G-module of homogeneous polynomials of degree n in two variables, whose rank depends on n (Shimura, "Introduction to the arithmetic theory of automorphic functions", Theorem 8.4). References Modular forms
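Written out concretely (a standard formulation, stated here for orientation rather than taken from the text above): with Vn the module of homogeneous polynomials of degree n in two variables, the complex version reads H̃¹(G, Vn ⊗ C) ≅ S_{n+2}(G) ⊕ S̄_{n+2}(G), where S_{n+2}(G) is the space of cusp forms of weight n + 2 on G and the bar denotes complex conjugation; with real coefficients the right-hand side becomes S_{n+2}(G) regarded as a real vector space.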
Eichler–Shimura isomorphism
Mathematics
413
20,580,874
https://en.wikipedia.org/wiki/Thrombolite
Thrombolites (from Ancient Greek θρόμβος thrómbos meaning "clot" and λῐ́θος líthos meaning "stone") are clotted accretionary structures formed in shallow water by the trapping, binding, and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Structures Thrombolites have a clotted structure without the laminae of stromatolites. Each clot within a thrombolite mound is a separate cyanobacterial colony. The clots are on the scale of millimetres to centimetres and may be interspersed with sand, mud or sparry carbonate. Clots that make up thrombolites are called thromboids to avoid confusion with other clotted textures. The larger clots make up more than 40% of a thrombolite's volume, and each clot has a complex internal structure of cells and rimmed lobes, resulting primarily from calcification of the cyanobacterial colony. Very little sediment is found within the clots because the main growth method is calcification rather than sediment trapping. There is active debate about the size of thromboids, with some seeing thromboids as a macrostructural feature (domical hemispheroid) and others viewing thromboids as a mesostructural feature (random polylobate and subspherical mesoclots). Types There are two main types of thrombolites: Calcified microbe thrombolites This type of thrombolite contains clots that are dominantly composed of calcified microfossil components. These clots do not have a fixed form or size and can expand vertically. Furthermore, burrows and trilobite fragments can exist in these thrombolites. Coarse agglutinated thrombolites This type of thrombolite is composed of small openings that trap fine-grained sediments. They are also known as "thrombolitic-stromatolites" due to their close relation to stromatolites of the same composition. Because they trap sediment, their formation is linked to the rise of algal-cyanobacterial mats. Differences from stromatolites Thrombolites can be distinguished from other microbialites, such as stromatolites, by their massive fabric, which is characterized by macroscopic clots. Stromatolites are similar but consist of layered accretions. Thrombolites appear with random patterns that can be seen by the naked eye, while stromatolites have the texture of built-up layers. Ancient fossil record Calcified microbe thrombolites occur in shallow-marine sedimentary rocks from the Neoproterozoic and early Palaeozoic. Locations Thrombolites are rare on modern Earth, but exist in areas of groundwater discharge with high concentrations of nutrients and organic ions, such as shallow seawater, freshwater and saltwater lakes, and streams. Thrombolites are now found in only a few places in the world, including: Laguna Negra, Catamarca, Argentina Basin Lakes and Blue Lake, Australia Lake Clifton, Australia Lake Richmond, Australia Lake Thetis, Australia Flower's Cove, Canada Manito Lake and Pavilion Lake, Canada Lakes Nuoertu and Huhejaran, China Kiritimati Atoll, Kiribati Cuatro Ciénegas and Lake Alchichica, Mexico Ciocaia, Romania Lake Van and Salda Lake, Turkey Green Lake, United States Lake Sarmiento, Chile References Trace fossils Cyanobacteria
Thrombolite
Biology
758
3,841,774
https://en.wikipedia.org/wiki/Nokia%20Business%20Center
Nokia Business Center (NBC) was a mobile email solution by Nokia, providing push e-mail and (through a paid-for client upgrade) calendar and contact availability to mobile devices. The server ran on Red Hat Enterprise Linux. The product was discontinued in 2014. External links Press Release about support for IBM Lotus Notes and Domino addition to NBC Nokia services Mobile web
Nokia Business Center
Technology
73
11,849,160
https://en.wikipedia.org/wiki/Tyrosine%20kinase%202
Non-receptor tyrosine-protein kinase TYK2 is an enzyme that in humans is encoded by the TYK2 gene. TYK2 was the first member of the JAK family to be described (the other members are JAK1, JAK2, and JAK3). It has been implicated in IFN-α, IL-6, IL-10 and IL-12 signaling. Function This gene encodes a member of the tyrosine kinase family and, more specifically, of the Janus kinase (JAK) protein family. This protein associates with the cytoplasmic domain of type I and type II cytokine receptors and propagates cytokine signals by phosphorylating receptor subunits. It is also a component of both the type I and type III interferon signaling pathways. As such, it may play a role in anti-viral immunity. Cytokines play pivotal roles in immunity and inflammation by regulating the survival, proliferation, differentiation, and function of immune cells, as well as cells from other organ systems. Hence, targeting cytokines and their receptors is an effective means of treating such disorders. Type I and II cytokine receptors associate with Janus family kinases (JAKs) to effect intracellular signaling. Cytokines including interleukins, interferons and hemopoietins activate the Janus kinases, which associate with their cognate receptors. The mammalian JAK family has four members: JAK1, JAK2, JAK3 and tyrosine kinase 2 (TYK2). The connection between JAKs and cytokine signaling was first revealed when a screen for genes involved in interferon type I (IFN-1) signaling identified TYK2 as an essential element, which is activated by an array of cytokine receptors. TYK2 has broader and more profound functions in humans than previously appreciated on the basis of analysis of murine models, which indicate that TYK2 functions primarily in IL-12 and type I IFN signaling. TYK2 deficiency has more dramatic effects in human cells than in mouse cells. However, in addition to IFN-α and -β and IL-12 signaling, TYK2 has major effects on the transduction of IL-23, IL-10, and IL-6 signals. Since IL-6 signals through the gp130 receptor chain that is common to a large family of cytokines, including IL-6, IL-11, IL-27, IL-31, oncostatin M (OSM), ciliary neurotrophic factor, cardiotrophin 1, cardiotrophin-like cytokine, and LIF, TYK2 might also affect signaling through these cytokines. Recently, it has been recognized that IL-12 and IL-23 share ligand and receptor subunits that activate TYK2. IL-10 is a critical anti-inflammatory cytokine, and IL-10−/− mice suffer from fatal, systemic autoimmune disease. TYK2 is activated by IL-10, and its deficiency affects the ability to generate and respond to IL-10. Under physiological conditions, immune cells are, in general, regulated by the action of many cytokines, and it has become clear that cross-talk between different cytokine-signalling pathways is involved in the regulation of the JAK–STAT pathway. Role in inflammation It is now widely accepted that atherosclerosis is a result of cellular and molecular events characteristic of inflammation. Vascular inflammation can be caused by upregulation of Ang-II, which is produced locally by inflamed vessels and induces synthesis and secretion of IL-6, a cytokine responsible for induction of angiotensinogen synthesis in the liver through the JAK/STAT3 pathway. This pathway is activated through high-affinity membrane receptors on target cells: the IL-6R chain recruits gp130, which is associated with the tyrosine kinases JAK1, JAK2, and TYK2. The cytokines IL-4 and IL-13 are elevated in the lungs of chronic asthmatics.
Signalling through IL-4/IL-13 complexes is thought to occur through the IL-4Rα chain, which is responsible for activation of the JAK1 and TYK2 kinases. A role for TYK2 in rheumatoid arthritis is directly suggested by TYK2-deficient mice, which were resistant to experimental arthritis. TYK2−/− mice displayed a lack of responsiveness to small amounts of IFN-α, but they responded normally to high concentrations of IFN-α/β. In addition, these mice responded normally to IL-6 and IL-10, suggesting that TYK2 is dispensable for mediating IL-6 and IL-10 signaling in mice and does not play a major role in IFN-α signaling. Although TYK2−/− mice are phenotypically normal, a variety of cells isolated from them exhibit abnormal responses to inflammatory challenges. The most remarkable phenotype observed in TYK2-deficient macrophages was a lack of nitric oxide production upon stimulation with LPS. Further elucidation of the molecular mechanisms of LPS signaling showed that TYK2 and IFN-β deficiency leads to resistance to LPS-induced endotoxin shock, whereas STAT1-deficient mice are susceptible. Development of a TYK2 inhibitor therefore appears to be a rational approach in drug discovery. Clinical significance A mutation in this gene has been associated with hyperimmunoglobulin E syndrome (HIES), a primary immunodeficiency characterized by elevated serum immunoglobulin E. TYK2 appears to play a central role in the inflammatory cascade responses in the pathogenesis of immune-mediated inflammatory diseases such as psoriasis. The drug deucravacitinib (marketed as Sotyktu), a small-molecule TYK2 inhibitor, was approved for moderate-to-severe plaque psoriasis in 2022. The P1104A allele of TYK2 has been shown to increase the risk of tuberculosis when carried in the homozygous state; population genetic analyses suggest that the arrival of tuberculosis in Europe drove the frequency of that allele down three-fold about 2,000 years before present. Interactions Tyrosine kinase 2 has been shown to interact with FYN, PTPN6, IFNAR1, Ku80 and GNB2L1. References Further reading Signal transduction Tyrosine kinases
Tyrosine kinase 2
Chemistry,Biology
1,382
603,916
https://en.wikipedia.org/wiki/K%C3%A4hler%20differential
In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available. Definition Let $R$ and $S$ be commutative rings and $\varphi : R \to S$ be a ring homomorphism. An important example is for $R$ a field and $S$ a unital algebra over $R$ (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module $\Omega_{S/R}$ of differentials in different, but equivalent ways. Definition using derivations An $R$-linear derivation on $S$ is an $R$-module homomorphism $d : S \to M$ to an $S$-module $M$ satisfying the Leibniz rule $d(fg) = f\,dg + g\,df$ (it automatically follows from this definition that the image of $R$ is in the kernel of $d$). The module of Kähler differentials is defined as the $S$-module $\Omega_{S/R}$ for which there is a universal derivation $d : S \to \Omega_{S/R}$. As with other universal properties, this means that $d$ is the best possible derivation in the sense that any other derivation may be obtained from it by composition with an $S$-module homomorphism. In other words, composition with $d$ provides, for every $S$-module $M$, an $S$-module isomorphism $\operatorname{Hom}_S(\Omega_{S/R}, M) \to \operatorname{Der}_R(S, M)$. One construction of $\Omega_{S/R}$ and $d$ proceeds by constructing a free $S$-module with one formal generator $ds$ for each $s$ in $S$, and imposing the relations $dr = 0$, $d(s + t) = ds + dt$, $d(st) = s\,dt + t\,ds$, for all $r$ in $R$ and all $s$ and $t$ in $S$. The universal derivation sends $s$ to $ds$. The relations imply that the universal derivation is a homomorphism of $R$-modules. Definition using the augmentation ideal Another construction proceeds by letting $I$ be the ideal in the tensor product $S \otimes_R S$ defined as the kernel of the multiplication map $S \otimes_R S \to S$, $\sum s_i \otimes t_i \mapsto \sum s_i t_i$. Then the module of Kähler differentials of $S$ can be equivalently defined by $\Omega_{S/R} = I/I^2$, and the universal derivation is the homomorphism $d$ defined by $ds = 1 \otimes s - s \otimes 1$. This construction is equivalent to the previous one because $I$ is the kernel of the projection $S \otimes_R S \to S \otimes_R R = S$. Thus we have $S \otimes_R S = I \oplus (S \otimes_R R)$. Then $\Omega_{S/R}$ may be identified with $I/I^2$ by the map induced by the complementary projection $S \otimes_R S \to I$. This identifies $I$ with the $S$-module generated by the formal generators $ds$ for $s$ in $S$, subject to $d$ being a homomorphism of $R$-modules which sends each element of $R$ to zero. Taking the quotient by $I^2$ precisely imposes the Leibniz rule. Examples and basic facts For any commutative ring $R$, the Kähler differentials of the polynomial ring $S = R[t_1, \ldots, t_n]$ are a free $S$-module of rank $n$ generated by the differentials of the variables: $\Omega^1_{R[t_1, \ldots, t_n]/R} = \bigoplus_{i=1}^{n} R[t_1, \ldots, t_n]\,dt_i$. Kähler differentials are compatible with extension of scalars, in the sense that for a second $R$-algebra $R'$ and $S' = R' \otimes_R S$, there is an isomorphism $\Omega_{S/R} \otimes_S S' \cong \Omega_{S'/R'}$. As a particular case of this, Kähler differentials are compatible with localizations, meaning that if $W$ is a multiplicative set in $S$, then there is an isomorphism $W^{-1}\,\Omega_{S/R} \cong \Omega_{W^{-1}S/R}$. Given two ring homomorphisms $R \to S \to T$, there is a short exact sequence of $T$-modules $\Omega_{S/R} \otimes_S T \to \Omega_{T/R} \to \Omega_{T/S} \to 0$. If $T = S/I$ for some ideal $I$, the term $\Omega_{T/S}$ vanishes and the sequence can be continued at the left as follows: $I/I^2 \to \Omega_{S/R} \otimes_S T \to \Omega_{T/R} \to 0$. A generalization of these two short exact sequences is provided by the cotangent complex. The latter sequence and the above computation for the polynomial ring allow the computation of the Kähler differentials of finitely generated $R$-algebras $T = R[t_1, \ldots, t_n]/(f_1, \ldots, f_m)$. Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations.
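To make the universal property concrete, note that the Leibniz rule alone already forces the familiar power rule; the following one-line induction is included purely as an illustration. For $S = R[t]$, $d(t^n) = d(t \cdot t^{n-1}) = t\,d(t^{n-1}) + t^{n-1}\,dt = n\,t^{n-1}\,dt$, so every differential reduces to $dg = g'(t)\,dt$ for $g \in R[t]$, which is exactly why $\Omega_{R[t]/R}$ is free of rank one on the generator $dt$, as stated above.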
For example, for a single polynomial $f$ in a single variable, $\Omega_{(R[t]/(f))/R} \cong \big(R[t]/(f)\big)\,dt \,/\, \big(f'(t)\,dt\big) \cong R[t]/\big(f, f'(t)\big)\,dt$. Kähler differentials for schemes Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, $I$ represents the ideal defining the diagonal in the fiber product of $\operatorname{Spec}(S)$ with itself over $\operatorname{Spec}(R)$. This construction therefore has a more geometric flavor, in the sense that the notion of first infinitesimal neighbourhood of the diagonal is thereby captured, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes $f : X \to Y$ by setting $\mathcal{I}$ to be the ideal of the diagonal in the fiber product $X \times_Y X$. The cotangent sheaf $\Omega_{X/Y} = \mathcal{I}/\mathcal{I}^2$, together with the derivation $d$ defined analogously to before, is universal among $f^{-1}\mathcal{O}_Y$-linear derivations of $\mathcal{O}_X$-modules. If $U$ is an open affine subscheme of $X$ whose image in $Y$ is contained in an open affine subscheme $V$, then the cotangent sheaf restricts to a sheaf on $U$ which is similarly universal. It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying $U$ and $V$. Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms $f : X \to Y$ and $g : Y \to Z$ of schemes there is an exact sequence of sheaves on $X$: $f^*\Omega_{Y/Z} \to \Omega_{X/Z} \to \Omega_{X/Y} \to 0$. Also, if $X \subset Y$ is a closed subscheme given by the ideal sheaf $\mathcal{I}$, then $\Omega_{X/Y} = 0$ and there is an exact sequence of sheaves on $X$: $\mathcal{I}/\mathcal{I}^2 \to \Omega_{Y/Z}|_X \to \Omega_{X/Z} \to 0$. Examples Finite separable field extensions If $K/k$ is a finite field extension, then $\Omega_{K/k} = 0$ if and only if $K/k$ is separable. Consequently, if $K/k$ is a finite separable field extension and $\pi : Y \to \operatorname{Spec}(K)$ is a smooth variety (or scheme), then the relative cotangent sequence $\pi^*\Omega_{K/k} \to \Omega_{Y/k} \to \Omega_{Y/K} \to 0$ proves $\Omega_{Y/k} \cong \Omega_{Y/K}$. Cotangent modules of a projective variety Given a projective scheme $X$, its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. For example, consider a complex plane curve $\operatorname{Proj}\big(\mathbb{C}[x, y, z]/(f)\big)$ for a homogeneous polynomial $f$; writing $S = \mathbb{C}[x, y, z]/(f)$, we can compute the cotangent module as $\Omega = \big(S\,dx \oplus S\,dy \oplus S\,dz\big) \big/ \big(\tfrac{\partial f}{\partial x}\,dx + \tfrac{\partial f}{\partial y}\,dy + \tfrac{\partial f}{\partial z}\,dz\big)$. Then the cotangent sheaf is the sheafification of this graded module. Morphisms of schemes Consider the morphism $X = \operatorname{Spec}\big(\mathbb{C}[t, x, y]/(xy - t)\big) \to \operatorname{Spec}\big(\mathbb{C}[t]\big) = Y$. Then, using the first sequence we see that the image of $dt$ is $x\,dy + y\,dx$, hence $\Omega_{X/Y} = \big(\mathcal{O}_X\,dx \oplus \mathcal{O}_X\,dy\big)/(x\,dy + y\,dx)$. Higher differential forms and algebraic de Rham cohomology de Rham complex As before, fix a map $X \to S$. Differential forms of higher degree are defined as the exterior powers (over $\mathcal{O}_X$), $\Omega^n_{X/S} := \bigwedge^n \Omega_{X/S}$. The derivation $d$ extends in a natural way to a sequence of maps $0 \to \mathcal{O}_X \xrightarrow{d} \Omega^1_{X/S} \xrightarrow{d} \Omega^2_{X/S} \xrightarrow{d} \cdots$ satisfying $d \circ d = 0$. This is a cochain complex known as the de Rham complex. The de Rham complex enjoys an additional multiplicative structure, the wedge product $\Omega^n_{X/S} \otimes \Omega^m_{X/S} \to \Omega^{n+m}_{X/S}$. This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra. de Rham cohomology The hypercohomology of the de Rham complex of sheaves is called the algebraic de Rham cohomology of $X$ over $S$ and is denoted by $H^n_{\mathrm{dR}}(X/S)$ or just $H^n_{\mathrm{dR}}(X)$ if $S$ is clear from the context. (In many situations, $S$ is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by Grothendieck. It is closely related to crystalline cohomology. As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when $X = \operatorname{Spec}(A)$ and $S = \operatorname{Spec}(R)$ are affine schemes. In this case, because affine schemes have no higher cohomology, $H^n_{\mathrm{dR}}(X/S)$ can be computed as the cohomology of the complex of abelian groups $A \xrightarrow{d} \Omega^1_{A/R} \xrightarrow{d} \Omega^2_{A/R} \xrightarrow{d} \cdots$, which is, termwise, the global sections of the sheaves $\Omega^n_{X/S}$.
To take a very particular example, suppose that $X = \mathbb{G}_m = \operatorname{Spec}\big(\mathbb{Q}[x, x^{-1}]\big)$ is the multiplicative group over $\mathbb{Q}$. Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is $\mathbb{Q}[x, x^{-1}] \xrightarrow{d} \mathbb{Q}[x, x^{-1}]\,dx$. The differential $d$ obeys the usual rules of calculus, meaning $d(x^n) = n\,x^{n-1}\,dx$. The kernel and cokernel compute algebraic de Rham cohomology, so $H^0_{\mathrm{dR}}(X) = \mathbb{Q}$ and $H^1_{\mathrm{dR}}(X) = \mathbb{Q} \cdot \frac{dx}{x}$ (every form $x^n\,dx$ with $n \neq -1$ is exact, being $d\big(x^{n+1}/(n+1)\big)$, while $\frac{dx}{x}$ is not), and all other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of the multiplicative group over a field of characteristic $p > 0$ are much larger, namely $H^0 = \mathbb{F}_p[x^p, x^{-p}]$ and $H^1 = \mathbb{F}_p[x^p, x^{-p}] \cdot \frac{dx}{x}$. Since the Betti numbers of these cohomology groups are not what is expected, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields. Grothendieck's comparison theorem If $X$ is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves $\Omega^\bullet_{X/\mathbb{C}} \to \Omega^\bullet_{X^{\mathrm{an}}}$ between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on $X^{\mathrm{an}}$, the complex manifold associated to $X$. Here, $(-)^{\mathrm{an}}$ denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, Grothendieck showed that the comparison map induces an isomorphism from algebraic to smooth de Rham cohomology (and thus to singular cohomology by de Rham's theorem). In particular, if $X$ is a smooth affine algebraic variety embedded in $\mathbb{C}^n$, then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on $X$ is a quasi-isomorphism. For example, if $X = \mathbb{G}_m$, then as shown above, the computation of algebraic de Rham cohomology gives explicit generators $\{1\}$ and $\{dx/x\}$ for $H^0_{\mathrm{dR}}$ and $H^1_{\mathrm{dR}}$, respectively, while all other cohomology groups vanish. Since $X$ is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem. Counter-examples in the singular case can be found with non-Du Bois singularities such as the graded ring $k[x, y]/(y^2 - x^3)$, where $\deg x = 2$ and $\deg y = 3$. Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are non-equal. A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory has also been given. Applications Canonical divisor If $X$ is a smooth variety over a field $k$, then $\Omega_{X/k}$ is a vector bundle (i.e., a locally free $\mathcal{O}_X$-module) of rank equal to the dimension of $X$. This implies, in particular, that $\omega_{X/k} := \bigwedge^{\dim X} \Omega_{X/k}$ is a line bundle or, equivalently, a divisor. It is referred to as the canonical divisor. The canonical divisor is, as it turns out, a dualizing complex and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality. Classification of algebraic curves The geometric genus of a smooth algebraic variety $X$ of dimension $d$ over a field $k$ is defined as the dimension $g := \dim_k H^0(X, \Omega^d_{X/k})$. For curves, this purely algebraic definition agrees with the topological definition (for $k = \mathbb{C}$) as the "number of handles" of the Riemann surface associated to $X$. There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve, for $g$ being 0 (rational curves), 1 (elliptic curves), and greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves), respectively. Tangent bundle and Riemann–Roch theorem The tangent bundle of a smooth variety $X$ is, by definition, the dual of the cotangent sheaf $\Omega_{X/k}$. The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle. Unramified and smooth morphisms The sheaf of differentials is related to various algebro-geometric notions. A morphism $f : X \to Y$ of schemes is unramified if and only if $\Omega_{X/Y}$ is zero.
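As a hands-on instance of this criterion, consider the Gaussian integers (a standard example, worked out here only for illustration). Take $R = \mathbb{Z}$ and $S = \mathbb{Z}[i] = \mathbb{Z}[t]/(t^2 + 1)$. The conormal sequence from the affine case gives $\Omega_{\mathbb{Z}[i]/\mathbb{Z}} = \mathbb{Z}[i]\,di \,/\, (2i\,di) \cong \mathbb{Z}[i]/(2i)$. Since $(2i) = (1 + i)^2$, this module is supported exactly at the prime $(1 + i)$ lying over $2$: the map $\operatorname{Spec}\mathbb{Z}[i] \to \operatorname{Spec}\mathbb{Z}$ is ramified over $2$ and unramified everywhere else, in accordance with the criterion. (The annihilator $(2i) = (2)$ of this module is precisely the different ideal discussed in the algebraic number theory section below.)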
A special case of this assertion is that for a field $k$, a finite extension $K$ is separable over $k$ iff $\Omega_{K/k} = 0$, which can also be read off the above computation. A morphism $f$ of finite type is a smooth morphism if it is flat and if $\Omega_{X/Y}$ is a locally free $\mathcal{O}_X$-module of appropriate rank. The computation of $\Omega_{R[t_1, \ldots, t_n]/R}$ above shows that the projection from affine space $\mathbb{A}^n_R \to \operatorname{Spec}(R)$ is smooth. Periods Periods are, broadly speaking, integrals of certain arithmetically defined differential forms. The simplest example of a period is $2\pi i$, which arises as $\oint_{|z| = 1} \frac{dz}{z} = 2\pi i$. Algebraic de Rham cohomology is used to construct periods as follows: For an algebraic variety $X$ defined over $\mathbb{Q}$, the above-mentioned compatibility with base-change yields a natural isomorphism $H^n_{\mathrm{dR}}(X/\mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C} = H^n_{\mathrm{dR}}(X \otimes_{\mathbb{Q}} \mathbb{C} / \mathbb{C})$. On the other hand, the right hand cohomology group is isomorphic to de Rham cohomology of the complex manifold associated to $X$, denoted here $H^n_{\mathrm{dR}}(X^{\mathrm{an}})$. Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, $H^n(X^{\mathrm{an}}, \mathbb{C})$, which by the universal coefficient theorem is in its turn isomorphic to $H^n(X^{\mathrm{an}}, \mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C}$. Composing these isomorphisms yields two rational vector spaces which, after tensoring with $\mathbb{C}$, become isomorphic. Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are periods. Algebraic number theory In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If $L/K$ is a finite extension with rings of integers $\mathcal{O}_L$ and $\mathcal{O}_K$ respectively, then the different ideal $\delta_{L/K}$, which encodes the ramification data, is the annihilator of the $\mathcal{O}_L$-module $\Omega_{\mathcal{O}_L/\mathcal{O}_K}$: $\delta_{L/K} = \{ x \in \mathcal{O}_L : x\,dy = 0 \text{ for all } y \in \mathcal{O}_L \}$. Related notions Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. This is because of the Hochschild–Kostant–Rosenberg theorem, which states that the Hochschild homology of the algebra of functions on a smooth affine variety is isomorphic to the de Rham complex for a field of characteristic $0$. A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de Rham complex. The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors. Notes References Grothendieck, Alexander (letter to Michael Atiyah, October 14, 1963) External links Notes on p-adic algebraic de-Rham cohomology - gives many computations over characteristic 0 as motivation A thread devoted to the relation on algebraic and analytic differential forms Differentials (Stacks project) Commutative algebra Differential algebra Algebraic geometry Cohomology theories
Kähler differential
Mathematics
2,755
8,603
https://en.wikipedia.org/wiki/Diffraction
Diffraction is the deviation of waves from straight-line propagation, without any change in their energy, caused by an obstacle or an aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Diffraction is the same physical effect as interference, but interference is typically applied to the superposition of a few waves, and the term diffraction is used when many waves are superposed. Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle, which treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple closely spaced openings, a complex pattern of varying intensity can result. These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels). History The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. In 1818, supporters of the corpuscular theory of light proposed that the Paris Academy prize question address diffraction, expecting to see the wave theory defeated. However, Augustin-Jean Fresnel took the prize with his new theory of wave propagation, combining the ideas of Christiaan Huygens with Young's interference concept. Siméon Denis Poisson challenged the Fresnel theory by showing that it predicted light in the shadow behind a circular obstruction; Dominique-François-Jean Arago proceeded to demonstrate experimentally that such light is visible, confirming Fresnel's diffraction model. Mechanism In classical physics diffraction arises because of how waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave.
The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves, so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima. In the modern quantum mechanical understanding of light propagation through a slit (or slits), every photon is described by its wavefunction, which determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance, and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens–Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (which is proportional to the resulting intensity of the classical formalism). There are various analytical models for photons which allow the diffracted field to be calculated, including the Kirchhoff diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods. In many cases it is assumed that there is only one scattering event, which is called kinematical diffraction, with an Ewald's sphere construction used to represent that there is no change in energy during the diffraction process. For matter waves a similar but slightly different approach is used, based upon a relativistically corrected form of the Schrödinger equation, as first detailed by Hans Bethe. The Fraunhofer and Fresnel limits exist for these as well, although they correspond more to approximations for the matter wave Green's function (propagator) for the Schrödinger equation. More common are full multiple-scattering models, particularly in electron diffraction; in some cases similar dynamical diffraction models are also used for X-rays. It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle, in which case waves will cancel one another out. The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem.
For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem. Examples The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc. This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example. Diffraction in the atmosphere by small particles can cause a corona – a bright disc and rings around a bright light source like the sun or the moon. At the opposite point one may also observe a glory – bright rings around the shadow of the observer. In contrast to the corona, the glory requires the particles to be transparent spheres (like fog droplets), since the backscattering of the light that forms the glory involves refraction and internal reflection within the droplet. A shadow of a solid object, using light from a compact source, shows small fringes near its edges. Diffraction spikes are diffraction patterns caused by a non-circular aperture in a camera or by support struts in a telescope; in normal vision, diffraction through the eyelashes may produce such spikes. The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave. Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree. Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope. Other examples of diffraction are considered below. Single-slit diffraction A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves, and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle. An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across its width, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by $2\pi$ or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
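These path-length phase differences can also be summed numerically. The sketch below (in Python, with assumed, illustrative values for the wavelength, slit width, and geometry; not taken from any particular reference) models the slit as a row of evenly spaced Huygens point sources and adds their complex phasors at each screen position; the resulting intensity exhibits the minima and maxima derived analytically in what follows.

import numpy as np

# Assumed, illustrative parameters
wavelength = 500e-9        # 500 nm light
slit_width = 4e-6          # 4 micrometre slit
screen_distance = 1.0      # 1 m from slit to screen
k = 2 * np.pi / wavelength

# Model the slit as many evenly spaced Huygens point sources
sources = np.linspace(-slit_width / 2, slit_width / 2, 200)
screen = np.linspace(-0.5, 0.5, 1001)   # positions along the screen (m)

# Sum the phasor exp(i k r) from every source to every screen point
# (the slow 1/r amplitude falloff is neglected in this far-field sketch)
r = np.sqrt(screen_distance**2 + (screen[:, None] - sources[None, :])**2)
intensity = np.abs(np.exp(1j * k * r).sum(axis=1)) ** 2
intensity /= intensity.max()            # normalize to the central maximum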
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit when the path difference between them is equal to $\lambda/2$. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is approximately $\frac{d \sin\theta}{2}$, so that the minimum intensity occurs at an angle $\theta_{\min}$ given by $d\,\sin\theta_{\min} = \lambda$, where $d$ is the width of the slit, $\theta_{\min}$ is the angle of incidence at which the minimum intensity occurs, and $\lambda$ is the wavelength of the light. A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles $\theta_n$ given by $d\,\sin\theta_n = n\lambda$, where $n$ is an integer other than zero. There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction equation as $I(\theta) = I_0\,\operatorname{sinc}^2\!\big(\tfrac{\pi d \sin\theta}{\lambda}\big)$, where $I(\theta)$ is the intensity at a given angle, $I_0$ is the intensity at the central maximum ($\theta = 0$), which is also a normalization factor of the intensity profile that can be determined by an integration from $\theta = -\tfrac{\pi}{2}$ to $\tfrac{\pi}{2}$ and conservation of energy, and $\operatorname{sinc}(x) = \tfrac{\sin x}{x}$, which is the unnormalized sinc function. This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit. From the intensity profile above, if $d \ll \lambda$, the intensity will have little dependency on $\theta$, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if $d \gg \lambda$, only $\theta \approx 0$ would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics. When the incident angle $\theta_i$ of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes: $I(\theta) = I_0\,\operatorname{sinc}^2\!\big[\tfrac{\pi d}{\lambda}(\sin\theta \pm \sin\theta_i)\big]$. The choice of plus/minus sign depends on the definition of the incident angle $\theta_i$. Diffraction grating A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles $\theta_m$ which are given by the grating equation $d\,(\sin\theta_m + \sin\theta_i) = m\lambda$, where $\theta_i$ is the angle at which the light is incident, $d$ is the separation of grating elements, and $m$ is an integer which can be positive or negative. The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns. For example, comparing the light diffracted by a 2-element and a 5-element grating with the same grating spacing, the maxima are found in the same positions, but the detailed structures of the intensities are different. Circular aperture The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy disk. The variation in intensity with angle is given by $I(\theta) = I_0 \big(\tfrac{2 J_1(k a \sin\theta)}{k a \sin\theta}\big)^2$, where $a$ is the radius of the circular aperture, $k$ is equal to $2\pi/\lambda$, and $J_1$ is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
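As a quick numerical illustration of the grating equation, the snippet below computes which diffraction orders propagate for a CD-like grating; the track pitch and wavelength are assumed, typical values rather than data from this article.

import numpy as np

d = 1.6e-6      # assumed grating element separation (typical CD track pitch, m)
lam = 532e-9    # assumed wavelength (green laser pointer, m)
theta_i = 0.0   # normal incidence

# Grating equation: d (sin(theta_m) + sin(theta_i)) = m * lam
m = np.arange(-3, 4)
s = m * lam / d - np.sin(theta_i)
# Only orders with |sin(theta_m)| <= 1 correspond to real propagating beams
orders = {int(mi): round(float(np.degrees(np.arcsin(si))), 1)
          for mi, si in zip(m, s) if abs(si) <= 1}
print(orders)   # orders 0, +/-1 (~19.4 deg), +/-2 (~41.7 deg), +/-3 (~85.9 deg)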
General aperture The wave that emerges from a point source has amplitude $\psi$ at location $\mathbf{r}$ that is given by the solution of the frequency domain wave equation for a point source (the Helmholtz equation), $\nabla^2 \psi + k^2 \psi = \delta(\mathbf{r})$, where $\delta(\mathbf{r})$ is the 3-dimensional delta function. The delta function has only radial dependence, so the Laplace operator (a.k.a. scalar Laplacian) in the spherical coordinate system simplifies to $\nabla^2\psi = \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi)$. (See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention $e^{-i\omega t}$) is $\psi(r) = \frac{e^{ikr}}{4\pi r}$. This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector $\mathbf{r}'$, and the field point is located at the point $\mathbf{r}$, then we may represent the scalar Green's function (for arbitrary source location) as $\psi(\mathbf{r} \mid \mathbf{r}') = \frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}$. Therefore, if an electric field $E_{\mathrm{inc}}(x', y')$ is incident on the aperture, the field produced by this aperture distribution is given by the surface integral $\Psi(\mathbf{r}) \propto \iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x', y')\,\frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}\,dx'\,dy'$, where the source point in the aperture is given by the vector $\mathbf{r}' = x'\,\hat{\mathbf{x}} + y'\,\hat{\mathbf{y}}$. In the far field, wherein the parallel rays approximation can be employed, the Green's function simplifies to $\psi(\mathbf{r} \mid \mathbf{r}') = \frac{e^{ikr}}{4\pi r}\,e^{-ik(\mathbf{r}'\cdot\hat{\mathbf{r}})}$. The expression for the far-zone (Fraunhofer region) field becomes $\Psi(\mathbf{r}) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x', y')\,e^{-ik(\mathbf{r}'\cdot\hat{\mathbf{r}})}\,dx'\,dy'$. Now, since $\mathbf{r}' = x'\,\hat{\mathbf{x}} + y'\,\hat{\mathbf{y}}$ and $\hat{\mathbf{r}} = \sin\theta\cos\phi\,\hat{\mathbf{x}} + \sin\theta\sin\phi\,\hat{\mathbf{y}} + \cos\theta\,\hat{\mathbf{z}}$, the expression for the Fraunhofer region field from a planar aperture now becomes $\Psi(\mathbf{r}) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x', y')\,e^{-ik(x'\sin\theta\cos\phi + y'\sin\theta\sin\phi)}\,dx'\,dy'$. Letting $k_x = k\sin\theta\cos\phi$ and $k_y = k\sin\theta\sin\phi$, the Fraunhofer region field of the planar aperture assumes the form of a Fourier transform $\Psi(\mathbf{r}) \propto \frac{e^{ikr}}{4\pi r}\iint_{\mathrm{aperture}} E_{\mathrm{inc}}(x', y')\,e^{-i(k_x x' + k_y y')}\,dx'\,dy'$. In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics). Propagation of a laser beam The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction-limited divergence of a Gaussian beam, or even reversed to convergence, if the refractive index of the propagation media increases with the light intensity. This may result in a self-focusing effect. When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in the horizontal, the laser beam divergence will be lower in the vertical direction than in the horizontal. Diffraction-limited imaging The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above.
The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is $\Delta x = 1.22\,\lambda N$, where $\lambda$ is the wavelength of the light and $N$ is the f-number (focal length $f$ divided by aperture diameter $D$) of the imaging optics; this is strictly accurate for $N \gg 1$ (paraxial case). In object space, the corresponding angular resolution is $\theta \approx \sin\theta = 1.22\,\frac{\lambda}{D}$, where $D$ is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror). Two point sources will each produce an Airy pattern. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other. Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution. Speckle patterns The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly. Babinet's principle Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit. "Knife edge" The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building. The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle. Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). In 1974, Prabhakar Pathak and Robert Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD). Patterns Several qualitative observations can be made of diffraction in general: The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object. When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. For example, a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing between the center of one slit and the next, shows this sharpening. Matter wave diffraction According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a non-relativistic particle is the de Broglie wavelength $\lambda = \frac{h}{p}$, where $h$ is the Planck constant and $p$ is the momentum of the particle (mass × velocity for slow-moving particles). For example, a sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometres. Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic structure of solids, molecules and proteins. Bragg diffraction Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction. It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes. The condition of constructive interference is given by Bragg's law: $m\lambda = 2d\sin\theta$, where $\lambda$ is the wavelength, $d$ is the distance between crystal planes, $\theta$ is the angle of the diffracted wave, and $m$ is an integer known as the order of the diffracted beam. Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information on the separations of crystallographic planes $d$, allowing one to deduce the crystal structure. For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers. Coherence The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern, since the relation between their phases is no longer time independent. The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave.
In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition. If waves are emitted from an extended source, this can lead to incoherence in the transverse direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns. In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle. Applications Diffraction before destruction A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. These femtosecond-duration pulses may allow the imaging of single biological macromolecules: because the pulses are so short, radiation damage can be outrun, and diffraction patterns of single biological macromolecules can be obtained. See also Angle-sensitive pixel Atmospheric diffraction Brocken spectre Cloud iridescence Coherent diffraction imaging Diffraction from slits Diffraction spike Diffraction vs. interference Diffractive solar sail Diffractometer Dynamical theory of diffraction Electron diffraction Fraunhofer diffraction Fresnel imager Fresnel number Fresnel zone Point spread function Powder diffraction Quasioptics Refraction Reflection Schaefer–Bergmann diffraction Thinned-array curse X-ray diffraction References External links The Feynman Lectures on Physics Vol. I Ch. 30: Diffraction Using a cd as a diffraction grating at YouTube Physical phenomena
Diffraction
Physics,Chemistry,Materials_science
5,438
37,895,661
https://en.wikipedia.org/wiki/SequenceL
SequenceL is a general purpose functional programming language and auto-parallelizing compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability. Its main advantage is that it can be used to write straightforward code that automatically takes full advantage of all the processing power available, without programmers needing to be concerned with identifying parallelisms, specifying vectorization, avoiding race conditions, and other challenges of manual directive-based programming approaches such as OpenMP. Programs written in SequenceL can be compiled to multithreaded code that runs in parallel, with no explicit indications from a programmer of how or what to parallelize. Versions of the SequenceL compiler generate parallel code in C++ and OpenCL, which allows it to work with most popular programming languages, including C, C++, C#, Fortran, Java, and Python. A platform-specific runtime manages the threads safely, automatically providing parallel performance according to the number of cores available, currently supporting x86, POWER8, and ARM platforms. History SequenceL was initially developed over a 20-year period starting in 1989, mostly at Texas Tech University. Primary funding was from NASA, which originally wanted to develop a specification language which was "self-verifying"; that is, once written, the requirements could be executed, and the results verified against the desired outcome. The principal researcher on the project was initially Dr. Daniel Cooke, who was soon joined by Dr. Nelson Rushton (another Texas Tech professor) and later Dr. Brad Nemanich (then a PhD student under Cooke). The goal of creating a language that was simple enough to be readable, but unambiguous enough to be executable, drove the inventors to settle on a functional, declarative language approach, where a programmer describes desired results, rather than the means to achieve them. The language is then free to solve the problem in the most efficient manner that it can find. As the language evolved, the researchers developed new computational approaches, including consume-simplify-produce (CSP). In 1998, research began to apply SequenceL to parallel computing. This culminated in 2004, when it took its more complete form with the addition of the normalize-transpose (NT) semantic; this coincided with the major vendors of central processing units (CPUs) making a major shift to multi-core processors rather than continuing to increase clock speeds. NT is the semantic work-horse, being used to simplify and decompose structures, based on a dataflow-like execution strategy similar to GAMMA and NESL. The NT semantic achieves a goal similar to that of Lämmel and Peyton Jones' boilerplate elimination. All other features of the language are definable from these two laws – including recursion, subscripting structures, function references, and evaluation of function bodies. Though it was not the original intent, these new approaches allowed the language to parallelize a large fraction of the operations it performed, transparently to the programmer. In 2006, a prototype auto-parallelizing compiler was developed at Texas Tech University. In 2009, Texas Tech licensed the intellectual property to Texas Multicore Technologies (TMT) for follow-on commercial development.
In January 2017 TMT released v3, which includes a free Community Edition for download in addition to the commercial Professional Edition. Design SequenceL is designed to be as simple as possible to learn and use, focusing on algorithmic code where it adds value; e.g., the inventors chose not to reinvent I/O since C handled that well. As a result, the full language reference for SequenceL is only 40 pages, with copious examples, and its formal grammar has around 15 production rules. SequenceL is strictly evaluated (like Lisp), statically typed with type inference (like Haskell), and uses a combination of infix and prefix operators that resemble standard, informal mathematical notation (like C, Pascal, Python, etc.). It is a purely declarative language, meaning that a programmer defines functions, in the mathematical sense, without giving instructions for their implementation. For example, the mathematical definition of matrix multiplication is as follows: The product of the m×p matrix A with the p×n matrix B is the m×n matrix whose (i,j)'th entry is the sum over k = 1, ..., p of A[i,k]·B[k,j]. The SequenceL definition mirrors that definition more or less exactly: matmul(A(2), B(2)) [i,j] := let k := 1...size(B); in sum( A[i,k] * B[k,j] ); The subscripts following each parameter A and B on the left hand side of the definition indicate that A and B are depth-2 structures (i.e., lists of lists of scalars), which are here thought of as matrices. From this formal definition, SequenceL infers the dimensions of the defined product from the formula for its (i, j)'th entry (as the set of pairs (i, j) for which the right hand side is defined) and computes each entry by the same formula as in the informal definition above. Notice there are no explicit instructions for iteration in this definition, or for the order in which operations are to be carried out. Because of this, the SequenceL compiler can perform operations in any order (including parallel order) which satisfies the defining equation. In this example, computation of coordinates in the product will be parallelized in a way that, for large matrices, scales linearly with the number of processors. As noted above, SequenceL has no built-in constructs for input/output (I/O) since it was designed to work in an additive manner with other programming languages. The decision to compile to multithreaded C++ and support the 20+ Simplified Wrapper and Interface Generator (SWIG) languages (C, C++, C#, Java, Python, etc.) means it easily fits into extant design flows, training, and tools. It can be used to enhance extant applications, create multicore libraries, and even create standalone applications by linking the resulting code with other code which performs I/O tasks. SequenceL functions can also be queried from an interpreter with given inputs, like Python and other interpreted languages. Normalize–transpose The main non-scalar construct of SequenceL is the sequence, which is essentially a list. Sequences may be nested to any level. To avoid the routine use of recursion common in many purely functional languages, SequenceL uses a technique termed normalize–transpose (NT), in which scalar operations are automatically distributed over elements of a sequence. For example, in SequenceL we have [1,2,3] + 10 = [11,12,13]. This results not from overloading the '+' operator, but from the effect of NT, which extends to all operations, both built-in and user-defined.
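For intuition, the distribution rules of NT can be mimicked in a few lines of ordinary Python; this emulation is purely illustrative and has no connection to the actual SequenceL implementation:

import operator

def nt_apply(op, a, b):
    # Rough emulation of normalize-transpose for a binary operation:
    # scalar operations are distributed over (possibly nested) lists.
    # Conforming structures are assumed, as NT requires.
    if isinstance(a, list) and isinstance(b, list):
        return [nt_apply(op, x, y) for x, y in zip(a, b)]   # element-wise
    if isinstance(a, list):
        return [nt_apply(op, x, b) for x in a]              # distribute scalar b
    if isinstance(b, list):
        return [nt_apply(op, a, y) for y in b]              # distribute scalar a
    return op(a, b)                                         # both scalars

print(nt_apply(operator.add, [1, 2, 3], 10))            # [11, 12, 13]
print(nt_apply(operator.mul, [1, 2, 3], [10, 20, 30]))  # [10, 40, 90]

Here nt_apply recursively walks list arguments and applies the scalar operation at the leaves, which is essentially what NT arranges for every built-in and user-defined SequenceL operation (and what the compiler then parallelizes).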
As another example, if f() is a 3-argument function whose arguments are scalars, then for any appropriate x and z we will have f(x, [y1, y2, y3], z) = [f(x, y1, z), f(x, y2, z), f(x, y3, z)]. The NT construct can be used for multiple arguments at once, as in, for example, [1,2,3] * [10,20,30] = [10,40,90]. It also works when the expected argument is a non-scalar of any type T, and the actual argument is a list of objects of type T (or, in greater generality, any data structure whose coordinates are of type T). For example, if A is a matrix and Xs is a list of matrices [X1, ..., Xn], and given the above definition of matrix multiply, in SequenceL we would have matmul(A,Xs) = [matmul(A,X1),...,matmul(A,Xn)]. As a rule, NTs eliminate the need for iteration, recursion, or high level functional operators to do the same things to every member of a data structure, or to process corresponding parts of similarly shaped structures together. This tends to account for most uses of iteration and recursion. Example: prime numbers A good example that demonstrates the above concepts would be in finding prime numbers. A prime number is defined as an integer greater than 1, with no positive divisors other than itself and 1. So a positive integer z is prime if no numbers from 2 through z-1, inclusive, divide it evenly. SequenceL allows this problem to be programmed by literally transcribing the above definition into the language. In SequenceL, a sequence of the numbers from 2 through z-1, inclusive, is just (2...(z-1)), so a program to find all of the primes between 100 and 200 can be written: prime(z) := z when none(z mod (2...(z-1)) = 0); Which, in English, just says: return the argument if none of the numbers from 2 through 1 less than the argument itself divide evenly into it. If that condition isn't met, the function returns nothing. As a result, running this program yields cmd:>prime(17) 17 cmd:>prime(18) empty The string "between 100 and 200" doesn't appear in the program. Rather, a programmer will typically pass that part in as the argument. Since the program expects a scalar as an argument, passing it a sequence of numbers instead will cause SequenceL to perform the operation on each member of the sequence automatically. Since the function returns empty for failing values, the result will be the input sequence, but filtered to return only those numbers that satisfy the criteria for primes: cmd:>prime(100...200) [101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199] In addition to solving this problem with a very short and readable program, SequenceL's evaluation of the nested sequences would all be performed in parallel. Components The following software components are available and supported by TMT for use in writing SequenceL code. All components are available on x86 platforms running Windows, macOS, and most varieties of Linux (including CentOS, RedHat, openSUSE, and Ubuntu), and on ARM and IBM Power platforms running most varieties of Linux. Interpreter A command-line interpreter allows writing code directly into a command shell, or loading code from prewritten text files. This code can be executed, and the results evaluated, to assist in checking code correctness, or finding a quick answer. It is also available via the popular Eclipse integrated development environment (IDE). Code executed in the interpreter does not run in parallel; it executes in one thread. Compiler A command-line compiler reads SequenceL code and generates highly parallelized, vectorized C++, and optionally OpenCL, which must be linked with the SequenceL runtime library to execute.
Runtime The runtime environment is a pre-compiled set of libraries which works with the compiled parallelized C++ code to execute optimally on the target platform. It builds on Intel Threaded Building Blocks (TBB) and handles things such as cache optimization, memory management, work-stealing queues, and performance monitoring. Eclipse IDE plug-in with debugger An Eclipse integrated development environment plug-in provides standard editing abilities (function rollup, chromacoding, etc.), and a SequenceL debugging environment. This plug-in runs against the SequenceL Interpreter, so cannot be used to debug the multithreaded code; however, by providing automatic parallelization, debugging of parallel SequenceL code is really verifying correctness of the sequential SequenceL code. That is, if it runs correctly sequentially, it should run correctly in parallel – so debugging in the interpreter is sufficient. Libraries Various math and other standard function libraries are included as SequenceL source code to streamline the programming process and serve as best practice examples. These may be imported, in much the same way that C or C++ libraries are #included. See also Parallel computing Automatic parallelization tool Multi-core processor Multiprocessing Functional programming Purely functional programming Declarative programming Automatic vectorization Simon Peyton Jones Rosetta Code References External links Texas Multicore Technologies Why SequenceL Works OpenMP compared to SequenceL SequenceL Features Overview: Patented Automatic Parallelization in SequenceL YouTube: Texas Multicore Technologies Free Downloads Programmer Resources and Education Normalize, Transpose and Distribute: An Automatic Approach for Handling Nonscalars US Patent 8,839,212, Method, apparatus and computer program product for automatically generating a computer program using consume, simplify and produce semantics with normalize, transpose and distribute operations SequenceL examples on Rosetta Code wiki High-level programming languages Parallel computing Array programming languages Cross-platform software Declarative programming languages Functional programming Functional languages Statically typed programming languages Heterogeneous computing Concurrent programming languages Mathematical software Numerical analysis software for Windows Numerical analysis software for macOS Numerical analysis software for Linux Numerical linear algebra Numerical programming languages Numerical software Science software for Windows Science software for macOS Science software for Linux GPGPU
SequenceL
Mathematics
2,753
183,330
https://en.wikipedia.org/wiki/Chassis
A chassis (plural chassis; from French châssis) is the load-bearing framework of a manufactured object, which structurally supports the object in its construction and function. An example of a chassis is a vehicle frame, the underpart of a motor vehicle, on which the body is mounted; if the running gear such as wheels and transmission, and sometimes even the driver's seat, are included, then the assembly is described as a rolling chassis. Examples of use Vehicles In the case of vehicles, the term rolling chassis means the frame plus the "running gear" like engine, transmission, drive shaft, differential, and suspension. The "rolling chassis" description originated from assembly production, when an integrated chassis "rolled on its own tires" just before truck bodies were bolted to the frames near the end of the line. An underbody (sometimes referred to as "coachwork"), which is usually not necessary for the integrity of the structure, is built on the chassis to complete the vehicle. For commercial vehicles, a rolling chassis consists of an assembly of all the essential parts of a truck without the body, ready for operation on the road. A car chassis will be different from one for commercial vehicles because of the heavier loads and constant work use. Commercial vehicle manufacturers sell "chassis only", "cowl and chassis", as well as "chassis cab" versions that can be outfitted with specialized bodies. These include motor homes, fire engines, ambulances, box trucks, etc. In particular applications, such as school buses, a government agency like the National Highway Traffic Safety Administration (NHTSA) in the U.S. defines the design standards of chassis and body conversions. An armoured fighting vehicle's hull serves as the chassis and comprises the bottom part of the AFV that includes the tracks, engine, driver's seat, and crew compartment. This describes the lower hull, although common usage might include the upper hull to mean the AFV without the turret. The hull serves as a basis for platforms on tanks, armoured personnel carriers, combat engineering vehicles, etc. In the intermodal trucking industry, a chassis is a type of semi-trailer onto which a cargo container can be mounted for road transport. Electronics In an electronic device (such as a computer), the chassis consists of a frame or other internal supporting structure on which the circuit boards and other electronics are mounted. In some designs, such as older ENIAC sets, the chassis is mounted inside a heavy, rigid cabinet, while in other designs such as modern computer cases, lightweight covers or panels are attached to the chassis. The combination of chassis and outer covering is sometimes called an enclosure. Firearms In firearms, the chassis is a bedding frame on long guns such as rifles that replaces the traditional wooden stock, for the purpose of better accurizing the gun. The chassis is usually made from a hard metallic material such as aluminium alloy (and less frequently stainless steel, titanium alloy or, recently, magnesium alloy), due to metals having superior stiffness and compressive strength compared with wood or synthetic polymer, which are commonly used in conventional rifle stocks. The chassis essentially functions as a more extensive pillar bedding, providing a metal-on-metal bearing surface that has reduced shifting potential under the stress of recoil. A barreled action bedded into a metal chassis would theoretically operate more consistently during repeated firing, resulting in better precision.
With the increasing availability of CNC machining, chassis have become more affordable and sophisticated, and have gained increasing popularity, since these types of chassis can be expanded to accommodate customizable "furniture" (buttstock, pistol grip, etc.) and rail interface systems that provide mounting points for various accessories. See also Airframe Backbone chassis Body-on-frame Bogie Coachbuilder Locomotive frame Monocoque, construction from a structural shell instead of a structural frame Undercarriage (disambiguation) Underframe References External links Automotive chassis types Vehicle technology Computer enclosure Carriages and mountings
Chassis
Engineering
804
33,758,261
https://en.wikipedia.org/wiki/HP%20Performance%20Optimized%20Datacenter
The HP Performance Optimized Datacenter (POD) is a range of three modular data centers manufactured by HP. Housed in purpose-built modules of standard shipping container form-factor, either 20 feet or 40 feet in length, the data centers are shipped preconfigured with racks, cabling and equipment for power and cooling. They can support technologies from HP or third parties. The claimed capacity is the equivalent of up to 10,000 square feet of typical data center space, depending on the model. Depending on the model, they use either chilled-water cooling or direct-expansion air cooling combined with air-side economization. HP POD 20c and 40c The POD 40c was launched in 2008. This 40-foot modular data center has a maximum power capacity of up to 27 kW per rack. The POD 40c supports 3,500 compute nodes or 12,000 LFF hard drives. HP has claimed this offers the computing equivalent of 4,000 square feet of traditional data center space. The POD 20c was launched in 2010. This modular data center is housed in a 20-foot container. This version houses 10 industry-standard 50U racks of hardware. The POD uses an efficient cooling design of variable speed fans, hot and cold aisle containment, and close-coupled cooling to maximize capacity and efficiency. The POD 20c can operate at a Power Usage Effectiveness (PUE) of 1.25. PODs can maintain cold aisle temperatures higher than typical brick and mortar data centers: the temperature of the cold aisle in traditional data centers is typically 68 to 72 degrees Fahrenheit, whereas the POD can operate efficiently at cold-aisle temperatures of up to 90 degrees. Both the 20c and the 40c are water-cooled. The benefit of water cooling is higher capacity and lower power usage than traditional air-cooled systems. HP POD 240a The HP POD 240a was launched in June 2011. It can be configured with two rows of 44 extra-height 50U racks that could house 4,400 server nodes of typical size, or 7,040 server nodes of the densest size. HP claimed that the typical brick and mortar data center required to house this equipment would be 10,000 square feet of floor space. HP claims "near-perfect" energy efficiency for the POD 240a, which it nicknames the "EcoPOD". HP says it has recorded estimated PUE ratios of 1.05 to 1.4, depending on IT load and location; perfect efficiency would be a PUE of 1.0. The POD 240a has a refrigerant-based, air-cooled HVAC system with air-side economization. When the ambient air conditions are cool enough, the 240a uses economizer or free air mode, where outside air can be taken in and circulated inside the modular data center to cool the IT equipment. Customers In September 2013, eBay announced that they were "deploying the world’s largest modular data center, with 44 rack positions and 1.4Megawatts of power" using HP EcoPODs. See also HP Flexible Data Center IBM Portable Modular Data Center Sun Modular Data Center References External links HP Performance Optimized Datacenter (POD) HP Performance Optimized Datacenter (POD) 240a HP Performance Optimized Datacenter (POD) 20c and 40c Hewlett-Packard Intermodal containers Data centers Modular datacenter
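PUE, the efficiency metric quoted above, is defined as total facility power divided by the power delivered to the IT equipment, so a PUE of 1.0 means every watt entering the facility reaches the hardware. The following is a minimal illustrative sketch of that ratio; the function name and the sample figures are hypothetical, not HP data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is perfect; lower is better. Values below 1.0 indicate measurement error."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 1,250 kW facility feed powering a 1,000 kW IT load
# gives the PUE of 1.25 cited for the POD 20c.
print(pue(1250.0, 1000.0))  # 1.25
```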
HP Performance Optimized Datacenter
Technology
692
11,584,601
https://en.wikipedia.org/wiki/Melanoidin
Melanoidins are brown, high molecular weight heterogeneous polymers that are formed when sugars and amino acids combine (through the Maillard reaction) at high temperatures and low water activity. They were discovered by Schmiedeberg in 1897. Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products, and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment in order to avoid contamination around the outflow of these refineries. Dietary melanoidins themselves produce various effects in the body: they decrease Phase I liver enzyme activity and promote glycation in vivo, which may contribute to diabetes, reduced vascular compliance, and Alzheimer's disease. Some melanoidins are metabolized by the intestinal microflora. Coffee is one of the main sources of melanoidins in the human diet, yet coffee consumption is associated with some health benefits and antiglycative action. Footnotes References Food science Substances discovered in the 19th century
Melanoidin
Chemistry
234
1,276,249
https://en.wikipedia.org/wiki/Golden%20number%20%28time%29
A golden number (sometimes capitalized) is a number assigned to each year in sequence which is used to indicate the dates of all the calendric new moons for each year in a 19-year Metonic cycle. They are used in computus (the calculation of the date of Easter) and also in Runic calendars. The golden number of any Julian or Gregorian calendar year can be calculated by dividing the year by 19, taking the remainder, and adding 1. (In mathematics this can be expressed as (year number modulo 19) + 1.) For example, the year 2025 divided by 19 gives 106, remainder 11; adding 1 to the remainder gives a golden number of 12. The golden number, as it was later called, first appears in a calendar composed by Abbo of Fleury around the year 1000. Around 1162 a certain Master William referred to this number as the golden number "because it is more precious than the other numbers." The name refers to the practice of printing golden numbers in gold. The term became widely known and used, in part through the computistic poem Massa Compoti written by Alexander de Villa Dei around 1200. See also Dominical letter Date of Easter Paschal Full Moon References External links Time in astronomy Calendars
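As a worked illustration of the rule above, here is a minimal sketch in Python; the function name is ours, and it assumes AD years in the Julian or Gregorian calendar:

```python
def golden_number(year: int) -> int:
    """Golden number of a Julian/Gregorian calendar year:
    (year mod 19) + 1, giving a value from 1 to 19 that locates
    the year within the 19-year Metonic cycle."""
    return year % 19 + 1

# 2025 = 19 * 106 + 11, so the golden number is 11 + 1 = 12.
assert golden_number(2025) == 12
```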
Golden number (time)
Physics,Astronomy
256
28,291,162
https://en.wikipedia.org/wiki/Whitney%20topologies
In mathematics, and especially differential topology, functional analysis and singularity theory, the Whitney topologies are a countably infinite family of topologies defined on the set of smooth mappings between two smooth manifolds. They are named after the American mathematician Hassler Whitney. Construction Let M and N be two real, smooth manifolds. Furthermore, let C∞(M,N) denote the space of smooth mappings between M and N. The notation C∞ means that the mappings are infinitely differentiable, i.e. partial derivatives of all orders exist and are continuous. Whitney Ck-topology For some integer $k \ge 0$, let Jk(M,N) denote the k-jet space of mappings between M and N. The jet space can be endowed with a smooth structure (i.e. a structure as a C∞ manifold) which makes it into a topological space. This topology is used to define a topology on C∞(M,N). For a fixed integer $k \ge 0$, consider an open subset $U \subset J^k(M,N)$, and denote by Sk(U) the following: $$S^k(U) = \{ f \in C^\infty(M,N) \mid (j^k f)(M) \subseteq U \}.$$ The sets Sk(U) form a basis for the Whitney Ck-topology on C∞(M,N). Whitney C∞-topology For each choice of $k \ge 0$, the Whitney Ck-topology gives a topology for C∞(M,N); in other words the Whitney Ck-topology tells us which subsets of C∞(M,N) are open sets. Let us denote by Wk the set of open subsets of C∞(M,N) with respect to the Whitney Ck-topology. Then the Whitney C∞-topology is defined to be the topology whose basis is given by W, where: $$W = \bigcup_{k=0}^{\infty} W^k.$$ Dimensionality Notice that C∞(M,N) has infinite dimension, whereas Jk(M,N) has finite dimension. In fact, Jk(M,N) is a real, finite-dimensional manifold. To see this, let $A^k_m$ denote the space of polynomials, with real coefficients, in m variables of order at most k and with zero as the constant term. This is a real vector space with dimension $$\dim A^k_m = \sum_{i=1}^{k} \binom{m+i-1}{i} = \binom{m+k}{k} - 1.$$ Writing $a = \dim A^k_m$, then, by the standard theory of vector spaces, $A^k_m \cong \mathbb{R}^a$, and so $A^k_m$ is a real, finite-dimensional manifold. Next, define $$B^k_{m,n} = \bigoplus_{i=1}^{n} A^k_m, \qquad \dim B^k_{m,n} = n \dim A^k_m.$$ Using b to denote the dimension $\dim B^k_{m,n}$, we see that $B^k_{m,n} \cong \mathbb{R}^b$, and so $B^k_{m,n}$ is a real, finite-dimensional manifold. In fact, if M and N have dimension m and n respectively, then: $$\dim J^k(M,N) = m + n + \dim B^k_{m,n} = m + n\left(1 + \dim A^k_m\right).$$ Topology Given the Whitney C∞-topology, the space C∞(M,N) is a Baire space, i.e. every residual set is dense. References Differential topology Singularity theory
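Since the jet space dimensions above reduce to a counting problem, a short sketch can make them concrete. The following Python snippet is our own illustration, assuming the reconstruction of the dimension formulas given above:

```python
from math import comb

def dim_A(m: int, k: int) -> int:
    """Dimension of A^k_m: polynomials in m variables of degree at most k
    with zero constant term, i.e. C(m+k, k) - 1 monomials."""
    return comb(m + k, k) - 1

def dim_jet_space(m: int, n: int, k: int) -> int:
    """Dimension of J^k(M,N) with dim M = m and dim N = n:
    m (source point) + n (target point) + n * dim A^k_m (derivative data)."""
    return m + n * (1 + dim_A(m, k))

# 1-jets of maps R -> R are triples (x, f(x), f'(x)), so J^1(R,R) is 3-dimensional.
assert dim_jet_space(1, 1, 1) == 3
```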
Whitney topologies
Mathematics
533
25,828,666
https://en.wikipedia.org/wiki/Cro-Magnon
Cro-Magnons or European early modern humans (EEMH) were the first early modern humans (Homo sapiens) to settle in Europe, migrating from western Asia, continuously occupying the continent possibly from as early as 56,800 years ago. They interacted and interbred with the indigenous Neanderthals (H. neanderthalensis) of Europe and Western Asia, who went extinct 40,000 to 35,000 years ago. The first wave of modern humans in Europe (Initial Upper Paleolithic) left no genetic legacy to modern Europeans; however, from 37,000 years ago a second wave succeeded in forming a single founder population, from which all subsequent Cro-Magnons descended and which contributes ancestry to present-day Europeans. Cro-Magnons produced Upper Palaeolithic cultures, the first major one being the Aurignacian, which was succeeded by the Gravettian by 30,000 years ago. The Gravettian split into the Epi-Gravettian in the east and Solutrean in the west, due to major climatic degradation during the Last Glacial Maximum (LGM), peaking 21,000 years ago. As Europe warmed, the Solutrean evolved into the Magdalenian by 20,000 years ago, and these peoples recolonised Europe. The Magdalenian and Epi-Gravettian gave way to Mesolithic cultures as big game animals were dying out and the Last Glacial Period drew to a close. Cro-Magnons were anatomically similar to present-day Europeans, West Asians, and North Africans; however, they were more robust, having larger brains, broader faces, more prominent brow ridges, and bigger teeth, compared to the present-day average. The earliest Cro-Magnon specimens also exhibit some features that are reminiscent of those found in Neanderthals. The first Cro-Magnons would have had darker skin tones than most modern Europeans; natural selection for lighter skin would not have begun until 30,000 years ago. Before the LGM, Cro-Magnons had overall low population density, tall stature similar to post-industrial humans, and expansive long-distance trade routes, and hunted big game animals. Cro-Magnons had much higher populations than the Neanderthals, possibly due to higher fertility rates; life expectancy for both species was typically under 40 years. Following the LGM, population density increased as communities travelled less frequently (though for longer distances), and the need to feed so many more people in tandem with the increasing scarcity of big game caused them to rely more heavily on small or aquatic game (broad spectrum revolution), and to more frequently participate in game drive systems and slaughter whole herds at a time. The Cro-Magnon arsenal included spears, spear-throwers, harpoons, and possibly throwing sticks and Palaeolithic dogs. Cro-Magnons likely commonly constructed temporary huts while moving around, and Gravettian peoples notably made large huts on the East European Plain out of mammoth bones. Cro-Magnons are well renowned for creating a diverse array of artistic works, including cave paintings, Venus figurines, perforated batons, animal figurines, and geometric patterns. They also wore decorative beads, and plant-fibre clothes dyed with various plant-based dyes. For music, they produced bone flutes and whistles, and possibly also bullroarers, rasps, drums, idiophones, and other instruments. They buried their dead, though possibly only people who had achieved or were born into high status.
The name "Cro-Magnon" comes from the five skeletons discovered by French palaeontologist Louis Lartet in 1868 at the Cro-Magnon rock shelter, Les Eyzies, Dordogne, France, after the area was accidentally discovered while a road was constructed for a railway station. Remains of Palaeolithic cultures have been known for centuries, but they were initially interpreted in a creationist model, wherein they represented antediluvian peoples which were wiped out by the Great Flood. Following the conception and popularisation of evolution in the mid-to-late 19th century, Cro-Magnons became the subject of much scientific racism, with early race theories allying with Nordicism and Pan-Germanism. Such historical race concepts were overturned by the mid-20th century. During the first wave feminism movement, the Venus figurines were notably interpreted as evidence of some matriarchal religion, though such claims had mostly died down in academia by the 1970s. Chronology Initial Upper Palaeolithic When early modern humans (Homo sapiens) migrated onto the European continent, they interacted with the indigenous Neanderthals (H. neanderthalensis) which had already inhabited Europe for hundreds of thousands of years. In 2019, Greek palaeoanthropologist Katerina Harvati and colleagues argued that two 210,000 year old skulls from Apidima Cave, Greece, represent modern humans rather than Neanderthalsindicating these populations have an unexpectedly deep historybut this was disputed in 2020 by French paleoanthropologist and colleagues. About 60,000 years ago, marine isotope stage 3 began, characterised by oscillating climatic patterns, causing sudden retreat and recolonisation phases in vegetation, fluctuating between forestland and open steppeland. The earliest indication of Upper Palaeolithic modern human migration into Europe is a series of modern human teeth with Neronian industry stone tools found at Mandrin Cave, Malataverne in France, dated in 2022 to between 56,800 and 51,700 years ago. The Neronian is one of the many industries associated with modern humans classed as transitional between the Middle and Upper Palaeolithic. Beyond this there is the Balkan Bohunician industry beginning 48,000 years ago, likely deriving from the Levantine Emiran industry; the remains found in the cave in Ranis, Germany, up to 47,500 years old; and the next-oldest fossils date to roughly 44,000 years ago in Bulgaria, Italy, and Britain. It is unclear, while migrating westward, if they followed the Danube or went along the Mediterranean coast. Beginning about 45,000 years ago, the Proto-Aurignacian culture, the first widely recognised European Upper Palaeolithic culture, spread out across Europe, probably descending from the Near-Eastern Ahmarian culture. Aurignacian The Aurignacian industry took hold perhaps in south-central Europe sometime after 40,000 years ago, with the onset of Heinrich event 4 (a period of extreme seasonality) and the Campanian Ignimbrite eruption near Naples (which covered eastern Europe in ash). The Aurignacian culture rapidly replaced others across the continent. This wave of modern humans replaced Neanderthals and their Mousterian culture. In the Danube Valley, Aurignacian sites are few and far between, compared to later traditions, until 35,000 years ago. From here, the "Typical Aurignacian" becomes quite prevalent, and extends until 29,000 years ago. Gravettian Gradually replaced by the Gravettian culture, the close of the Aurignacian is poorly defined. 
"Aurignacoid" or "Epi-Aurignacian" tools are identified as late as 18 to 15 thousand years ago. It is also unclear where the Gravettian originated from as it diverges strongly from the Aurignacian (and therefore may not have descended from it). Nonetheless, genetic evidence indicates that not all Aurignacian bloodlines went extinct. Hypotheses for Gravettian genesis include evolution: in central Europe from the Szeletian (which developed from the Bohunician) which existed 41,000 to 37,000 years ago; or from the Ahmarian or similar cultures from the Near East or the Caucasus that existed before 40,000 years ago. It is further debated where the earliest occurrence is identified, with the former hypothesis arguing for Germany about 37,500 years ago, and the latter III rockshelter in Crimea about 38 to 36 thousand years ago. In either case, the appearance of the Gravettian coincides with a significant temperature drop. Also around 37,000 years ago, the founder population of all later early modern humans existed, and Europe would remain in genetic isolation from the rest of the world for the next 23,000 years. Last Glacial Maximum Around 29,000 years ago, marine isotope stage 2 began and cooling intensified. This peaked about 21,000 years ago during the Last Glacial Maximum (LGM) when Scandinavia, the Baltic region, and the British Isles were covered in glaciers, and winter sea ice reached the French seaboard. The Alps were also covered in glaciers, and most of Europe was polar desert, with mammoth steppe and forest steppe dominating the Mediterranean coast. Consequently, large swathes of Europe were uninhabitable, and two distinct cultures emerged with unique technologies to adapt to the new environment: the Solutrean in southwestern Europe, which invented brand new technologies, and the Epi-Gravettian from Italy to the East European Plain, which adapted the previous Gravettian technologies. Solutrean peoples inhabited the permafrost zone, whereas Epi-Gravettian peoples appear to have stuck to less harsh, seasonally frozen areas. Relatively few sites are known through this time. The glaciers began retreating about 20,000 years ago, and the Solutrean evolved into the Magdalenian, which would recolonise Western and Central Europe over the next couple thousand years. Starting during the Older Dryas roughly 14,000 years ago, Final Magdalenian traditions appear, namely the Azilian, Hamburgian, and Creswellian. During the Bølling–Allerød warming, Near Eastern genes began showing up in the indigenous Europeans, indicating the end of Europe's genetic isolation. Possibly due to the continual reduction of European big game, the Magdalenian and Epi-Gravettian were completely replaced by the Mesolithic by the beginning of the Holocene. Mesolithic Europe was completely re-peopled during the Holocene climatic optimum from 9 to 5 thousand years ago. Mesolithic Western Hunter-Gatherers (WHG) contributed significantly to the present-day European genome, alongside Ancient North Eurasians (ANE) which descended from the Siberian Mal'ta–Buret' culture and Caucasus Hunter-Gatherers (CHG). Most present-day Europeans have a 40–60% WHG ratio, and the 8,000 year old Mesolithic Loschbour man seems to have had a similar genetic makeup. Near Eastern Neolithic farmers which split from the European hunter-gatherers about 40,000 years ago started to spread out across Europe by 8,000 years ago, ushering in the Neolithic with Early European Farmers (EEF). 
EEF contribute about 30% of ancestry to present-day Baltic populations, and up to 90% to present-day Mediterranean populations. The latter may have inherited WHG ancestry via EEF introgression. The Eastern Hunter-Gatherers (EHG) population identified around the steppes of the Urals also dispersed, and the Scandinavian Hunter-Gatherers appear to be a mix of WHG and EHG. Around 4,500 years ago, the immigration of the Yamnaya and Corded Ware cultures from the eastern steppes brought the Bronze Age, the Proto-Indo-European language, and more or less the present-day genetic makeup of Europeans. Cro-Magnon rock shelter In 1863, a railway was constructed leading to Les Eyzies, a hamlet in the commune of Les Eyzies-de-Tayac-Sireuil, Dordogne, southwestern France. In 1868, M. François Berthoumeyrou, a contractor, was commissioned to make a road along the railway connecting the new Les Eyzies train station. In March, the road workers dug up a rock shelter on the left bank of the Vézère River. They found flint stone tools, animal bones, and human remains. Berthoumeyrou ordered his men to halt the work and informed the government officials of the discovery. He also informed a local geologist, Abel Laganne, who recovered ornaments, more flints, and two human skulls. As assigned by the French Minister of Public Instruction Victor Duruy to verify the finds, Louis Lartet made systematic excavation and discovered additional human remains, animal bones, stone tools, and ornaments. He presented the discovery before the meeting of the Society of Anthropology of Paris on 21 May, the proceedings published in its journal Bulletins et Mémoires de la Société d'Anthropologie de Paris. He described the site as a cemetery and identified the humans as cave dwellers. The site is called Abri de Cro-Magnon (Cro-Magnon rock shelter), now recognised as a UNESCO World Heritage Site. Abri means "rock shelter" in French, cro means "hole" in Occitan, and Magnon was the landowner. The original human remains were brought to and preserved at the National Museum of Natural History in Paris. The number of individuals at the Cro-Magnon rock shelter has eluded scientists for over a century. The original workers reported that they found 15 skeletons. In his report, Lartet identified five individuals based on the skulls, three of them males (designated Cro-Magnon 1, 3 and 4), one female (Cro-Magnon 2) and an infant (Cro-Magnon 5). In 1868, anatomist Paul Broca noted five adults and several infants. Broca introduced the specimen names and called Cro-Magnon 1 Le Vieillard, from which the name "Old Man" became popularly used. After complete analyses of individual bones by the early 2000s, it became generally agreed that the rock shelter contained 140 human remains from at least eight individuals: four adults and four infants. Classification Fossils and artifacts from the Palaeolithic had actually been known for decades, but these were interpreted in a creationist model (as the concept of evolution had not yet been conceived). For example, the Aurignacian Red Lady of Paviland (actually a young man) from South Wales was described by geologist Reverend William Buckland in 1822 as a citizen of Roman Britain. Subsequent authors contended the skeleton was either evidence of antediluvian (before the Great Flood) people in Britain, or was swept far from the inhabited lands farther south by the powerful floodwaters.
Buckland assumed the specimen was a woman because the skeleton was adorned with jewellery (shells, ivory rods and rings, and a wolf-bone skewer), and Buckland also stated (possibly in jest) the jewellery was evidence of witchcraft. Around this time, the uniformitarianism movement was gaining traction, headed principally by Charles Lyell, arguing that fossil materials well predated the biblical chronology. Following Charles Darwin's 1859 On the Origin of Species, racial anthropologists and raciologists began splitting off putative species and subspecies of present-day humans based on unreliable and pseudoscientific metrics gathered from anthropometry, physiognomy, and phrenology continuing into the 20th century. This was a continuation of Carl Linnaeus' 1735 Systema Naturae, where he invented the modern classification system, in doing so classifying humans as Homo sapiens with several putative subspecies classifications for different races based on racist behavioural definitions (in accord with historical race concepts): "H. s. europaeus" (European descent, governed by laws), "H. s. afer" (African descent, impulse), "H. s. asiaticus" (Asian descent, opinions), and "H. s. americanus" (Native American descent, customs). The racial classification system was quickly extended to fossil specimens, including both Cro-Magnons and the Neanderthals, after the true extent of their antiquity was recognised. In 1869, Lartet had proposed the subspecies classification "H. s. fossilis" for the Cro-Magnon remains. Other supposed fossil human species included (among many others): "H. pre-aethiopicus" for a skull from Dordogne which had "Ethiopic affinities"; "H. predmosti" or "H. predmostensis" for a series of skulls from Brno, Czech Republic, purportedly transitional between Neanderthals and Cro-Magnons; H. mentonensis for a skull from Menton, France; "H. grimaldensis" for Grimaldi man and other skeletons near Grimaldi, Monaco; and "H. aurignacensis" or "H. a. hauseri" for the Combe-Capelle skull. These fossil races, alongside Ernst Haeckel's idea of there being backwards races which require further evolution (social Darwinism), popularised the view in European thought that the civilised white man had descended from primitive, low-browed ape ancestors through a series of savage races. Prominent brow-ridges were classified as an ape-like trait; consequently, Neanderthals (as well as Aboriginal Australians) were considered a lowly race. These European fossils were considered to have been the ancestors specifically of living European races. One of the earliest attempts to classify Cro-Magnons was made by racial anthropologists Joseph Deniker and William Z. Ripley in 1900, who characterised them as tall and intelligent proto-Aryans, superior to other races, who descended from Scandinavia and Germany. Further race theories revolved around progressively lighter, blonder, and superior races evolving in Central Europe and spreading out in waves to replace their darker ancestors, culminating in the "Nordic race". These aligned well with Nordicism and Pan-Germanism (that is, Aryan supremacy), which gained popularity just before World War I, and were notably used by the Nazis to justify the conquest of Europe and the supremacy of the German people in World War II.
Stature was among the characteristics used to distinguish these sub-races, so taller Cro-Magnons such as specimens from the French Cro-Magnon, Paviland, and Grimaldi sites were classified as ancestral to the "Nordic race", and smaller ones such as Combe-Capelle and Chancelade man (both also from France) were considered the forerunners of either the "Mediterranean race" or "Eskimoids". The Venus figurines (sculptures of pregnant women with exaggerated breasts and thighs) were used as evidence of the presence of the "Negroid race" in Palaeolithic Europe, because they were interpreted as having been based on real women with steatopygia (a condition which causes thicker thighs, common in the women of the San people of Southern Africa) and the hairdos of some are supposedly similar to some seen in Ancient Egypt. By the 1940s, the positivism movement, which fought to remove political and cultural bias from science and had begun about a century earlier, had gained popular support in European anthropology. Due to this movement and raciology's associations with Nazism, raciology fell out of practice. Demographics The beginning of the Upper Palaeolithic is thought to have been characterised by a major population increase in Europe, with the human population of western Europe possibly increasing by a factor of 10 in the Neanderthal/modern human transition. The archaeological record indicates that the overwhelming majority of Palaeolithic people (both Neanderthals and modern humans) died before reaching the age of 40, with few elderly individuals recorded. It is possible the population boom was caused by a significant increase in fertility rates. A 2005 study estimated the population of Upper Palaeolithic Europe by calculating the total geographic area which was inhabited based on the archaeological record; averaged the population density of Chipewyan, Hän, Hill people, and Naskapi Native Americans which live in cold climates and applied this to Cro-Magnons; and assumed that population density continually increased with time, at a rate calculated from the change in the total number of sites per time period. The study calculated that: from 40 to 30 thousand years ago the population was roughly 1,700–28,400 (average 4,400); from 30 to 22 thousand years ago roughly 1,900–30,600 (average 4,800); from 22 to 16.5 thousand years ago roughly 2,300–37,700 (average 5,900); and 16.5–11.5 thousand years ago roughly 11,300–72,600 (average 28,700). Following the LGM, Cro-Magnons are thought to have been much less mobile and featured a higher population density, indicated by seemingly shorter trade routes as well as symptoms of nutritional stress. Biology Physical attributes Cro-Magnons are physically similar to present-day humans, with a globular braincase, completely flat face, gracile brow ridge, and defined chin. However, the bones of Cro-Magnons are somewhat thicker and more robust. The earliest Cro-Magnons often display features that are reminiscent of those seen in Neanderthals. Aurignacians in particular featured a higher proportion of traits somewhat reminiscent of Neanderthals, such as (though not limited to) a slightly flattened skullcap and consequent occipital bun protruding from the back of the skull (the latter could be quite defined). Their frequency significantly diminished in Gravettians, and in his 2007 review of the relevant morphology, palaeoanthropologist Erik Trinkaus concluded these were remnants of Neanderthal introgression which were eventually bred out of the gene pool.
Average brain volume has been estimated for 28 modern human specimens from 190 to 25 thousand years ago and for 13 Cro-Magnons; the averages for both groups were notably larger than that of present-day humans. This is because the Cro-Magnon brain, though within the variation for present-day humans, exhibits a longer frontal lobe and a taller occipital lobe on average. The parietal lobes, however, are shorter in Cro-Magnons. It is unclear if this could equate to any functional differences between present-day and early modern humans. In early Upper Palaeolithic western Europe (before the LGM), the average statures estimated for 20 men and 10 women are similar to those of post-industrial modern northern Europeans. In contrast, in a sample of 21 and 15 late Upper Palaeolithic western European men and women (after the LGM), the averages were lower, similar to pre-industrial modern humans. It is unclear why earlier Cro-Magnons were taller, especially considering that cold-climate creatures are short-limbed and thus short-statured to better retain body heat (Allen's rule). This has variously been explained as: retention of a hypothetically tall ancestral condition; higher-quality diet and nutrition due to the hunting of megafauna which later became uncommon or extinct; functional adaptation to increase stride length and movement efficiency while running during a hunt; increasing territorialism among later Cro-Magnons reducing gene flow between communities and increasing inbreeding rate; or statistical bias due to small sample size or because taller people were more likely to achieve higher status in a group before the LGM and thus were more likely to be buried and preserved. Prior to genetic analysis, it was generally assumed that Cro-Magnons, like present-day Europeans, were light skinned as an adaptation to better generate vitamin D from the less luminous sun farther north. However, of the three predominant genes responsible for lighter skin in present-day Europeans (KITLG, SLC24A5, and SLC45A2), the latter two, as well as the TYRP1 gene associated with lighter hair and eye colour, experienced positive selection as late as 19 to 11 thousand years ago during the Mesolithic transition. Such a late timing was potentially caused by overall low population and/or low cross-continental movement required for such an adaptive shift in skin, hair, and eye colouration. However, KITLG experienced positive selection in Cro-Magnons (as well as East Asians) beginning approximately 30,000 years ago. Genetics While anatomically modern humans have been present outside of Africa during some isolated time intervals potentially as early as 250,000 years ago, present-day non-Africans descend from the out of Africa expansion which occurred around 65–55 thousand years ago. This movement was an offshoot of the rapid expansion within East Africa associated with mtDNA haplogroup L3. Mitochondrial DNA analysis places Cro-Magnons as the sister group to Upper Palaeolithic East Asian groups, divergence occurring roughly 50,000 years ago.
Initial genomic studies on the earliest Cro-Magnons in 2014, namely on the 37,000-year-old Kostenki-14 individual, identified 3 major lineages which are also present in present-day Europeans: one related to all later Cro-Magnons; a "Basal Eurasian" lineage which split from the common ancestor of present-day Europeans and East Asians before they split from each other; and another related to a 24,000-year-old individual from the Siberian Mal'ta–Buret' culture (near Lake Baikal). Contrary to this, Fu et al. (2016), evaluating much earlier European specimens, including Ust'-Ishim and Oase-1 from 45,000 years ago, found no evidence of a "Basal Eurasian" component to the genome, nor did they find evidence of Mal'ta–Buret' introgression when looking at a wider range of Cro-Magnons from the entire Upper Palaeolithic. The study instead concluded that such a genetic makeup in present-day Europeans stemmed from Near Eastern and Siberian introgression occurring predominantly in the Neolithic and the Bronze Age (though beginning by 14,000 years ago), but all Cro-Magnon specimens including and following Kostenki-14 contributed to the present-day European genome and were more closely related to present-day Europeans than East Asians. Earlier Cro-Magnons (10 tested in total), on the other hand, did not seem to be ancestral to any present-day population, nor did they form any cohesive group in and of themselves, each representing either a completely distinct genetic lineage, an admixture between major lineages, or a highly divergent ancestry. Because of these findings, the study also concluded that, beginning roughly 37,000 years ago, Cro-Magnons descended from a single founder population and were reproductively isolated from the rest of the world. The study reported that an Aurignacian individual from Grottes de Goyet, Belgium, has more genetic affinities to the Magdalenian inhabitants of Cueva de El Mirón, Spain, than to more or less contemporaneous eastern European Gravettians. Haplogroups identified in Cro-Magnons include the patrilineal (father to son) Y-DNA haplogroups C1 (the earliest), IJ (the latest), and K2a; and the matrilineal (mother to child) mt-DNA haplogroups N, R, and U. Y-haplogroup IJ derived from Southwest Asia. Haplogroup I emerged about 35 to 30 thousand years ago, either in Europe or West Asia. Mt-haplogroup U5 arose in Europe just prior to the LGM, between 35 and 25 thousand years ago. The 14,000 year old Villabruna 1 skeleton from Ripari Villabruna, Italy, is the oldest identified bearer of Y-haplogroup R1b (R1b1a-L754* (xL389,V88)) found in Europe, likely brought in from eastern introgression. The Azilian "Bichon man" skeleton from the Swiss Jura was found to be associated with the WHG lineage. He was a bearer of Y-DNA haplogroup I2a and mtDNA haplogroup U5b1h. Genetic evidence suggests early modern humans interbred with Neanderthals. Neanderthal genes in the present-day genome are estimated to have entered it about 65 to 47 thousand years ago, most likely in West Asia soon after modern humans left Africa. In 2015, the 40,000 year old modern human Oase 1 was found to have had 6–9% (point estimate 7.3%) Neanderthal DNA, indicating a Neanderthal ancestor up to four to six generations earlier, but this hybrid Romanian population does not appear to have made a substantial contribution to the genomes of later Europeans. Therefore, it is possible that interbreeding was common between Neanderthals and Cro-Magnons which did not contribute to the present-day genome.
The percentage of Neanderthal genes gradually decreased with time, which could indicate they were maladaptive and were selected out of the gene pool. Vallini et al. (2022) found that Europe was populated by three distinct lineages. The earliest inhabitants (represented by Zlaty Kun ~50kya) split from the common Eurasian lineage before the divergence of Western and Eastern Eurasians, but after the divergence of the hypothetical Basal-Eurasians. This earliest sample did not cluster with any modern human population, including Africans, and died out without leaving ancestry to modern peoples. The second wave (represented by Bacho Kiro ~45kya) appeared to be more closely related to modern East Asians and Australasians than to Europeans, suggesting that this lineage split initially after the formation of Eastern Eurasians, and migrated instead northwestwards into Europe. This lineage similarly did not contribute ancestry to later populations, and was replaced by a West-Eurasian lineage (~40kya), which expanded into Europe and Siberia. Proper Aurignacian people (40-26kya) were still part of a large Western Eurasian "meta-population", related to Paleolithic Siberian and Western Asian populations. Earlier samples (such as the Bacho Kiro sample) were relatively closer to East Asians and Australasians, although distinct from them. In a genetic study published in Nature in March 2023, the authors found that the ancestors of the Western Hunter-Gatherers (WHGs) were populations associated with the Epigravettian culture, which largely replaced populations associated with the Magdalenian culture about 14,000 years ago. The Magdalenian-associated individuals descended from populations associated with the western Gravettian, Solutrean and Aurignacian cultures. Culture There is a notable technological complexification coinciding with the replacement of Neanderthals with Cro-Magnons in the archaeological record, and so the terms "Middle Palaeolithic" and "Upper Palaeolithic" were created to distinguish between these two time periods. Largely based on western European archaeology, the transition was dubbed the "Upper Palaeolithic Revolution" (extended to be a worldwide phenomenon), and the idea of "behavioural modernity" became associated with this event and early modern cultures. It is largely agreed that the Upper Palaeolithic seems to feature a higher rate of technological and cultural evolution than the Middle Palaeolithic, but it is debated if behavioural modernity was truly an abrupt development or was a slow progression initiating far earlier than the Upper Paleolithic, especially when considering the non-European archaeological record. Practices considered modern include: the production of microliths, the common use of bone and antler, the common use of grinding and pounding tools, high quality evidence of body decoration and figurine production, long-distance trade networks, and improved hunting technology. In regard to art, the Magdalenian produced some of the most intricate Palaeolithic pieces, and they even elaborately decorated normal, everyday objects. Hunting and gathering Historically, ethnographic studies on hunter-gatherer subsistence strategies have long placed emphasis on sexual division of labour and most especially the hunting of big game by men. This culminated in the 1966 book Man the Hunter, which focuses almost entirely on the importance of male contributions of food to the group.
Published during the second-wave feminism movement, the book was quickly met with backlash from many female anthropologists. Among these was Australian archaeologist Betty Meehan in her 1974 article Woman the Gatherer, who argued that women play a vital role in these communities by gathering more reliable food plants and small game, as big game hunting has a low success rate. The concept of "Woman the Gatherer" has since gained significant support. Nonetheless, Palaeolithic peoples are typically characterised as having had a meat-heavy diet, with an emphasis on big prey items. The LGM extirpated most European megafauna (Quaternary extinction event), and correspondingly post-LGM peoples tend to have a higher rate of nutrient-deficiency-related ailments, including a reduction in average height. Probably due to the contraction of habitable territory, these bands were subsisting on a much broader range of plants, smaller animals, and aquatic resources (broad spectrum revolution). Prey items It has typically been assumed that Cro-Magnons closely studied prey habits in order to maximise return depending on the season. For example, large mammals (including red deer, horses, and ibex) congregate seasonally, and reindeer were possibly seasonally plagued by insects, rendering their fur sometimes unsuitable for hideworking. In southwestern France in particular, Cro-Magnons depended heavily upon reindeer, and so it is hypothesised that these communities followed the herds, with occupation of the Perigord and the Pyrenees only occurring in the summer. Epi-Gravettian communities, especially, generally focused on hunting one species of large game, most commonly horse or bison. There is much evidence that Cro-Magnons, especially in western Europe following the LGM, corralled large prey animals into natural confined spaces (such as against a cliff wall, a cul-de-sac, or a water body) in order to efficiently slaughter whole herds of animals (game drive system). They seem to have scheduled mass kills to coincide with migration patterns, in particular for red deer, horses, reindeer, bison, aurochs, and ibex, and occasionally woolly mammoths. Game drive systems became especially popular following the LGM, possibly an extension of increasing food return. There are also multiple examples of consumption of seasonally abundant fish, a practice becoming more prevalent in the mid-Upper-Palaeolithic. Nonetheless, Magdalenian peoples appear to have had a greater dependence on small animals, aquatic resources, and plants than predecessors, probably due to the relative scarcity of European big game following the LGM (Quaternary extinction event). It is possible that human activity, in addition to the rapid retreat of favourable steppeland, inhibited recolonisation of most of Europe by megafauna following the LGM (such as mammoths, woolly rhinoceroses, Irish elk, and cave lions), in part contributing to their extinction which occurred by the beginning of or well into the Holocene depending on the species. Plant items Most archaeobotanical studies on Pleistocene plant gathering and processing techniques focus on the end of the Paleolithic as precursors to agriculture in the Neolithic.
While isotopic studies indicate nearly all nutritional requirements of Palaeolithic populations may have been mostly satisfied by meat, similar to Inuit cuisine, the issue of offsetting protein poisoning (nitrogen overloading) by eating fatty foods (blubber most especially in the Inuit diet) becomes problematic in more temperate climates with leaner prey. The isotopic scores for Palaeolithic peoples are comparable to those of Inuit, whose diet comprises 1–4% plant components, but also to the Onge from the tropical Andaman Islands, Paraguayan Aché, Arnhem Land Aboriginals, and Venezuelan Hiwi, whose diets comprise up to 25% plant components. Thus, the importance of plants may have varied greatly depending on local climatic conditions. The Palaeolithic archaeobotanical record outside Europe (especially in the Middle East) shows these peoples were capable of processing a massive range of plant resources; at the 20,000 year old Israeli Ohalo II site, as many as 150 types of seeds, fruits, nuts, and starches. There are several European Mediterranean cave sites dating to near the end of the Palaeolithic which suggest the inhabitants were harvesting acorn, almond, pistacia, hawthorn, wild pear, blackthorn, rosehip, sorbus, and grape. Multiple German sites bear evidence of wild cherry, blackberry, dewberry, and raspberry consumption. The Palaeolithic archaeobotanical record becomes sparser farther north, but water caltrop and water lily tubers were consumed at least in the northern European Mesolithic. It is unclear to what extent they would process or pretreat otherwise inedible plants which require multiple steps (such as a combination of fermenting, grinding, boiling, etc.). Weaponry For weapons, Cro-Magnons crafted spearpoints using predominantly bone and antler, possibly because these materials were readily abundant. Compared to stone, these materials absorb compression, making them fairly shatterproof. These were then hafted onto a shaft to be used as javelins. It is possible that Aurignacian craftsmen further hafted bone barbs onto the spearheads, but the earliest firm evidence of such technology is recorded 23,500 years ago, and it does not become more common until the Mesolithic. Aurignacian craftsmen produced lozenge-shaped (diamond-like) spearheads. By 30,000 years ago, spearheads were manufactured with a more rounded-off base, and by 28,000 years ago spindle-shaped heads were introduced. During the Gravettian, spearheads with a bevelled base were being produced. By the beginning of the LGM, the spear-thrower was invented in Europe, which can increase the force and accuracy of the projectile. A possible boomerang made of mammoth tusk was identified in Poland (though it may have been unable to return to the thrower), and dating to 23,000 years ago, it would be the oldest known boomerang. Stone spearheads with leaf- and shouldered-points become more prevalent in the Solutrean. Both large and small spearheads were produced in great quantity, and the smaller ones may have been attached to projectile darts. Archery was possibly invented in the Solutrean, though less ambiguous bow technology is first reported in the Mesolithic. Bone technology was revitalised in the Magdalenian, and long-range technology as well as harpoons become much more prevalent. Some harpoon fragments are speculated to have been leisters or tridents, and true harpoons are commonly found along seasonal salmon migration routes.
Society Social system As opposed to the patriarchy prominent in historical societies, the idea of a prehistoric predominance of either matriarchy or matrifocal families (centred on motherhood) was first proposed in 1861 by legal scholar Johann Jakob Bachofen. The earliest models of this held that monogamy was not widely practiced in ancient times, and thus the paternal line was more difficult to keep track of than the maternal, resulting in a matrilineal (and matriarchal) society. Matriarchs were then conquered by patriarchs at the dawn of civilisation. The switch from matriarchy to patriarchy and the hypothetical adoption of monogamy was seen as a leap forward. However, when the first Palaeolithic representations of humans were discovered, the so-called Venus figurines, which typically feature pronounced breasts, buttocks, and vulvas (areas generally sexualised in present-day Western Culture), were initially interpreted as pornographic in nature. The first Venus discovered was named the "Vénus impudique" ("immodest Venus") by the discoverer Paul Hurault, 8th Marquis de Vibraye, because it lacked clothes and had a prominent vulva. The name "Venus", after the Roman goddess of beauty, in itself implies an erotic function. Such a pattern in the representation of the human form led to suggestions that human forms were generally pornography for men, meaning men were primarily responsible for artwork and craftsmanship in the Palaeolithic whereas women were tasked with child rearing and various domestic works. This would equate to a patriarchal social system. The Palaeolithic matriarchy model was adapted by prominent communist Friedrich Engels, who instead argued that women were robbed of power by men due to economic changes which could only be undone with the adoption of communism (Marxist feminism). The former sentiment was adopted by the first-wave feminism movement, which attacked the patriarchy by making Darwinist arguments for a supposed natural egalitarian or matrifocal state of human society rather than a patriarchal one, as well as interpreting the Venuses as evidence of mother goddess worship as part of some matriarchal religion. Consequently, by the mid-20th century, the Venuses were primarily interpreted as evidence of some Palaeolithic fertility cult. Such claims died down in the 1970s as archaeologists moved away from the highly speculative models produced by the previous generation. Through the second-wave feminism movement, the prehistoric matriarchal religion hypothesis was primarily propelled by Lithuanian-American archaeologist Marija Gimbutas. Her interpretations of the Palaeolithic were notably involved in the Goddess movement. Equally ardent arguments against the matriarchy hypothesis have also been prominent, such as American religious scholar Cynthia Eller's 2000 The Myth of Matriarchal Prehistory. In the archaeological record, depictions of women are markedly more common than those of men. In contrast to the commonplace Venuses in the Gravettian, Gravettian depictions of men are rare and contested, the only reliable one being a fragmented ivory figurine from a grave at a Pavlovian site in Brno, Czech Republic (it is also the only statuette found in a Palaeolithic grave). 2-D Magdalenian engravings from 15 to 11 thousand years ago do depict males, indicated by an erect penis and facial hair, though profiles of women with an exaggerated buttock are much more common.
There are fewer than 100 depictions of males in the Cro-Magnon archaeological record (of these, about a third are depicted with erections). On the other hand, most individuals which received a burial (which may have been related to social status) were men. Anatomically, the robustness of limbs (an indicator of strength) was consistently not appreciably different between Cro-Magnon men and women. Such low levels of sexual dimorphism through the Upper Pleistocene could potentially mean that sexual division of labour, which characterises historic societies (both agricultural and hunter-gatherer), only became commonplace in the Holocene. Trading The Upper Palaeolithic is characterised by evidence of expansive trade routes and the great distances at which communities could maintain interactions. The early Upper Palaeolithic is especially known for highly mobile lifestyles, with Gravettian groups (at least those analysed in Italy, Moravia, and Ukraine) often sourcing some raw materials from great distances. However, it is debated if this represents sample bias, and if western and northern Europe were less mobile. Some cultural practices such as creating Venus figurines or specific burial rituals during the Gravettian stretched across the continent. Genetic evidence suggests that, despite strong evidence of cultural transmission, Gravettian Europeans did not introgress into Siberians, meaning there was a movement of ideas but not people between Europe and Siberia. At the 30,000 year old Romanian Poiana Cireşului site, perforated shells of the Homalopoma sanguineum sea snail were recovered, which is significant as this species lives no nearer than the Mediterranean. Such interlinkage may have been an important survival tool given the steadily deteriorating climate. Given low estimated population density, this may have required a rather complex, cross-continental social organisation system. By and following the LGM, population densities are thought to have been much higher with the marked decrease of habitable lands, resulting in more regional economies. Decreased land availability could have increased travel distance, as habitable refugia may have been few and far between, and increasing population density within these few refugia would have made long-distance travel less economic. This trend continued into the Mesolithic with the adoption of sedentism. Nonetheless, there is some evidence of long-distance Magdalenian trade routes. For example, at Lascaux, a painting of a bull had remnants of the manganese mineral hausmannite, which can only be manufactured at very high temperatures, something probably impossible for Cro-Magnons; this means they likely encountered natural hausmannite, which is known to be found a considerable distance away in the Pyrenees. Unless there was a hausmannite source much closer to Lascaux which has since been depleted, this could mean that there was a local economy based on manganese ores. Also, at Ekain, Basque Country, the inhabitants were using the locally rare manganese mineral groutite in their paintings, which they possibly mined out of the cave itself. Based on the distribution of Mediterranean and Atlantic seashell jewellery even well inland, there may have been a network during the Late Glacial Interstadial (14 to 12 thousand years ago) along the rivers Rhine and Rhône in France, Germany, and Switzerland.
Housing Cro-Magnon cave sites quite often feature distinct spatial organisation, with certain areas specifically designated for specific activities, such as hearth areas, kitchens, butchering grounds, sleeping grounds, and trash piles. It is difficult to tell if all material from a site was deposited at about the same time, or if the site was used multiple times. Cro-Magnons are thought to have been quite mobile, indicated by the great lengths of trade routes, and such a lifestyle was likely supported by the construction of temporary shelters in open environments, such as huts. Evidence of huts is typically associated with a hearth. Magdalenian peoples, especially, are thought to have been highly migratory, following herds while repopulating Europe, and several cave and open-air sites indicate the area was abandoned and revisited regularly. The 19,000 year old Peyre Blanque site, France, and at least the area around it may have been revisited for thousands of years. In the Magdalenian, stone-lined rectangular areas were typically interpreted as having been the foundations or flooring of huts. At Magdalenian Pincevent, France, small, circular dwellings were speculated to have existed based on the spacing of stone tools and bones; these sometimes featured an indoor hearth, work area, or sleeping space (but not all at the same time). A 23,000 year old hut from the Israeli Ohalo II was identified as having used grasses as flooring or possibly bedding, but it is unclear if Cro-Magnons also lined their huts with grass or instead used animal pelts. A 13,800 year old slab from Molí del Salt, Spain, has 7 dome-shaped figures engraved onto it, which are postulated to represent temporary dome-shaped huts. Over 70 dwellings constructed by Cro-Magnons out of mammoth bones have been identified, primarily from the Russian Plain, possibly semi-permanent hunting camps. They seem to have built tipis and yarangas. These were typically constructed following the LGM after 22,000 years ago by Epi-Gravettian peoples; the earliest hut identified comes from the Molodova I site, Ukraine, which was dated to 44,000 years ago (making it possible it was built by Neanderthals). These huts were typically circular or oval in plan, and varied considerably in size. One of the largest, a 25,000 year old hut identified in Kostenki, Russia, was constructed out of 64 mammoth skulls, but given the little evidence of occupation, this is postulated to have been used for food storage rather than as a living space. Some huts have burned bones, which have typically been interpreted as bones used as fuel for fireplaces due to the scarcity of firewood, and/or as disposal of waste. A few huts, however, have evidence of wood burning, or mixed wood/bone burning. Mammoth hut foundations were generally made by pushing a great quantity of mammoth skulls into the ground (most commonly, though not always, with the tusks facing up to possibly be used as further supports), and the walls by setting shoulder blades, pelvises, long bones, jaws, and spines vertically into the ground. Long bones were often used as poles, commonly placed on the end of another long bone or in the cavity where a tusk used to be. Foundations may have extended some depth underground. Generally, multiple huts were built in a locality, placed some distance apart depending on location. Tusks may have been used to make entrances, skins pulled over for roofing, and the interior sealed up by loess dug out of pits.
Some architectural decisions seem to have been purely for aesthetics, best seen in the 4 Epi-Gravettian huts from Mezhyrich and Mezine, Ukraine, where jaws were stacked to create a chevron or zigzag pattern in 2 huts, and long bones were stacked to create horizontal or vertical lines in respectively 1 and 2 huts. The chevron was a commonly used symbol on the Russian Plain, painted or engraved on bones, tools, figurines, and mammoth skulls. Dogs At some point in time, Cro-Magnons domesticated the dog, probably as a result of a symbiotic hunting relationship. DNA evidence suggests that present-day dogs split from wolves around the beginning of the LGM. However, potential Palaeolithic dogs have been found preceding this, namely the 36,000-year-old Goyet dog from Belgium and the 33,000-year-old Altai dog from Siberia, which could indicate there were multiple attempts at domesticating European wolves. These "dogs" had a wide size range across Europe, from taller animals in eastern Europe to ones under 30–45 cm (1 ft–1 ft 6 in) in height in central and western Europe. These "dogs" are identified by having a shorter snout and skull, and wider palate and braincase than contemporary wolves. Nonetheless, an Aurignacian origin for domestication is controversial. At the 27 to 24 thousand year old Předmostí site, Czech Republic, 3 "dogs" were identified with their skulls perforated (probably to extract the brain), and 1 had a mammoth bone in its mouth. The discoverers interpreted this as a burial ritual. The 14,500-year-old Bonn-Oberkassel dog from Germany was found buried alongside a 40-year-old man and a 25-year-old woman, as well as traces of red hematite, and is genetically placed as an ancestor to present-day dogs. It was diagnosed with canine distemper virus and probably died between 19 and 23 weeks of age. It would have required extensive human care to survive without being able to contribute anything, suggesting that, at this point, humans and dogs were connected by emotional or symbolic ties rather than purely materialistic personal gain. The exact utility of these proto-dogs is unclear, but they may have played a vital role in hunting, as well as domestic services such as transporting items or guarding camp or carcasses. Art When examples of Upper Palaeolithic art were first discovered in the 19th century in the form of engraved objects, they were assumed to have been "art for art's sake" as Palaeolithic peoples were widely conceived as having been uncultured savages. This model was primarily championed by French archaeologist Louis Laurent Gabriel de Mortillet. Then, detailed paintings found deep within caves were discovered, the first being Cueva de Altamira, Spain, in 1879. The "art for art's sake" model came apart by the turn of the century as more examples of cave art were found in hard-to-reach places in western Europe such as Combarelles and Font-de-Gaume, making the idea of it being simply a leisure activity increasingly untenable. Cave art Cro-Magnons are well known for having painted or engraved geometric designs, hand stencils, plants, animals, and seemingly human/animal hybrid creatures on cave walls deep inside caves. Typically the same species are represented in caves which have such art, but the total number of species depicted is quite large, and namely includes creatures such as mammoths, bison, lions, bears, and ibex. Nonetheless, some caves were dominated by certain forms, such as Grotte de Niaux where over half of the animals are bison.
Images could be drawn on top of one another. Landscapes were never depicted, with the exception of a supposed depiction of a volcanic eruption at Chauvet-Pont d'Arc, France, dating to 36,000 years ago. Cave art is found in dark cave recesses, and the artists either lit a fire on the cave floor or used portable stone lamps to see. Drawing materials include black charcoal and red and yellow ochre crayons, but these, along with a variety of other minerals, could also be ground into powder and mixed with water to create paint. Large, flat rocks may have been used as palettes; brushes may have included reeds, bristles, and twigs; and a blowgun may have been used to spray paint over less accessible areas. Hand stencils could be made either by holding the hand to the wall and spitting paint over it (leaving a negative image) or by applying paint to the hand and then pressing it to the wall. Some hand stencils are missing fingers, but it is unclear if the artist was actually missing the finger or simply excluded it from the stencil. It has generally been assumed that the larger prints were left by men and the smaller ones by boys, but excluding women entirely may be improbable. Though many hypotheses have been proposed for the symbolism of cave art, it is still debated why these works were created in the first place. One of the first hypotheses regarding their symbolism was forwarded by French religious historian Salomon Reinach, who supposed that, because only animals were depicted on cave walls, the images represented totem veneration, in which a group or a group member identifies with a certain animal associated with certain powers, and honours or respects this animal in some way, such as by not hunting it. If this were the case, then Cro-Magnon communities within a region would have subdivided themselves into, for example, a "horse clan", a "bison clan", a "lion clan", and so forth. This was soon contested, as some caves contain depictions of animals wounded by projectiles, and generally multiple species are represented. In 1903, Reinach proposed that the cave art represented sympathetic magic (between the painting and the painting's subject): by drawing an animal doing some kind of action, the artist believed they were exerting that same action onto the animal. That is, by being the master of the image, they could master the animal itself. The hunting magic model, and with it the idea that art was magical and utilitarian in Cro-Magnon society, gained much popularity in the following decades. In this model, herbivorous prey items were depicted as having been wounded prior to a hunt in order to cast a spell over them; some animals were incompletely depicted to enfeeble them; geometric designs were traps; and human/animal hybrids were sorcerers dressed as animals to gain their power, or were gods ruling over the animals. Many animals were depicted as completely healthy and intact, and sometimes pregnant, which this model interprets as fertility magic to promote reproduction; if the animal was a carnivore, however, then this model says the depiction served to destroy the animal. By the mid-20th century, this model was being contested because few depictions of wounded animals exist; the assemblages of consumed animal bones in decorated caves often did not match the animals depicted in terms of abundance; and the magic model does not explain hand stencils. 
From the 1960s, beginning with the work of German-American art historian Max Raphael, the study of cave art took on a much more statistical approach, analysing and quantifying items such as the types and distribution of animals depicted, cave topography, and cave wall morphology. Based on such structuralist tests, horses and bovines seem to have been preferentially clustered together, typically in a central position, and such binary organisation led to the suggestion that this was sexual symbolism, with some animals and iconography designated by Cro-Magnons as either male or female. This conclusion has been heavily contested as well, due to the subjective definition of association between two different animals, and because the animals were depicted in great enough detail to permit identification of their sex (further, the hypothesis that bison were supposed to be feminine contradicts the finding that many are depicted as male). Also in the late 20th century, with the popularisation of the hypothesis that Cro-Magnons practised shamanism, the human/animal hybrids and geometrical symbols were interpreted within this framework as the visions a shaman would see while in a trance (entoptic phenomena). Opponents mainly attack the comparisons made between Palaeolithic cultures and present-day shamanistic societies as being in some way inaccurate. In 1988, archaeologists David Lewis-Williams and Thomas Dowson suggested trances were induced by hallucinogens such as mescaline, LSD, or psilocybin, but there is no evidence Cro-Magnons purposefully consumed such substances. Portable art Venus figurines are commonly found associated with Cro-Magnons and are the earliest well-acknowledged representations of human figures. They are most frequently found in the Gravettian (notably in the French Upper Périgordian, the Czech Pavlovian, and the West Russian Kostenkian), most dating from 29 to 23 thousand years ago. Almost all Venuses depict naked women, and are generally hand-held sized. They feature a downturned head, no face, thin arms which end at or cross over voluminous breasts, rotund buttocks, a distended abdomen (interpreted as pregnancy), tiny and bent legs, and pegged or unnaturally short feet. Venuses vary in proportion, which may reflect the limitations of working certain materials over others, or intentional design choices. Eastern European Venuses seem to place more of an emphasis on the breasts and stomach, whereas western European ones emphasise the hips and thighs. The earliest interpretations held that the Venuses were literal representations of women with obesity or steatopygia (a condition in which a woman's body stores more fat in the thighs and buttocks, making them especially prominent). Another early hypothesis was that ideal womanhood for Cro-Magnons involved obesity, or that the Venuses were used by men as erotica due to the exaggeration of body parts typically sexualised in Western culture (as well as the lack of detail in individualising traits such as the face and limbs). Extending present-day Western norms to Palaeolithic peoples was contested, and counter-interpretations hold either that the Venuses were mother goddesses, or that Cro-Magnons believed depictions of things had magical properties over the subject, such that a depiction of a pregnant woman would facilitate fertility and fecundity. This is also contested, as it assumes women were only thought of in terms of child rearing. Cro-Magnons also carved perforated batons out of horn, bone, or stone, most commonly through the Solutrean and Magdalenian. 
Such batons disappear from the archaeological record at the Magdalenian's close. Some batons seem phallic in nature. By 2010, about 60 batons had been hypothesised to be representations of penises (all with erections), of which 30 show decoration and 23 are perforated. Several phallic batons are depicted as circumcised and seemingly bear some ornamentation such as piercings, scarification, or tattooing. The purpose of perforated batons has been debated, with suggestions including spiritual or religious purposes, ornamentation or status symbols, currency, drumsticks, tent holders, weaving tools, spear straighteners, spear throwers, and dildos. Unperforated phallic batons, ranging from just a few centimetres in length upward, were interpreted as sex toys quite early on. Depictions of animals were commonly produced by Cro-Magnons. As of 2015, as many as 50 Aurignacian ivory figurines and fragments have been recovered from the German Swabian Jura. Of the discernible figures, most represent mammoths and lions, with a few horses, bison, possibly a rhino, waterfowl, fish, and small mammals. These sculptures are hand-sized and would have been portable works, and some figurines were made into wearable pendants. Some figurines also feature enigmatic engravings: dots, marks, lines, hooks, and criss-cross patterns. Cro-Magnons also made purely symbolic engravings. There are several plaques of bone or antler (referred to as polishers, spatulas, palettes, or knives) which feature series of equidistantly placed notches, most notably the well-preserved 32,000 year old Blanchard plaque from L'Abri Blanchard, France, which features 24 markings in a seemingly serpentine pattern. The discoverer, British palaeontologist Thomas Rupert Jones, speculated in 1875 that this was an early counting system for tallying items such as animals killed, or some other notation system. In 1957, Czech archaeologist Karel Absolon suggested such markings represent arithmetic. In 1972, Marshack postulated they may be calendars. Also in 1972, Marshack identified 15 to 13 thousand year old Magdalenian plaques bearing small, abstract symbols seemingly organised into blocks or sets, which he interpreted as representing an early writing system. Czech archaeologist Bohuslav Klíma speculated that a complex engraving on a mammoth tusk he discovered at the Gravettian Pavlov site, Czech Republic, was a map, showing a meandering river centre-left, a mountain centre-right, and a living ground at the centre indicated by a double circle. A few similar engravings have been identified across Europe (in particular the Russian Plain), which he also postulated were maps, plans, or stories. Body art Cro-Magnons are commonly associated with large pieces of pigment ("crayons"), most notably of red ochre. It is typically assumed that Cro-Magnons used ochre for symbolic purposes, most notably for cosmetics such as body paint. This is because ochre at some sites had to be imported from very long distances, and it is also associated with burials. It is unclear why they specifically chose red ochre over other colours. In terms of colour psychology, popular hypotheses include the putative "female cosmetic coalitions" hypothesis and the "red dress effect". It is also possible that ochre was chosen for its utility, such as an ingredient in adhesives, a hide-tanning agent, an insect repellent, a sunscreen, a medicine, a dietary supplement, or a soft hammer. 
Cro-Magnons appear to have used grinding and crushing tools to process ochre before applying it to the skin. In 1962, French archaeologists Saint-Just and Marthe Péquart identified bi-pointed needles in the Magdalenian Le Mas-d'Azil, which they speculated might have been used in tattooing. Hypothesised depictions of penises, most commonly from the Magdalenian (though a few date back to the Aurignacian), appear to be decorated with tattoos, scarification, and piercings. Designs include lines, plaques, dots or holes, and human or animal figures. Clothing Cro-Magnons produced beads, which are typically assumed to have been attached to clothing or portable items as body decoration. Beads had already been in use since the Middle Palaeolithic, but production dramatically increased in the Upper Palaeolithic. It is unclear why communities chose specific raw materials over others, and they seem to have upheld local bead-making traditions for a very long time. For example, Mediterranean communities used specific types of marine shells to make beads and pendants for more than 20,000 years, and central and western European communities often used pierced animal (and less commonly human) teeth. In the Aurignacian, beads and pendants were being made of shells, teeth, ivory, stone, bone, and antler, and there are a few examples of the use of fossil materials, including a belemnite, a nummulite, an ammonite, and amber. They may also have produced ivory and stone rings, diadems, and labrets. Beads could be manufactured in numerous different styles, such as conical, elliptical, drop-shaped, disc-shaped, ovoid, rectangular, trapezoidal, and so on. Beads may have been used to facilitate social communication by displaying the wearer's socio-economic status, as they could communicate labour costs (and thereby a person's wealth, energy, connections, and so on) at a glance. The distribution of ornaments on buried Gravettian individuals, and the likelihood that most of the buried were dressed in whatever they were wearing upon death, indicate that jewellery was primarily worn on the head as opposed to the neck or the torso. The Gravettian Dolní Věstonice I and III and Pavlov I sites in Moravia, Czech Republic, yielded many clay fragments with textile impressions. These indicate a highly sophisticated and standardised textile industry, including the production of: single-ply, double-ply, triple-ply, and braided string and cordage; knotted nets; wicker baskets; and woven cloth, including simple and diagonal twined cloth, plain woven cloth, and twilled cloth. Some cloths appear to have a design pattern. There are also plaited items which may have been baskets or mats. Given the wide range of textile gauges and weaves, it is possible they could also produce wall hangings, blankets, bags, shawls, shirts, skirts, and sashes. These people used plant rather than animal fibres, possibly nettle, milkweed, yew, or alder, which have historically been used in weaving. Such plant fibre fragments have also been recorded at the Russian Kostenki and Zaraysk sites as well as the German Gönnersdorf site. The inhabitants of Dzudzuana Cave, Georgia, appear to have stained flax fibres with plant-based dyes, in colours including yellow, red, pink, blue, turquoise, violet, black, brown, gray, green, and khaki. The emergence of textiles in the European archaeological record also coincides with the proliferation of the sewing needle in European sites. 
Ivory needles are found in most late Upper Palaeolithic sites, which could indicate frequent sewing, and the predominance of small needles (too small to tailor clothes out of hide and leather) could indicate work on softer woven fabrics or accessory stitching and embroidery of leather products. There is some potential evidence of simple loom technology; however, the relevant objects have also been interpreted as either hunting implements or art pieces. Rounded objects made of mammoth phalanges from Předmostí and Avdeevo, Russia, may have been loom weights or human figures. Perforated, washer-like ivory or bone discs from across Europe were potentially spindle whorls. A foot-shaped piece of ivory from Kniegrotte, Germany, was possibly a comb or a decorative pendant. On the basis of wear analyses, Cro-Magnons are also speculated to have used net spacers or weaving sticks. In 1960, French archaeologist Fernand Lacorre suggested that perforated batons were used to spin cordage. Some Venuses depict hairdos and clothing worn by Gravettian women. The Venus of Willendorf seems to be wearing a cap, possibly woven fabric or made from shells, featuring at least seven rows and an additional two half-rows covering the nape of the neck. It may have been made starting at a knotted centre and spiraling downward from right to left, then backstitching all the rows to each other. The Kostenki-1 Venus seems to be wearing a similar cap, though each row seems to overlap the next. The Venus of Brassempouy seems to be wearing some nondescript open, twined hair cover. The engraved Venus of Laussel from France seems to be wearing headwear with rectangular gridding, which could potentially represent a snood. Most east European Venuses with headwear also display notching and checkwork on the upper body suggestive of bandeaux (a strip of cloth bordering around the tops of the breasts), with some even featuring straps connecting the bandeau around the neck; these seem to be absent in western European Venuses. Some also wear belts: in eastern Europe, these are seen at the waist, whereas in central and western Europe they are worn on the low hip. The Venus of Lespugue seems to be wearing a plant fibre string skirt comprising 11 cords running behind the legs. Music Cro-Magnons are known to have created flutes out of hollow bird bones as well as mammoth ivory, first appearing in the archaeological record with the Aurignacian about 40,000 years ago in the German Swabian Jura. The Swabian Jura flutes appear to have been able to produce a wide range of tones. One virtually complete flute from Hohle Fels was made from the radius of a griffon vulture; the bone had been smoothed down and pierced with holes. These finger holes exhibit cut marks, which could indicate that their exact placement was specifically measured to create concert pitch (that is, to make the instrument in tune) or a scale. The part near the elbow joint had two V-shaped carvings, presumably a mouthpiece. Ivory flutes would have required a great time investment, as they demand more skill and precision to craft than a bird bone flute: a section of ivory must be sawed off to the correct size, cut in half so it can be hollowed out, and then the two pieces have to be refitted and stuck together with an adhesive in an air-tight seal. Cro-Magnons also created bone whistles out of deer phalanges. 
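The claim that hole placement controls tuning follows from basic pipe acoustics: the pitch of a flute-like tube is set largely by its effective acoustic length, and each opened finger hole shortens that length. Below is a minimal Python sketch of that relationship, assuming an idealised cylinder open at both ends with purely illustrative dimensions; end corrections, hole diameters, and embouchure effects are ignored, so this is not a model of the actual Hohle Fels instrument.

# Toy open-pipe model of flute pitch (illustrative assumptions only).
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed)

def fundamental_hz(effective_length_m: float) -> float:
    # Fundamental of a pipe open at both ends: f = v / (2 * L)
    return SPEED_OF_SOUND / (2.0 * effective_length_m)

# Hypothetical tube of 22 cm; opening successive finger holes shortens
# the effective acoustic length and raises the pitch.
for length_cm in (22.0, 19.0, 16.5, 14.5):
    print(f"{length_cm:5.1f} cm -> {fundamental_hz(length_cm / 100.0):6.1f} Hz")

Even this crude model shows that shifting a hole by a few millimetres changes the note audibly, which is consistent with hole positions having been measured out in advance.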
Such sophisticated music technology could potentially speak to a much longer musical tradition than the archaeological record indicates, as modern hunter-gatherers have been documented to create instruments out of: more biodegradable materials (less likely to fossilise) such as reeds, gourds, skins, and bark; more or less unmodified items such as horns, conch shells, logs, and stones; and their weapons, including spear thrower shafts or boomerangs as clapsticks, or a hunting bow. It is speculated that a few Cro-Magnon artefacts represent bullroarers or percussion instruments such as rasps, but these are harder to prove. One probable bullroarer has been identified at Lalinde, France, dating to 14 to 12 thousand years ago and decorated with geometric incisions. In the mammoth-bone houses at Mezine, Ukraine, a thigh-bone, a jawbone, a shoulder blade, and a pelvis of a mammoth bear evidence of paint and repeated percussion. These were first proposed by archaeologist Sergei Bibikov to have served as drums, with either a reindeer antler or a mammoth tusk fragment also found at the site being used as a drumstick, though this is contested. Other European sites have yielded potential percussion mallets made of mammoth bone or reindeer antler. It is speculated that some Cro-Magnons marked certain sections of caves with red paint which could be struck to produce a note that would resonate throughout the cave chamber, somewhat like a xylophone. Language The early modern human vocal apparatus is generally thought to have been the same as that in present-day humans, as the present-day variant of the FOXP2 gene associated with the neurological prerequisites for speech and language ability seems to have evolved within the last 100,000 years, and the modern human hyoid bone (which supports the tongue and facilitates speech) had evolved by 60,000 years ago, as demonstrated by the Israeli Skhul and Qafzeh humans. These indicate Upper Palaeolithic humans had the anatomical basis for language and the same range of potential phonemes (sounds) as present-day humans. Though Cro-Magnon languages likely contributed to present-day languages, it is unclear what early languages would have sounded like, because words change and are replaced by entirely new words quite rapidly, making it difficult to identify language cognates (a word in multiple different languages descended from a common ancestor) which originated before 9 to 5 thousand years ago. Nonetheless, it has been controversially hypothesised that Eurasian languages are all related and form the "Nostratic languages", with an early common ancestor existing just after the end of the LGM. In 2013, evolutionary biologist Mark Pagel and colleagues postulated that, among "Nostratic languages", frequently used words more often have proposed cognates, and that this was evidence that 23 identified words were "ultraconserved", supposedly changing very little in use and pronunciation and descending from a common ancestor about 15,000 years ago at the end of the LGM. Archaeologist Paul Heggarty countered that Pagel's data rest on subjective interpretation of supposed cognates, and that the extreme volatility of the sound and pronunciation of words (for example, Latin [ˈakʷã] (aquam) "water" → French [o] (eau) in just 2,000 years) makes it unclear whether cognates can even be identified that far back, if they do indeed exist. 
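The difficulty can be made concrete with a toy replacement model: if lexical replacement behaves like a Poisson process, a cognate pair remains recognisable only while the word has survived unreplaced in both descendant lineages. This is a minimal Python sketch; the half-lives are illustrative assumptions chosen for the arithmetic, not Pagel's published estimates.

import math

# Toy model: P(word not yet replaced after t years) = exp(-rate * t),
# with the rate derived from an assumed half-life.
def survival(half_life_years: float, years: float) -> float:
    rate = math.log(2) / half_life_years
    return math.exp(-rate * years)

for label, half_life in (("typical word", 3000), ("high-frequency word", 10000)):
    p_one = survival(half_life, 15000)
    p_pair = p_one ** 2  # must survive in both descendant lineages
    print(f"{label}: P(15 ka, one lineage) = {p_one:.3f}; both lineages = {p_pair:.4f}")

Under these toy numbers, an ordinary word has roughly a 0.1% chance of remaining recognisable in both lineages after 15,000 years, while the strongly conserved word retains about a 12% chance, illustrating both why Pagel's claim depends on unusually stable words and why critics doubt cognates can be identified at that depth.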
Religion Shamanism Several Upper Palaeolithic caves feature depictions of seemingly part-human, part-animal chimaeras (typically part bison, reindeer, or deer), variously termed "anthropozoomorphs", "therianthropes", or "sorcerers". These have typically been interpreted as the centre of some shamanistic ritual, and as representing some cultural revolution and the origins of subjectivity. The oldest such cave drawing has been identified at the 30,000 year old Chauvet Cave, where a figure with a bison upper body and human lower body was drawn onto a stalactite, facing a depiction of a vulva with two tapering legs. The 17,000 year old Grotte de Lascaux, France, has a seemingly dead bird-human hybrid between a rhino and a charging bison, with a bird on top of a pole placed near the figure's right hand. A bird on a stick is used as a symbol of mystical power by some modern shamanistic cultures, who believe that birds are psychopomps that can move between the land of the living and the land of the dead; in these cultures, the shaman is believed to either transform into a bird or use a bird as a spirit guide. The 14,000 year old Grotte des Trois-Frères, France, features 3 sorcerers. The so-called "Dancing Sorcerer" or "God of Les Trois Frères" seems to bear human legs and feet, paws, a deer head with antlers, a fox or horse tail, a beard, and a flaccid penis, and is interpreted as dancing on all fours. Another, smaller sorcerer with a bison head, human legs and feet, and an upright posture stands above several animal depictions, and is interpreted as holding and playing a musical bow to herd all the animals. The third sorcerer has a seemingly bison upper body and a human lower body with testicles and an erection. Some drawn human figures feature lines radiating out. These are generally interpreted as wounded people, with the lines representing pain or spears, possibly related to some initiation process for shamans. One such "wounded man" at Grotte de Cougnac, France, is drawn on the chest of a red Irish elk. A wounded sorcerer with a bison head is found at the 17,000 year old Grotte de Gabillou. Some caves feature "vanquished men", lying presumably dead at the foot of, generally, a bull or bear. In tangible art, the early Aurignacian Hohlenstein-Stadel, Swabian Jura, has yielded the famous lion-human sculpture, which is much larger than the other Swabian Jura figurines. A possible second lion-human was also found in the nearby Hohle Fels. An ivory slab from Geissenklösterle has a carved relief of a human figure with its arms raised in the air wearing a hide, the "worshipper". A 28,000 year old "puppet" was identified at Brno, Czech Republic, consisting of an isolated head piece, torso piece, and left arm piece. It is presumed that the head and torso were connected by a rod, and the torso and arm by some string allowing the arm to move. Because it was found in a grave, it is speculated to have belonged to a shaman for use in rituals involving the dead. A 14,000 year old large stone from Cueva del Juyo, Spain, seems to have been carved into the conjoined face of a man on the right and a big cat on the left (when facing it). The man half seems to feature a moustache and a beard. The cat half (either a leopard or a lion) has slanting eyes, a snout, a fang, and spots on the muzzle suggestive of whiskers. Archaeologists Leslie G. Freeman and Joaquín González Echegaray argued that Cueva del Juyo was specifically modified to serve as a sanctuary site for carrying out rituals. 
They said the inhabitants dug out a triangular trench and filled it with offerings, including Patella (limpets), the common periwinkle (a sea snail), pigments, the legs and jaws (possibly with meat still on them) of red and roe deer, and a red deer antler positioned upright. The trench and offerings were then filled in with dirt, and a seemingly flower-like arrangement of bright cylindrical pieces of red, yellow, and green pigments was placed on top. This was then buried with clay, stone slabs, and bone spearpoints. The clay shell was covered by a slab of limestone supported by large flat stones. Somewhat similar structures associated with some representation of a human have also been found elsewhere in Magdalenian Spain, such as at the Entrefoces rock shelter, Cueva de la Garma or Cueva de Praileaitz, Errallako Koba, and the Isturitz and Oxocelhaya caves in the Basque Country. Mortuary practices Cro-Magnons buried their dead, commonly with a variety of symbolic grave goods as well as red ochre, and multiple people were often buried in the same grave. However, the archaeological record has yielded few graves, fewer than 5 preserved per millennium, which could indicate burials were seldom given. Consequently, it is unclear if these represent isolated burials or form part of a much more generalised mortuary tradition. Across Europe, some graves contained multiple individuals, in which case both sexes were most often present. Most burials are dated to the Gravettian (most notably 31–29 thousand years ago) and towards the end of the Magdalenian (from 14 to 11 thousand years ago); none are identified during the Aurignacian. Gravettian burials seem to differ from post-LGM ones: the former ranged across Europe from Portugal to Siberia, whereas the latter are conspicuously restricted to Italy, Germany, and southwest France. About half of buried Gravettians were infants, whereas infant burials were much less common post-LGM, but it is debated whether this was due to social differences or to infant mortality rates. Graves are also commonly associated with animal remains and tools, but it is unclear if this was intentional or coincidentally part of the grave fill. Such associations are much less common post-LGM, and post-LGM graves are more commonly associated with ornaments than Gravettian graves. The most lavish Palaeolithic burial is a grave from the Gravettian of Sungir, Russia, where a boy and a girl were placed crown-to-crown in a long, shallow grave, and adorned with thousands of perforated ivory beads, hundreds of perforated arctic fox canines, ivory pins, disc pendants, ivory animal figurines, and mammoth tusk spears. The beads were a third the size of those found with a man from the same site, which could indicate these small beads were specifically designed for the children. Only two other Upper Palaeolithic graves have been found with grave goods other than personal adornment (one each from Arene Candide, Italy, and Brno, Czech Republic), and the grave of these two children is unique in bearing any functional implements (the spears) as well as a bone from another individual (a partial femur). The 5 other buried individuals from Sungir did not receive nearly as many grave goods, with one seemingly given no formal treatment whatsoever. Most Gravettian graves, however, feature few ornaments, and the buried were probably wearing them before death. 
Due to such rich material culture and the marked difference in treatment between individuals, it has been suggested that these peoples had a complex society beyond the band level, with social class distinctions. In this model, young individuals given elaborate funerals were potentially born into a position of high status. However, about 75% of Cro-Magnon skeletons were men, which sharply contrasts with the predominance of depictions of women in art. Because of the great amount of time, labour, and resources all these grave goods would have required, it has been hypothesised that the grave goods were made long in advance of the ceremony. Because of such planning, the abundance of multiple burials in the archaeological record, the seemingly purposeful presence of both sexes, and an apparent preference for individuals with some congenital disorder (about a third of identified burials), it is generally speculated that these cultures practised human sacrifice, either in fear, disdain, or worship of those with abnormal features, as in many present-day and historical societies. Intricate funerals, in addition to evidence of shamanism and ritualism, have also provoked hypotheses that Cro-Magnons believed in an afterlife. Cannibalism The earliest evidence of skull cups, and thus ritual cannibalism, comes from the Magdalenian of Gough's Cave, England. Further concrete evidence of such rituals does not appear until after the Palaeolithic. The Gough's Cave cup seems to have followed a similar method of scalping as those from Neolithic Europe, with incisions made along the midline of the skull (whereas the Native American method of scalping involved a circular incision around the crown). Earlier examples of non-ritual cannibalism in Europe do not seem to have followed the same method of defleshing. At least 1 skull cup was transported from a different site. In addition, Gough's Cave also yielded a human radius with a zig-zag engraving. Compared to other artefacts in the cave or those common to the Magdalenian period, the radius was modified quite little, with the engraving probably quickly etched on (indicated by scrape marks not recorded on any other Magdalenian engraving), and the bone broken and discarded soon thereafter. This may indicate the bone's only function was as a tool in some cannibalistic and/or funerary ritual, rather than being prepared to be carried around by the group as an ornament or tool. In media The "caveman" archetype is quite popular in both literature and visual media, often portrayed as highly muscular, hairy, or monstrous, representing a wild and animalistic character that draws on the characteristics of a wild man. Cavemen are often represented in front of a cave or fighting a dangerous animal; wielding stone, bone, or wooden tools, usually for combat; and dressed in an exposing fur cloak. Men are often depicted with unkempt, unstyled, shoulder-length or longer hair, usually with a beard. Cavemen first appeared in visual media in D. W. Griffith's 1912 Man's Genesis, and among their first appearances in fictional literature were Stanley Waterloo's 1897 The Story of Ab and Jack London's 1907 Before Adam. Cavemen have also been popularly portrayed (inaccurately) as confronting dinosaurs, first done in Griffith's 1914 Brute Force (the sequel to Man's Genesis), featuring a Ceratosaurus. Cro-Magnons are also portrayed interacting with Neanderthals, such as in J.-H. Rosny's 1911 Quest for Fire, H. G. 
Wells' 1927 The Grisly Folk, William Golding's 1955 The Inheritors, Björn Kurtén's 1978 Dance of the Tiger, Jean M. Auel's 1980 Clan of the Cave Bear and her Earth's Children series, and Elizabeth Marshall Thomas' 1987 Reindeer Moon and its 1990 sequel The Animal Wife. Cro-Magnons are generally portrayed as superior in some way to Neanderthals, which allowed them to take over Europe. See also Explanatory notes References External links Anatomically modern humans Paleoanthropology Articles containing video clips
Cro-Magnon
Biology
17,936
39,608,703
https://en.wikipedia.org/wiki/C9H9NO4
The molecular formula C9H9NO4 (molar mass: 195.17 g/mol) may refer to: L-Dopaquinone, also known as o-dopaquinone Pencolide Salicyluric acid Molecular formulas
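As a quick check, the quoted molar mass follows directly from standard atomic weights; here is a minimal Python sketch (the atomic weights are assumptions of this example, taken from standard tables rather than from the article):

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 9, "H": 9, "N": 1, "O": 4}  # C9H9NO4

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # prints 195.17, matching the quoted value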
C9H9NO4
Physics,Chemistry
69
4,282,781
https://en.wikipedia.org/wiki/Ayn%20Ghazal%20%28archaeological%20site%29
Ayn Ghazal () is a Neolithic archaeological site located in metropolitan Amman, Jordan, about 2 km (1.24 mi) north-west of Amman Civil Airport. The site is remarkable for being the place where the ʿAin Ghazal statues were found, which are among the oldest large-sized statues ever discovered. Background The settlement at Ayn Ghazal ('Spring of the Gazelle') first appeared in the Middle Pre-Pottery Neolithic B (MPPNB) and is split into two phases. Phase I starts circa 10,300 Before Present (BP) and ends c. 9,950 BP, while phase II ends c. 9,550 BP. The 9th millennium MPPNB period in the Levant represented a major transformation in prehistoric lifeways from small bands of mobile hunter–gatherers to large settled farming and herding villages in the Mediterranean zone, the process having been initiated some 2–3 millennia earlier. In its prime era, circa 7000 BCE (9000 BP), the site extended over 10–15 hectares (25–37 ac) and was inhabited by ca. 3,000 people (four to five times the population of contemporary Jericho). After 6500 BC, however, the population dropped sharply, to about one-sixth of its former size within only a few generations, probably due to environmental degradation, the 8.2 kilo-year event (Köhler-Rollefson 1992). Location and physical dimensions It is situated in a relatively rich environmental setting immediately adjacent to the Zarqa River (Wadi Zarqa), the longest drainage system in highland Jordan. It is located at an elevation of about 720 m, within the ecotone between the oak-park woodland to the west and the open steppe-desert to the east. Ayn Ghazal started as a typical aceramic, Neolithic village of modest size. It was set on terraced ground on a valley side, and was built with rectangular mud-brick houses that accommodated a square main room and a smaller anteroom. Walls were plastered with mud on the outside, and with lime plaster inside that was renewed every few years. Evidence recovered from the excavations suggests that much of the surrounding countryside was forested and offered the inhabitants a wide variety of economic resources. Arable land is plentiful within the site's immediate environs. These variables are atypical of many major Neolithic sites in the Near East, several of which are located in marginal environments. Yet despite its apparent richness, the area of Ayn Ghazal is climatically and environmentally sensitive because of its proximity throughout the Holocene to the fluctuating steppe-forest border. At Ayn Ghazal, the early Pottery Neolithic period starts c. 6,400 BC and continues to 5,000 BC. Economy As an early farming community, the Ayn Ghazal people cultivated cereals (barley and ancient species of wheat) and legumes (peas, beans, lentils and chickpeas) in fields above the village, and herded domesticated goats. In addition, they hunted wild animals – deer, gazelle, equids, pigs and smaller mammals such as fox or hare. The estimated population of the MPPNB site at Ayn Ghazal is 259–1,349 individuals, with an area of 3.01–4.7 ha. It is argued that at its founding, at the commencement of the MPPNB, Ayn Ghazal was likely about 2 ha in size and grew to 5 ha by the end of the MPPNB. At this point in time its estimated population was 600–750 people, or 125–150 people per hectare. The diet of the occupants of PPNB Ayn Ghazal was varied. Domesticated plants included wheat and barley species, but legumes (primarily lentils and peas) appear to have been the preferred cultigens. Wild plants also were consumed. 
The determination of domesticated animals, sensu stricto, is a topic of much debate. At PPNB Ayn Ghazal goats were a major species, and they were used in a domestic sense, although they may not have been morphologically domestic. Many of the phalanges recovered exhibit pathologies that are suggestive of tethering. An impressive range of wild animal species also was consumed at the site. Over 50 taxa have been identified, including gazelle, Bos, Sus sp., Lepus, and Vulpes. Ayn Ghazal was in an area that was suitable for agriculture. Archaeologists think that throughout the Middle East much of the land was exhausted after some 700 years of planting and so became unsuitable for agriculture. The people of those small villages abandoned their unproductive fields and migrated, with their domestic animals, to places with better ecological conditions, like Ayn Ghazal, that could support larger populations. Unlike at other sites, as new people migrated to Ayn Ghazal, probably with few possessions and possibly starving, class distinctions began to develop. The influx of new people placed stresses on the social fabric – new diseases, more people to feed from what was planted, and more animals that needed grazing. There is evidence of mining activities as part of a production sequence conducted by craftsmen at the site of Ayn Ghazal; these potential part-time specialists in some way controlled access to such raw materials. Genetics Y-DNA haplogroups such as E1b1b1b2 (E-Z830) have been found in remains at Ayn Ghazal, as in the general PPNB populations and most Natufians. Haplogroup T1a (T-M70) has also been found among the Pre-Pottery Neolithic B inhabitants of Ayn Ghazal, and is currently the oldest known sample of this haplogroup found at any ancient site. This second haplotype marker was not found among the early Levantine Epipalaeolithic populations. It is therefore thought, based on uniparental and autosomal data, that the Pre-Pottery Neolithic B populations at Ayn Ghazal, Jordan, were mostly composed of two to three different populations: the members of the early Natufian industries, a population resulting from immigration from Anatolia, and another likely from the Fertile Crescent in Iraq, or possibly Iran, originating near Ganj Dareh. Culture Statues In the earlier levels at Ayn Ghazal there are small ceramic figures that seem to have been used as personal or familial ritual figures. There are figurines of both animals and people. The animal figures are of horned animals, and the front part of the animal is the most clearly modeled. They all give the impression of dynamic force. Some of the animal figures have been stabbed in their vital parts; these figures were then buried in the houses. Other figurines were burned and then discarded with the remains of the fire. They built ritual buildings and used large figurines or statues. Building these was also a way for an elite group to demonstrate and underline its authority over those who owed labor service to the community or the elite, and to bond laborers together as part of a new community. In addition to the monumental statues, small clay and stone tokens, some incised with geometric or naturalistic shapes, were found at Ayn Ghazal. Of the 195 figurines (40 human and 155 animal) recovered, 81% have been found to belong to the MPPNB, while only 19% belong to the LPPNB and PPNC. 
The vast majority of figurines are of cattle, a species that makes up only 8% of the overall number of identified specimens (NISP). The importance of hunted cattle to the domestic ritual sphere of Ayn Ghazal is telling. It was seemingly important for individual households to have members who participated both in the hunting of cattle – likely a group activity – and in the subsequent feasting on the remains. Ayn Ghazal is renowned for a set of anthropomorphic statues found buried in pits in the vicinity of some special buildings that may have had ritual functions. These statues are half-size human figures modeled in white plaster around a core of bundled twigs. The figures have painted clothes, hair, and in some cases, ornamental tattoos or body paint. The eyes are created using plaster with a bitumen pupil and dioptase highlighting. In all, 32 of these plaster figures were found in two caches: 15 full figures, 15 busts, and 2 fragmentary heads. Three of the busts were two-headed. Burial practices Considerable evidence for mortuary practices during the PPNB period has been described in recent years. Post-mortem skull removal was common, usually restricted to the cranium but on occasion including the mandible, and apparently followed preliminary primary interment of the complete corpse. Such treatment has commonly been interpreted as representing rituals connected with veneration of the dead or some form of "ancestor worship". There is evidence of class in the way the dead were treated. Some people were buried in the floors of their houses. After the flesh had wasted away, the head was often retrieved and the skull buried in a separate shallow pit beneath the house floor, and some of the skulls were decorated. This could have been either a form of respect, or done so the dead could impart their power to the house and the people in it. However, some people were thrown on trash heaps with their bodies left intact, indicating that not every deceased person was ceremoniously put to rest. Scholars have estimated that a third of adult burials were found in trash pits with their heads intact. Why only some of the inhabitants were properly buried and others simply disposed of remains unresolved. Burials seem to have taken place approximately every 15–20 years, indicating a rate of one burial per generation, though gender and age were not constant in this practice. Excavation and conservation The site is located at the boundary between Amman's Tariq and Basman districts, next to, and named for, the Ayn Ghazal Interchange connecting Al-Shahid Street and Army Street (Ayn Ghazal is the name of a minor village just north of the road, now within Tariq district). The site was discovered in 1974 by developers who were building Army Street, the road connecting Amman and Zarqa. Excavation began in 1982; however, by this time around 600 meters (1,970 ft) of road ran through the site. Despite the damage urban expansion brought, what remained of Ayn Ghazal provided a wealth of information, and continued to do so until 1989. One of the more notable archaeological finds during these first excavations came to light in 1983. While examining a cross-section of earth in a path carved out by a bulldozer, archaeologists came across the edge of a large pit 2.5 meters (8 ft) under the surface containing plaster statues. Another set of excavations, under the direction of Gary O. Rollefson and Zeidan Kafafi, took place in the early 1990s. 
The site was included in the 2004 World Monuments Watch by the World Monuments Fund to call attention to the threat of encroaching urban development. Relative chronology References Footnotes Further reading External links ʿAin Ghazal statues at Smithsonian Institution 'Ain Ghazal Excavation Reports (menic.utexas.edu) Institut du Monde Arabe UCL (University College London): The ʿAin Ghazal Statue Project The Joukowsky Institute of Archaeology Photos of Ain Ghazal at the American Center of Research Ain Ghazal – An Ancient Mystery Populated places established in the 8th millennium BC Populated places disestablished in the 5th millennium BC Ain Ghazal Former populated places in Jordan Neolithic settlements 1974 archaeological discoveries Collection of the Jordan Museum Megasites Pre-Pottery Neolithic B
Ayn Ghazal (archaeological site)
Physics,Mathematics
2,427
186,623
https://en.wikipedia.org/wiki/Isaac%20Singer
Isaac Merritt Singer (October 27, 1811 – July 23, 1875) was an American inventor, actor, and businessman. He made important improvements in the design of the sewing machine and was the founder of what became one of the first American multi-national businesses, the Singer Sewing Machine Company. Many others, including Walter Hunt and Elias Howe, had patented sewing machines before Singer, but his success was based on the practicality of his machine, the ease with which it could be adapted to home use, and its availability on an installment payment basis. Singer died in 1875, dividing his $13 million fortune unequally among 20 of his living children by his wives and various mistresses, although one son, who had supported his mother in her divorce case against Singer, received only $500. Altogether, he fathered 26 children by five different women. Early life Isaac Merritt Singer was born on October 27, 1811, in Pittstown, Schaghticoke, New York. He was the youngest of eight children born to a German Jewish father, Adam Singer (né Reisinger) (1772–1855), and his American Jewish wife, Ruth (née Benson) Singer. His siblings were John Valentine Singer, Alexander Singer, Elizabeth (née Singer) Colby, Christiana (née Singer) Cleveland, and Elijah Singer. In 1821, his parents divorced and his mother abandoned Isaac. At twelve, he ran away from home to join a traveling stage act, called the Rochester Players, after finding bits of work as a joiner and lathe operator. Career In 1839, Singer obtained his first patent, for a machine to drill rock, selling it for $2,000 (or over $150,000 in 2024 dollars) to the I & M Canal Building Company. With this financial success, he opted to return to his career as an actor. He went on tour, forming a troupe known as the "Merritt Players", appearing onstage under the name "Isaac Merritt", with Mary Ann Sponsler (one of his mistresses) also appearing onstage, calling herself "Mrs. Merritt". The tour lasted about five years. He developed and patented a "machine for carving wood and metal" on April 10, 1849. At 38, with Mary Ann and eight children, he packed up his family and moved back to New York City, hoping to market his wood-block cutting machine. He obtained an advance to build a working prototype, and constructed one in the A. B. Taylor & Co. shop, where he met G. B. Zieber, who became Singer's financier and partner. However, not long after the machine was built, the steam boiler blew up at the shop, destroying the prototype. Zieber persuaded Singer to make a new start in Boston, a center of the printing trade. Singer went to Boston in 1850 to display his invention at the machine shop of Orson C. Phelps. Orders for Singer's wood cutting machine were not, however, forthcoming. Lerow & Blodgett sewing machines were being constructed and repaired in Phelps' shop. Phelps asked Singer to look at the sewing machines, which were difficult to use and produce. Singer concluded that the sewing machine would be more reliable if the shuttle moved in a straight line rather than a circle, with a straight rather than a curved needle. Singer was able to obtain US Patent number 8294 for his improvements on August 12, 1851. I. M. Singer & Co In 1856, the manufacturers Grover & Baker, Singer, and Wheeler & Wilson, all accusing each other of patent infringement, met in Albany, New York, to pursue their suits. Orlando B. Potter, a lawyer and president of the Grover and Baker Company, proposed that, rather than squander their profits on litigation, they pool their patents. 
This was the first patent pool, a process which enables the production of complicated machines without legal battles over patent rights. They agreed to form the Sewing Machine Combination, but for this to be of any use, they had to secure the cooperation of Elias Howe, who still held certain vital uncontested patents. Terms were arranged; Howe received a royalty on every sewing machine manufactured. Sewing machines began to be mass-produced. I. M. Singer & Co manufactured 2,564 machines in 1856, and 13,000 in 1860, at a new plant on Mott Street in New York. Later, a massive plant was built near Elizabeth, New Jersey. Up to then, sewing machines had been industrial machines, made for garments, shoes, and bridles and for tailors, but in 1856 smaller machines began to be marketed for home use. However, at the then enormous price of over $100 ($4,094.82 in 2024 USD), few sold. Singer invested heavily in mass production utilizing the concept of interchangeable parts developed by Samuel Colt and Eli Whitney for their firearms. He was able to cut the price in half, while at the same time increasing his profit margin by 530%. Singer was the first to put a family machine, "the turtle back", on the market. Eventually, the price came down to $10 (about $404.23 in 2024 USD). According to PBS, "His partner, Edward Cabot Clark, pioneered installment purchasing plans and accepted trade-ins, causing sales to soar." Women were able to make items at home for their families or for sale, and charitable groups began to support poorer women in finding useful skills and respectable employment in sewing, such as the Ladies Work Society (1875), the Association for the Sale of Works of Ladies of Limited Means, and the Co-operative Needlewoman's Society; associated magazines, pattern books and group classes began for better-off women who also wanted some form of useful, economic activity, which a sewing machine at home now offered. I. M. Singer expanded into the European market, first starting in Bonnybridge, Stirlingshire, next to the iron foundries that supplied the castings for the chassis, until growth was hindered by the expansion of the foundries around them. They then moved to Clydebank, establishing the world's largest sewing machine factory, built between 1882 and 1885 by George McKenzie in Kilbowie, Clydebank, near Glasgow. It consisted of two main manufacturing buildings on three levels (one building for making the domestic machines, the other for industrial model production), with a 200 ft (over 60 meters) high tower bearing the 'Singer' name logo and four clock faces, the largest four-sided clock tower at the time. Singer opened the factory at Clydebank with 3,500 people making 8,000 sewing machines a week on average. The factory was linked directly to railway lines, and via stations in Dumbarton and Helensburgh, to assist in distribution. Later improvements included a further two levels for the production blocks, a power station, and sawmills. (Note: images of the tower and the factory's transport connections are available in the Scottish National Buildings Record.) The factory later supplied military and home sewers, and made munitions during World War II. In 1941, the factory and area were severely damaged (losing 390,000 sq ft / 36,000 sq m) in the 'Clydebank Blitz', when at least 35,000 homes were damaged and 500 people, including 39 Singer workers, were killed. 
Even as early as 1880, Singer machines compared favorably with their nearest competitors, with informational articles becoming a marketing tool. By the 1900s, this factory, controlled by the parent company, made 1.5 million machines sold around the world, helping the Singer company become one of the first American-based multinational corporations, with agencies in Paris and Rio de Janeiro. Later, as the Singer Manufacturing Company and its competitors expanded, and thanks to the machines' affordability (or purchase plan terms), by the 1940s there were 24,000 sewing classes a year running in the UK alone, and the Education Act 1944 made practical dressmaking a compulsory subject for girls in all state schools. By the 1950s, there were Singer Teen-Age Sewing Classes and advertising campaigns to encourage girls to make their own fashions to attract boys' interest. Changes to company in Europe In 1863, I. M. Singer & Co. was dissolved by mutual consent, with Edward Cabot Clark seeing Singer's reputation as a risk to growth; but the business continued, with Singer owning 40% of the shares and still on the board, as "The Singer Manufacturing Company", in 1887. In 1871, Singer purchased an estate and settled with Isabella in Paignton, Devon, England. He commissioned the 110-roomed Oldway Mansion as his private residence, with a hall of mirrors, maze and grotto garden; it was rebuilt by Paris Singer, his third son by Isabella, in the style of the Palace of Versailles, and the area became known locally as 'Singerton'. It has been named by the Victorian Society as a heritage building at risk of disrepair. Consequence on global garment industry Singer's prototype sewing machine became the first to work in a practical way. It could sew 900 stitches per minute, far better than the 40 of an accomplished seamstress on simple work. This started the industrialisation of garment and textile manufacturing, as a shirt took an hour to make compared to fifteen hours previously. Garments still needed finishing by hand, and the finishers worked alone on piecework terms at home, but mass over-production by factory machines led to pressure on wages and to unemployment. In 1911, most of the mainly female workforce at the Clydebank Singer factory went on strike in support of 12 workers who had objected to an increased workload and lower pay (by this time there were 11,500 employees). Although the strike did not succeed, Singer fired 400 workers, including the union leaders. The Singer Strike was one of the key actions leading to the protests known as Red Clydeside. In the 1960s, Japanese production efficiency brought aluminium-bodied machines at lower prices, which outsold the cast-iron Singer machines. The symbolic tower was knocked down as the Singer Clydebank factory was modernised; the factory itself closed in 1980 and was demolished in the late 1990s. Personal life In 1830, at nineteen, Isaac Singer married fifteen-year-old Catherine Maria Haley (1815–1884). The couple had two children before he left her to join the Baltimore Strolling Players. In 1860, Singer divorced Catherine on the basis of her adultery with Stephen Kent. Their son William spoke up for his mother in the divorce case and was snubbed by Singer, including in his will, where William received just $500 of Singer's $13,000,000 fortune. Their two children were: William Adam Singer (1834–1914), who in 1872 married Sarah Augusta Webb (1851–1909), a twin sister of William Seward Webb (who married Eliza Osgood Vanderbilt). Lillian C. 
Singer (1837–1912), who married Harry Hodson. In 1836, while still married to Catherine, Singer began a 25-year affair with Mary Ann Sponsler (1817–1896). Together, Mary Ann and Isaac had ten children, two of whom died at birth, including: Isaac Augustus Singer (1837–1902), who married Sarah Jane Clarke. Vouletti Theresa Singer (1840–1913), who married William Fash Proctor. John Albert Singer (1842–1911), who married Jennie C. Belinski. Fanny Elizabeth Singer (1844–1909), who married William S. Archer. Jasper Hamlet Singer (1846–1922), who married Jane Collier Cook. Mary Olivia Singer (1848–1900), who married Sturges Selleck Whitlock, a Connecticut state senator. Julia Ann Singer (1855–1923), who married Martin J. Herz. Caroline Virginia Singer (1857–1896), who married Augustus C. Foster. Financial success allowed Singer to buy a mansion on Fifth Avenue, into which he moved his second family. He and Mary Ann had abandoned their joint acting company, the Merritt Players, as his inventions became more successful. He continued to live with Mary Ann until she spotted him driving down Fifth Avenue seated beside Mary McGonigal, an employee about whom Mary Ann already had suspicions. Reportedly, Singer also had an affair with McGonigal's sister, Kate McGonigal. Together, Mary McGonigal and Isaac were the parents of seven children (who used the surname Matthews), two of whom died at birth, including: Ruth Mary Matthews (b. 1852) Clara Matthews (1854–1933), who married Col. Hugh Stafford in 1880. Margaret Matthews (1858–1939), who married Granville Henry Jackson Alexander, Esq., the High Sheriff of Armagh. Charles Alexander Matthews (1859–1883), who married Minnie Mathews. Florence Adelaide Matthews (–1932), who married Harry Ruthven Pratt. Mary Ann, still calling herself Mrs. I. M. Singer, then had her husband arrested for bigamy. Singer was let out on bond and, disgraced, fled to London in 1862, taking Mary McGonigal with him. In the aftermath, another of Isaac's families was discovered: he had a "wife", Mary Eastwood Walters, a machine demonstrator, with whom he had had a daughter in Lower Manhattan: Alice Eastwood (née Walters) Merritt (1852–1890), who adopted the surname Merritt and married twice, including to W. A. P. LaGrove at age eighteen in a marriage arranged by Singer. By 1860, Isaac had fathered and acknowledged twenty children, sixteen of them still then living, by four women. In 1861, his longstanding mistress Mary Ann took him to court for abusing her and their daughter Vouletti. With Isaac in London, Mary Ann set about securing a financial claim to his assets by filing documents detailing his infidelities, and claiming that, though she had never been formally married to Isaac, they were wed under common law by having lived together for seven months after Isaac had been divorced from his first wife, Catherine. Eventually, a settlement was made, but no divorce was granted. However, she asserted that she was free to marry, and indeed she married John E. Foster. Isaac, meanwhile, had renewed his acquaintance with Isabella Eugenie Boyer, a nineteen-year-old Frenchwoman with whom he had lived in Paris when he was staying there in 1860. She left her husband and married Isaac, who was by now fifty, under the name of Isabella Eugenie Sommerville, on June 13, 1863, while she was pregnant. Together, they had six children: Sir Adam Mortimer Singer (1863–1929) Winnaretta Eugenie Singer (1865–1943), a patron of 20th-century music who married Prince Louis de Scey-Montbéliard in 1887. 
They divorced in 1892 and she married Prince Edmond de Polignac. Washington Merritt Grant Singer (1866–1934), who married Blanche Emmeline Hale and Ellen Mary Allen. Paris Eugene Singer (1867–1932), who married Cecilia Henrietta Augusta "Lillie" Graham (1867–1951). Paris was a close friend of the Palm Beach architect Addison Mizner. Isabelle-Blanche Singer (1869–1896), who married the French aristocrat Jean, Duc Decazes et de Glücksbierg, in 1888. Franklin Merritt Morse Singer (1870–1939), who married Emilie Maigret. Isaac Singer died in 1875, shortly after the wedding of his daughter by Mary Eastwood Walters, Alice, whose dress had cost as much as a London apartment. His funeral was an elaborate affair with eighty horse-drawn carriages and around 2,000 mourners. At his request he was buried locally in Torquay Cemetery, in three layers of coffin (cedar lined with satin, lead, and English oak with silver decoration) and a marble tomb. Legacy and honors A World War II Liberty Ship was named in his honor. Singer Island, Florida, was named for his son Paris Singer. References Further reading Brandon, Ruth, Singer and the Sewing Machine: A Capitalist Romance, Kodansha International, New York, 1977. Glander, Angelika, Singer – Der König der Nähmaschinen, Die Biographie, Norderstedt, 2009 (in German). Hawthorne, Paul, Oldway Mansion, historic home of the Singer family, Torbay Books, Paignton, 2009. External links Isaac Merritt Singer, Detailed Biography 1811 births 1875 deaths 19th-century American businesspeople 19th-century American male actors American male stage actors Male actors from New York (state) 19th-century American inventors American manufacturing businesspeople Burials in Devon Businesspeople from New York (state) People from Pittstown, New York Sewing equipment Sewing machines American people of German descent
Isaac Singer
Physics,Technology
3,411
2,015,367
https://en.wikipedia.org/wiki/Two-hybrid%20screening
Two-hybrid screening (originally known as yeast two-hybrid system or Y2H) is a molecular biology technique used to discover protein–protein interactions (PPIs) and protein–DNA interactions by testing for physical interactions (such as binding) between two proteins or a single protein and a DNA molecule, respectively. The premise behind the test is the activation of downstream reporter gene(s) by the binding of a transcription factor onto an upstream activating sequence (UAS). For two-hybrid screening, the transcription factor is split into two separate fragments, called the DNA-binding domain (DBD, often abbreviated as BD) and the activating domain (AD). The BD is the domain responsible for binding to the UAS and the AD is the domain responsible for the activation of transcription. The Y2H is thus a protein-fragment complementation assay. History Pioneered by Stanley Fields and Ok-Kyu Song in 1989, the technique was originally designed to detect protein–protein interactions using the Gal4 transcriptional activator of the yeast Saccharomyces cerevisiae. The Gal4 protein activated transcription of a gene involved in galactose utilization, which formed the basis of selection. Since then, the same principle has been adapted to describe many alternative methods, including some that detect protein–DNA interactions or DNA–DNA interactions, as well as methods that use different host organisms such as Escherichia coli or mammalian cells instead of yeast. Basic premise The key to the two-hybrid screen is that in most eukaryotic transcription factors, the activating and binding domains are modular and can function in proximity to each other without direct binding. This means that even though the transcription factor is split into two fragments, it can still activate transcription when the two fragments are indirectly connected. The most common screening approach is the matrix-style yeast two-hybrid assay, in which the researcher knows where each prey is located on the medium (agar plates) used. Millions of potential interactions in several organisms have been screened over the last decade using high-throughput screening systems (often using robots), and thousands of interactions have been detected and categorized in databases such as BioGRID. This system often utilizes a genetically engineered strain of yeast in which the biosynthesis of certain nutrients (usually amino acids or nucleic acids) is lacking. When grown on media that lack these nutrients, the yeast cells fail to survive. This mutant yeast strain can be made to incorporate foreign DNA in the form of plasmids. In yeast two-hybrid screening, separate bait and prey plasmids are simultaneously introduced into the mutant yeast strain, or a mating strategy is used to get both plasmids into one host cell. The second high-throughput approach is the library screening approach. In this setup, the bait- and prey-harboring cells are mated in random order. After mating and selecting surviving cells on selective medium, the scientist sequences the isolated plasmids to see which prey (DNA sequence) is interacting with the bait used. This approach has a lower rate of reproducibility and tends to yield a higher number of false positives compared to the matrix approach. Plasmids are engineered to produce a protein product in which the DNA-binding domain (BD) fragment is fused onto a protein, while another plasmid is engineered to produce a protein product in which the activation domain (AD) fragment is fused onto another protein.
The protein fused to the BD may be referred to as the bait protein, and is typically a known protein the investigator is using to identify new binding partners. The protein fused to the AD may be referred to as the prey protein and can be either a single known protein or a library of known or unknown proteins. In this context, a library may consist of a collection of protein-encoding sequences that represent all the proteins expressed in a particular organism or tissue, or may be generated by synthesising random DNA sequences. Regardless of the source, they are subsequently incorporated into the protein-encoding sequence of a plasmid, which is then transfected into the cells chosen for the screening method. This technique, when using a library, assumes that each cell is transfected with no more than a single plasmid and that, therefore, each cell ultimately expresses no more than a single member of the protein library. If the bait and prey proteins interact (i.e., bind), then the AD and BD of the transcription factor are indirectly connected, bringing the AD into proximity with the transcription start site, and transcription of the reporter gene(s) can occur. If the two proteins do not interact, there is no transcription of the reporter gene. In this way, a successful interaction between the fused proteins is linked to a change in the cell phenotype. The challenge of separating cells that express proteins that happen to interact with their counterpart fusion proteins from those that do not is addressed in the following section. Fixed domains In any study, some of the protein domains, those under investigation, will be varied according to the goals of the study, whereas other domains, those that are not themselves being investigated, will be kept constant. For example, in a two-hybrid study to select DNA-binding domains, the DNA-binding domain, BD, will be varied while the two interacting proteins, the bait and prey, must be kept constant to maintain a strong binding between the BD and AD. There are a number of domains from which to choose the BD, bait and prey, and AD, if these are to remain constant. In protein–protein interaction investigations, the BD may be chosen from any of many strong DNA-binding domains such as Zif268. A frequent choice of bait and prey domains are residues 263–352 of yeast Gal11P with an N342V mutation and residues 58–97 of yeast Gal4, respectively. These domains can be used in both yeast- and bacterial-based selection techniques and are known to bind together strongly. The AD chosen must be able to activate transcription of the reporter gene, using the cell's own transcription machinery. Thus, the variety of ADs available for use in yeast-based techniques may not be suited to use in their bacterial-based analogues. The herpes simplex virus-derived AD VP16 and the yeast Gal4 AD have been used with success in yeast, whilst a portion of the α-subunit of E. coli RNA polymerase has been utilised in E. coli-based methods. Whilst powerful activating domains may allow greater sensitivity towards weaker interactions, a weaker AD may conversely provide greater stringency. Construction of expression plasmids A number of engineered genetic sequences must be incorporated into the host cell to perform two-hybrid analysis or one of its derivative techniques. The considerations and methods used in the construction and delivery of these sequences differ according to the needs of the assay and the organism chosen as the experimental background.
There are two broad categories of hybrid library: random libraries and cDNA-based libraries. A cDNA library is constituted by the cDNA produced through reverse transcription of mRNA collected from specific cells or types of cell. This library can be ligated into a construct so that it is attached to the BD or AD being used in the assay. A random library uses lengths of DNA of random sequence in place of these cDNA sections. A number of methods exist for the production of these random sequences, including cassette mutagenesis. Regardless of the source of the DNA library, it is ligated into the appropriate place in the relevant plasmid/phagemid using the appropriate restriction endonucleases. E. coli-specific considerations By placing the hybrid proteins under the control of IPTG-inducible lac promoters, they are expressed only on media supplemented with IPTG. Further, by including different antibiotic resistance genes in each genetic construct, the growth of non-transformed cells is easily prevented through culture on media containing the corresponding antibiotics. This is particularly important for counter-selection methods, in which a lack of interaction is needed for cell survival. The reporter gene may be inserted into the E. coli genome by first inserting it into an episome, a type of plasmid with the ability to incorporate itself into the bacterial cell genome with a copy number of approximately one per cell. The hybrid expression phagemids can be electroporated into E. coli XL-1 Blue cells, which after amplification and infection with VCS-M13 helper phage will yield a stock of library phage. These phage will each contain one single-stranded member of the phagemid library. Recovery of protein information Once the selection has been performed, the primary structure of the proteins which display the appropriate characteristics must be determined. This is achieved by retrieval of the protein-encoding sequences (as originally inserted) from the cells showing the appropriate phenotype. E. coli The phagemid used to transform E. coli cells may be "rescued" from the selected cells by infecting them with VCS-M13 helper phage. The resulting phage particles that are produced contain the single-stranded phagemids and are used to infect XL-1 Blue cells. The double-stranded phagemids are subsequently collected from these XL-1 Blue cells, essentially reversing the process used to produce the original library phage. Finally, the DNA sequences are determined through dideoxy sequencing. Controlling sensitivity The Escherichia coli-derived Tet-R repressor can be used in tandem with a conventional reporter gene and can be controlled by tetracycline or doxycycline (Tet-R inhibitors). Thus the expression of Tet-R is controlled by the standard two-hybrid system, but the Tet-R in turn controls (represses) the expression of a previously mentioned reporter such as HIS3, through its Tet-R promoter. Tetracycline or its derivatives can then be used to regulate the sensitivity of a system utilising Tet-R. Sensitivity may also be controlled by varying the dependency of the cells on their reporter genes. For example, this may be affected by altering the concentration of histidine in the growth medium for his3-dependent cells and altering the concentration of streptomycin for aadA-dependent cells. Selection-gene dependency may also be controlled by applying an inhibitor of the selection gene at a suitable concentration.
3-Amino-1,2,4-triazole (3-AT), for example, is a competitive inhibitor of the HIS3 gene product and may be used to titrate the minimum level of HIS3 expression required for growth on histidine-deficient media. Sensitivity may also be modulated by varying the number of operator sequences in the reporter DNA. Non-fusion proteins A third, non-fusion protein may be co-expressed with the two fusion proteins. Depending on the investigation, the third protein may modify one of the fusion proteins or mediate or interfere with their interaction. Co-expression of the third protein may be necessary for modification or activation of one or both of the fusion proteins. For example, S. cerevisiae possesses no endogenous tyrosine kinase. If an investigation involves a protein that requires tyrosine phosphorylation, the kinase must be supplied in the form of a tyrosine kinase gene. The non-fusion protein may mediate the interaction by binding both fusion proteins simultaneously, as in the case of ligand-dependent receptor dimerization. For a protein with an interacting partner, its functional homology to other proteins may be assessed by supplying the third protein in non-fusion form, which then may or may not compete with the fusion protein for its binding partner. Binding between the third protein and the other fusion protein will interrupt the formation of the reporter expression activation complex and thus reduce reporter expression, leading to the distinguishing change in phenotype. Split-ubiquitin yeast two-hybrid One limitation of classic yeast two-hybrid screens is that they are restricted to soluble proteins. It is therefore impossible to use them to study the protein–protein interactions between insoluble integral membrane proteins. The split-ubiquitin system provides a method for overcoming this limitation. In the split-ubiquitin system, two integral membrane proteins to be studied are fused to two different ubiquitin moieties: a C-terminal ubiquitin moiety ("Cub", residues 35–76) and an N-terminal ubiquitin moiety ("Nub", residues 1–34). These fused proteins are called the bait and prey, respectively. In addition to being fused to an integral membrane protein, the Cub moiety is also fused to a transcription factor (TF) that can be cleaved off by ubiquitin-specific proteases. Upon bait–prey interaction, the Nub and Cub moieties assemble, reconstituting the split-ubiquitin. The reconstituted split-ubiquitin molecule is recognized by ubiquitin-specific proteases, which cleave off the transcription factor, allowing it to induce the transcription of reporter genes. Fluorescent two-hybrid assay Zolghadr and co-workers presented a fluorescent two-hybrid system that uses two hybrid proteins fused to different fluorescent proteins as well as LacI, the lac repressor. The structure of the fusion proteins looks like this: FP2-LacI-bait and FP1-prey, where the bait and prey proteins interact and bring the fluorescent proteins (FP1 = GFP, FP2 = mCherry) into close proximity at the binding site of the LacI protein in the host cell genome. The system can also be used to screen for inhibitors of protein–protein interactions. Enzymatic two-hybrid systems: KISS While the original Y2H system used a reconstituted transcription factor, other systems create enzymatic activities to detect PPIs. For instance, the KInase Substrate Sensor ("KISS") is a mammalian two-hybrid approach that has been designed to map intracellular PPIs.
Here, a bait protein is fused to a kinase-containing portion of TYK2 and a prey is coupled to a gp130 cytokine receptor fragment. When bait and prey interact, TYK2 phosphorylates STAT3 docking sites on the prey chimera, which ultimately leads to activation of a reporter gene. One-, three- and one-two-hybrid variants One-hybrid The one-hybrid variation of this technique is designed to investigate protein–DNA interactions and uses a single fusion protein in which the AD is linked directly to the binding domain. The binding domain in this case, however, is not necessarily of fixed sequence, as in two-hybrid protein–protein analysis, but may be constituted by a library. This library can be selected against the desired target sequence, which is inserted in the promoter region of the reporter gene construct. In a positive-selection system, a binding domain that successfully binds the UAS and allows transcription is thus selected. Note that selection of DNA-binding domains is not necessarily performed using a one-hybrid system, but may also be performed using a two-hybrid system in which the binding domain is varied and the bait and prey proteins are kept constant. Three-hybrid RNA–protein interactions have been investigated through a three-hybrid variation of the two-hybrid technique. In this case, a hybrid RNA molecule serves to adjoin the two protein fusion domains, which are not intended to interact with each other but rather with the intermediary RNA molecule (through their RNA-binding domains). Techniques involving non-fusion proteins that perform a similar function, as described in the 'non-fusion proteins' section above, may also be referred to as three-hybrid methods. One-two-hybrid Simultaneous use of the one- and two-hybrid methods (that is, simultaneous protein–protein and protein–DNA interaction) is known as a one-two-hybrid approach and is expected to increase the stringency of the screen. Host organism Although, theoretically, any living cell might be used as the background to a two-hybrid analysis, there are practical considerations that dictate which is chosen. The chosen cell line should be relatively cheap and easy to culture and sufficiently robust to withstand application of the investigative methods and reagents. The latter is especially important for high-throughput studies. Therefore, the yeast S. cerevisiae has been the main host organism for two-hybrid studies. However, it is not always the ideal system in which to study interacting proteins from other organisms. Yeast cells often do not have the same post-translational modifications, have a different codon usage, or lack certain proteins that are important for the correct expression of the proteins. To cope with these problems, several novel two-hybrid systems have been developed. Depending on the system used, agar plates or a specific growth medium are used to grow the cells and allow selection for interaction. The most commonly used method is agar plating, where cells are plated on selective medium to see whether interaction takes place. Cells harboring no interacting proteins should not survive on this selective medium. S. cerevisiae (yeast) The yeast S. cerevisiae was the model organism used during the two-hybrid technique's inception. It is commonly known as the Y2H system.
It has several characteristics that make it a robust organism to host the interaction, including the ability to form tertiary protein structures, a neutral internal pH, an enhanced ability to form disulfide bonds, and reduced-state glutathione among other cytosolic buffer factors, to maintain a hospitable internal environment. The yeast model can be manipulated through non-molecular techniques and its complete genome sequence is known. Yeast systems are tolerant of diverse culture conditions and harsh chemicals that could not be applied to mammalian tissue cultures. A number of yeast strains have been created specifically for Y2H screens, e.g. Y187 and AH109, both produced by Clontech. Yeast strains R2HMet and BK100 have also been used. Candida albicans C. albicans is a yeast with a particular feature: it translates the CUG codon into serine rather than leucine. Due to this different codon usage, it is difficult to use the model system S. cerevisiae as a Y2H to check for protein–protein interactions using C. albicans genes. To provide a more native environment, a C. albicans two-hybrid (C2H) system was developed. With this system, protein–protein interactions can be studied in C. albicans itself. A recent addition was the creation of a high-throughput system. E. coli Bacterial two-hybrid methods (B2H or BTH) are usually carried out in E. coli and have some advantages over yeast-based systems. For instance, the higher transformation efficiency and faster rate of growth lend E. coli to the use of larger libraries (in excess of 10⁸). The absence of a requirement for a nuclear localisation signal to be included in the protein sequence, and the ability to study proteins that would be toxic to yeast, may also be major factors to consider when choosing an experimental background organism. The methylation activity of certain E. coli DNA methyltransferase proteins may interfere with some DNA-binding protein selections. If this is anticipated, the use of an E. coli strain that is defective for a particular methyltransferase may be an obvious solution. The B2H may not be ideal when studying eukaryotic protein–protein interactions (e.g. human proteins), as proteins may not fold as they do in eukaryotic cells or may lack other processing. Mammalian cells In recent years a mammalian two-hybrid (M2H) system has been designed to study mammalian protein–protein interactions in a cellular environment that closely mimics the native protein environment. Transiently transfected mammalian cells are used in this system to find protein–protein interactions. Using a mammalian cell line to study mammalian protein–protein interactions gives the advantage of working in a more native context. Post-translational modifications such as phosphorylation, acylation and glycosylation are similar, and the intracellular localization of the proteins is more faithful than in a yeast two-hybrid system. It is also possible with the mammalian two-hybrid system to study signal inputs. Another big advantage is that results can be obtained within 48 hours after transfection. Arabidopsis thaliana In 2005 a two-hybrid system in plants was developed. Using protoplasts of A. thaliana, protein–protein interactions can be studied in plants. This way, the interactions can be studied in their native context. In this system the GAL4 AD and BD are under the control of the strong 35S promoter. Interaction is measured using a GUS reporter. To enable high-throughput screening, the vectors were made Gateway-compatible.
The system is known as the protoplast two-hybrid (P2H) system. Aplysia californica The sea hare A. californica is a model organism in neurobiology for studying, among other things, the molecular mechanisms of long-term memory. To study interactions important in neurology in a more native environment, a two-hybrid system has been developed in A. californica neurons. A GAL4 AD and BD are used in this system. Bombyx mori An insect two-hybrid (I2H) system was developed in a silkworm cell line from the larva or caterpillar of the domesticated silk moth, Bombyx mori (BmN4 cells). This system uses the GAL4 BD and the activation domain of mouse NF-κB P65, both under the control of the OpIE2 promoter. Applications Determination of sequences crucial for interaction By changing specific amino acids through mutation of the corresponding DNA base pairs in the plasmids used, the importance of those amino acid residues in maintaining the interaction can be determined. After using a bacterial cell-based method to select DNA-binding proteins, it is necessary to check the specificity of these domains, as there is a limit to the extent to which the bacterial cell genome can act as a sink for domains with an affinity for other sequences (or indeed, a general affinity for DNA). Drug and poison discovery Protein–protein signalling interactions are suitable therapeutic targets due to their specificity and pervasiveness. The random drug discovery approach uses compound banks that comprise random chemical structures, and requires a high-throughput method to test these structures against their intended target. The cell chosen for the investigation can be specifically engineered to mirror the molecular aspect that the investigator intends to study and then used to identify new human or animal therapeutics or anti-pest agents. Determination of protein function By determining the interaction partners of unknown proteins, the possible functions of these new proteins may be inferred. This can be done using a single known protein against a library of unknown proteins or, conversely, by selecting from a library of known proteins using a single protein of unknown function. Zinc finger protein selection To select zinc finger proteins (ZFPs) for protein engineering, methods adapted from the two-hybrid screening technique have been used with success. A ZFP is itself a DNA-binding protein used in the construction of custom DNA-binding domains that bind to a desired DNA sequence. By using a selection gene with the desired target sequence included in the UAS, and randomising the relevant amino acid sequences to produce a ZFP library, cells that host a DNA–ZFP interaction with the required characteristics can be selected. Each ZFP typically recognises only 3–4 base pairs, so to prevent recognition of sites outside the UAS, the randomised ZFP is engineered into a 'scaffold' consisting of another two ZFPs of constant sequence. The UAS is thus designed to include the target sequence of the constant scaffold in addition to the sequence for which a ZFP is selected. A number of other DNA-binding domains may also be investigated using this system. Strengths Two-hybrid screens are low-tech; they can be carried out in any lab without sophisticated equipment. Two-hybrid screens can provide an important first hint for the identification of interaction partners. The assay is scalable, which makes it possible to screen for interactions among many proteins.
Furthermore, it can be automated, and by using robots many proteins can be screened against thousands of potentially interacting proteins in a relatively short time. Two types of large screens are used: the library approach and the matrix approach. Yeast two-hybrid data can be of similar quality to data generated by the alternative approach of coaffinity purification followed by mass spectrometry (AP/MS). Weaknesses The main criticism applied to the yeast two-hybrid screen of protein–protein interactions is the possibility of a high number of false positive (and false negative) identifications. The exact rate of false positive results is not known, but earlier estimates were as high as 70%. This also partly explains the often very small overlap in results between (high-throughput) two-hybrid screens, especially when different experimental systems are used. The reason for this high error rate lies in the characteristics of the screen: Certain assay variants overexpress the fusion proteins, which may cause unnatural protein concentrations that lead to unspecific (false) positives. The hybrid proteins are fusion proteins; that is, the fused parts may inhibit certain interactions, especially if an interaction takes place at the N-terminus of a test protein (where the DNA-binding or activation domain is typically attached). An interaction may not happen in yeast, the typical host organism for Y2H. For instance, if a bacterial protein is tested in yeast, it may lack a chaperone for proper folding that is only present in its bacterial host. Moreover, a mammalian protein is sometimes not correctly modified in yeast (e.g., missing phosphorylation), which can also lead to false results. The Y2H takes place in the nucleus. If test proteins are not localized to the nucleus (because they have other localization signals), two interacting proteins may be found to be non-interacting. Some proteins might specifically interact when they are co-expressed in the yeast, although in reality they are never present in the same cell at the same time. However, in most cases it cannot be ruled out that such proteins are indeed expressed in certain cells or under certain circumstances. Each of these points alone can give rise to false results. Due to the combined effects of all error sources, yeast two-hybrid results have to be interpreted with caution. The probability of generating false positives means that all interactions should be confirmed by a high-confidence assay, for example co-immunoprecipitation of the endogenous proteins, which is difficult for large-scale protein–protein interaction data. Alternatively, Y2H data can be verified using multiple Y2H variants or bioinformatics techniques. The latter test whether interacting proteins are expressed at the same time, share some common features (such as gene ontology annotations or certain network topologies), or have homologous interactions in other species.
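As a toy illustration of the bioinformatic verification just described, the Python sketch below keeps a candidate bait–prey pair only if both partners are co-expressed and share at least one annotation term. All protein names and data here are hypothetical, invented purely for illustration; a real analysis would query resources such as BioGRID and the Gene Ontology rather than hard-coded dictionaries.

```python
# Hypothetical example: retain a candidate bait-prey pair only if both
# proteins are expressed in a common tissue and share an annotation
# term, loosely mimicking the verification filters described above.

candidates = [("BAIT1", "PREYA"), ("BAIT1", "PREYB"), ("BAIT2", "PREYC")]

expression = {  # protein -> tissues where it is expressed (made-up data)
    "BAIT1": {"liver", "kidney"},
    "PREYA": {"liver"},
    "PREYB": {"brain"},
    "BAIT2": {"muscle"},
    "PREYC": {"muscle"},
}

annotations = {  # protein -> GO-like annotation terms (made-up data)
    "BAIT1": {"kinase activity", "nucleus"},
    "PREYA": {"nucleus"},
    "PREYB": {"membrane"},
    "BAIT2": {"cytoskeleton"},
    "PREYC": {"cytoskeleton"},
}

def plausible(bait, prey):
    # Set intersection tests co-expression and shared annotation.
    co_expressed = bool(expression[bait] & expression[prey])
    shared_terms = bool(annotations[bait] & annotations[prey])
    return co_expressed and shared_terms

kept = [pair for pair in candidates if plausible(*pair)]
print(kept)  # [('BAIT1', 'PREYA'), ('BAIT2', 'PREYC')]
```

In this toy run, the BAIT1–PREYB pair is flagged as a likely false positive because the two proteins are never expressed in the same tissue, exactly the kind of filter the weaknesses discussion above motivates.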
See also Phage display, an alternative method for detecting protein–protein and protein–DNA interactions Protein array, a chip-based method for detecting protein–protein interactions Synthetic genetic array analysis, a yeast-based method for studying gene interactions References External links Detail on sister technique two-hybrid system Science Creative Quarterly's overview of the yeast two hybrid system Gateway-Compatible Yeast One-Hybrid Screens Video animation of the Yeast Two-Hybrid System Yeast Two-Hybrid BioGrid Database with protein-protein interactions Cell biology Molecular biology Protein–protein interaction assays Systems biology
Two-hybrid screening
Chemistry,Biology
5,723
56,894,097
https://en.wikipedia.org/wiki/NGC%204598
NGC 4598 is a barred lenticular galaxy located in the constellation Virgo. NGC 4598 was discovered by astronomer William Herschel on April 15, 1784. The distance to NGC 4598 has not been accurately determined; measurements vary from 64 to 102 million light-years. According to the NASA/IPAC Extragalactic Database, its redshift-based distance is , while its redshift-independent distance is . Also, according to SIMBAD, its distance is . NGC 4598's average distance is . NGC 4598 is usually considered to be a member of the Virgo Cluster. However, P. Fouqué et al. suggest it may be a background galaxy independent of the main cluster. See also NGC 1533 Notes 1. This value was determined by using the three other measured values given above. References External links Virgo (constellation) Barred lenticular galaxies 4598 42427 7829 Astronomical objects discovered in 1784 Discoveries by William Herschel Virgo Cluster
NGC 4598
Astronomy
207
1,456,628
https://en.wikipedia.org/wiki/Judi-Dart
The Judi-Dart is a United States solid-fueled sounding rocket, manufactured by Rocket Power Inc. It belonged to the Loki rocket family. The Judi-Dart was launched 89 times between 1964 and 1970. The Judi-Dart has a length of 2.70 metres, a diameter of 0.08 metres, a maximum flight altitude of 65 kilometres and a launch thrust of 9000 newtons. Description Engine The solid-fuel motor (1.9KS2150) was 1.70 m long and 8 cm in diameter, with an initial thrust of 9 kN and a combustion time of only 2 seconds, accelerating the vehicle to almost 1500 m/s. At burnout, the motor separates from the dart, which reaches its peak of about 75 km around 135 seconds after ignition. Dart The upper part of the rocket was an aerodynamically optimized cylinder 4 cm in diameter and approximately 1 m long. The diameter of this dart varied slightly depending on the payload. Usage It was used by different countries for meteorological surveys at altitudes between 30 and 60 km. The dart was propelled by the Judi rocket to an altitude of approximately 76 km, where a small explosive charge expelled the payload, normally made up of polarized metal sheets (copper), which were then tracked in their lateral movement. In addition to this standard payload, others were supported, such as parachutes, inflatable spheres, and temperature sensors. It was used by the following organisations: NASA ISRO SUPARCO See also References External links Encyclopedia Astronautica at astronautix.com Sounding rockets of the United States
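As a quick sanity check on the performance figures quoted above, the mean acceleration during the 2-second burn follows directly from the burnout speed. This minimal Python sketch simply divides the stated numbers; it deliberately ignores gravity, drag and the vehicle's changing mass, so it is a back-of-the-envelope illustration rather than a trajectory model.

```python
# Back-of-the-envelope check using only figures quoted in the article.
burnout_speed = 1500.0  # m/s, speed "almost" reached at burnout
burn_time = 2.0         # s, stated combustion time
g = 9.81                # m/s^2, standard gravity

mean_accel = burnout_speed / burn_time      # ~750 m/s^2
print(f"mean acceleration ~ {mean_accel:.0f} m/s^2 "
      f"(~{mean_accel / g:.0f} g)")         # roughly 76 g
```

The result, on the order of 76 g, is consistent with the very short, high-thrust burn typical of the Loki family, after which the unpowered dart coasts to apogee.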
Judi-Dart
Astronomy
325
68,186,913
https://en.wikipedia.org/wiki/Leslie%20Fowden
Sir Leslie Fowden (1925–2008) was a British organic chemist and plant scientist, notable for his pioneering research on phytochemistry and plant amino acids, as well as for his role in promoting agricultural research in the UK. Biography Leslie Fowden was born at Birch Hill House, Wardle, Rochdale on 13 October 1925, the only child of Herbert Fowden, an iron turner, and Amy Dorothy (née Rabbich), a cotton minder. He was a diligent student who excelled at mathematics and won a fee-paying scholarship to Rochdale Grammar School for Boys (now Balderstone Technology College), where he studied from 1936 to 1943. He gained five distinctions in the School Certificate Examinations in 1940, including mathematics, physics and chemistry. In the 1942 Higher School Certificate (HSC) he was awarded distinctions for the same three subjects. Fowden went on to read chemistry at University College London (UCL) in a two-year intensive degree course (a special requirement for chemistry students in the war years). Another requirement was that he also had to participate in officer training. He was awarded a first-class BSc degree in chemistry with honours, and told that he was the top student in chemistry in the University of London as a whole. He started his PhD in late 1945, supervised jointly by Professor Ingold and by Professor E. D. Hughes of the University College of Wales, Bangor. Ingold was the UK authority on organic reaction mechanisms, and Fowden was set to work investigating nucleophilic substitution in alkyl halides as the alkyl group became progressively larger or more branched in structure. The degree was awarded in 1948, and the main findings were published in 1955. In 1947 Fowden accepted a post as scientific officer in the Human Nutrition Research Unit of the MRC in London. This was a key moment in his career, marking a move to work of more direct benefit to mankind. He was involved with two projects: (1) on kwashiorkor and a growth-retarding factor in maize bran; and (2) a chromatographic study of peanut protein hydrolysates and their free amino acid content, as part of a scheme to improve post-war nutrition and the economy of Commonwealth countries in East Africa. Some aspects of the chromatographic work did not fit in with the MRC's aims, so he accepted a lectureship in plant chemistry, back at UCL, where he had greater freedom. He set up a new lab where the main focus was on the identification and structural analysis of plant non-protein amino acids. He recruited PhD students and technical assistants; they, and later postdoctoral research fellows and foreign visitors, discovered several new plant amino and imino acids. Fowden isolated and characterized non-protein amino acids from a growing number and variety of plants, emphasizing their general importance in plant nitrogen metabolism. His research was recognized by promotion to a readership in 1956. On 31 January 1955, Leslie Fowden and his family sailed on the America from Southampton to New York, en route to Ithaca, where Fowden took up a Rockefeller Visiting Fellowship to work with Professor F. C. Steward at Cornell. Their work together "provided one of the earliest demonstrations of how chemical data could be used to establish phylogenetic relationships within and between plant families and their constituent genera". The Fowdens returned to the UK aboard the Queen Mary, arriving on 21 December 1955.
Leslie Fowden made several more trips in the coming years, including: A visit in 1957 to Professor Virtanen for three months at the Biochemical Institute in Helsinki Several trips to East Germany and the USSR in the early 1960s to attend specialist meetings on plant nitrogen metabolism and visit influential scientists of the Eastern Bloc In 1961 he made his first visit to California to lecture at a specialist amino acid meeting and visit University of California campuses. He returned to the University of California, Davis for a seven-month sabbatical in 1963 as a visiting professor. In the summers of 1969 and 1970 Fowden returned to Davis on a NATO Cooperative grant. He was the first Royal Society visiting professor to the University of Hong Kong in 1967, where he was attached to the Botany Department for four months. These trips strengthened his love of travel and languages. In 1972 Fowden was invited to fill the post of Director of Rothamsted Experimental Station; he took up the position on 1 April 1973. When he arrived, the research being undertaken by some 500 scientists "needed reinvigoration—and new investment—to regain its past reputation for scientific excellence". Fragmented departments were combined into five new divisions. In 1986 Rothamsted itself was amalgamated with other stations across the country to form the new Institute of Arable Crops Research (IACR), and Fowden became its inaugural director. He retired in 1988, but did not slow down. He joined the council of the Royal Institution and became a trustee and then Director of the Foundation and Friends of Kew Gardens. He became a scientific adviser to several international agrochemical companies, and maintained visiting professorships at the University of London and the University of Wales Swansea. Honours, degrees and awards 1945 BSc (first class honours) Chemistry, University College, University of London 1948 PhD Physical Organic Chemistry, University College 1964 Fellow of the Royal Society 1966 Fellow of University College London 1967 Royal Society Visiting Prof., University of Hong Kong 1970 Council of Royal Society 1971 Foreign Member of Deutsche Akademie der Naturforscher Leopoldina 1978 Foreign Member of Lenin All-Union Academy of Agricultural Sciences of the USSR 1981 Corresponding Member of the American Society of Plant Physiologists 1982 Knighthood awarded by the Queen 1986 Foreign Member of the Academy of Agricultural Sciences of the German Democratic Republic 1986 Honorary Member of the Phytochemical Society of Europe 1989 Lawes Trust Senior Fellow, Rothamsted Experimental Station 1991 Foreign Member of the Russian Academy of Agricultural Sciences 1992 DSc (honorary), University of Westminster 1992 Fellowship of the International Institute of Biotechnology 1994 Director of the Foundation and Friends of Kew Gardens Family Leslie Fowden married fellow chemistry student Margaret (Peggy) Oakes on 9 July 1949 at the Methodist chapel in East Ham. They had two children: Abigail L, born on 13 January 1954. She is now retired but was formerly Professor of Perinatal Physiology at the Department of Physiology, Development and Neuroscience, University of Cambridge. Jeremy S G, born in 1957, who has served in senior executive positions in eight or more companies. He is currently with Constellation Brands, a leading international producer and marketer of beer, wine, and spirits. Sir Leslie Fowden died from renal and heart failure at a care home in Histon on 16 December 2008 and was cremated in Cambridge on the 29th.
References 1925 births 2008 deaths Alumni of University College London British organic chemists Rothamsted Experimental Station people Royal Society of Chemistry Fellows of the Royal Society Knights Bachelor
Leslie Fowden
Chemistry
1,407
40,278,895
https://en.wikipedia.org/wiki/Theonellamide%20F
Theonellamide F is an antifungal isolate of a sea sponge. References Antifungals 4-Bromophenyl compounds Heterocyclic compounds with 3 rings Carboxylic acids Amides Carboxamides
Theonellamide F
Chemistry
52
9,629,714
https://en.wikipedia.org/wiki/Adenine%20nucleotide%20translocator
Adenine nucleotide translocator (ANT), also known as the ADP/ATP translocase (ANT), ADP/ATP carrier protein (AAC) or mitochondrial ADP/ATP carrier, exchanges free ATP with free ADP across the inner mitochondrial membrane. ANT is the most abundant protein in the inner mitochondrial membrane and belongs to the mitochondrial carrier family. Free ADP is transported from the cytoplasm to the mitochondrial matrix, while ATP produced from oxidative phosphorylation is transported from the mitochondrial matrix to the cytoplasm, thus providing the cell with its main energy currency. ADP/ATP translocases are exclusive to eukaryotes and are thought to have evolved during eukaryogenesis. Human cells express four ADP/ATP translocases: SLC25A4, SLC25A5, SLC25A6 and SLC25A31, which constitute more than 10% of the protein in the inner mitochondrial membrane. These proteins are classified under the mitochondrial carrier superfamily. Types In humans, there exist three paralogous ANT isoforms: SLC25A4 – found primarily in heart and skeletal muscle SLC25A5 – primarily expressed in fibroblasts SLC25A6 – primarily expressed in liver Structure ANT has long been thought to function as a homodimer, but this concept was challenged by the projection structure of the yeast Aac3p solved by electron crystallography, which showed that the protein was three-fold symmetric and monomeric, with the translocation pathway for the substrate through the centre. The atomic structure of the bovine ANT confirmed this notion and provided the first structural fold of a mitochondrial carrier. Further work has demonstrated that ANT is a monomer in detergents and functions as a monomer in mitochondrial membranes. ADP/ATP translocase 1 is the major AAC in human cells and the archetypal protein of this family. It has a mass of approximately 30 kDa, consisting of 297 residues. It forms six transmembrane α-helices arranged as a barrel, resulting in a deep cone-shaped depression, accessible from the outside, where the substrate binds. The binding pocket, conserved throughout most isoforms, mostly consists of basic residues that allow for strong binding to ATP or ADP, and has a maximal diameter of 20 Å and a depth of 30 Å. Indeed, arginine residues 96, 204, 252, 253, and 294, as well as lysine 38, have been shown to be essential for transporter activity. Function ADP/ATP translocase transports ATP synthesized from oxidative phosphorylation into the cytoplasm, where it can be used as the principal energy currency of the cell to power thermodynamically unfavorable reactions. After the consequent hydrolysis of ATP into ADP, ADP is transported back into the mitochondrial matrix, where it can be rephosphorylated to ATP. Because a human typically exchanges the equivalent of their own mass of ATP on a daily basis, ADP/ATP translocase is an important transporter protein with major metabolic implications. ANT transports the free forms of ADP and ATP, i.e. deprotonated and not bound to magnesium or calcium, in a 1:1 ratio. Transport is fully reversible, and its directionality is governed by the concentrations of its substrates (ADP and ATP inside and outside mitochondria), the chelators of the adenine nucleotides, and the mitochondrial membrane potential. The relationship of these parameters can be expressed by an equation solving for the "reversal potential of the ANT" (Erev_ANT), a value of the mitochondrial membrane potential at which no net transport of adenine nucleotides takes place by the ANT.
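The text names Erev_ANT without giving its formula. Purely as an illustration, the Python sketch below computes a reversal potential from the simplest thermodynamic argument, assuming that each exchange moves one net negative charge out of the matrix (ATP⁴⁻ out for ADP³⁻ in) and using hypothetical free-nucleotide concentrations; the published Erev_ANT equation contains further terms, so this is a didactic sketch rather than that equation.

```python
import math

# Physical constants
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # absolute temperature, K (37 degrees C)

def ant_reversal_potential_mV(atp_out, adp_out, atp_in, adp_in):
    """Illustrative reversal potential for the ADP/ATP exchange, in mV.

    Assumes a net charge of -1 leaves the matrix per ATP(4-)/ADP(3-)
    exchange, so transport equilibrates when the membrane potential
    (matrix minus cytosol) balances the concentration work:
        E_rev = (RT/F) * ln( ([ADP]out * [ATP]in) / ([ADP]in * [ATP]out) )
    Concentrations are free (unchelated) nucleotides in consistent units.
    """
    q = (adp_out * atp_in) / (adp_in * atp_out)
    return (R * T / F) * math.log(q) * 1000.0  # volts -> millivolts

# Hypothetical free-nucleotide levels (mM): cytosolic ATP/ADP ratio ~ 10,
# matrix ATP/ADP ratio ~ 1.
e_rev = ant_reversal_potential_mV(atp_out=2.0, adp_out=0.2,
                                  atp_in=1.0, adp_in=1.0)
print(f"E_rev(ANT) ~ {e_rev:.0f} mV")
# With these numbers E_rev is about -62 mV; at a typical matrix-negative
# membrane potential near -150 mV the carrier is driven in the forward,
# ATP-exporting direction, consistent with the description above.
```

The sign convention here (matrix relative to cytosol) and the example concentrations are assumptions chosen for illustration; only the general logic, that a more negative membrane potential than Erev drives net ATP export, is taken from the article.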
The ANT and the F0-F1 ATP synthase are not necessarily in directional synchrony. Apart from the exchange of ADP and ATP across the inner mitochondrial membrane, the ANT also exhibits an intrinsic uncoupling activity. ANT is an important modulatory and possible structural component of the Mitochondrial Permeability Transition Pore, a channel involved in various pathologies whose function still remains elusive. Karch et al. propose a "multi-pore model" in which ANT is at least one of the molecular components of the pore. Translocase mechanism Under normal conditions, ATP and ADP cannot cross the inner mitochondrial membrane due to their high negative charges, but ADP/ATP translocase, an antiporter, couples the transport of the two molecules. The depression in ADP/ATP translocase alternately faces the matrix and the cytoplasmic sides of the membrane. ADP in the intermembrane space, coming from the cytoplasm, binds the translocase and induces its eversion, resulting in the release of ADP into the matrix. Binding of ATP from the matrix induces eversion and results in the release of ATP into the intermembrane space, from which it subsequently diffuses to the cytoplasm, concomitantly bringing the translocase back to its original conformation. ATP and ADP are the only natural nucleotides recognized by the translocase. The net process is denoted by: ADP³⁻ (cytoplasm) + ATP⁴⁻ (matrix) → ADP³⁻ (matrix) + ATP⁴⁻ (cytoplasm) ADP/ATP exchange is energetically expensive: about 25% of the energy yielded from electron transfer by aerobic respiration, or one hydrogen ion per exchange, is consumed to regenerate the membrane potential that is tapped by ADP/ATP translocase. The translocator cycles between two states, called the cytoplasmic and matrix states, opening up to these compartments in an alternating way. There are structures available that show the translocator locked in the cytoplasmic state by the inhibitor carboxyatractyloside, or in the matrix state by the inhibitor bongkrekic acid. Alterations Rare but severe diseases such as mitochondrial myopathies are associated with dysfunctional human ADP/ATP translocase. Mitochondrial myopathies (MM) refer to a group of clinically and biochemically heterogeneous disorders that share common features of major mitochondrial structural abnormalities in skeletal muscle. The major morphological hallmark of MM is ragged red fibers containing peripheral and intermyofibrillar accumulations of abnormal mitochondria. In particular, autosomal dominant progressive external ophthalmoplegia (adPEO) is a common disorder associated with dysfunctional ADP/ATP translocase and can induce paralysis of the muscles responsible for eye movements. General symptoms are not limited to the eyes and can include exercise intolerance, muscle weakness, hearing deficit, and more. adPEO shows Mendelian inheritance patterns but is characterized by large-scale mitochondrial DNA (mtDNA) deletions. mtDNA contains few introns, or non-coding regions of DNA, which increases the likelihood of deleterious mutations. Thus, any modification of ADP/ATP translocase mtDNA can lead to a dysfunctional transporter, particularly in residues involved in the binding pocket, which will compromise translocase efficacy. MM is commonly associated with dysfunctional ADP/ATP translocase, but MM can be induced through many different mitochondrial abnormalities. Inhibition ADP/ATP translocase is very specifically inhibited by two families of compounds.
The first family, which includes atractyloside (ATR) and carboxyatractyloside (CATR), binds to the ADP/ATP translocase from the cytoplasmic side, locking it in a cytoplasmic-side open conformation. In contrast, the second family, which includes bongkrekic acid (BA) and isobongkrekic acid (isoBA), binds the translocase from the matrix, locking it in a matrix-side open conformation. The negatively charged groups of the inhibitors bind strongly to the positively charged residues deep within the binding pocket. The high affinity (Kd in the nanomolar range) makes each inhibitor a deadly poison by obstructing cellular respiration and energy transfer to the rest of the cell. History In 1955, Siekevitz and Potter demonstrated that adenine nucleotides were distributed in cells in two pools located in the mitochondrial and cytosolic compartments. Shortly thereafter, Pressman hypothesized that the two pools could exchange nucleotides. However, the existence of an ADP/ATP transporter was not postulated until 1964, when Bruni et al. uncovered an inhibitory effect of atractyloside on the energy-transfer system (oxidative phosphorylation) and the ADP binding sites of rat liver mitochondria. Soon after, an overwhelming amount of research went into proving the existence of ADP/ATP translocase and elucidating its link to energy transport. The cDNA of ADP/ATP translocase was sequenced for cattle in 1982 and for the yeast species Saccharomyces cerevisiae in 1986, before Battini et al. finally sequenced a cDNA clone of the human transporter in 1989. The homology in the coding sequences between the human and yeast ADP/ATP translocases was 47%, while the bovine and human sequences were remarkably similar, matching 266 out of 297 residues (89.6%). In both cases, the most conserved residues lie in the ADP/ATP substrate binding pocket. See also Mitochondrial carrier Cellular respiration Oxidative phosphorylation References External links Solute carrier family Cellular respiration
Adenine nucleotide translocator
Chemistry,Biology
2,057
38,479,099
https://en.wikipedia.org/wiki/Musumecia%20bettlachensis
Musumecia bettlachensis is a species of agaric (gilled mushroom) in the family Pseudoclitocybaceae. The species was originally described from France and forms whitish, clustered basidiocarps in woodland. DNA analysis indicates that Musumecia bettlachensis is not closely related to species of Clitocybe which it superficially resembles. References Fungi of Europe Fungi described in 2011 Fungus species
Musumecia bettlachensis
Biology
89
1,247,082
https://en.wikipedia.org/wiki/NGC%203109
NGC 3109 is a small barred Magellanic-type spiral or irregular galaxy around 4.35 Mly away in the direction of the constellation of Hydra. NGC 3109 is believed to be tidally interacting with the dwarf elliptical galaxy Antlia Dwarf. It was discovered by John Herschel on March 24, 1835, while he was in what is now South Africa. Size and morphology NGC 3109 is classified as a Magellanic-type irregular galaxy, but it may in fact be a small spiral galaxy. Based on the D25.5 isophote at the B-band with an angular diameter of arcseconds, it has an isophotal diameter approximately across, slightly larger than the Large Magellanic Cloud but smaller than the Triangulum Galaxy. If it is a spiral galaxy, it would be the smallest in the Local Group. NGC 3109 has a mass of about times the mass of the Sun, of which 20% is in the form of neutral hydrogen. It is oriented edge-on from our point of view, and may contain a disk and a halo. The disk appears to be composed of stars of all ages, whereas the halo contains only very old and metal-poor stars. NGC 3109 does not appear to possess a galactic nucleus. From measurements of the neutral atomic hydrogen in the galaxy, it has been found that the disk of NGC 3109 is warped. The warp has the same radial velocity as gas in the Antlia Dwarf galaxy, indicating that the two galaxies had a close encounter approximately one billion years ago. Composition Based on spectroscopy of blue supergiants in NGC 3109, it is known that the galaxy has a low metallicity, similar to that of the Small Magellanic Cloud. It is one of the most metal-poor star-forming galaxies in the Local Group. NGC 3109 seems to contain an unusually large number of planetary nebulae for its luminosity. It also contains a substantial amount of dark matter. Location NGC 3109 is located about 4.35 million light-years away, in the constellation Hydra. This puts it at the very outskirts of the Local Group. Its membership of the Local Group has been questioned, because it seems to be receding faster than estimates of the Local Group's escape velocity. It is distant enough from the largest members of the Local Group that it has not been tidally influenced by them. Luminous Blue Variable Although no supernovae have been observed in NGC 3109 yet, a luminous blue variable, designated AT2018akx (type LBV, mag. 17.5), was discovered on 22 March 2018. Notes See also List of NGC objects (3001–4000) References Further reading Grebel, Gallagher, Harbeck (2003) The Progenitors of Dwarf Spheroidal Galaxies, ArXiv.org, retrieved November 2007 External links Magellanic spiral galaxies Barred spiral galaxies NGC 3109 subgroup Hydra (constellation) 3109 29128 Local Group 18350324 UGCA objects
NGC 3109
Astronomy
607
4,228,754
https://en.wikipedia.org/wiki/United%20States%20National%20Grid
The United States National Grid (USNG) is a multi-purpose location system of grid references used in the United States. It provides a nationally consistent "language of location", optimized for local applications, in a compact, user-friendly format. It is similar in design to the national grid reference systems used in other countries. The USNG was adopted as a national standard by the Federal Geographic Data Committee (FGDC) of the US Government in 2001. Overview While latitude and longitude are well suited to describing locations over large areas of the Earth's surface, most practical land navigation situations occur within much smaller, local areas. As such, they are often better served by a local Cartesian coordinate system, in which the coordinates represent actual distance units on the ground, using the same units of measurement from two perpendicular coordinate axes. This can improve human comprehension by providing reference of scale, as well as making actual distance computations more efficient. Paper maps often are published with overlaid rectangular (as opposed to latitude/longitude) grids to provide a reference to identify locations. However, these grids, if non-standard or proprietary (such as so-called "bingo" grids with references such as "B-4"), are typically not interoperable with each other, nor can they usually be used with GPS. The goal of the USNG is to provide a uniform, nationally consistent rectangular grid system that is interoperable across maps at different scales, as well as with GPS and other location-based systems. It is intended to provide a frame of reference for describing and communicating locations that is easier to use than latitude/longitude for many practical applications, works across jurisdictional boundaries, and is simple to learn, teach, and use. It is also designed to be both flexible and scalable so that location references are as compact and concise as possible. The USNG is intended to supplement—not to replace—other location systems such as street addresses. It can be applied to printed maps and to computer mapping and other geographic information system (GIS) applications. It has found increasing acceptance especially in emergency management, search and rescue, and other public safety applications; yet, its utility is by no means limited to those fields. Description and functioning The USNG is an alpha-numeric reference system that overlays the UTM coordinate system. A number of brief tutorial references explain the system in detail, with examples. Briefly, an example of a full USNG spatial address (grid reference) is: 18S UJ 23371 06519 (This example used by the FGDC is the full one-meter grid reference of the Jefferson Pier in Washington DC.) This full form (15 characters) uniquely identifies a single one-meter grid square out of the entire surface of the earth. It consists of three parts (each of which follows a "read-right-then-up" paradigm familiar from other "X,Y" coordinates): Grid Zone Designation (GZD); for a world-wide unique address. This consists of up to 2 digits (6-degree longitude UTM zone) for West to East, followed by a letter (8-degree latitude band) from South to North; in this example, "18S". 100,000-meter (100 km) Square Identification; for regional areas. This consists of two letters, the first West to East, the second South to North; in this example, "UJ". Grid Coordinates; for local areas.
This part consists of an even number of digits, in this example 23371 06519, and specifies a location within the 100 km grid square, relative to its lower-left corner. Split in half, the first part (here 23371), called the "easting", gives the displacement east of the left edge of the square; the second part (here 06519), called the "northing", gives the distance north of the bottom edge of the containing square. Users determine the required precision, so a grid reference is typically truncated to fewer than the full 10 digits when less precision is required. These values represent a point position (southwest corner) for an area of refinement: ten digits (23371 06519) locate a point within a 1 m square; eight digits (2337 0651) within a 10 m square; six digits (233 065) within a 100 m square; four digits (23 06) within a 1000 m (1 km) square; and two digits (2 0) within a 10000 m (10 km) square. Note that when going from a higher- to a lower-precision grid reference, it is important to truncate rather than round when removing the unneeded digits. Because one is always measuring from the lower-left corner of the 100 km square, this ensures that a lower-precision grid reference is a square that contains all of the higher-precision references contained within it. In addition to truncating references (on the right) when less precision is required, another powerful feature of USNG is the ability to omit (on the left) the Grid Zone Designation, and possibly even the 100 km Square Identification, when one or both of these are unambiguously understood; that is, when operating within a known regional or local area. For example: Full USNG: 18S UJ 23371 06519 (world-wide unique reference to 1 meter precision) Without Grid Zone Designation: UJ 2337 0651 (when the regional area is understood; here to 10 meter precision) Without 100 km Square Identification: 233 065 (when the local area is understood; here to 100 meter precision) Thus in practical usage, USNG references are typically very succinct and compact, making them convenient (and less error-prone) for communication; these truncation and omission rules are illustrated in the code sketch below. History Rectangular, distance-based (Cartesian) coordinate systems have long been recognized for their practical utility for land measurement and geolocation over local areas. In the United States, the Public Land Survey System (PLSS), created in 1785 in order to survey land newly ceded to the nation, introduced a rectangular coordinate system to improve on the earlier metes-and-bounds survey basis used in the original colonies. In the first half of the 20th century, State Plane Coordinate Systems (SPCS) brought the simplicity and convenience of Cartesian coordinates to state-level areas, providing high-accuracy (low-distortion) survey-grade coordinates for use primarily by state and local governments. (Both of these planar systems remain in use today for specialized purposes.) Internationally, during the period between World Wars I and II, several European nations mapped their territory with national-scale grid systems optimized for the geography of each country, such as the Ordnance Survey National Grid (British National Grid). Near the end of World War II, the Universal Transverse Mercator (UTM) coordinate system extended this grid concept around the globe, dividing it into 60 zones of 6 degrees longitude each.
History Rectangular, distance-based (Cartesian) coordinate systems have long been recognized for their practical utility for land measurement and geolocation over local areas. In the United States, the Public Land Survey System (PLSS), created in 1785 in order to survey land newly ceded to the nation, introduced a rectangular coordinate system to improve on the metes-and-bounds survey basis used earlier in the original colonies. In the first half of the 20th century, State Plane Coordinate Systems (SPCS) brought the simplicity and convenience of Cartesian coordinates to state-level areas, providing high accuracy (low distortion) survey-grade coordinates for use primarily by state and local governments. (Both of these planar systems remain in use today for specialized purposes.) Internationally, during the period between World Wars I and II, several European nations mapped their territory with national-scale grid systems optimized for the geography of each country, such as the Ordnance Survey National Grid (British National Grid). Near the end of World War II, the Universal Transverse Mercator (UTM) coordinate system extended this grid concept around the globe, dividing it into 60 zones of 6 degrees longitude each. Circa 1949, the US further refined UTM for ease of use (and combined it with the Universal Polar Stereographic system covering polar areas) to create the Military Grid Reference System (MGRS), which remains the geocoordinate standard used across the militaries of NATO countries. In the 1990s, a US grass-roots citizen effort led to the Public X-Y Mapping Project, a not-for-profit organization created specifically to promote the acceptance of a national grid for the United States. The Public X-Y Mapping Project developed the idea, conducting informal tests and surveys to determine which coordinate reference system best met the requirements of national consistency and ease of human use. Based on its findings, a standard based on the MGRS was adopted and brought to the Federal Geographic Data Committee (FGDC) in 1998. After an iterative review process and public comment period, the USNG was adopted by the FGDC as standard FGDC-STD-011-2001 in December 2001. Since then, the USNG has seen gradual but steadily increasing adoption both in formal standards and in practical use and applications, in public safety and in other fields. Advantages over latitude/longitude Users encountering the USNG (or similar grid reference systems) sometimes question why they are used instead of latitude and longitude coordinates, with which they may be more familiar. Proponents note that, in contrast to latitude and longitude coordinates, the USNG provides: Coordinate units that represent actual distances on the ground Equal distance units in both east–west and north–south directions An intuitive sense of scale and distance, across a local area Simpler distance calculation (by the Pythagorean theorem, rather than spherical trigonometry) A single unambiguous representation instead of the three formats of latitude and longitude, each in widespread use, and each having punctuation sub-variants: degrees-minutes-seconds (DMS): N 38°53'23.3", W 077°02'11.6" degrees-minutes-decimal minutes (DMM or DDM): 38°53.388' N, 077°02.193' W decimal degrees (DDD or DD): 38.88980°, -077.03654° This format ambiguity has led to confusion with potentially serious consequences, particularly in emergency situations. References comprising only alphanumeric characters (letters and positive numbers). (Spaces have no significance but are allowed for readability.) No negative numbers, hemisphere indicators (+, -, N, S, E, W), decimal points (.), or special symbols (°, ′, ″, :). A familiar "read right then up" convention of XY Cartesian coordinates. An explicit convention for shortening references (at two levels) when the local or regional area is already unambiguously known. A reference to a definite grid square with variable, explicit precision (size), rather than to a point with (usually) unspecified precision implicit in number of decimal places. All of the above also lead to USNG references being typically very succinct and compact, with flexibility to convey precise location information in a short sequence of characters that is easily relayed in writing or by voice.
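The simpler distance calculation noted above can be illustrated with a short sketch. Assuming two full-precision references that share the same Grid Zone Designation and 100 km square (the helper function below is invented for this example and is not part of the standard), the straight-line ground distance follows directly from the Pythagorean theorem:

import math

def usng_distance_m(ref_a: str, ref_b: str) -> float:
    """Distance in meters between two 10-digit USNG references that lie
    in the same grid zone and the same 100 km square, so their easting
    and northing digits can be compared directly."""
    def coords(ref):
        parts = ref.split()
        return int(parts[2]), int(parts[3])   # easting, northing in meters
    e1, n1 = coords(ref_a)
    e2, n2 = coords(ref_b)
    return math.hypot(e2 - e1, n2 - n1)

# Jefferson Pier to a point 300 m east and 400 m north of it:
print(usng_distance_m("18S UJ 23371 06519", "18S UJ 23671 06919"))   # 500.0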
Limitations As with any projection that seeks to represent the curved Earth as a flat surface, distortions and tradeoffs will inevitably occur. The USNG attempts to balance and minimize these, consistent with making the grid as useful as possible for its intended purpose of efficiently communicating practical locations. Since the UTM (the basis for USNG) is not a single projection, but rather a set of 6-degree longitudinal zones, there will necessarily be a local discontinuity along each of the 'seam' meridians between zones. However, every point continues to have a well-defined, unique geoaddress, and there are established conventions to minimize confusion near zone intersections. The six-degree zone width of UTM strikes a balance between the frequency of these discontinuities versus distortion of scale, which would increase unacceptably if the zones were made wider. (UTM further uses a 0.9996 scale factor at the central meridian, growing to 1.0000 at two lines parallel to and offset from the central meridian, and increasing toward the zone boundaries, so as to minimize the overall effect of scale distortion across the zone breadth.) The USNG is not intended for surveying, for which a higher-precision (lower-distortion) coordinate system such as SPCS would be more appropriate. Also, since USNG north-south grid lines are (by design) a fixed distance from the zone central meridian, only the central meridian itself will be aligned with "true north". Other grid lines establish a local "grid north", which will differ from true north by a small amount. The amount of this deviation, which is indicated on USGS topographic maps, is typically much less than the magnetic declination (between true north and magnetic north), and is small enough that it can be disregarded in most land navigation situations. Adoption and current applications Standards Since its adoption as a national standard in 2001, the USNG has itself been incorporated into standards and operating procedures of other organizations: In 2011, the US Government's National Search and Rescue Committee (NSARC) released Version 1.0 of the Land Search and Rescue Addendum to the National Search and Rescue Supplement to the International Aeronautical and Maritime Search and Rescue Manual. This document specifies the US National Grid as the primary standard coordinate reference system to be used for all land-based search and rescue (SAR) activities in the US. In 2015, the Federal Emergency Management Agency (FEMA) issued FEMA Directive 092–5, "Use of the United States National Grid (USNG)": "POLICY STATEMENT: FEMA will use the United States National Grid (USNG) as its standard geographic reference system for land-based operations and will encourage use of the USNG among whole community partners." A number of state and local Emergency Management agencies have also adopted the USNG for their operations. Other organizations including the National Fire Protection Association (NFPA) and the Society of Automotive Engineers (SAE) have incorporated the USNG into specific standards issued by those organizations. Gridded maps The utility of almost every large or medium scale map (paper or electronic) can be greatly enhanced by having an overlaid coordinate grid. The USNG provides such a grid that is universal, interoperable, non-proprietary, works across all jurisdictions, and can readily be used with GPS receivers and other location service applications. In addition to providing a convenient means to identify and communicate specific locations (points and areas), an overlaid USNG grid also provides an orientation, and—because it is distance based—a scale of distance that is present across the map. USGS topographic maps have for decades been published with 1000-meter UTM tick marks in the map collar, and sometimes with full grid lines across the map.
Recent editions of these maps (those referenced to the North American Datum of 1983, or NAD83) are compatible with USNG, and current editions also contain a standard USNG information box in the collar which identifies the GZD(s) (Grid Zone Designator(s)) and the 100 km Grid Square ID(s) covering the area of the particular map. USNG can now be found on various pre-printed and custom-printed maps available for purchase, or generated from various mapping software packages. Software applications A growing number of software applications incorporate or refer to the US National Grid. See the External Links section below for links to some of these, including The National Map (USGS). These applications include conventional mapping applications with overlaid USNG grid and/or coordinate readouts, and several 'you-are-here' mobile applications which give the user's current USNG coordinates, such as USNGapp.org and FindMeSAR.com. Mission Manager, the most widely used incident management software tool for first responders, integrates the USNG in its functionality. Search and rescue (SAR) As noted above under Standards, since 2011 the USNG has been designated by the US Government's National Search and Rescue Committee (NSARC) as the primary coordinate reference system to be used for all land-based search and rescue (SAR) activities in the US. (Latitude and longitude [DMM variant] may be used as the secondary system for land responders, especially when coordinating with air and sea based responders, who may use it as their primary system and USNG as secondary.) The National Association for Search and Rescue (NASAR) is moving its education and certification testing programs towards USNG. Other organizations such as the National Alliance for Public Safety GIS (NAPSG) also provide USNG SAR training. FEMA Urban Search and Rescue (USAR) task forces including Florida Task Force 4 (FL-TF4) and Iowa Task Force 1 (IA-TF1) have incorporated the USNG into their training and operations. Emergency Location Marker (ELM) Responders are often faced with significant geolocation issues when responding to an emergency without a street address. This is particularly true in the recreational trail environment: 34% of U.S. response calls go to a location without a street address, and recreational trails are a leading category. Trails with location signs typically employ an approach unique to that park or trail system, and locally unique marking systems have no value to responders unless those locations are readily available via dispatch and response systems. In response to these issues, in 2009, a project funded by the nonprofit SharedGeo and the University of Minnesota/Minnesota Department of Transportation Local Operational Research Assistance (OPERA) grant program got underway with the following objectives: Develop a standardized Emergency Location Marker (ELM) which can be used anywhere in the nation in a variety of scenarios, Align the marking system with established federal and state cartographic and signage standards, Ensure the format leverages GPS instead of requiring constant updating of Computer Aided Dispatch (CAD) systems, Use a consistent approach which over time will become instantly recognizable by the public, and Involve multiple stakeholders during development to ensure a "Best Practices" outcome. After three years of field research and vetting by multiple focus groups of trail users, responders, and geospatial experts, a design based on USNG was adopted.
This format, which can be used anywhere in the United States, was originally offered in three sizes to conform to federal, state and local signage standards: 6" x 9" (15 cm x 23 cm) for non-motorized trails, 9" x 12" (23 cm x 30 cm) for motorized trails, and 12" x 12" (30 cm x 30 cm) for trail heads and huts. In the years since introduction, the USNG ELM program has grown to include vertical ELM versions for breakaway scenarios (e.g. mountain bike trails), ELM information signs, ELM stickers to retrofit trail posts, and corresponding apps such as USNGapp.org. USNG ELM implementations can be found in Minnesota, Florida, Georgia, Hawaii, Michigan, and other states. First responders The USNG can increase the effectiveness of all types of emergency response, ranging from missing persons searches to off-road medical responses. In Lake County, Minnesota, which has 900 miles of recreational trails, dispatchers and first responders have been provided the tools and training to use USNG as their primary means of geo-location. The goal of this education for responders and the public is to "Take the 'Search' out of 'Search and Rescue.'" In addition to ELM signs, notices at trailheads encourage hikers and off-road vehicle operators to "Download this USNG App" on their cell phones. Trail maps including USNG grid lines allow responders to interpolate locations from 911 callers who give their coordinates from ELMs or GPS apps. Cell phones also give responders the opportunity to guide lost or injured persons in determining their own location by downloading USNG apps on the spot. This saves time and effort for responders and patients alike when patients are not on roads or at addressed locations. When multiple teams of responders are working in close vicinity, such as during woods searches for lost individuals, communicating with USNG allows them to truncate their coordinate strings to eight digits, giving their locations within 10 meters without the use of decimals, special symbols or unit descriptors, and to intuitively estimate the distance and direction between teams for better coordination. Emergency management Emergency managers coordinate response to and recovery from all types of natural hazards and man-made threats. In large scale events, where responders may be imported from many jurisdictions, coordination of geo-location formats is essential. The USNG is used to reduce confusion and improve efficiency in response to wildfires, floods, hurricanes, and other events. As noted above, in 2015 the Federal Emergency Management Agency (FEMA) issued FEMA Directive 092–5, "Use of the United States National Grid (USNG)": "POLICY STATEMENT: FEMA will use the United States National Grid (USNG) as its standard geographic reference system for land-based operations and will encourage use of the USNG among whole community partners." "Lessons learned from several large-scale disasters within the past three decades highlight the need for a common, geographic reference system in order to anticipate resource requirements, facilitate decision-making, and accurately deploy resources. ... Decision support tools that apply the USNG enable emergency managers to locate positions and identify areas of interest or operations where traditional references (i.e., landmarks or street signs) may be destroyed, damaged, or missing due to the effects of a disaster." The USNG is also seen as a tool for enhancing situational awareness and facilitating a common operating picture in emergency scenarios.
The Department of Defense also has recognized the role of the civil USNG standard for the Armed Forces in support of homeland security and homeland defense. Asset identification and mapping Organizations such as public utilities, transportation departments, emergency responders, and others own or rely upon fixed, field-based assets which they need to track, inventory, maintain, and locate efficiently when needed. Examples include fire hydrants, overhead utility poles, storm drains, roadside signs, and many others. Assigning unique identifiers is a common method for identifying and referencing particular assets. A strategically assigned asset identifier can include location information, thereby assuring both that the name is unique and that the location of the asset is always known. The USNG offers a method to locate any place or any object in the world with a brief alphanumeric code, which can be shortened depending on the known service area, and enhanced with a prefix code to identify the type of asset; a hypothetical sketch of this pattern appears at the end of this section. Organizations have successfully fielded this type of USNG-based asset naming recently: "The Mohawk Valley Water Authority serves 40,000 customers in the Greater Utica Area in Central New York. We have 700+ miles of pipe, 28 storage tanks, 21 pump stations, and numerous fire hydrants. We communicate hydrant status information internally and with many fire departments. We need to name these items meaningfully. We have tried several naming conventions—both sequential and hierarchical—with confusing and disappointing results. We converted to USNG asset naming and have used this successfully for over 4 years!" (Elisabetta T. DeGeronimo, Watershed/GIS Coordinator at Mohawk Valley Water Authority, Utica, New York) "Hundreds of thousands of roadside assets—culverts, drains, signs on ground mounts, signs on overhead support structures, signs on span wires, and guide rails—are found along the routes maintained by the New York State Department of Transportation. In the past, the existence of these assets was only recorded in construction plans and the minds and memories of dedicated career staff. Our new asset naming convention, based upon the U.S. National Grid, benefits the entire department and particularly the field forces." (Mary Susan Knauss, Senior Transportation Analyst, Office of Transportation Management, New York State Department of Transportation, Albany, New York) These and other contributors at Florida State University and elsewhere have collaborated to produce a manual to guide GIS users and others through the practical steps of naming assets using the USNG.
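As a purely hypothetical illustration of this naming pattern, an asset name might combine a type prefix with a local USNG reference. The prefix codes and identifier format below are invented for the example and are not taken from the manual or from any agency's actual scheme:

ASSET_PREFIXES = {"hydrant": "HYD", "storm_drain": "SD", "sign": "SGN"}

def asset_id(asset_type: str, usng_local: str) -> str:
    """Build an asset name from a type prefix plus a local USNG reference.
    `usng_local` omits the Grid Zone Designation and 100 km Square ID,
    which are unambiguously understood within the service area."""
    return f"{ASSET_PREFIXES[asset_type]}-{usng_local.replace(' ', '')}"

print(asset_id("hydrant", "2337 0651"))   # -> 'HYD-23370651'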
Recreation and other uses There has been a concerted outreach to educate the public in the uses and advantages of USNG. Sharing USNG maps and apps with friends and families encourages them to keep each other informed of their locations when traveling off-road (i.e., in wilderness or on the water) for work or recreation. In addition, USNG can be used to mark and communicate locations in busy or remote urban areas, including where to meet friends in a wooded park, locating a car in a mall parking lot, or requesting help inside a large warehouse or business complex, all without the need for compass directions. Scientific research fieldwork can also benefit. Future direction and initiatives The USNG has seen steady but gradually increasing adoption and use since the standard was approved in 2001. Formal adoption by other standards bodies has taken place, while practical adoption in actual use has been more uneven in achieving its full potential. In 2018, the USNG Institute (USNGI) was established "to study and report on USNG implementation efforts taking place across the United States", as was a USNG Implementation Working Group (USNG IWG) to assist and coordinate implementation efforts. Further adoption of USNG for public safety and the Emergency Location Marker (ELM) system may depend in part on greater coordination of USNG adoption at Public Safety Answering Points (PSAPs, or 911 centers), in their procedures and Computer-Aided-Dispatch (CAD) systems. Currently such implementations, being generally under local control, have been more fragmented than some national adoption initiatives. Proponents of the USNG envision many other ways in which it could play a role in improving safety, convenience, and quality of life. See also Cartesian coordinate system Grid reference Ordnance Survey National Grid (British National Grid) Irish national grid reference system Spatial Reference System List of National Coordinate Reference Systems Universal Transverse Mercator coordinate system (UTM) Military Grid Reference System (MGRS) Federal Geographic Data Committee (FGDC) Public Land Survey System (PLSS) State Plane Coordinate System (SPCS) References Further reading A Quick Guide to Using USNG Coordinates (MapTools) How to Read US National Grid (USNG) Coordinates (FGDC/NGA) How to Read USNG Spatial Addresses (FGDC) A Quick Guide to the USNG (NAPSG via USNG Center) United States National Grid Standard (FGDC-STD-011-2001) (FGDC, official standard) FEMA Directive 092-5: Use of the United States National Grid (USNG) (FEMA policy directive) Implementation Guide to the USNG (NAPSG) Emergency Location Marker (ELM) system (USNG Florida on Medium) Hikers, Know Your Grid! (USNG Florida on Medium) 911 Caller Location Solutions (USNG Florida on Medium) Why PSAPs Should Be Using The U.S. National Grid To Find 911 Callers (Kova Corp) An Introduction to Standards-Based GIT and the US National Grid Instructions for GIS Asset Naming Using the U.S. National Grid (USNG) External links General information sites about the USNG: U.S. National Grid Information Center USNG home page at the Federal Geographic Data Committee (FGDC) USNG resources at the NAPSG Foundation USNG resources at ESRI USNG Florida USNG Iowa USNG resources at Florida Division of Emergency Management USNG resources at Minnesota Geospatial Information Office USNG resources at Dakota County (MN) USNG resources at Clinton County (OH) Online mapping and coordinate conversion sites: USNGapp.org and FindMeSAR.com (mobile applications that give the user's current coordinates, e.g., for relay on calls for help) GISsurfer (a general purpose web map with a USNG overlay and more) GISsurfer: USNG and MGRS Coordinates (documentation, including "Why are USNG coordinates important?") NAPSG Situational Awareness Viewer (select Grid Overlay button in toolbar for USNG) The National Map Viewer (USGS; set coordinate display to USNG) NOAA/NWS Enhanced Data Display (EDD) (with USNG coordinate display enabled) Utility to convert latitude and longitude to USNG (NOAA/NGS) Programmer resource: JavaScript utility for converting between lat/long and MGRS/USNG Emergency Location Marker (ELM) system brief introductory videos: Cook & Lake Counties (MN) (49s) Cobb County (GA): "Cobb County Expands Trail Marker Program" (1m 59s) "Cobb's Trail Marker Program EXPLAINED!"
(3m 20s) "Cobb's Trail Markers Now at Kennesaw Mountain [National Battlefield Park]!" (2m 10s) Fire Engineering's USNG Video Series Geography of the United States Cartography of the United States Geographic coordinate systems Geocodes
United States National Grid
Mathematics
5,845
388,185
https://en.wikipedia.org/wiki/George%20Woltman
George Woltman (born November 10, 1957) is the founder of the Great Internet Mersenne Prime Search (GIMPS), a distributed computing project researching Mersenne prime numbers using his software Prime95. He graduated from the Massachusetts Institute of Technology (MIT) with a degree in computer science. He lives in North Carolina. His mathematical libraries created for the GIMPS project are the fastest known for multiplication of large integers, and are used by other distributed computing projects as well, such as Seventeen or Bust. He also worked on a TTL version of Maze War while a student at MIT. Later he worked as a programmer for Data General. See also Prime95 References External links The Prime Pages Titan Biography GIMPS home page 1957 births Living people Great Internet Mersenne Prime Search 20th-century American mathematicians 21st-century American mathematicians MIT School of Engineering alumni
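GIMPS searches for Mersenne primes, primes of the form 2^p - 1 for prime p. The standard primality check for such numbers is the Lucas–Lehmer test, sketched below in Python as an illustration only; Prime95 itself relies on highly optimized FFT-based multiplication of the very large integers involved.

def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, M = 2**p - 1 is prime
    exactly when s == 0 after p - 2 iterations of s -> s*s - 2 (mod M)."""
    if p == 2:
        return True               # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1              # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (2, 3, 5, 7, 11, 13) if lucas_lehmer(p)])
# -> [2, 3, 5, 7, 13]; 2**11 - 1 = 2047 = 23 * 89 is composite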
George Woltman
Technology
177
14,296,821
https://en.wikipedia.org/wiki/Global%20meteoric%20water%20line
The Global Meteoric Water Line (GMWL) describes the global annual average relationship between hydrogen and oxygen isotope (oxygen-18 [¹⁸O] and deuterium [²H]) ratios in natural meteoric waters. The GMWL was first developed in 1961 by Harmon Craig, and has subsequently been widely used to track water masses in environmental geochemistry and hydrogeology. Development and definition of GMWL When working on the global annual average isotopic composition of ¹⁸O and ²H in meteoric water, geochemist Harmon Craig observed a correlation between these two isotopes, and subsequently developed and defined the equation for the GMWL: δ²H = 8 δ¹⁸O + 10 (in ‰), where δ¹⁸O and δ²H (the latter also written δD) are the ratios of heavy to light isotopes (¹⁸O/¹⁶O, ²H/¹H) expressed in delta notation relative to a standard. The relationship of δ¹⁸O and δ²H in meteoric water is caused by mass-dependent fractionation of oxygen and hydrogen isotopes between evaporation from ocean seawater and condensation from vapor. As the oxygen isotopes (¹⁶O and ¹⁸O) and hydrogen isotopes (¹H and ²H) have different masses, they behave differently in the evaporation and condensation processes, and this results in fractionation between ¹⁸O and ¹⁶O as well as between ²H and ¹H. Equilibrium fractionation causes the isotope ratios δ¹⁸O and δ²H to vary between localities within an area. The fractionation processes can be influenced by a number of factors including temperature, latitude, continentality, and, most importantly, humidity. Applications Craig observed that the δ¹⁸O and δ²H isotopic composition of cold meteoric water from sea ice in the Arctic and Antarctica is much more negative than that of warm meteoric water from the tropics. A correlation between temperature (T) and δ¹⁸O was proposed later, in the 1970s. Such a correlation is applied to study surface temperature change over time. The δ¹⁸O of ancient meteoric water, preserved in ice cores, can also be collected and applied to reconstruct paleoclimate. A meteoric water line can be calculated for a given area, known as a local meteoric water line (LMWL), and used as a baseline within that area. A local meteoric water line can differ from the global meteoric water line in slope and intercept. Such deviations in slope and intercept result largely from humidity. In 1964, the concept of deuterium excess d (d = δ²H − 8 δ¹⁸O) was proposed. Deuterium excess was later parameterized as a function of humidity; as a result, the isotopic composition of local meteoric water can be applied to trace local relative humidity, to study local climate, and to serve as a tracer of climate change. In hydrogeology, the δ¹⁸O and δ²H of groundwater are often used to study the origin of groundwater and groundwater recharge. It has been shown that, even taking into account the standard deviation related to instrumental errors and the natural variability of the amount-weighted precipitation, the LMWL calculated with the EIV (error-in-variables regression) method shows no difference in slope compared to classic OLSR (ordinary least squares regression) or other regression methods. However, for certain purposes, such as the evaluation of the shifts of geothermal waters from the line, it would be more appropriate to calculate the so-called "prediction interval" or "error wings" related to the LMWL. See also Isotope fractionation Meteoric water Water cycle References Precipitation Deuterium Isotopes of hydrogen Isotopes of oxygen Hydrology
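The two relationships above are simple linear expressions, illustrated by the following Python sketch. Function names are invented for the example; values are in per mil (‰) relative to the (V)SMOW standard. A sample plotting exactly on the GMWL has a deuterium excess of 10‰:

def gmwl_d2h(d18o: float) -> float:
    """delta-2H predicted by Craig's Global Meteoric Water Line."""
    return 8.0 * d18o + 10.0

def deuterium_excess(d2h: float, d18o: float) -> float:
    """Deuterium excess, d = delta-2H - 8 * delta-18O."""
    return d2h - 8.0 * d18o

print(gmwl_d2h(-10.0))                            # -> -70.0 (per mil)
print(deuterium_excess(gmwl_d2h(-10.0), -10.0))   # -> 10.0 (per mil)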
Global meteoric water line
Chemistry,Engineering,Environmental_science
711
59,587,692
https://en.wikipedia.org/wiki/ThinkPad%20T61
The ThinkPad T61 is a premium, business-class laptop computer in the ThinkPad T series. The ThinkPad line was originally developed by IBM, which sold its personal computer business to Lenovo; the T61 was first manufactured in 2006. It was offered as a modular platform, allowing buyers to customize almost all of its major features, including processor speed, amount of RAM and hard disk storage, screen size and resolution, quality and speed of video card, and additional capabilities such as a fingerprint reader, smart card reader, and Zip drive. The T61 came with the Windows Vista operating system. References External links Thinkwiki.de - T61 (in German) Thinkpad T61 wiki IBM laptops Lenovo laptops T61 Computer-related introductions in 2006
ThinkPad T61
Technology
162
78,195,155
https://en.wikipedia.org/wiki/Diisobutene
Diisobutene (also known as diisobutylene and isooctene) refers to a pair of organic compounds with the overall formula C8H16. The isomers have the same carbon skeleton but differ in the location of the C=C bond. Both are colorless liquids with very similar physical properties. These compounds arise via the acid-catalyzed dimerization of isobutene, a reaction that proceeds via the tert-butyl carbocation, (CH3)3C+. The process also leads to some triisobutenes and tetraisobutenes. Applications Hydrogenation is performed on a significant scale to give isooctane, which is an important fuel additive. Diisobutenes are used as precursors to isononylol and octylphenols by hydroformylation/hydrogenation and phenol alkylation, respectively. Both are precursors to plasticizers. The isononylol (3,5,5-trimethylhexan-1-ol) is a precursor to 3,5,5-trimethylhexyl acetate, a commercial fragrance. Diisobutenes were once of interest as components for automotive fuels. See also 1-Octene - the corresponding linear alpha-olefin References Alkenes
Diisobutene
Chemistry
266
25,982,350
https://en.wikipedia.org/wiki/Avizo%20%28software%29
Avizo (pronounced 'a-VEE-zo') is a general-purpose commercial software application for scientific and industrial data visualization and analysis. Avizo is developed by Thermo Fisher Scientific and was originally designed and developed by the Visualization and Data Analysis Group at Zuse Institute Berlin (ZIB) under the name Amira. Avizo was commercially released in November 2007. For the history of its development, see the article about Amira. Overview Avizo is a software application which enables users to perform interactive visualization and computation on 3D data sets. The Avizo interface is modelled on the visual programming paradigm. Users manipulate data and module components, organized in an interactive graph representation (called the Pool), or in a Tree view. Data and modules can be interactively connected together, and controlled with several parameters, creating a visual processing network whose output is displayed in a 3D viewer. With this interface, complex data can be interactively explored and analyzed by applying a controlled sequence of computation and display processes, resulting in a meaningful visual representation and associated derived data. Application areas Avizo has been designed to support different types of applications and workflows, from 2D and 3D image data processing to simulations. It is a versatile and customizable visualization tool used in many fields: Scientific visualization Materials Research Tomography, Microscopy, etc. Nondestructive testing, Industrial Inspection, and Visual Inspection Computer-aided Engineering and simulation data post-processing Porous medium analysis Civil Engineering Seismic Exploration, Reservoir Engineering, Microseismic Monitoring, Borehole Imaging Geology, Digital Rock Physics (DRP), Earth Sciences Archaeology Food technology and agricultural science Physics, Chemistry Climatology, Oceanography, Environmental Studies Astrophysics Features Data import: 2D and 3D image stack and volume data: from microscopes (electron, optical), X-ray tomography (CT, micro-/nano-CT, synchrotron), neutron tomography and other acquisition devices (MRI, radiography, GPR) Geometric models (such as point sets, line sets, surfaces, grids) Numerical simulation data (such as Computational fluid dynamics or Finite element analysis data) Molecular data Time series and animations Seismic data Well logs 4D Multivariate Climate Models 2D/3D data visualization: Volume rendering Digital Volume Correlation Visualization of sections, through various slicing and clipping methods Isosurface rendering Polygonal meshes Scalar fields, Vector fields, Tensor representations, Flow visualization (Illuminated Streamlines, Stream Ribbons) Image processing: 2D/3D Alignment of image slices, Image registration Image filtering Mathematical Morphology (erode, dilate, open, close, tophat) Watershed Transform, Distance Transform Image segmentation 3D model reconstruction: Polygonal surface generation from segmented objects Generation of tetrahedral grids Surface reconstruction from point clouds Skeletonization (reconstruction of dendritic, porous or fracture networks) Surface model simplification Quantification and analysis: Measurements and statistics Analysis spreadsheet and charting Material properties computation, based on 3D images: Absolute permeability Thermal conductivity Molecular diffusivity Electrical resistivity/formation factor 3D image-based meshing for CFD and FEA: From 3D imaging modalities (CT, micro-CT, MRI, etc.)
Surface and volume mesh generation Export to FEA and CFD solvers for simulation Post-processing for simulation analysis Presentation, automation: MovieMaker, Multiscreen, Video wall, collaboration, and VR support TCL scripting, C++ extension API Avizo is based on the Open Inventor 3D graphics toolkit (FEI Visualization Sciences Group). References External links Scientific Publications Official Avizo forum Avizo videos 3D graphics software 3D imaging Computational fluid dynamics Computer vision software Data and information visualization software Earth sciences graphics software Graphics software Image processing software Image segmentation Mesh generators Molecular dynamics software Molecular modelling software Nondestructive testing Physics software Science software Simulation software Software that uses Qt Virtual reality
Avizo (software)
Physics,Chemistry,Materials_science
815
39,350,099
https://en.wikipedia.org/wiki/APOPT
APOPT (for Advanced Process OPTimizer) is a software package for solving large-scale optimization problems of any of these forms: Linear programming (LP) Quadratic programming (QP) Quadratically constrained quadratic programming (QCQP) Nonlinear programming (NLP) Mixed integer programming (MIP) Mixed integer linear programming (MILP) Mixed integer nonlinear programming (MINLP) Applications of APOPT include chemical reactors, friction stir welding, prevention of hydrate formation in deep-sea pipelines, computational biology, solid oxide fuel cells, and flight controls for Unmanned Aerial Vehicles (UAVs). Benchmark Testing Standard benchmarks such as CUTEr and SBML curated models are used to test the performance of APOPT relative to the solvers BPOPT, IPOPT, SNOPT, and MINOS. A combination of APOPT (active-set SQP) and BPOPT (interior point method) performed best on 494 benchmark problems in terms of solution speed and total fraction of problems solved. See also APOPT is supported in AMPL, APMonitor, Gekko, Julia, MATLAB, Pyomo, and Python. References External links Web interface to solve optimization problems with the APOPT solver Download APOPT for AMPL, MATLAB, Julia, Python, or APMonitor Numerical software Mathematical optimization software
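As one illustration of calling APOPT from Python, the following sketch uses the Gekko package, in which setting the SOLVER option to 1 selects APOPT. The example problem itself is invented for illustration, and the sketch assumes a recent Gekko version with the Minimize method:

# Minimal sketch: a tiny mixed-integer nonlinear program solved with APOPT.
from gekko import GEKKO

m = GEKKO(remote=False)            # solve locally with the bundled APOPT
x = m.Var(value=1, lb=0, ub=10, integer=True)
y = m.Var(value=1, lb=0, ub=10)
m.Equation(x + y >= 3)             # constraint
m.Minimize(x**2 + y**2)            # objective
m.options.SOLVER = 1               # 1 = APOPT (handles integer variables)
m.solve(disp=False)
print(x.value[0], y.value[0])      # e.g. 1.0 and 2.0 (x=2, y=1 ties)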
APOPT
Mathematics
284
50,773,588
https://en.wikipedia.org/wiki/Neoteny%20in%20humans
Neoteny is the retention of juvenile traits well into adulthood. In humans, this trend is greatly amplified, especially when compared to non-human primates. Neotenic features of the head include the globular skull; thinness of skull bones; the reduction of the brow ridge; the large brain; the flattened and broadened face; the hairless face; hair on (top of) the head; larger eyes; ear shape; small nose; small teeth; and the small maxilla (upper jaw) and mandible (lower jaw). Neoteny of the human body is indicated by glabrousness (hairless body). Neoteny of the genitals is marked by the absence of a baculum (penis bone); the presence of a hymen; and the forward-facing vagina. Neoteny in humans is further indicated by the limbs and body posture, with the limbs proportionately short compared to torso length; longer leg than arm length; the structure of the foot; and the upright stance. Humans also retain a plasticity of behavior that is generally found among animals only in the young. The emphasis on learned, rather than inherited, behavior requires the human brain to remain receptive much longer. These neotenic changes may have disparate roots. Some may have been brought about by sexual selection in human evolution. In turn, they may have permitted the development of human capacities such as emotional communication. However, humans also have relatively large noses and long legs, both peramorphic (not neotenic) traits, though these peramorphic traits separating modern humans from extant chimpanzees were present in Homo erectus to an even higher degree than in Homo sapiens, which means general neoteny holds for the H. erectus to H. sapiens transition (although there were peramorphic changes separating H. erectus from even earlier hominins such as most Australopithecus). Later research shows that some species of Australopithecus, including Australopithecus sediba, had the non-neotenic traits of H. erectus to at least the same extent that separates them from other Australopithecus, making it possible that general neoteny applies throughout the evolution of the genus Homo, depending on which species of Australopithecus Homo descended from. The type specimen of A. sediba had these non-neotenic traits despite being a juvenile, suggesting that the adults may have been less neotenic in these respects than any H. erectus or other Homo. Neoteny and heterochrony Heterochrony is defined as "a genetic shift in timing of the development of a tissue or anatomical part, or in the onset of a physiological process, relative to an ancestor". Heterochrony can lead to a modification in the shape, size and/or behavior of an organism in a variety of different ways. Heterochrony is an umbrella term covering two types of altered developmental timing: paedomorphosis and peramorphosis, which refer to deceleration and acceleration of development, respectively. Since neoteny (as described above) is the retention of juvenile features into adulthood, it falls under paedomorphosis, as the physical development of features is slowed. Human evolution Many prominent evolutionary theorists propose that neoteny has been a key feature in human evolution. Stephen Jay Gould believed that the "evolutionary story" of humans is one where we have been "retaining to adulthood the originally juvenile features of our ancestors". J. B. S. Haldane mirrors Gould's hypothesis by stating that a "major evolutionary trend in human beings" is "greater prolongation of childhood and retardation of maturity." Delbert D. Thiessen said that "neoteny becomes more apparent as early primates evolved into later forms" and that primates have been "evolving toward flat face." Doug Jones, a visiting scholar in anthropology at Cornell University, said that human evolution's trend toward neoteny may have been caused by sexual selection in human evolution for neotenous facial traits in women by men, with the resulting neoteny in male faces being a "by-product" of sexual selection for neotenous female faces. Jones said that this type of sexual selection "likely" had a major role in human evolution once a larger proportion of women lived past the age of menopause. This increasing proportion of women who were too old to reproduce resulted in a greater variance in fecundity in the population of women, and it resulted in a greater sexual selection for indicators of youthful fecundity in women by men. The anthropologist Ashley Montagu said that the fetalized Homo erectus represented by the juvenile Mojokerto skull and the fetalized australopithecine represented by the juvenile Australopithecus africanus skull would have had skulls with a closer resemblance to those of modern humans than to those of the adult forms of their own species. Montagu further listed the roundness of the skull, thinness of the skull bones, lack of brow ridges, lack of sagittal crests, form of the teeth, relative size of the brain and form of the brain as ways in which the juvenile skulls of these human ancestors resemble the skulls of adult modern humans. Montagu said that the retention of these juvenile characteristics of the skull into adulthood by australopithecines or H. erectus could have been a way that a modern type of human could have evolved earlier than actually happened in human evolution. The psychiatrist Stanley Greenspan and Stuart G. Shanker proposed a theory of psychological development in The First Idea in which neoteny is seen as crucial for the "development of species-typical capacities" that depend upon a long period of attachment to caregivers for the opportunities to engage in and develop their capacity for emotional communication. Because of the importance of facial expression in the process of interactive signaling, neotenous features, such as hair loss, allow for more efficient and rapid communication of socially important messages that are based on facially expressive emotional signaling. Other theorists have argued that neoteny has not been the main cause of human evolution, because humans only retain some juvenile traits while relinquishing others. For example, the high leg-to-body ratio (long legs) of adult humans as opposed to human infants shows that there is not a holistic trend in humans towards neoteny when compared to the other great apes. Andrew Arthur Abbie agrees, citing the gerontomorphic fleshy human nose and long human legs as contradicting the neoteny hominid evolution hypothesis, although he does believe humans are generally neotenous. Brian K. Hall also cites the long legs of humans as a peramorphic trait, which is in sharp contrast to neoteny. On balance, an all-or-nothing approach could be regarded as pointless, with a combination of heterochronic processes being more likely and more reasonable (Vrba, 1996). Cooked food and protective genome simplification Based on calculations showing that more complex gene networks are more vulnerable to mutations, because a larger number of necessary-but-not-sufficient conditions increases the risk of one of them being hit, there is a theory that mutagens were more likely to form in food that was burned while being cooked by human ancestors lacking modern cooking technology or the greater intelligence of modern humans. These commonly present mutagens thus selected against complex gene networks, because longer genomes present a larger target for mutation. This theory successfully predicts that the human genome is shorter than other great ape genomes and that there are significantly more defunct pseudogenes with functional homologs in the chimpanzee genome than vice versa. While the protein-coding portion of the FOXP2 gene is identical to that in Neanderthals, there is one point mutation in its regulatory part (modern humans having a T where Neanderthals and all nonhuman vertebrates have an A). The effect of that difference is that the modern human FOXP2 gene does not interact with RNA from other genes, while all other vertebrate varieties, including the Neanderthal one, did; this observation agrees with the idea that modern human origin was marked by the elimination (not formation) of complex gene networks, as predicted by this model. The researchers behind the theory argue that neoteny is a side effect of the destruction of gene networks preventing the firing of genetic activity patterns that marked adulthood in prehuman ancestors. Growth pattern of children In 1943 Konrad Lorenz noted that a newborn infant's rounded facial features might encourage guardians to show greater care for them, due to their perceived cuteness. He labeled this the Kewpie doll effect, because of their similarity to the eponymous doll. Desmond Collins, who was an extension lecturer in archaeology at London University, said that the lengthened youth period of humans is part of neoteny. Physical anthropologist Barry Bogin said that the pattern of children's growth may intentionally increase the duration of their cuteness. Bogin said that the human brain reaches adult size when the body is only 40 percent complete, when "dental maturation is only 58 percent complete" and when "reproductive maturation is only 10 percent complete". Bogin said that this allometry of human growth allows children to have a "superficially infantile" appearance (large skull, small face, small body and sexual underdevelopment) longer than in other "mammalian species". Bogin said that this cute appearance causes a "nurturing" and "care-giving" response in "older individuals". Genetic diversity, relaxed sexual selection and immunity While upper body strength is on average more sexually dimorphic in humans than in most other primates, with the exception of gorillas, some fossil evidence suggests that male upper-body strength and muscular sexual dimorphism during human evolution peaked in Homo erectus and decreased, along with overall robustness, during the evolution of H. sapiens with its neotenic traits. The reduction in sexual dimorphism would suggest that taxa with high sexual dimorphism do not necessarily have an increased evolutionary advantage.
This could be explained by the theory that sexual dimorphism could reduce genetic diversity in a population: if individuals are attracted only to highly masculine or highly feminine mates, then those without distinctly gendered features are excluded as potential partners, thus creating speciation. Neoteny in H. sapiens is explained by this theory as a result of relaxed sexual selection shifting human evolution into a less speciation-prone but more intraspecies-adaptable strategy, decreasing sexual dimorphism and making adults assume a more juvenile form. As a possible trigger of such a change, it has been cited that while the Neanderthal version of the FOXP2 gene differed at only one point from the modern human version (versus two points between chimpanzees and modern humans), it interacted strongly with other genes and was part of a gene regulatory network; the derived mutation that is unique to the modern human version of the gene knocked out the attachment site to which RNA strands from other genes connected, so that the gene was disconnected from its former genetic network. It is suggested that since the FOXP2 gene controls synapses, its disconnection from a formerly complex network of genes instantly removed many instincts, including ones that drove sexual selection. It is also suggested that it allowed more genetic variants that affect the phenotype to accumulate in humans, which in combination with increased synaptic plasticity made modern humans more able to survive environmental change, to colonize new environments, and to innovate. The theory that the origin of complex language was the most recent step in human evolution is considered unlikely by these researchers: storytelling about past environments would be of little use in droughts with novel distributions of water, while an individual ability to make correct predictions would be useful and would allow for differential survival that could eliminate the archaic version altogether. Selection for language, by contrast, would have allowed some individuals to rely on imitation as long as there were enough storytellers in the group to keep the knowledge alive for long periods, which predicts that some individuals would have retained the archaic version if the modern version had been selected for language. H. sapiens is known from fossils to have had a mix of modern neotenic traits and older non-neotenic traits from its origin some 300,000 years ago to the transition to early agriculture, when the non-neotenic traits disappeared. This is theorized to be due to selection on the immune system to survive the higher pathogen load caused by agriculture, with men who retained more childlike traits being less burdened by weakening of the immune system, since upper-body musculature competes with the immune system over nutrients. It is argued that the genetic evidence that only a small part of the male population of the time of early agriculture passed on their Y chromosomes can be explained by the heredity of non-neotenic traits: the male descendants of the non-neotenic men who were not killed by diseases in one generation died from them in subsequent generations, leaving no Y chromosome evidence of their short-term continuation of paternal bloodlines in present humans. Sexual selection for stereotypic masculinity causing most men to fail to breed is ruled out, as it would have selected against neoteny, not for it, as the archaeological evidence shows. Milder punishment as a survival advantage One hypothesis, premised on the idea that Stone Age humans did not record birth dates but instead assumed age based on appearance, holds that if milder punishment of juvenile delinquents existed in Paleolithic times, it would have imparted milder punishment for longer on those retaining a more youthful appearance into adulthood. This hypothesis posits that those who received milder punishment for the same breach of rules had the evolutionary advantage, passing their genes on, while those who received more severe punishment had more limited reproductive success, whether from limiting their survival prospects by following all rules or from being severely punished. Neotenous features elicit help The Multiple Fitness Model proposes that the qualities that make babies appear cute to adults additionally look "desirable" to adults when they see other adults. Neotenous features in adult females may help elicit more resource investment and nurturing from adult males. Likewise, neotenous features in adult males may similarly help elicit more resource investment and nurturing from adult females, in addition to possibly making neotenous adult males appear less threatening and more able to elicit resources from "other resource-rich people". Therefore, it could be adaptive for adult females to be attracted to adult males that have "some" neotenous traits. Neotenous features also elicit fitness benefits for mimickers. From the point of view of the mimicker, the neoteny expression signals appeasement or submissiveness. Thus, extra parental or alloparental care will more likely be administered, because the mimicker appears to be more childlike and perhaps ill-equipped to survive on its own. On the other hand, the recipient often faces aggression because of this signaled vulnerability. Caroline F. Keating et al. tested the hypothesis that adult male and female faces with more neotenous features would elicit more help than adult male and female faces with less neotenous features. Keating et al. digitally modified photographs of faces of African-Americans and European Americans to make them appear more or less neotenous by either enlarging or decreasing the size of their eyes and lips. Keating et al. said that the more neotenous white male, white female and black female faces elicited more help from people in the United States and Kenya, but the help elicited by more neotenous black male faces was not significantly different from that elicited by less neotenous black male faces. A 1987 study using 20 Caucasian subjects found that "babyfaced" individuals are assumed by both Korean and U.S. participants to possess more childlike psychological attributes than their mature-faced counterparts. In her dissertation from the University of Michigan, Sookyung Choi explained how perception of cuteness can contribute to perception of value. Different physical cues were shown to trigger protective feelings in adult caregivers or other adults with whom children interacted. Participants in the study were asked to design their own version of a cute rectangle. They were allowed to edit the rectangle in terms of shape roundedness, color, size, orientation, etc. Associational coefficients showed that shapes with a smaller area and rounder features were found to be cuter, with lighter coloring and contrast playing a lesser but still important role in predicting cuteness. As an additional part of the study, the asymmetric dominance paradigm was introduced, in which a decoy option is presented to observe how it affects a person's decision on a certain matter. In the United States this asymmetric dominance paradigm induced a person to be more prone to choose a cuter item, whereas in Korea the opposite effect occurred. Choi concluded that this may be due to a different attitude toward cuteness, and so the advantages related to neoteny may differ between countries. Brain The developmental psychologist Helmuth Nyborg said that a testable hypothesis can be made using his General Trait Covariance-Androgen/Estrogen (GTC-A/E) model with regard to "neoteny". Nyborg said that the hypothesis is that "feminized", slower maturing, "neotenic" "androtypes" will differ from "masculinized", faster maturing "androtypes" by having bigger brains, more fragile skulls, bigger hips, narrower shoulders, less physical strength, living in cities (as opposed to living in the countryside) and receiving higher performance scores on ability tests. Nyborg said that if the predictions made by this hypothesis are true, then the "material basis" of the differences would be "explained". Nyborg said that some ecological situations would favor the survival and reproduction of the "masculinized," faster maturing "androtypes" due to their "sheer brutal force" while other ecological situations would favor the survival and reproduction of the "feminized," slower maturing, "neotenic" "androtypes" due to their "subtle tactics." Aldo Poiani, an evolutionary ecologist at Monash University, Australia, said that he agrees that neoteny in humans may have become "accelerated" through "two-way sexual selection" whereby females have been choosing smart males as mates and males have been choosing smart females as mates. Somel et al. said that 48% of the genes that affect the development of the prefrontal cortex change with age differently between humans and chimpanzees. Somel et al. said that there is a "significant excess of genes" related to the development of the prefrontal cortex that show "neotenic expression in humans" relative to chimpanzees and rhesus macaques. Somel et al. said that this difference was in accordance with the neoteny hypothesis of human evolution. In terms of brain size differences, it has been noted that, given the larger skull of more neotenous humans, their brain volume may be larger than that of the average human brain. It has been hypothesized that this is one mode by which the brains of Homo sapiens grew as a species, as the prolonged development of neurons may have led to hypermorphosis, or excessive neuronal growth. Especially in the prefrontal cortex, brain pruning from childhood may be slower than usual, allowing more time for neuronal maturation. This prolongs the transformation of otherwise very juvenile features. Bruce Charlton, a Newcastle University psychology professor, said what looks like immaturity — or in his terms, the "retention of youthful attitudes and behaviors into later adulthood" — is actually a valuable developmental characteristic, which he calls psychological neoteny. The ability of an adult human to learn is considered a neotenous trait. However, some studies may suggest the opposite of this idea of neoteny being beneficial. In general, the process of learning and developing new skills can be attributed to the plasticity of neurons in the brain, especially in the prefrontal cortex for higher order decisions and activity.
As neurons go through ontogeny and maturity, it becomes more difficult to make new neuronal connections and change already present pathways and connections. However, during juvenile periods, cortical neurons are described to have higher plasticity and metabolic activity. In cases with neoteny, neurons are lingering in their more juvenile states since development is decelerated. On the surface this seems beneficial for the increased potential of younger cells. However, this may not be the case, as the consequences of the increased cellular activity must be taken into account. In general, oxidative phosphorylation is the process used to supply energy for neuronal processes in the brain. When resources for oxidative phosphorylation are exhausted, neurons turn to aerobic glycolysis in the place of oxygen. However, this can be taxing on a cell. Given that the neurons in question retain juvenile characteristics, they may not be entirely myelinated. Bufill, Agusti, Blesa et al. note how “The increase of the aerobic metabolism in these neurons may lead, however, to higher levels of oxidative stress, therefore, favoring the development of neurodegenerative diseases which are exclusive, or almost exclusive, to humans, such as Alzheimer's disease.” Specifically through various studies of the brain, aerobic glycolysis activity has been detected at high levels in the dorsolateral prefrontal cortex, which has functionality regarding the working memory. Stress on these working memory cells may support conditions related to neurodegenerative diseases such as Alzheimer's Disease. Physical attractiveness Women Montagu said that the following neotenous traits are in women when compared to men: more delicate skeleton, smoother ligament attachments, smaller mastoid processes, reduced brow ridges, more forward tilt of the head, narrower joints, less hairy, retention of fetal body hair, smaller body size, more backward tilt of pelvis, greater longevity, lower basal metabolism, faster heartbeat, higher pitched voice and larger tear ducts. In a cross-cultural study, more neotenized female faces were the most attractive to men while less neotenized female faces were the least attractive to men, regardless of the females' actual age. Using a panel of East Asian, Hispanic and White judges, one study found that the female faces tended to be judged as more attractive if they had a mixture of youthful and sexually mature features. Hispanic and East Asian women were judged as more attractive than White and Black women, and they happened to possess more of the attributes defined as attractive, however the authors noted that it would be inaccurate to conclude that any ethnic group was more attractive than the other, based on their sample. Using a panel of African Americans and whites as judges, Cunningham found more neotenous faces were perceived as having both higher "femininity" and "sociability". The authors found no evidence of ethnocentric bias in the Asian or White samples, as Asians and Whites did not differ significantly in preference for neonate cues, and positive ratings of white women did not increase with exposure to Western media. In contrast, Cunningham said that faces that were "low in neoteny" were judged as "intimidating". Upon analyzing the results of his study Cunningham concluded that preference for "neonate features may display the least cross-cultural variability" in terms of "attractiveness ratings". 
A study of Italian women who had won beauty competitions found that they had faces characterized by more "babyness" traits compared to the "normal" women used as a reference. In a study of sixty Caucasian female faces, the average facial composite of the fifteen faces considered most attractive differed from the facial composite of the whole set by having a reduced lower facial region, a thinner jaw, and a higher forehead. In a solely Westernized study, it was recorded that a high ratio of neurocranial to lower facial features, signified by a small nose, small ears, and full lips, is viewed interchangeably as youthful and neotenous. This interchangeability between neotenous features and youth leads to the idea that male attraction to youth may also apply to females who display exaggerated age-related cues. For example, if a female is much older but retains these "youthful" features, males may find her more attractive than other females who look their biological age. Beyond the face value of what males find physically attractive, secondary sexual characteristics related to body shape are factored in so that adults can distinguish other adults from juveniles. A major part of the cosmetics industry is built around enhancing these neonate features: making the eyes and lips appear larger, and reducing the appearance of age-related blemishes such as wrinkles or skin discoloration, are some of the industry's key target areas. Doug Jones, a visiting scholar in anthropology at Cornell University, said that there is cross-cultural evidence for a preference for facial neoteny in women, because of sexual selection by men for the appearance of youthful fecundity in women. Jones said that men are more concerned about women's sexual attractiveness than women are concerned about men's sexual attractiveness. Jones said that this greater concern over female attractiveness is unusual among animals, because in other species it is usually the females that are more concerned with the male's sexual attractiveness. Jones said that this anomalous case in humans is due to women living past their reproductive years and to women's reproductive capacity diminishing with age, resulting in an adaptation in men to select against physical traits of age that indicate lessening female fecundity. Jones said that the neoteny in men's faces may be a "by-product" of men's attraction to indicators of "youthful fecundity" in "adult females". Likewise, neotenous features have also been loosely linked to providing information about levels of ovarian function, another integral part of sexual selection. Both of these factors, the appearance of needing care and the expression of neotenous features tied to optimal ovarian function, lead to a fitness advantage because males respond positively to them. However, it has been noted that neotenous facial structures are not the only consideration in attractiveness and mate selection. Once again, secondary sex characteristics come into play because they are governed by the endocrine system and appear only when sexual maturity is reached; facial features, by contrast, are present throughout life and may not be the strongest case for sexual selection.
Other scientists note that other primates have not evolved neoteny to the same extent as humans, even though fertility is just as reproductively significant for them. They argue that if human children need more parental investment than nonhuman primate young, this would have selected for a preference for more experienced females who are more capable of providing parental care. Because experience would then be more relevant for effective reproductive success (producing offspring that survive to reproductive age, as opposed to simply the number of births), and therefore more able to compensate for a slight to moderate decrease in biological fertility from recent sexual maturity to late pre-menopausal life, these scientists argue that the sexual selection model of neoteny makes the false prediction that primates that need less parental investment than humans should display more neoteny than humans. Men A study was conducted on the attractiveness of males with the subject of the skull and its application in human morphology, using psychology and evolutionary biology to understand selection on facial features. It found that averageness was the result of stabilizing selection, whereas facial paedomorphosis, or juvenile traits, had been caused by directional selection. In directional selection, a single phenotypic trait is driven by selection toward fixation in a population. In contrast, stabilizing selection favors intermediate phenotypes and can maintain polymorphism in a population. To compare the effects of directional and stabilizing selection on facial paedomorphosis, Wehr used graphic morphing to alter appearances to make faces appear more or less juvenile. The results showed that average features were preferred nearly twice as often as juvenile features, which indicates that stabilizing selection influences facial preference and that averageness was found more attractive than the retention of juvenile facial characteristics. It was perplexing to find that women tend to prefer average facial features over juvenile ones, because in animals females tend to drive sexual selection through female choice and the Red Queen hypothesis. Because men generally exhibit a uniform preference for neotenous women's faces, Elia (2013) questioned whether women's varying preferences for neotenous men's faces could "help determine" the range of facial neoteny in humans. Neoteny and its connection with human specialization features Neoteny is not a ubiquitous trait of the human phenotype. Human expression timing, compared to that of the chimpanzee, has a completely different trajectory, revealing that there is no uniform shift in developmental timing. Humans undergo this neotenous shift once sexual maturity is reached. A question prompted by the Mehmet Somel et al. study is whether human-specific neotenic changes are indicative of human-specific cognitive traits. Tracking where developmental landmarks occur in humans and other primates is a step towards a better understanding of how neoteny manifests specifically in our species and how it may contribute to our specialized features, such as smaller jaws. In humans, the neotenic shift is concentrated around a group of gray matter genes. This shift in neotenic genes also coincides with cortical reorganization that is related to synaptic elimination and proceeds at a much more rapid pace than other changes during adolescence. It is also linked to the development of linguistic skills and the development of certain neurological disorders such as ADHD. Among primates and early humans Delbert D.
Thiessen said that Homo sapiens are more neotenized than Homo erectus, Homo erectus were more neotenized than Australopithecus, great apes are more neotenized than Old World monkeys, and Old World monkeys are more neotenized than New World monkeys. Nancy Lynn Barrickman said that Brian T. Shea concluded by multivariate analysis that bonobos are more neotenized than the common chimpanzee, taking into account such features as the proportionately long torso length of the bonobo. Montagu said that part of the differences seen in the morphology of "modernlike types of man" can be attributed to different rates of "neotenous mutations" in their early populations. Regarding behavioral neoteny, Mathieu Alemany Oliver says that neoteny partly (and theoretically) explains stimulus seeking, reality conflict, escapism, and control of aggression in consumer behavior. However, Alemany Oliver argues that where these characteristics are more or less visible among people, this owes more to cultural variables than to different levels of neoteny. Such a view makes behavioral neoteny play a non-significant role in gender and race differences, and puts an emphasis on culture. Specific neotenies Populations with a history of dairy farming have evolved to be lactose tolerant in adulthood, whereas other populations generally lose the ability to break down lactose as they grow into adults. Down syndrome neotenizes the brain and body. The syndrome is characterized by decelerated maturation (neoteny), incomplete morphogenesis (vestigia) and atavisms. Dwarfism and achondroplasia also neotenize human height and the limbs; this is due to dwarfing in growth hormone deficiency. See also Ageing Cuteness Kawaii Moe (slang) Sexual selection References Vertebrate developmental biology Evolutionary biology Taxonomy (biology) Human biology
Neoteny in humans
Biology
6,614
13,980
https://en.wikipedia.org/wiki/Homeostasis
In biology, homeostasis (British also homoeostasis) is the state of steady internal physical and chemical conditions maintained by living systems. This is the condition of optimal functioning for the organism and includes many variables, such as body temperature and fluid balance, being kept within certain pre-set limits (homeostatic range). Other variables include the pH of extracellular fluid, the concentrations of sodium, potassium, and calcium ions, as well as the blood sugar level, and these need to be regulated despite changes in the environment, diet, or level of activity. Each of these variables is controlled by one or more regulators or homeostatic mechanisms, which together maintain life. Homeostasis is brought about by a natural resistance to change when already in optimal conditions, and equilibrium is maintained by many regulatory mechanisms; it is thought to be the central motivation for all organic action. All homeostatic control mechanisms have at least three interdependent components for the variable being regulated: a receptor, a control center, and an effector. The receptor is the sensing component that monitors and responds to changes in the environment, either external or internal. Receptors include thermoreceptors and mechanoreceptors. Control centers include the respiratory center and the renin–angiotensin system. An effector is the target acted on to bring about the change back to the normal state. At the cellular level, effectors include nuclear receptors that bring about changes in gene expression through up-regulation or down-regulation and act in negative feedback mechanisms. An example of this is in the control of bile acids in the liver. Some centers, such as the renin–angiotensin system, control more than one variable. When the receptor senses a stimulus, it reacts by sending action potentials to a control center. The control center sets the maintenance range, the acceptable upper and lower limits, for the particular variable, such as temperature. The control center responds to the signal by determining an appropriate response and sending signals to an effector, which can be one or more muscles, an organ, or a gland. When the signal is received and acted on, negative feedback is provided to the receptor that stops the need for further signaling. The cannabinoid receptor type 1 (CB1), located at the presynaptic neuron, is a receptor that can stop stressful neurotransmitter release to the postsynaptic neuron. It is activated by endocannabinoids such as anandamide (N-arachidonoylethanolamide) and 2-arachidonoylglycerol via a retrograde signaling process in which these compounds are synthesized by and released from postsynaptic neurons, and travel back to the presynaptic terminal to bind to the CB1 receptor for modulation of neurotransmitter release to obtain homeostasis. The polyunsaturated fatty acids are lipid derivatives of omega-3 (docosahexaenoic acid and eicosapentaenoic acid) or of omega-6 (arachidonic acid). They are synthesized from membrane phospholipids and used as precursors for endocannabinoids to mediate significant effects in the fine-tuning adjustment of body homeostasis. Etymology The word homeostasis uses the combining forms of homeo- and -stasis, Neo-Latin from Greek: ὅμοιος homoios, "similar" and στάσις stasis, "standing still", yielding the idea of "staying the same".
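The receptor–control center–effector loop described above can be illustrated with a short simulation. The following is a minimal sketch in Python, not drawn from any cited source; the set point, tolerance, and gain values are hypothetical, chosen only to show how negative feedback pulls a disturbed variable (here, core temperature) back within its homeostatic range:

# Minimal sketch of a homeostatic negative feedback loop:
# receptor -> control center -> effector. All numeric values
# (set point, tolerance, gain) are illustrative assumptions.

SET_POINT = 37.0      # target core temperature in degrees Celsius
TOLERANCE = 0.5       # deviation tolerated before the effector acts
GAIN = 0.3            # fraction of the error corrected each cycle

def receptor(state):
    """Sense the current value of the regulated variable."""
    return state["core_temperature"]

def control_center(measured):
    """Compare the measurement with the set point; return the error."""
    error = measured - SET_POINT
    return error if abs(error) > TOLERANCE else 0.0

def effector(state, error):
    """Act in the direction opposite to the error (negative feedback)."""
    state["core_temperature"] -= GAIN * error

state = {"core_temperature": 39.2}   # start with a hypothetical fever
for cycle in range(20):
    error = control_center(receptor(state))
    if error == 0.0:
        break                        # back within the homeostatic range
    effector(state, error)
    print(f"cycle {cycle}: {state['core_temperature']:.2f} C")

Each pass through the loop removes a fraction of the remaining error, so the variable converges on the maintenance range rather than diverging, which is the defining behavior of the negative feedback mechanisms described in this article.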
History The concept of the regulation of the internal environment was described by the French physiologist Claude Bernard in 1849, and the word homeostasis was coined by Walter Bradford Cannon in 1926. In 1932, Joseph Barcroft, a British physiologist, was the first to say that higher brain function required the most stable internal environment. Thus, to Barcroft, homeostasis was not only organized by the brain; homeostasis served the brain. Homeostasis is an almost exclusively biological term, referring to the concepts described by Bernard and Cannon, concerning the constancy of the internal environment in which the cells of the body live and survive. The term cybernetics is applied to technological control systems such as thermostats, which function as homeostatic mechanisms, but is often defined much more broadly than the biological term homeostasis. Overview The metabolic processes of all organisms can only take place in very specific physical and chemical environments. The conditions vary with each organism, and with whether the chemical processes take place inside the cell or in the interstitial fluid bathing the cells. The best-known homeostatic mechanisms in humans and other mammals are regulators that keep the composition of the extracellular fluid (or the "internal environment") constant, especially with regard to the temperature, pH, osmolality, and the concentrations of sodium, potassium, glucose, carbon dioxide, and oxygen. However, a great many other homeostatic mechanisms, encompassing many aspects of human physiology, control other entities in the body. Where the levels of variables are higher or lower than those needed, they are often prefixed with hyper- and hypo-, respectively, such as hyperthermia and hypothermia or hypertension and hypotension. That an entity is homeostatically controlled does not imply that its value is necessarily absolutely steady in health. Core body temperature is, for instance, regulated by a homeostatic mechanism with temperature sensors in, amongst others, the hypothalamus of the brain. However, the set point of the regulator is regularly reset. For instance, core body temperature in humans varies during the course of the day (i.e. has a circadian rhythm), with the lowest temperatures occurring at night and the highest in the afternoons. Other normal temperature variations include those related to the menstrual cycle. The temperature regulator's set point is reset during infections to produce a fever. Organisms are capable of adjusting somewhat to varied conditions, such as temperature changes or oxygen levels at altitude, by a process of acclimatisation. Homeostasis does not govern every activity in the body. For instance, the signal (be it via neurons or hormones) from the sensor to the effector is, of necessity, highly variable in order to convey information about the direction and magnitude of the error detected by the sensor. Similarly, the effector's response needs to be highly adjustable to reverse the error; in fact, it should be very nearly in proportion (but in the opposite direction) to the error that is threatening the internal environment. For instance, arterial blood pressure in mammals is homeostatically controlled and measured by stretch receptors in the walls of the aortic arch and carotid sinuses at the beginnings of the internal carotid arteries. The sensors send messages via sensory nerves to the medulla oblongata of the brain, indicating whether the blood pressure has fallen or risen, and by how much.
The medulla oblongata then distributes messages along motor or efferent nerves belonging to the autonomic nervous system to a wide variety of effector organs, whose activity is consequently changed to reverse the error in the blood pressure. One of the effector organs is the heart, whose rate is stimulated to rise (tachycardia) when the arterial blood pressure falls, or to slow down (bradycardia) when the pressure rises above the set point. Thus the heart rate (for which there is no sensor in the body) is not homeostatically controlled but is one of the effector responses to errors in the arterial blood pressure. Another example is the rate of sweating. This is one of the effectors in the homeostatic control of body temperature, and is therefore highly variable in rough proportion to the heat load that threatens to destabilize the body's core temperature, for which there is a sensor in the hypothalamus of the brain. Controls of variables Core temperature Mammals regulate their core temperature using input from thermoreceptors in the hypothalamus, brain, spinal cord, internal organs, and great veins. Apart from the internal regulation of temperature, a process called allostasis can come into play that adjusts behaviour to adapt to the challenge of very hot or cold extremes (and to other challenges). These adjustments may include seeking shade and reducing activity, seeking warmer conditions and increasing activity, or huddling. Behavioral thermoregulation takes precedence over physiological thermoregulation since necessary changes can be effected more quickly and physiological thermoregulation is limited in its capacity to respond to extreme temperatures. When the core temperature falls, the blood supply to the skin is reduced by intense vasoconstriction. The blood flow to the limbs (which have a large surface area) is similarly reduced and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system that short-circuits the warmth from the arterial blood directly into the venous blood returning to the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, not only reducing heat loss from this source but also forcing the venous blood into the counter-current system in the depths of the limbs. The metabolic rate is increased, initially by non-shivering thermogenesis, followed by shivering thermogenesis if the earlier reactions are insufficient to correct the hypothermia. When rises in core temperature are detected by thermoreceptors, the sweat glands in the skin are stimulated via cholinergic sympathetic nerves to secrete sweat onto the skin, which, when it evaporates, cools the skin and the blood flowing through it. Panting is an alternative effector in many vertebrates, which also cools the body by the evaporation of water, but this time from the mucous membranes of the throat and mouth. Blood glucose Blood sugar levels are regulated within fairly narrow limits. In mammals, the primary sensors for this are the beta cells of the pancreatic islets. The beta cells respond to a rise in the blood sugar level by secreting insulin into the blood and simultaneously inhibiting their neighboring alpha cells from secreting glucagon into the blood. This combination (high blood insulin levels and low glucagon levels) acts on effector tissues, the chief of which are the liver, fat cells, and muscle cells.
The liver is inhibited from producing glucose, taking it up instead and converting it to glycogen and triglycerides. The glycogen is stored in the liver, but the triglycerides are secreted into the blood as very low-density lipoprotein (VLDL) particles which are taken up by adipose tissue, there to be stored as fats. The fat cells take up glucose through special glucose transporters (GLUT4), whose numbers in the cell wall are increased as a direct effect of insulin acting on these cells. The glucose that enters the fat cells in this manner is converted into triglycerides (via the same metabolic pathways as are used by the liver) and then stored in those fat cells together with the VLDL-derived triglycerides that were made in the liver. Muscle cells also take up glucose through insulin-sensitive GLUT4 glucose channels and convert it into muscle glycogen. A fall in blood glucose causes insulin secretion to stop and glucagon to be secreted from the alpha cells into the blood. This inhibits the uptake of glucose from the blood by the liver, fat cells, and muscle. Instead, the liver is strongly stimulated to manufacture glucose from glycogen (through glycogenolysis) and from non-carbohydrate sources (such as lactate and de-aminated amino acids) using a process known as gluconeogenesis. The glucose thus produced is discharged into the blood, correcting the detected error (hypoglycemia). The glycogen stored in muscles remains in the muscles, and is only broken down, during exercise, to glucose-6-phosphate and thence to pyruvate to be fed into the citric acid cycle or turned into lactate. It is only the lactate and the waste products of the citric acid cycle that are returned to the blood. The liver can take up only the lactate and, by the process of energy-consuming gluconeogenesis, convert it back to glucose. Iron levels Controlling iron levels in the body is a critically important part of many aspects of human health and disease. In humans, iron is both necessary to the body and potentially harmful. Copper regulation Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals. Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+ (cuprous) and Cu2+ (cupric). As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin.
Levels of blood gases Changes in the levels of oxygen, carbon dioxide, and plasma pH are signaled to the respiratory center in the brainstem, where they are regulated. The partial pressure of oxygen and carbon dioxide in the arterial blood is monitored by the peripheral chemoreceptors (PNS) in the carotid artery and aortic arch. A change in the partial pressure of carbon dioxide is detected as altered pH in the cerebrospinal fluid by central chemoreceptors (CNS) in the medulla oblongata of the brainstem. Information from these sets of sensors is sent to the respiratory center, which activates the effector organs, the diaphragm and other muscles of respiration. An increased level of carbon dioxide in the blood, or a decreased level of oxygen, will result in a deeper breathing pattern and increased respiratory rate to bring the blood gases back to equilibrium. Too little carbon dioxide, and, to a lesser extent, too much oxygen in the blood can temporarily halt breathing, a condition known as apnea, which freedivers use to prolong the time they can stay underwater. The partial pressure of carbon dioxide is more of a deciding factor in the monitoring of pH. However, at high altitude (above 2500 m) the monitoring of the partial pressure of oxygen takes priority, and hyperventilation keeps the oxygen level constant. With the lower level of carbon dioxide, to keep the pH at 7.4 the kidneys secrete hydrogen ions into the blood and excrete bicarbonate into the urine. This is important in acclimatization to high altitude. Blood oxygen content The kidneys measure the oxygen content rather than the partial pressure of oxygen in the arterial blood. When the oxygen content of the blood is chronically low, oxygen-sensitive cells secrete erythropoietin (EPO) into the blood. The effector tissue is the red bone marrow which produces red blood cells (RBCs, also called erythrocytes). The increase in RBCs leads to an increased hematocrit in the blood, and a subsequent increase in hemoglobin that increases the oxygen-carrying capacity. This is the mechanism whereby high-altitude dwellers have higher hematocrits than sea-level residents, and also why persons with pulmonary insufficiency or right-to-left shunts in the heart (through which venous blood by-passes the lungs and goes directly into the systemic circulation) have similarly high hematocrits. Regardless of the partial pressure of oxygen in the blood, the amount of oxygen that can be carried depends on the hemoglobin content. The partial pressure of oxygen may be sufficient, for example in anemia, but the hemoglobin content will be insufficient and, subsequently, so will be the oxygen content. Given an adequate supply of iron, vitamin B12 and folic acid, EPO can stimulate RBC production, and hemoglobin and oxygen content are restored to normal. Arterial blood pressure The brain can regulate blood flow over a range of blood pressure values by vasoconstriction and vasodilation of the arteries. High pressure receptors called baroreceptors in the walls of the aortic arch and carotid sinus (at the beginning of the internal carotid artery) monitor the arterial blood pressure. Rising pressure is detected when the walls of the arteries stretch due to an increase in blood volume. This causes heart muscle cells to secrete the hormone atrial natriuretic peptide (ANP) into the blood. This acts on the kidneys to inhibit the secretion of renin and aldosterone, causing the release of sodium and accompanying water into the urine, thereby reducing the blood volume.
This information is then conveyed, via afferent nerve fibers, to the solitary nucleus in the medulla oblongata. From here, motor nerves belonging to the autonomic nervous system are stimulated to influence the activity of chiefly the heart and the smallest diameter arteries, called arterioles. The arterioles are the main resistance vessels in the arterial tree, and small changes in diameter cause large changes in the resistance to flow through them. When the arterial blood pressure rises, the arterioles are stimulated to dilate, making it easier for blood to leave the arteries, thus deflating them and bringing the blood pressure down, back to normal. At the same time, the heart is stimulated via cholinergic parasympathetic nerves to beat more slowly (called bradycardia), ensuring that the inflow of blood into the arteries is reduced, thus adding to the reduction in pressure and correcting the original error. Low pressure in the arteries causes the opposite reflex of constriction of the arterioles and a speeding up of the heart rate (called tachycardia). If the drop in blood pressure is very rapid or excessive, the medulla oblongata stimulates the adrenal medulla, via "preganglionic" sympathetic nerves, to secrete epinephrine (adrenaline) into the blood. This hormone enhances the tachycardia and causes severe vasoconstriction of the arterioles to all but the essential organs in the body (especially the heart, lungs, and brain). These reactions usually correct the low arterial blood pressure (hypotension) very effectively. Calcium levels The plasma ionized calcium (Ca2+) concentration is very tightly controlled by a pair of homeostatic mechanisms. The sensor for the first one is situated in the parathyroid glands, where the chief cells sense the Ca2+ level by means of specialized calcium receptors in their membranes. The sensors for the second are the parafollicular cells in the thyroid gland. The parathyroid chief cells secrete parathyroid hormone (PTH) in response to a fall in the plasma ionized calcium level; the parafollicular cells of the thyroid gland secrete calcitonin in response to a rise in the plasma ionized calcium level. The effector organs of the first homeostatic mechanism are the bones, the kidney, and, via a hormone released into the blood by the kidney in response to high PTH levels in the blood, the duodenum and jejunum. Parathyroid hormone (in high concentrations in the blood) causes bone resorption, releasing calcium into the plasma. This is a very rapid action which can correct a threatening hypocalcemia within minutes. High PTH concentrations cause the excretion of phosphate ions via the urine. Since phosphates combine with calcium ions to form insoluble salts (see also bone mineral), a decrease in the level of phosphates in the blood releases free calcium ions into the plasma ionized calcium pool. PTH has a second action on the kidneys. It stimulates the manufacture and release, by the kidneys, of calcitriol into the blood. This steroid hormone acts on the epithelial cells of the upper small intestine, increasing their capacity to absorb calcium from the gut contents into the blood. The second homeostatic mechanism, with its sensors in the thyroid gland, releases calcitonin into the blood when the blood ionized calcium rises. This hormone acts primarily on bone, causing the rapid removal of calcium from the blood and depositing it, in insoluble form, in the bones.
The two homeostatic mechanisms, working through PTH on the one hand and calcitonin on the other, can very rapidly correct any impending error in the plasma ionized calcium level, either by removing calcium from the blood and depositing it in the skeleton, or by removing calcium from the skeleton and returning it to the blood. The skeleton acts as an extremely large calcium store (about 1 kg) compared with the plasma calcium store (about 180 mg). Longer-term regulation occurs through calcium absorption or loss from the gut. Further examples are the best-characterized endocannabinoids, such as anandamide (N-arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG), whose synthesis occurs through the action of a series of intracellular enzymes activated in response to a rise in intracellular calcium levels to introduce homeostasis and prevent tumor development through putative protective mechanisms that prevent cell growth and migration by activation of CB1 and/or CB2 and adjoining receptors. Sodium concentration The homeostatic mechanism which controls the plasma sodium concentration is rather more complex than most of the other homeostatic mechanisms described on this page. The sensor is situated in the juxtaglomerular apparatus of the kidneys, which senses the plasma sodium concentration in a surprisingly indirect manner. Instead of measuring it directly in the blood flowing past the juxtaglomerular cells, these cells respond to the sodium concentration in the renal tubular fluid after it has already undergone a certain amount of modification in the proximal convoluted tubule and loop of Henle. These cells also respond to the rate of blood flow through the juxtaglomerular apparatus, which, under normal circumstances, is directly proportional to the arterial blood pressure, making this tissue an ancillary arterial blood pressure sensor. In response to a lowering of the plasma sodium concentration, or to a fall in the arterial blood pressure, the juxtaglomerular cells release renin into the blood. Renin is an enzyme which cleaves a decapeptide (a short protein chain, 10 amino acids long) from a plasma α-2-globulin called angiotensinogen. This decapeptide is known as angiotensin I. It has no known biological activity. However, when the blood circulates through the lungs, a pulmonary capillary endothelial enzyme called angiotensin-converting enzyme (ACE) cleaves a further two amino acids from angiotensin I to form an octapeptide known as angiotensin II. Angiotensin II is a hormone which acts on the adrenal cortex, causing the release into the blood of the steroid hormone aldosterone. Angiotensin II also acts on the smooth muscle in the walls of the arterioles, causing these small-diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree and causing the arterial blood pressure to rise. This therefore reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension. The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect particularly on the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid, in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine.
The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body, therefore preventing the worsening of the hyponatremia. The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about. When the plasma sodium ion concentration is higher than normal (hypernatremia), the release of renin from the juxtaglomerular apparatus is halted, ceasing the production of angiotensin II and its consequent aldosterone release into the blood. The kidneys respond by excreting sodium ions into the urine, thereby normalizing the plasma sodium ion concentration. The low angiotensin II levels in the blood lower the arterial blood pressure as an inevitable concomitant response. The reabsorption of sodium ions from the tubular fluid as a result of high aldosterone levels in the blood does not, of itself, cause renal tubular water to be returned to the blood from the distal convoluted tubules or collecting ducts. This is because sodium is reabsorbed in exchange for potassium and therefore causes only a modest change in the osmotic gradient between the blood and the tubular fluid. Furthermore, the epithelium of the distal convoluted tubules and collecting ducts is impermeable to water in the absence of antidiuretic hormone (ADH) in the blood. ADH is part of the control of fluid balance. Its levels in the blood vary with the osmolality of the plasma, which is measured in the hypothalamus of the brain. Aldosterone's action on the kidney tubules prevents sodium loss from the extracellular fluid (ECF). So there is no change in the osmolality of the ECF, and therefore no change in the ADH concentration of the plasma. However, low aldosterone levels cause a loss of sodium ions from the ECF, which could potentially cause a change in extracellular osmolality and therefore of ADH levels in the blood. Potassium concentration High potassium concentrations in the plasma cause depolarization of the zona glomerulosa cells' membranes in the outer layer of the adrenal cortex. This causes the release of aldosterone into the blood. Aldosterone acts primarily on the distal convoluted tubules and collecting ducts of the kidneys, stimulating the excretion of potassium ions into the urine. It does so, however, by activating the basolateral Na+/K+ pumps of the tubular epithelial cells. These sodium/potassium exchangers pump three sodium ions out of the cell, into the interstitial fluid, and two potassium ions into the cell from the interstitial fluid. This creates an ionic concentration gradient which results in the reabsorption of sodium (Na+) ions from the tubular fluid into the blood, and the secretion of potassium (K+) ions from the blood into the urine (the lumen of the collecting duct). Fluid balance The total amount of water in the body needs to be kept in balance. Fluid balance involves keeping the fluid volume stabilized, and also keeping the levels of electrolytes in the extracellular fluid stable. Fluid balance is maintained by the process of osmoregulation and by behavior. Osmotic pressure is detected by osmoreceptors in the median preoptic nucleus in the hypothalamus.
Measurement of the plasma osmolality to give an indication of the water content of the body relies on the fact that water losses from the body (through unavoidable losses via the skin, which is not entirely waterproof and therefore always slightly moist; water vapor in the exhaled air; sweating; vomiting; normal feces; and especially diarrhea) are all hypotonic, meaning that they are less salty than the body fluids (compare, for instance, the taste of saliva with that of tears: the latter has almost the same salt content as the extracellular fluid, whereas the former is hypotonic with respect to the plasma; saliva does not taste salty, whereas tears are decidedly salty). Nearly all normal and abnormal losses of body water therefore cause the extracellular fluid to become hypertonic. Conversely, excessive fluid intake dilutes the extracellular fluid, causing the hypothalamus to register hypotonic hyponatremia. When the hypothalamus detects a hypertonic extracellular environment, it causes the secretion of an antidiuretic hormone (ADH) called vasopressin, which acts on the effector organ, in this case the kidney. The effect of vasopressin on the kidney tubules is to reabsorb water from the distal convoluted tubules and collecting ducts, thus preventing aggravation of the water loss via the urine. The hypothalamus simultaneously stimulates the nearby thirst center, causing an almost irresistible (if the hypertonicity is severe enough) urge to drink water. The cessation of urine flow prevents the hypovolemia and hypertonicity from getting worse; the drinking of water corrects the defect. Hypo-osmolality results in very low plasma ADH levels. This results in the inhibition of water reabsorption from the kidney tubules, causing high volumes of very dilute urine to be excreted, thus getting rid of the excess water in the body. Urinary water loss, when the body water homeostat is intact, is a compensatory water loss, correcting any water excess in the body. However, since the kidneys cannot generate water, the thirst reflex is the all-important second effector mechanism of the body water homeostat, correcting any water deficit in the body. Blood pH The plasma pH can be altered by respiratory changes in the partial pressure of carbon dioxide, or by metabolic changes in the carbonic acid to bicarbonate ion ratio. The bicarbonate buffer system regulates the ratio of carbonic acid to bicarbonate to be equal to 1:20, at which ratio the blood pH is 7.4 (as explained in the Henderson–Hasselbalch equation; a worked form is given after this passage). A change in the plasma pH gives an acid–base imbalance. In acid–base homeostasis there are two mechanisms that can help regulate the pH. Respiratory compensation, a mechanism of the respiratory center, adjusts the partial pressure of carbon dioxide by changing the rate and depth of breathing to bring the pH back to normal. The partial pressure of carbon dioxide also determines the concentration of carbonic acid, and the bicarbonate buffer system can also come into play. Renal compensation can help the bicarbonate buffer system. The sensor for the plasma bicarbonate concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces carbon dioxide, which is rapidly converted to hydrogen ions and bicarbonate through the action of carbonic anhydrase.
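The Henderson–Hasselbalch relation mentioned above can be written out explicitly. A worked form in LaTeX, using the commonly quoted pKa of about 6.1 for the bicarbonate system (the exact constant varies slightly between sources), shows how the 1:20 carbonic acid to bicarbonate ratio yields the normal plasma pH:

\[
  \mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]}
  \;=\; 6.1 + \log_{10}(20) \;\approx\; 6.1 + 1.3 \;=\; 7.4
\]

A ratio shifted below 1:20 (relatively more carbonic acid) lowers the pH towards acidosis, while a ratio above 1:20 raises it towards alkalosis; these are the deviations that the respiratory and renal compensation mechanisms described here act to correct.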
When the ECF pH falls (becoming more acidic), the renal tubular cells excrete hydrogen ions into the tubular fluid to leave the body via the urine. Bicarbonate ions are simultaneously secreted into the blood, which decreases the carbonic acid and consequently raises the plasma pH. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions are released into the plasma. When hydrogen ions are excreted into the urine, and bicarbonate into the blood, the latter combines with the excess hydrogen ions in the plasma that stimulated the kidneys to perform this operation. The resulting reaction in the plasma is the formation of carbonic acid, which is in equilibrium with the plasma partial pressure of carbon dioxide. This is tightly regulated to ensure that there is no excessive build-up of carbonic acid or bicarbonate. The overall effect is therefore that hydrogen ions are lost in the urine when the pH of the plasma falls. The concomitant rise in the plasma bicarbonate mops up the increased hydrogen ions (caused by the fall in plasma pH), and the resulting excess carbonic acid is disposed of in the lungs as carbon dioxide. This restores the normal ratio between bicarbonate and the partial pressure of carbon dioxide, and therefore the plasma pH. The converse happens when a high plasma pH stimulates the kidneys to secrete hydrogen ions into the blood and to excrete bicarbonate into the urine. The hydrogen ions combine with the excess bicarbonate ions in the plasma, once again forming an excess of carbonic acid which can be exhaled, as carbon dioxide, in the lungs, keeping the plasma bicarbonate ion concentration, the partial pressure of carbon dioxide and, therefore, the plasma pH constant. Cerebrospinal fluid Cerebrospinal fluid (CSF) allows for regulation of the distribution of substances between cells of the brain and neuroendocrine factors, to which slight changes can cause problems or damage to the nervous system. For example, a high glycine concentration disrupts temperature and blood pressure control, and a high CSF pH causes dizziness and syncope. Neurotransmission Inhibitory neurons in the central nervous system play a homeostatic role in the balance of neuronal activity between excitation and inhibition. Inhibitory neurons, using GABA, make compensating changes in the neuronal networks, preventing runaway levels of excitation. An imbalance between excitation and inhibition is implicated in a number of neuropsychiatric disorders. Neuroendocrine system The neuroendocrine system is the mechanism by which the hypothalamus maintains homeostasis, regulating metabolism, reproduction, eating and drinking behaviour, energy utilization, osmolarity and blood pressure. The regulation of metabolism is carried out by hypothalamic interconnections to other glands. Three endocrine glands of the hypothalamic–pituitary–gonadal axis (HPG axis) often work together and have important regulatory functions. Two other regulatory endocrine axes are the hypothalamic–pituitary–adrenal axis (HPA axis) and the hypothalamic–pituitary–thyroid axis (HPT axis). The liver also has many regulatory functions of the metabolism. An important function is the production and control of bile acids. Too much bile acid can be toxic to cells, and its synthesis can be inhibited by activation of FXR, a nuclear receptor.
Gene regulation At the cellular level, homeostasis is carried out by several mechanisms including transcriptional regulation that can alter the activity of genes in response to changes. Energy balance The amount of energy taken in through nutrition needs to match the amount of energy used. To achieve energy homeostasis, appetite is regulated by two hormones, ghrelin and leptin. Ghrelin stimulates hunger and the intake of food, and leptin acts to signal satiety (fullness). A 2019 review of weight-change interventions, including dieting, exercise and overeating, found that body weight homeostasis could not precisely correct for "energetic errors", the loss or gain of calories, in the short term. Clinical significance Many diseases are the result of a homeostatic failure. Almost any homeostatic component can malfunction, either as a result of an inherited defect, an inborn error of metabolism, or an acquired disease. Some homeostatic mechanisms have inbuilt redundancies, which ensure that life is not immediately threatened if a component malfunctions; but sometimes a homeostatic malfunction can result in serious disease, which can be fatal if not treated. A well-known example of a homeostatic failure is shown in type 1 diabetes mellitus. Here, blood sugar regulation is unable to function because the beta cells of the pancreatic islets are destroyed and cannot produce the necessary insulin. The blood sugar rises in a condition known as hyperglycemia. The plasma ionized calcium homeostat can be disrupted by the constant, unchanging over-production of parathyroid hormone by a parathyroid adenoma, resulting in the typical features of hyperparathyroidism, namely high plasma ionized Ca2+ levels and the resorption of bone, which can lead to spontaneous fractures. The abnormally high plasma ionized calcium concentrations cause conformational changes in many cell-surface proteins (especially ion channels and hormone or neurotransmitter receptors), giving rise to lethargy, muscle weakness, anorexia, constipation and labile emotions. The body water homeostat can be compromised by the inability to secrete ADH in response to even the normal daily water losses via the exhaled air, the feces, and insensible sweating. On receiving a zero blood ADH signal, the kidneys produce huge unchanging volumes of very dilute urine, causing dehydration and death if not treated. As organisms age, the efficiency of their control systems becomes reduced. The inefficiencies gradually result in an unstable internal environment that increases the risk of illness and leads to the physical changes associated with aging. Various chronic diseases are kept under control by homeostatic compensation, which masks a problem by compensating for it (making up for it) in another way. However, the compensating mechanisms eventually wear out or are disrupted by a new complicating factor (such as the advent of a concurrent acute viral infection), which sends the body reeling through a new cascade of events. Such decompensation unmasks the underlying disease, worsening its symptoms. Common examples include decompensated heart failure, kidney failure, and liver failure. Biosphere In the Gaia hypothesis, James Lovelock stated that the entire mass of living matter on Earth (or any planet with life) functions as a vast homeostatic superorganism that actively modifies its planetary environment to produce the environmental conditions necessary for its own survival.
In this view, the entire planet maintains several homeostatic equilibria (the primary one being temperature homeostasis). Whether this sort of system is present on Earth is open to debate. However, some relatively simple homeostatic mechanisms are generally accepted. For example, it is sometimes claimed that when atmospheric carbon dioxide levels rise, certain plants may be able to grow better and thus act to remove more carbon dioxide from the atmosphere. However, warming has exacerbated droughts, making water the actual limiting factor on land. When sunlight is plentiful and the atmospheric temperature climbs, it has been claimed that the phytoplankton of the ocean surface waters, acting as global sunshine, and therefore heat, sensors, may thrive and produce more dimethyl sulfide (DMS). The DMS molecules act as cloud condensation nuclei, which produce more clouds, and thus increase the atmospheric albedo, and this feeds back to lower the temperature of the atmosphere. However, rising sea temperature has stratified the oceans, separating warm, sunlit waters from cool, nutrient-rich waters. Thus, nutrients have become the limiting factor, and plankton levels have actually fallen over the past 50 years, not risen. As scientists discover more about Earth, vast numbers of positive and negative feedback loops are being discovered that, together, maintain a metastable condition, sometimes within a very broad range of environmental conditions. Predictive Predictive homeostasis is an anticipatory response to an expected challenge in the future, such as the stimulation of insulin secretion by gut hormones which enter the blood in response to a meal. This insulin secretion occurs before the blood sugar level rises, lowering the blood sugar level in anticipation of a large influx into the blood of glucose resulting from the digestion of carbohydrates in the gut. Such anticipatory reactions are open-loop systems which are based, essentially, on "guesswork", and are not self-correcting. Anticipatory responses always require a closed-loop negative feedback system to correct the 'over-shoots' and 'under-shoots' to which the anticipatory systems are prone. Other fields The term has come to be used in other fields, for example: Risk An actuary may refer to risk homeostasis, where (for example) people who have anti-lock brakes have no better safety record than those without anti-lock brakes, because the former unconsciously compensate for the safer vehicle via less-safe driving habits. Prior to the innovation of anti-lock brakes, certain maneuvers involved minor skids, evoking fear and avoidance; now the anti-lock system moves the boundary for such feedback, and behavior patterns expand into the no-longer punitive area. It has also been suggested that ecological crises are an instance of risk homeostasis in which a particular behavior continues until proven dangerous or dramatic consequences actually occur. Stress Sociologists and psychologists may refer to stress homeostasis, the tendency of a population or an individual to stay at a certain level of stress, often generating artificial stresses if the "natural" level of stress is not enough. Jean-François Lyotard, a postmodern theorist, has applied this term to societal 'power centers' that he describes in The Postmodern Condition as being 'governed by a principle of homeostasis', for example the scientific hierarchy, which will sometimes ignore a radical new discovery for years because it destabilizes previously accepted norms.
Technology Familiar technological homeostatic mechanisms include: A thermostat operates by switching heaters or air-conditioners on and off in response to the output of a temperature sensor. Cruise control adjusts a car's throttle in response to changes in speed. An autopilot operates the steering controls of an aircraft or ship in response to deviation from a pre-set compass bearing or route. Process control systems in a chemical plant or oil refinery maintain fluid levels, pressures, temperature, chemical composition, etc. by controlling heaters, pumps and valves. The centrifugal governor of a steam engine, as designed by James Watt in 1788, closes the throttle valve in response to increases in the engine speed, or opens the valve if the speed falls below the pre-set rate. Society and culture The use of sovereign power, codes of conduct, religious and cultural practices and other dynamic processes in a society can be described as a part of an evolved homeostatic system of regularizing life and maintaining an overall equilibrium that protects the security of the whole from internal and external imbalances or dangers. Healthy civic cultures can be said to have achieved an optimal homeostatic balance between multiple contradictory concerns, such as the tension between respect for individual rights and concern for the public good, or that between governmental effectiveness and responsiveness to the interests of citizens. See also References Further reading External links Homeostasis Walter Bradford Cannon, Homeostasis (1932) Physiology Biology terminology Cybernetics
Homeostasis
Biology
9,177
58,041,212
https://en.wikipedia.org/wiki/Xinjiang%20internment%20camps
The Xinjiang internment camps, officially called vocational education and training centers by the government of China, are internment camps operated by the government of Xinjiang and the Chinese Communist Party Provincial Standing Committee. Human Rights Watch says that they have been used to indoctrinate Uyghurs and other Muslims since 2017 as part of a "people's war on terror", a policy announced in 2014. Thirty-seven countries have expressed support for the Chinese government's "counter-terrorism and deradicalization measures", including countries such as Russia, Saudi Arabia, Cuba, and Venezuela; meanwhile 22 or 43 countries, depending on the source, have called on China to respect the human rights of the Uyghur community, including countries such as Canada, Germany, Turkey and Japan. The Xinjiang internment camps have been described as "the most extreme example of China's inhumane policies against Uighurs". The camps have been criticized by the subcommittee of the Canadian House of Commons Standing Committee on Foreign Affairs and International Development for the persecution of Uyghurs in China, including mistreatment, rape, torture, and genocide. The camps were established in 2017 by the administration of CCP general secretary Xi Jinping. Between 2017 and 2021, operations were led by Chen Quanguo, a CCP Politburo member and the party secretary who led the region's party committee and government. The camps are reportedly operated outside the Chinese legal system; many Uyghurs have reportedly been interned without trial and no charges have been levied against them (they are held in administrative detention). Local authorities are reportedly holding hundreds of thousands of Uyghurs in these camps, as well as members of other ethnic minority groups in China, for the stated purpose of countering extremism and terrorism and promoting social integration. The internment of Uyghurs and other Turkic Muslims in the camps constitutes the largest-scale arbitrary detention of ethnic and religious minorities since World War II. It has been estimated that Chinese authorities may have detained up to 1.8 million people, mostly Uyghurs but also including Kazakhs, Kyrgyz and other ethnic Turkic Muslims, Christians, as well as some foreign citizens, including Kazakhstanis, in these secretive internment camps located throughout the region. According to Adrian Zenz, a major researcher on the camps, the mass internments peaked in 2018 and have abated somewhat since then, with officials shifting focus towards forced labor programs. Other human rights activists and US officials have also noted a shifting of individuals from the camps into the formal penal system. In May 2018, Randall Schriver, US Assistant Secretary of Defense for Indo-Pacific Security Affairs, said that "at least a million but likely closer to three million citizens" were imprisoned in detention centers, which he described as "concentration camps". In August 2018, Gay McDougall, a US representative at the United Nations Committee on the Elimination of Racial Discrimination, said that the committee had received many credible reports that 1 million ethnic Uyghurs in China had been held in "re-education camps". There have been comparisons between the Xinjiang camps and the Chinese Cultural Revolution. In 2019, at the United Nations, 54 countries, including China itself, rejected the allegations and supported the Chinese government's policies in Xinjiang.
In another letter, 23 countries shared the concerns in the committee's reports and called on China to uphold human rights. In September 2020, the Australian Strategic Policy Institute (ASPI) reported in its Xinjiang Data Project that construction of camps continued despite government claims that their function was winding down. In October 2020, it was reported that the total number of countries that denounced China had increased to 39, while the total number of countries that defended China had decreased to 45. Sixteen countries that defended China in 2019 did not do so in 2020. The Xinjiang Zhongtai Group runs some of the re-education camps and uses reallocated workers in its facilities. Background Xinjiang conflict Various Chinese dynasties have historically exerted various degrees of control and influence over parts of what is modern-day Xinjiang. The region came under complete Chinese rule as a result of the westward expansion of the Manchu-led Qing dynasty, which also conquered Tibet and Mongolia. This conquest, which marked the beginning of Xinjiang under Qing rule, ended circa 1758. While it was nominally declared to be a part of China's core territory, it was generally seen by the imperial court as a distant land unto itself; in 1758, it was designated a penal colony and a site of exile, and as a result, it was governed as a military protectorate, not integrated as a province. After the 1928 assassination of Yang Zengxin, the governor of Xinjiang under the Republic of China, Jin Shuren succeeded Yang as governor. On the death of the Kumul Khan Maqsud Shah in 1930, Jin entirely abolished the semi-autonomous Kumul Khanate in east Xinjiang and took control of the region as its warlord. In 1933, the breakaway First East Turkestan Republic was established in the Kumul Rebellion. In 1934, the First East Turkestan Republic was conquered by the warlord Sheng Shicai with the aid of the Soviet Union, before Sheng reconciled with the Republic of China in 1942. In 1944, the Ili Rebellion led to the Second East Turkestan Republic, with dependency on the Soviet Union for trade, arms, and "tacit consent" for its continued existence, before being absorbed into the People's Republic of China in 1949. From the 1950s to the 1970s, the government sponsored a mass migration of Han Chinese to the region, policies promoting Chinese cultural unity, and policies punishing certain expressions of Uyghur identity. During this time, militant Uyghur separatist organizations with potential support from the Soviet Union emerged, with the East Turkestan People's Party being the largest in 1968. During the 1970s, the Soviets supported the United Revolutionary Front of East Turkestan (URFET) to fight the Chinese. In 1997, a police roundup and execution of 30 suspected separatists during Ramadan led to large demonstrations in February of that year, which resulted in the Ghulja incident, a People's Liberation Army (PLA) crackdown that led to at least nine deaths. The Ürümqi bus bombings later that month killed nine people and injured 68; responsibility was acknowledged by Uyghur exile groups. In March 1997, a bus bomb killed two people, with responsibility claimed by Uyghur radicals and the Turkey-based Organisation for East Turkistan Freedom. In July 2009, riots broke out in Xinjiang in response to a violent dispute between Uyghur and Han Chinese workers in a factory, resulting in over 100 deaths. Following the riots, Uyghur radicals killed dozens of Chinese citizens in coordinated attacks from 2009 to 2016.
These included the August 2009 syringe attacks, the 2011 bomb-and-knife attack in Hotan, the March 2014 knife attack in the Kunming railway station, the April 2014 bomb-and-knife attack in the Ürümqi railway station, and the May 2014 car-and-bomb attack in an Ürümqi street market. Several of the attacks were orchestrated by the Turkistan Islamic Party (formerly the East Turkestan Islamic Movement), which has been designated a terrorist organization by the United Nations and by several countries, including Russia, Turkey, the United Kingdom, and (until 2020) the United States.

Strategic motivations

After initially denying the existence of the camps, the Chinese government has maintained that its actions in Xinjiang are justifiable responses to the threats of extremism and terrorism. As a region on the northwestern periphery of China inhabited by ethnic, linguistic, and religious minorities, Xinjiang has been said (by Raffi Khatchadourian) to have "never seemed fully within the Communist Party's grasp". Part of Xinjiang was once seized by Czarist Russia, and the region was also independent for a short period of time. Traditionally, the government of the People's Republic of China has favored an assimilationist policy towards minorities, and it has accelerated this policy by encouraging the mass immigration of Han Chinese into minority lands. After the collapse of its rival and neighbor, the Soviet Union (another huge multi-national communist state with one dominant ethnicity), the Chinese Communist Party was "convinced that ethnic nationalism had helped tear the former superpower to pieces". In addition, terrorist attacks were committed by Uyghurs in 2009, 2013, and 2014.

Several additional potential motives for the increased repression in Xinjiang have been put forward by scholars who have conducted research outside China. First, the repression may simply be the result of increased dissent within the region beginning around 2009; second, it may be due to changes in minority policy which promoted assimilation into Han culture; and third, the repression may be primarily spearheaded by Chen Quanguo himself, the result of his personally hardline attitude towards perceived acts of sedition.

China's government has used the terrorist attacks of 9/11 as a justification for its actions against the Uyghurs. It claims that its actions in Xinjiang are necessary because Xinjiang is another front in the "global war on terrorism". Specifically, it frames its campaign as ridding China of the Shanghai Cooperation Organization's "three evils": "transnational terrorism, separatism, and religious extremism", all three of which the CCP believes the Uyghurs possess. The motives behind the repression of the Uyghurs are complex, but some argue that it is based on the CCP's desire to preserve China's identity and integrity rather than on a desire to counter terrorism. Additionally, some analysts have suggested that the CCP considers Xinjiang a key route in China's Belt and Road Initiative (BRI) but regards Xinjiang's local population as a potential threat to the initiative's success, or that it fears opening Xinjiang up may also expose it to radicalizing influences from other states participating in the BRI. Sean Roberts of George Washington University said the CCP sees Uyghurs' attachment to their traditional lands as a risk to the BRI. Researcher Adrian Zenz has suggested that the initiative is an important reason for the Chinese government's control of Xinjiang.
In November 2020, when the US dropped the Turkistan Islamic Party from its terrorist list because it was no longer "in existence", the decision was lauded by some intelligence officials because it removed the pretext for the Chinese government's "terrorism eradication" campaigns against the Uyghurs. However, Yue Gang, a military commentator in Beijing, stated that "in the wake of the US decision on the ETIM, China might seek to increase its counterterrorism activities." The group continues to be designated as a terrorist group by the United Nations Security Council as well as by the governments of other countries.

Policies from 2009 to 2016

Until shortly after the July 2009 Ürümqi riots, Wang Lequan was the Party Secretary for the Xinjiang region, effectively the highest subnational role, roughly equivalent to a governor of a Western province or state. Wang worked on modernization programs in Xinjiang, including industrialization, development of commerce, roads, railways, and hydrocarbon development, with pipelines running from neighboring Kazakhstan to eastern China. Wang also constrained local culture and religion, replaced the Uyghur language with Standard Mandarin as the medium of education in primary schools, and penalized or banned, among government workers (in a region in which the government was a very large employer), the wearing of beards and headscarves, religious fasting, and praying while on the job. In the 1990s, many Uyghurs in parts of Xinjiang could not speak Mandarin Chinese.

In April 2010, after the Ürümqi riots, Zhang Chunxian replaced Wang Lequan as the Communist Party chief. Zhang continued and strengthened Wang's repressive policies. In 2011, Zhang proposed "modern culture leads the development in Xinjiang" as his policy statement and started to implement his modern-culture propaganda. In 2012, he first mentioned the phrase "de-extremification" campaigns and started efforts to educate "wild imams" and extremists. In 2013, the Belt and Road Initiative was announced, a massive trade project at the heart of which is Xinjiang. In 2014, Chinese authorities announced a "people's war on terror", and local governments introduced new restrictions, including a ban on long beards and on wearing the burqa in public. Also in 2014, the concept of "transformation through education" began to be used in contexts outside of Falun Gong through the systematic "de-extremification" campaigns. Under Zhang, the Communist Party launched its "Strike Hard Campaign against Violent Terrorism" in Xinjiang.

In August 2016, Chen Quanguo, a well-known hardline Communist Party secretary in Tibet, took charge of the Xinjiang autonomous region. Critics held Chen responsible for a major component of Tibet's "subjugation". Following Chen's arrival, local authorities recruited over 90,000 police officers in 2016 and 2017 – twice as many as they had recruited in the previous seven years – and laid out as many as 7,300 heavily guarded checkpoints in the region. Xinjiang has come to be known as one of the most heavily policed regions of the world, and English-language news reports have labelled the current regime there the most extensive police state in the world.

Antireligious campaigns

As a communist state, China does not have an official state religion, but its government recognizes five religious denominations: Buddhism, Taoism, Islam, Catholicism, and Protestantism.
In 2014, Western media outlets reported that the Chinese government had conducted antireligious campaigns in order to promote atheism. According to The Washington Post, the CCP under Xi Jinping shifted its policies in favor of a so-called "Sinicization" of ethnic and religious minorities. The trend accelerated in 2018, when the State Ethnic Affairs Commission and the State Administration for Religious Affairs were placed under the control of the CCP's United Front Work Department.

Groups that are targeted for surveillance

Around 2015, according to Chinese Human Rights Defenders, a senior CCP official argued that "a third" of Xinjiang's Uyghurs were "polluted by religious extremist forces" and needed to be "educated and reformed through concentrated force". At about the same time, the Chinese state-security apparatus was developing an "Integrated Joint Operations Platform" (IJOP) to analyze information collected from its surveillance data. According to an analysis of this software by Human Rights Watch, a member of a minority group might be assessed by the IJOP as falling under one of 36 "person types" that could lead to arrest and internment in a re-education camp. Some of these person types included:

people who do not use a mobile phone
people who use the back door instead of the front
people who consume an "unusual" amount of electricity
people who have an "abnormal" beard
people who socialize too little
people who maintain "complex" relationships
people who have a family member that exhibits some of these traits and so are "insufficiently loyal"

History

Beginning in 2017, local media outlets generally referred to the facilities as "counter-extremism training centers" and "education and transformation training centers". Most of those facilities were converted from existing schools or other official buildings, although some were purpose-built. The heavy policing of the region and its thousands of checkpoints assisted and accelerated the detention of locals in the camps. In 2017, the region accounted for 21% of all arrests in China despite comprising less than 2% of the national population – eight times more than in the previous year. The judicial and other government bureaus of many cities and counties started to release a series of procurement and construction bids for the planned camps and facilities. Increasingly, massive detention centers were built throughout the region and are being used to hold hundreds of thousands of people targeted for their religious practices and ethnicity.

Victor Shih, a political economist at the University of California, San Diego, said in July 2019 that the mass internments were unnecessary because "no active insurgencies" existed, only "isolated terrorist incidents". He suggested that because a great deal of money was spent setting up the camps, the money likely went to associates of the politicians who created them. According to the Chinese ambassador to Australia, Cheng Jingye, in December 2019, all of the "trainees" in the centers had graduated and had gradually returned to their jobs or found new jobs with government assistance. Cheng also called reports that one million Uyghurs had been detained in Xinjiang "fake news" and said that "what has been done in Xinjiang has no ... difference with what the other countries, including western countries, [do] to fight against terrorists." During the COVID-19 pandemic in mainland China, there were no reports of cases of the coronavirus in Xinjiang prisons or of conditions in the internment camps.
After program suspensions due to the 2019–20 coronavirus pandemic, Uyghur workers were reported to have been returned to other parts of Xinjiang and the rest of China to resume work beginning in March 2020. In September 2020, the Australian Strategic Policy Institute (ASPI) launched its Xinjiang Data Project, which reported that construction of camps continued despite claims that their function was winding down, with 380 camps and detention centers identified.

Muslim-majority countries such as the UAE, Saudi Arabia, and Egypt have shown open support for China, stating that "China has the right to take anti-terrorism and de-extremism measures". These Arab nations have overlooked the human rights abuses so as not to jeopardize their economic ties with China, a crucial trading partner and investor for these countries. Moreover, exiled Uyghur Muslims in these countries have regularly been detained and deported back to China. According to the Associated Press, a young Chinese woman, Wu Huan, was held for eight days in a Chinese-run secret detention site in Dubai. She revealed that at least two other Uyghur prisoners were detained with her at a villa converted into a jail. Critics have condemned the UAE for its role in detaining and deporting Uyghur Muslims and other Chinese political dissidents at the orders of the Chinese government.

Leaks and hacks

The New York Times leak

On 16 November 2019, The New York Times released an extensive leak of 400 pages of documents, sourced from a member of the Chinese government, in the hope that CCP General Secretary Xi Jinping would be held accountable for his actions. The New York Times stated that the leak suggests discontent inside the Communist Party relating to the crackdown in Xinjiang. The anonymous government official who leaked the documents did so with the intent that the disclosure "would prevent party leaders, including Mr. Xi, from escaping culpability for the mass detentions."

We must be as harsh as them and show absolutely no mercy. — Xi Jinping on the terror attacks in 2014 (translated from Mandarin Chinese)

One document was a manual aimed at communicating messages to Uyghur students who were returning home and would ask about their missing friends or relatives who had been interned in the camps. It said that government staff should acknowledge that the internees had not committed a crime and that "it is just that their thinking has been infected by unhealthy thoughts." Officials were directed to say that even grandparents and family members who seemed too old to carry out violence could not be spared. The New York Times stated that the speeches it obtained show Xi viewing risks to the party as similar to those behind the collapse of the Soviet Union, which Xi "blamed on ideological laxity and spineless leadership." Concerned that violence in the Xinjiang region could damage social stability in the rest of China, Xi stated that "social stability will suffer shocks, the general unity of people of every ethnicity will be damaged, and the broad outlook for reform, development and stability will be affected." Xi encouraged officials to study how the US responded following the September 11 attacks. Xi likened Islamic extremism alternately to a virus-like contagion and a dangerously addictive drug, and declared that addressing it would require "a period of painful, interventionary treatment."
The China Daily reported in 2018 that CCP official Wang Yongzhi had been removed for "serious disciplinary violations". The New York Times obtained a copy of Wang's confession (which the report noted was likely signed under duress) and stated its belief that he was sacked for being too lenient on Uyghurs, for example through his release of 7,000 detainees. Wang had told his superiors that he was concerned that the actions against the Uyghurs would breed discontent and thus result in greater violence in the future. The leaked documents stated, "he ignored the party central leadership's strategy for Xinjiang, and he went as far as brazen defiance. ... He refused to round up everyone who should be rounded up". The article was discreetly shared on the Chinese platform Sina Weibo, where some netizens expressed sympathy for him. In 2017, there were more than 12,000 investigations into party members in Xinjiang for infractions or resistance in the "fight against separatism" – more than 20 times the figure in the previous year.

ICIJ leak

On 24 November 2019, the International Consortium of Investigative Journalists (ICIJ) published the China Cables, consisting of six documents, including an "operations manual" for running the camps and details of the use of predictive policing and artificial intelligence to target people and regulate life inside the camps. Shortly after the publication of the China Cables, leaker Asiye Abdulaheb went on to provide Adrian Zenz with the "Karakax list", allegedly a Chinese government spreadsheet that tracks the rationale behind 311 internments at a "Vocational Training Internment Camp" in the seat of Karakax County in Xinjiang. The purpose of the list may have been to coordinate judgments on whether an individual should remain in internment; in some entries, the word "agree" was written beside a judgment. Records detail how subjects dress and pray, and how their relatives and acquaintances behave. One subject was interned because she wore a veil years ago; another was interned for clicking on a link to a foreign website; a third was interned for applying for a passport, despite posing "no practical risk" according to the spreadsheet. In general, the subjects on the Karakax list all have relatives living abroad, a category that reportedly leads to "almost certain internment". 149 subjects are documented as violating birth control policies. 116 of the subjects are listed without explanation as "untrustworthy"; for 88 of these, this "untrustworthy" label is the only reason listed for internment. Younger men, in particular, are often listed as "untrustworthy person born in a certain decade". 24 subjects are accused of formal crimes, including six terrorism-related allegations. Most of the subjects have been released, or scheduled for release, following the end of their one-year internment term; however, some of these are recommended for release into "industrial park employment", raising concerns about possible forced labor.

Xinjiang Police Files hack

The "Xinjiang Police Files", a large body of police files derived from data found in a hack of a local computer server, was sent to the German anthropologist Adrian Zenz, who works for the Victims of Communism Memorial Foundation. Zenz has been sanctioned by the Chinese government since 2021. He has been instrumental in exposing the camp system in Xinjiang.
The files and some English translations are partly accessible via a special homepage set up by the foundation or via links to an academic repository in Zenz's article in the Journal of the European Association for Chinese Studies. The data was evaluated by journalists from 14 media companies worldwide, including the British BBC, Le Monde in France, and El País in Spain. In Germany, Bayerischer Rundfunk and Der Spiegel examined and researched the data. According to the evaluation of a number of digital forensic scientists and other experts, the Xinjiang Police Files come from the computers of the Chinese authorities. It is the largest data leak on Chinese state-run re-education camps to have been made public outside of China to date. In May 2022, the BBC published summaries of the Xinjiang Police Files, which were published during the first visit by a UN human rights commissioner to China in 14 years. By combining the photographs of some 5,000 Uyghurs contained in the data with other data in the hack, details of over 2,800 detentions emerged. Other documents in the leak included police protocols for running an internment camp.

Camp facilities

In urban areas, most of the camps were converted from existing vocational schools, CCP schools, ordinary schools, or other official buildings, while in suburban or rural areas the majority of camps were specially built for the purposes of re-education. These camps are guarded by armed forces or special police and equipped with prison-like gates, surrounding walls, security fences, surveillance systems, watchtowers, guard rooms, and facilities for armed police. While there is no public, verifiable data on the number of camps, there have been various attempts to document suspected camps based on satellite imagery and government documents. On 15 May 2017, the Jamestown Foundation, a Washington, DC-based think tank, released a list of 73 government bids related to re-education facilities. On 1 November 2018, the Australian Strategic Policy Institute (ASPI) reported on suspected camps in 28 locations. On 29 November 2018, Reuters and Earthrise Media reported 39 suspected camps. The East Turkistan National Awakening Movement has reported an even larger number of camps.

In a 2018 report from the US government-funded Radio Free Asia, Awat County (Awati) was said to have three re-education camps. An RFA listener provided a copy of a "confidentiality agreement" requiring re-education camp detainees not to discuss the workings of the camps, and said local residents were instructed to tell members of re-education camp inspection teams visiting the No. 2 Re-education Camp that there was only one camp in the county. The RFA listener also said the No. 2 Re-education Camp had transferred thousands of detainees and removed barbed wire from the perimeter of the camp walls.

Boarding schools for the children of detainees

The detention of Uyghurs and other ethnic minorities has allegedly left many children without their parents. The Chinese government has allegedly held these children at a variety of institutions and schools, colloquially known as "boarding schools" (although not all are residential institutions), that serve as de facto orphanages. In September 2018, the Associated Press reported that thousands of boarding schools were being built. According to the Chinese Department of Education, children as young as eight are enrolled in these schools.
According to Adrian Zenz and the BBC in 2019, children of detained parents in boarding schools were penalized for failing to speak Mandarin Chinese and prevented from exercising their religion. In a paper published in the Journal of Political Risk, Zenz calls the effort a "systematic campaign of social re-engineering and cultural genocide". Human Rights Watch said that the children detained at child welfare facilities and boarding schools were held without parental consent or access. In December 2019, The New York Times reported that approximately 497,000 elementary and junior high school students were enrolled in these boarding schools. It also reported that students were only allowed to see family members once every two weeks and were forbidden from speaking the Uyghur language.

Locations

Numerous locations have been identified as re-education camps. The Australian Strategic Policy Institute, whose funding comes primarily from the Australian Government, with overseas funding primarily from the US State Department and Department of Defense, has identified more than 380 "suspected detention facilities", including:

Camps in Akto County (Aktu, Aketao), Kizilsu Kyrgyz Autonomous Prefecture
Four detention centers in Aksu City (Akesu), Aksu Prefecture
Artux City Vocational Skills Education Training Service Center in Artux, Kizilsu Prefecture
Jiashi County Secondary Vocational School in Payzawat County (Jiashi), Kashgar Prefecture
Three detention centers in Kalpin County (Kelpin, Keping), Aksu Prefecture
Eight vocational training centres in Lop County (Luopu), Hotan Prefecture
Lop County No. 4 Vocational Skills Education and Training Center
Maralbexi County (Bachu County) re-education camp in Kashgar Prefecture
Eight camps in Turfan Prefecture
No. 4 Training Center (on the road between Turpan and Toksun County)
Three re-education camps in Uqturpan County (Uchturpan, Wushi), Aksu Prefecture
Yutian county vocational training centre in Yutian County (Keriya), Hotan Prefecture, among the largest of the camps

Camp detainees

The mass internment of Uyghurs and other Turkic Muslims in the camps has become the largest-scale arbitrary detention of ethnic and religious minorities since World War II. Many media outlets have reported that hundreds of thousands of Uyghurs, as well as Kazakhs, Kyrgyz, and other ethnic minorities, are held in the camps. Radio Free Asia, a news service funded by the US government, estimated in January 2018 that 120,000 Uyghurs were being held in political re-education camps in Kashgar Prefecture alone at the time. In 2018, local government authorities in Qira County expected to have almost 12,000 detainees in vocational camps and detention centres, and some projects related to the centres outstripped budgetary limits.

Reports of Uyghurs living or studying abroad being detained upon return to Xinjiang are common, which is thought to be connected to the re-education camps. Many living abroad have gone for years without being able to contact their family members still in Xinjiang, who may be detainees. Uyghur political figure Rebiya Kadeer, who has been in exile since 2005, has had as many as 30 relatives detained or disappeared, including her children, grandchildren, and siblings, according to Amnesty International. It is unclear when they were taken away. In February 2021, two of Kadeer's granddaughters appeared in a video on Twitter denying abuses and telling her not to be "fooled again by those bad foreigners".
On 13 July 2018, Sayragul Sauytbay, an ethnic Kazakh Chinese national and former employee of the Chinese state, appeared in a court in the city of Zharkent, Kazakhstan, accused of illegally crossing the border between the two countries. During the trial she spoke about her forced work at a re-education camp holding 2,500 ethnic Kazakhs. Her lawyer argued that if she were extradited to China, she would face the death penalty for having exposed the re-education camps in a Kazakh court. Her testimony about the re-education camps became the focus of the court case in Kazakhstan, which also tested the country's ties with Beijing. On 1 August 2018, Sauytbay was released with a six-month suspended sentence and directed to check in regularly with police. She applied for asylum in Kazakhstan to avoid deportation to China, but Kazakhstan refused her application. On 2 June 2019, she flew to Sweden, where she was subsequently granted political asylum.

According to a Radio Free Asia interview with an officer at the Onsu County police station, as of August 2018, 30,000 persons – about one in six Uyghurs in the county, or approximately 16% of its overall population – were detained in re-education camps. Russian-American Gene Bunin created the Xinjiang Victims Database to collect public testimonies on people detained in the camps, and its content has been referenced in articles by Al Jazeera, RFA, Foreign Policy, the Uyghur Human Rights Project, Amnesty International, and Human Rights Watch. On 14 January 2023, the database included photos of Hong Kong actors Andy Lau and Chow Yun-fat in a list of police officers responsible for rounding up "thousands of documented victims", which aroused suspicion on Twitter about the database's authenticity.

Writing in the Journal of Political Risk in July 2019, independent researcher Adrian Zenz estimated a speculative upper limit of 1.5 million on the number of people detained in Xinjiang re-education camps. In November 2019, Zenz estimated that the number of internment camps in Xinjiang had surpassed 1,000. Also in November 2019, George Friedman estimated that one in ten Uyghurs were being detained in re-education camps. When the BBC was invited to the camps in June 2019, officials there told them the detainees were "almost criminals" who could choose "between a judicial hearing or education in the de-extremification facilities". The Globe and Mail reported in September 2019 that some Han Chinese and Christian Uyghurs in Xinjiang who had disputes with local authorities or expressed politically unwelcome thoughts had also been sent to the camps. Anonymous drone footage posted on YouTube in September 2019 showed kneeling, blindfolded inmates; an analyst at the Australian Strategic Policy Institute said the footage may have shown an inmate transfer at a train station near Korla and may have come from a re-education camp.

Anar Sabit, an ethnic Kazakh from Kuytun living in Canada, was imprisoned in 2017 for having gone abroad, after returning home following the death of her father. She found that other minorities were interned for offenses such as using forbidden technology (WhatsApp, a VPN) or travelling abroad, but that even a Uyghur working for the Communist Party as a propagandist could be interned for the offense of having been booked into a hotel by an airline with others who were under suspicion.
According to an anonymous Uyghur local government employee quoted in an article by the US government-sponsored Radio Free Asia, during Ramadan 2020 (23 April to 23 May), residents of Makit County (Maigaiti), Kashgar Prefecture, were told they could face punishment for religious fasting, including being sent to a re-education camp. According to an official report by the Chinese government, 1.3 million people received "vocational training" sessions annually between 2014 and 2019. Waterboarding, mass rape, and sexual abuse are reported to be among the forms of torture used as part of the indoctrination process at the camps.

Testimonies about treatment

Officially, the camps are known as Vocational Education and Training Centers, informally as "schools", and they are described by some officials as "hospitals" where inmates are treated for the "disease" of "extremist ideology". According to internment officials quoted in the Xinjiang Daily (a Communist Party-run newspaper), while "requirements for our students" are "strict ... we have a gentle attitude, and put our hearts into treating them"; being in one "is actually like staying at a boarding school." The newspaper quoted a former inmate as stating that during his internment he had realized he had been "increasingly drifting away from 'home'" under the influence of extremism: "With the government's help and education, I've returned. ... our lives are improving every day. No matter who you are, first and foremost you are a Chinese citizen." Testimonies from freed inmates outside Communist Party literature have been considerably different.

Kayrat Samarkand, a Kazakh citizen who had migrated from Xinjiang, was detained in one of the internment camps in the region for three months for visiting neighboring Kazakhstan. On 15 February 2018, Kazakh Foreign Minister Kairat Abdrakhmanov sent a diplomatic note to the Chinese Foreign Ministry, the same day that Samarkand was freed from custody. After his release, Samarkand said that he faced endless brainwashing and humiliation, and that he was forced to study communist propaganda for hours every day and chant slogans giving thanks and wishing a long life to Xi Jinping.

Mihrigul Tursun, a Uyghur woman who was detained in China, spoke of beatings and torture after escaping one of the camps. Having moved to Egypt, she traveled to China in 2015 to spend time with her family and was immediately detained and separated from her infant children. When Tursun was released three months later, one of the triplets had died and the other two had developed health problems. Tursun said the children had been operated on. She was arrested for a second time about two years later. Several months later, she was detained a third time and spent three months in a cramped prison cell with 60 other women, having to sleep in turns, use the toilet in front of security cameras, and sing songs praising the Chinese Communist Party. Tursun said she and other inmates were forced to take unknown medication, including pills that made them faint and a white liquid that caused bleeding in some women and loss of menstruation in others. Tursun said nine women from her cell died during her three months there. One day, Tursun recalled, she was led into a room and placed in a high chair, and her legs and arms were locked in place. "The authorities put a helmet-like thing on my head, and each time I was electrocuted, my whole body would shake violently and I would feel the pain in my veins," Tursun said in a statement read by a translator.
"I don't remember the rest. White foam came out of my mouth, and I began to lose consciousness," Tursun said. "The last word I heard them saying is that you being an Uyghur is a crime." She was eventually released so that she could take her children to Egypt, but she was ordered to return to China. Once in Cairo, Tursun contacted U.S. authorities and, in September, went to the United States and settled in Virginia. China's Foreign Ministry Spokesperson Hua Chunying has stated that Tursun was taken into custody by police on "suspicion of inciting ethnic hatred and discrimination" for a period lasting 20 days, but denies that Tursun was detained in a re-education camp. Former inmates say that they are required to learn to sing the national anthem of China and communist songs. Punishments, like being placed in handcuffs for hours, waterboarding, or being strapped to "tiger chair" (a metal contraption) for long periods of time, are allegedly used on those who fail to follow. Anar Sabit, a cooperative inmate who had a relatively minor offense of foreign travel, described her confinement in the women's section as prison-like and marked by bureaucratic rigidity but said that she was not beaten or tortured . Before and after her internment, Sabit said that she experienced what Chinese sometimes call gui da qiang, or 'ghost walls' "that confuse and entrap travelers". After her release from internment, she said that she remains a "focus person" in her hometown of Kuytun where she lives with her uncle's family. She described the town as resembling an "open air prison" due to the careful monitoring by cameras, sensors, police, and the neighborhood residential committee, and that she feels shunned by almost all friends and family and worries that she will endanger anyone who helps her. After Sabit moved out of her uncle's house, Sabit lived in the dormitory of the neighborhood residential committee who she said threatened to return her to the internment camp for speaking out of turn. According to detainees, they were also forced to drink alcohol and eat pork, which are forbidden in Islam. Some reportedly received unknown medicines while others attempted suicide. There have also been deaths reported due to unspecified causes. Detainees have alleged widespread sexual abuse, including forced abortions, forced use of contraceptive devices and compulsory sterilization. It has been reported that Han officials have been assigned to reside in the homes of Uyghurs who are in the camps. Rushan Abbas of the Campaign for Uyghurs argues that the actions of the Chinese government amount to genocide according to United Nations definitions which are laid out in the Genocide Convention. According to Time, Sarsenbek Akaruli, 45, a veterinarian and trader from Ili, Xinjiang, was arrested in Xinjiang on 2 November 2017. As of November 2019, he is still in a detention camp. According to his wife Gulnur Kosdaulet, Akaruli was put in the camp after police found the banned messaging app WhatsApp on his cell phone. Kosdaulet, a citizen of neighboring Kazakhstan, has traveled to Xinjiang on four occasions to search for her husband but could not get help from friends in the Chinese Communist Party. Kosdaulet said of her friends, "Nobody wanted to risk being recorded on security cameras talking to me in case they ended up in the camps themselves." 
Between May and June 2017, a woman native to Maralbexi County (Bachu) named Mailikemu Maimati (also spelled Mamiti) was detained in the county's re-education camp, according to her husband, Mirza Imran Baig. He said that after her release, she and their young son were not given their passports by Chinese authorities.

According to Time, former prisoner Bakitali Nur, 47, a native of Khorgos, Xinjiang, on the Sino-Kazakh border, was arrested because authorities were suspicious of his frequent trips abroad. He reported spending a year in a cell with seven other prisoners. The prisoners sat on stools seventeen hours a day, were not allowed to talk or move, and were under constant surveillance; movement carried the punishment of being put into stress positions for hours. After release, he was forced to make daily self-criticisms, report on his plans, and work for negligible payment in government factories. In May 2019, he escaped to Kazakhstan. Nur summarized his experience in jail and under constant monitoring after his release by saying, "The entire system is designed to suppress us."

According to Radio Free Asia, Ghalipjan, a 35-year-old Uyghur man from Shanshan/Pichan County who was married and had a five-year-old son, died in a re-education camp on 21 August 2018. Authorities reported that his death was due to a heart attack, but the head of the Ayagh neighborhood committee said that he was beaten to death by a police officer. His family was not allowed to carry out Islamic funeral rites. According to the Xinjiang Police Files, Chen Quanguo issued an order in 2018 to shoot detainees attempting to escape.

In June 2018, Dolkun Isa, president of the World Uyghur Congress (WUC), was told that his mother, Ayhan Memet, 78, had died two months earlier while in detention at a "political re-education camp"; he was unsure in which of the many such camps she had been incarcerated.

According to a 2018 report in The New York Times, Abdusalam Muhemet, 41, who ran a restaurant in Hotan before fleeing China in 2018, said he spent seven months in prison and more than two months in a camp in Hotan in 2015 without ever being criminally charged. Muhemet said that on most days, the inmates at the camp would assemble to hear long lectures by officials who warned them not to embrace Islamic radicalism, support Uyghur independence, or defy the Communist Party.

In an interview with Radio Free Asia, an officer at the Kuqa (Kuchar, Kuche) County Police Department reported that from June to December 2018, 150 people had died at the No. 1 Internment Camp in the Yengisher district of Kuqa County, corroborating earlier reports attributed to Himit Qari, the former area police chief.

In August 2020, the BBC released texts and a video smuggled out of a re-education camp by Merdan Ghappar, a former model of Uyghur heritage. Ghappar had been allowed access to personal effects and used a phone to take videos of the camp in which he was interned. In February 2021, the BBC published further eyewitness accounts of mass rape and torture in the camps. Sayragul Sauytbay, a teacher who was forced to work in the camps, told the BBC that "rape was common" and that the guards "picked the girls and young women they wanted and took them away". She also described a woman who was brought to make a forced confession in front of 100 other detainees while the police took turns raping her as she cried out for help.
In 2018, a Globe and Mail interview with Sauytbay found that she did not personally see violence at the camp, but did witness hunger and a complete lack of freedom. Tursunay Ziawudun, a Uyghur who fled to Kazakhstan and then the US, told the BBC that she was raped three times in the camps and kicked in the abdomen during interrogations. In a 2020 interview with BuzzFeed News, Ziawudun reported that she "wasn't beaten or abused" while inside, but was instead subjected to long interrogations, forced to watch propaganda, kept in cold conditions with poor food, and had her hair cut.

Forced labor

Adrian Zenz reported that the re-education camps also function as forced labor camps in which Uyghurs and Kazakhs produce various products for export, especially those made from cotton grown in Xinjiang. The growing of cotton is central to the region's industry, as "43 percent of Xinjiang's exports are apparel, footwear, or textiles". In 2018, 84% of China's cotton was produced in Xinjiang. Since cotton is grown and processed into textiles in Xinjiang, a November 2019 article from The Diplomat said that "the risk of forced labor exists at multiple steps in the creation of a product". Academics Zhun Xu and Fangfei Lin write that the conclusion of forced labor in cotton production in Xinjiang is insufficiently supported. They cite the historic significance of Uyghur agricultural workers as a long-standing labor force for manual cotton harvesting, and staffing companies' widespread recruitment of Uyghur workers due to lower travel costs. In their view, "[T]he labor demand of Uyghur seasonal cotton pickers in south Xinjiang is largely decided by its relatively low degree of agricultural capitalization, not due to the 'special treatment' towards labor migrants of a certain ethnic minority."

In 2018, the Financial Times reported that the Yutian / Keriya county vocational training centre, among the largest of the Xinjiang re-education camps, had opened a forced labour facility comprising eight factories spanning shoemaking, mobile phone assembly, and tea packaging, paying a base monthly salary. Between 2016 and 2018, the centre expanded 269 percent in total area. The Australian Strategic Policy Institute reported that from 2017 to 2019 more than 80,000 Uyghurs were shipped elsewhere in China for factory jobs under conditions that "strongly suggest forced labour". Conditions in these factories were consistent with forced labor as defined by the International Labour Organization. In 2021, Esquel Group, a former supplier for Nike, sued the United States government for placing it on a sanctions list over forced labor allegations in Xinjiang. It was later removed from the sanctions list due to a lack of evidence provided by the US Commerce Department.

In October 2021, the CBC, in collaboration with the Investigative Reporting Project Italy and The Guardian, reported on the export of tomato products from Xinjiang tied to Uyghur forced labor. The report identified tomato products being exported to other countries, such as Italy, to be repackaged for sale in markets such as Canada.

In June 2021, human rights reports indicated that the costs of solar modules had been depressed in recent years due to Chinese forced labor practices in the solar module and wind turbine export industries. Globally, China dominated manufacturing, installation, and exports in the field.
The practice of forced labor was blamed for multiple bankruptcies of firms in the US and German solar industries over the decade 2010–2020. In one report on a bankruptcy, the cost of raw materials for manufacturing panels was put at 30% of total manufacturing costs, and it was argued that Chinese manufacturers do not pay labor costs.

Notable detainees

Ablajan Awut Ayup, rapper
Merdan Ghappar, model
Adil Mijit, comedian, suspected detainee
Mihrigul Tursun (former detainee)

Responses from China

Prior to October 2018, when international media asked about the re-education camps, China's Ministry of Foreign Affairs said that it had not heard of the situation. The Chinese government officially legalized re-education camps in Xinjiang in October 2018.

On 12 August 2018, a Chinese state-run tabloid, the Global Times, defended the crackdown in Xinjiang after a UN anti-discrimination committee raised concerns over China's treatment of Uyghurs. According to the Global Times, China prevented Xinjiang from becoming "China's Syria" or "China's Libya", and local authorities' policies saved countless lives and avoided a "great tragedy".

On 13 August 2018, at a UN meeting in Geneva, the delegation from China told the United Nations Human Rights Committee that "There is no such thing as re-education centers in Xinjiang and it is completely untrue that China put 1 million Uyghurs into re-education camps". The Chinese delegation said that "Xinjiang citizens, including the Uyghurs, enjoy equal freedom and rights" and that "Some minor offenders of religious extremism or separatism have been taken to 'vocational education' and employment training centers with a view to assisting in their rehabilitation".

On 14 August 2018, after a UN human rights committee raised concern over reported mass detentions of ethnic Uyghurs, Chinese Foreign Ministry spokesman Lu Kang said that "anti-China forces had made false accusations against China for political purposes and a few foreign media outlets misrepresented the committee's discussions and were smearing China's anti-terror and crime-fighting measures in Xinjiang".

On 21 August 2018, Liu Xiaoming, the Ambassador of China to the United Kingdom, wrote an article in response to a Financial Times report entitled "Crackdown in Xinjiang: Where have all the people gone?". Liu's response said: "The education and training measures taken by the local government of Xinjiang have not only effectively prevented the infiltration of religious extremism and helped those lost in extremist ideas to find their way back, but also provided them with employment training in order to build a better life."

On 10 September 2018, China's Foreign Ministry spokesperson Geng Shuang condemned a report about the re-education camps issued by Human Rights Watch, saying: "This organisation has always been full of prejudice and distorting facts about China." Geng added: "Xinjiang is enjoying overall social stability, sound economic development and harmonious co-existence of different ethnic groups. The series of measures implemented in Xinjiang are meant to improve stability, development, solidarity and people's livelihood, crack down on ethnic separatist activities and violent and terrorist crimes, safeguard national security, and protect people's life and property."

On 11 September 2018, China called for UN human rights chief Michelle Bachelet to "respect its sovereignty", after she urged China to allow monitors into Xinjiang and expressed concern about the situation there.
Chinese Foreign Ministry spokesman Geng Shuang said: "China urges the U.N. human rights high commissioner and office to scrupulously abide by the mission and principles of the U.N. charter, respect China's sovereignty, fairly and objectively carry out its duties, and not listen to one-sided information".

On 16 October 2018, a CCTV prime-time program aired a 15-minute episode on what were termed Xinjiang's "Vocational Skills Educational Training Centers", featuring Muslim internees. Sinologist Manya Koetse documented that it received a mixture of supportive and critical responses on the Sina Weibo social media platform.

In March 2019, against the background of the US considering sanctions against Chen Quanguo, the region's most senior Communist Party official, Xinjiang governor Shohrat Zakir rejected international claims of concentration camps and re-education camps, instead comparing the institutions to boarding schools. On 18 March 2019, the Chinese government released a white paper about counter-terrorism and de-radicalization in Xinjiang. The white paper states, "A country under the rule of law, China respects and protects human rights in accordance with the principles of its Constitution." It also contends that Xinjiang has not had violent terrorist cases for more than two consecutive years, that extremist penetration has been effectively curbed, and that social security has improved significantly.

In November 2019, the Chinese ambassador to the United Kingdom responded to questions about newly leaked documents on Xinjiang by calling the documents "fake news". On 6 December 2019, China's Foreign Ministry spokesperson Hua Chunying accused the US of hypocrisy on human rights issues, pointing to allegations of torture at the Guantanamo Bay detention camp.

In September 2020, Xi Jinping acclaimed the success of his policies in Xinjiang at a two-day conference expected to set the country's policy for the coming years. The Chinese government published a white paper defending its vocational training centers and stating that the regional government had organised "employment-oriented training" and labour skills for 1.29 million workers a year from 2014 to 2019.

On 7 January 2021, the Chinese embassy in the United States published a tweet that said: "The minds of (Uighur) women in Xinjiang were emancipated and gender equality and reproductive health were promoted, making them no longer baby-making machines," which drew sharp criticism from human rights groups as well as Sam Brownback, the US envoy on international religious freedom. The tweet was subsequently deleted, and Twitter locked the embassy's account.

In March 2021, following sanctions imposed on several Chinese officials by the European Union, the United States, the United Kingdom, and Canada, the Chinese government responded with sanctions on several individuals and groups that had criticized China over the camps, including five European Parliament members (among them Reinhard Bütikofer, the head of the European Parliament's delegation to China), German scholar Adrian Zenz, and the non-profit Alliance of Democracies Foundation.

In June 2021, ProPublica and The New York Times documented a Chinese government-backed propaganda campaign on Twitter and YouTube, based on an analysis of more than 5,000 videos. The videos showed Uyghurs in Xinjiang denying abuses and scolding foreign officials and multinational corporations who had questioned China's human rights record in the region.
Some of the accounts behind the videos were removed by YouTube as part of the company's efforts to combat spam and influence operations. In October 2022, the Australian Strategic Policy Institute documented a number of CCP-backed Uyghur influencers in Xinjiang posting propaganda videos on Chinese and Western social media that pushed back against abuse allegations. Some of the influencers' accounts were suspended by Twitter for alleged inauthenticity.

International reactions

Reactions at the UN

On 8 July 2019, 22 countries issued a statement calling for an end to mass detentions in China and expressing their concerns about widespread surveillance and repression. Fifty countries issued a counter-statement, reportedly coordinated by Algeria, criticizing the practice of "politicizing human rights issues", stating that "China has invited a number of diplomats, international organizations officials and journalists to Xinjiang" and that "what they saw and heard in Xinjiang completely contradicted what was reported in the media." The counter-statement also commended China's "remarkable achievements in the field of human rights", claiming that "safety and security has returned to Xinjiang and the fundamental human rights of people of all ethnic groups there are safeguarded." Qatar formally withdrew its name from the counter-statement on 18 July, six days after it was published, expressing a desire "to maintain a neutral stance and we offer our mediation and facilitation services."

In October 2019, 23 countries issued a joint statement urging China to "uphold its national laws and international obligations and commitments to respect human rights, including freedom of religion or belief," and to refrain from "arbitrary detention of Uyghurs and members of other Muslim communities". In response, on the same day, 54 countries (including China itself) issued a joint statement reiterating that the work of human rights in the United Nations should be conducted in a "non-politicized manner" and supporting China's Xinjiang policies. The statement spoke positively of the results of counter-terrorism and de-radicalization measures in Xinjiang and held that these measures had effectively safeguarded the basic human rights of people of all ethnic groups. Civil society groups in Muslim-majority countries whose governments have supported China's policies in Xinjiang have been noted to be uncomfortable with their governments' stance and have organized boycotts, protests, and media campaigns concerning the Uyghurs.

In October 2020, Axios reported that more countries at the UN had joined the condemnation of China over the Xinjiang abuses. The total number of countries that denounced China increased to 39, while the total number of countries that defended China decreased to 45. Notably, 16 countries that defended China in 2019 did not do so in 2020. At the 46th session of the Human Rights Council, Cuba delivered a joint statement supporting China, signed by 64 countries.
Reactions by international organizations

Governmental organizations

On 21 May 2018, during the resumed session of the Committee on Non-Governmental Organizations in the United Nations, Kelley Currie, the United States representative to the United Nations for economic and social affairs, raised the issue of the mass detention of Uyghurs in re-education camps, saying that "reports of mass incarcerations in the Xinjiang were documented by looking at Chinese procurement requests on Chinese websites requesting Chinese companies to tender offers to build political re-education camps".

On 10 August 2018, United Nations human rights experts expressed alarm over many credible reports that China had detained a million or more ethnic Uyghurs in Xinjiang. Gay McDougall, a member of the United Nations Committee on the Elimination of Racial Discrimination, said that "In the name of combating religious extremism, China had turned Xinjiang into something resembling a massive internment camp, shrouded in secrecy, a sort of no-rights zone".

On 10 September 2018, UN human rights chief Michelle Bachelet called on China to ease restrictions on her and her office's team, urging China to allow observers into Xinjiang and expressing concern about the situation there. She said, "The UN rights group had shown that Uyghurs and other Muslims are being detained in camps across Xinjiang and I expect discussions with Chinese officials to begin soon". In June 2019, UN counter-terrorism chief Vladimir Voronkov visited Xinjiang and found nothing incriminating at the camps.

On 1 November 2019, ten UN Special Rapporteurs, together with the vice-chair of the UN Working Group on Arbitrary Detention and the chair-rapporteur of the UN Working Group on Enforced or Involuntary Disappearances, released a report on the effect and application of the Counter-Terrorism Law of China and its Regional Implementing Measures in Xinjiang, which states that:

The De-Extremism Regulations have been criticised by UN Special Procedures mandates for their lack of compliance with international human rights standards. Following the introduction of those laws, an estimated million Uyghurs and other Turkic Muslims have reportedly been sent to internment facilities under the guise of "counterterrorism and de-extremism" policies since 2016. (p. 4)

... In this context, previous communications by the Special Rapporteur on freedom of religion or belief and the Working Group on Arbitrary Detention have voiced their concern that the "re-education facilities", sometimes termed "vocational training centres", due to their coercive character, amount to detention centres. It is alleged that between 1 million and 1.5 million ethnic Uyghurs and other minorities in Xinjiang may have been arbitrarily forced into these facilities, where there have been allegations of deaths in custody, physical and psychological abuse and torture, as well as lack of access to medical care. It is also reported that in several cases they have been denied free contact with their families and friends or been unable to inform them of their location and denied their basic freedom of movement. (p. 8)

In June 2020, nearly 50 UN independent experts stated that they had repeatedly communicated to the Government of the People's Republic of China their alarm regarding the repression of fundamental freedoms in China. They also raised their concerns regarding a range of issues of grave concern, including the collective repression of the population, especially religious and ethnic minorities, in Xinjiang and Tibet.
In March 2021, sixteen UN human rights experts raised grave concerns about the "alleged detention and forced labour of Muslim Uyghurs in China". The experts were appointed by the UN Human Rights Council, and several of them said they had "received information that connected over 150 domestic Chinese and foreign domiciled companies to serious allegations of human rights abuses against Uyghur workers". The experts also called for unrestricted access to China in order to conduct "fact-finding missions", while urging "global and domestic companies to closely scrutinize their supply chains".

On 11 September 2018, Federica Mogherini, the High Representative of the European Union for Foreign Affairs and Security Policy, raised the re-education camps issue in the European Parliament, saying:

The most outstanding disagreement we have with China concerns the human rights situation in China, as underlined in your Report. We also focused on the situation in Xinjiang, especially the expansion of political re-education camps. And we discussed the detention of human rights defenders, including particular cases.

On 19 December 2019, the European Parliament passed a non-binding resolution condemning the mass incarceration of Uyghurs and calling on EU companies with supply chains in the region to ensure that they are not complicit in crimes against humanity. On 17 December 2020, the European Parliament adopted a resolution strongly condemning China over allegations of forced labor by ethnic and religious minorities. In the statement, the EU body said Parliament "strongly condemns the government-led system of forced labor, in particular the exploitation of Uyghur, ethnic Kazakh and Kyrgyz, and other Muslim minority groups, in factories both within and outside of internment camps in Xinjiang, as well as the transfer of forced laborers to other Chinese administrative divisions, and the fact that well-known European brands and companies have been benefiting from the use of forced labor."

On 22 March 2021, the European Union, joined by the United States, the United Kingdom, and Canada, imposed sanctions on four senior Chinese officials and the Public Security Bureau of the Xinjiang Production and Construction Corps over the human rights abuses of Uyghurs in Xinjiang. These were the first EU sanctions against China since the 1989 Tiananmen Square massacre.

World Bank

On 11 November 2019, the World Bank issued a statement:

In line with standard practice, immediately after receiving a series of serious allegations in August 2019 in connection with the Xinjiang Technical and Vocational Education and Training Project, the Bank launched a fact-finding review, and World Bank senior managers traveled to Xinjiang to gather information directly. After receiving the allegations, no disbursements were made on the project. The team conducted a thorough review of project documents... The review did not substantiate the allegations. In light of the risks associated with the partner schools, which are widely dispersed and difficult to monitor, the scope and footprint of the project is being reduced. Specifically, the project component that involves the partner schools in Xinjiang is being closed.

Organisation of Islamic Cooperation

On 1 March 2019, the OIC produced a document which "commends the efforts of the People's Republic of China in providing care to its Muslim citizens." A coalition of American Muslim groups criticized the OIC's decision and accused member states of being influenced by Chinese power.
The groups included the Council on American-Islamic Relations. Human rights organisations On 10 September 2017, Human Rights Watch released a report that said "The Chinese government should immediately free people held in unlawful 'political education' centers in Xinjiang and shut them down." On 9 September 2018, Human Rights Watch released a 117-page report, "'Eradicating Ideological Viruses': China's Campaign of Repression Against Xinjiang's Muslims", which accused China of the systematic mass detention of tens of thousands of ethnic Uyghurs and other Muslims in political re-education camps, without charge or trial, and presented new evidence of the Chinese government's mass arbitrary detention, torture, and mistreatment, and of its increasingly pervasive controls on daily life. The report also urged foreign governments to pursue a range of multilateral and unilateral actions against China, including "targeted sanctions" against those responsible. On 7 January 2020, CAIR National Executive Director Nihad Awad condemned a tweet by the Chinese embassy in the United States, saying that China was openly admitting to and celebrating forced sterilizations and abortions of Muslim Uyghur women by claiming it had "emancipated" them from being "baby-making machines". Amnesty International published a dedicated website and an extensive report in 2021. Amnesty estimates that up to 1 million people have been imprisoned and concludes: "The evidence Amnesty International has gathered provides a factual basis for the conclusion that the Chinese government has committed at least the following crimes against humanity: imprisonment or other severe deprivation of physical liberty in violation of fundamental rules of international law; torture; and persecution." Their full report includes recommendations to the Chinese government, the UN and the international community in general. Reactions by countries In September 2019, Australian Foreign Minister Marise Payne stated, "I have previously raised Australia's concerns about reports of mass detentions of Uyghurs and other Muslim peoples in Xinjiang. We have consistently called for China to cease the arbitrary detention of Uyghurs and other Muslim groups. We have raised these concerns—and we will continue to raise them—both bilaterally and in relevant international meetings." In January 2020, the Bahrain Council of Representatives called on the international community to protect Uyghur Muslims in China and "expressed deep concern over the inhumane and painful conditions to which Uyghur Muslims in China are subjected, including the detention of more than one million Muslims in mass detention camps, denial of their most basic rights, the removal of their children, wives and families, their prevention of prayer, worship and religious practices, confronting murder, ill-treatment and torture." On 5 March 2021, a group of 65 member states, led by Belarus, expressed their support for China's Xinjiang policy and opposed the "unfounded allegations against China based on disinformation" at the 44th session of the Human Rights Council. On 15 March 2021, the Walloon Parliament voted to approve a motion condemning the "unacceptable" practices introduced by the Chinese government, including the exploitation of Uyghurs and all other ethnic minorities, in Xinjiang. All parties voted in favor, with the exception of the Workers' Party, which abstained.
On 22 February 2021, the Canadian House of Commons voted 266–0 to approve a motion that formally recognizes that China is committing genocide against its Muslim minorities. Prime Minister Justin Trudeau and his cabinet did not vote. On 6 October 2020, Cuba delivered a joint statement with 45 other countries voicing their support of China's measures in Xinjiang. Egypt signed both statements at the UN (in July and October 2019) that supported China's Xinjiang policies. Egypt has been accused of deporting Uyghurs to China. In November 2019, French Foreign Minister Jean-Yves Le Drian called on China to close down the camps. He also called on China to permit the UN High Commissioner for Human Rights to visit Xinjiang at the earliest possible date to make a report on the situation. The French Ministry of Europe and Foreign Affairs issued a statement to this effect on 27 November. In December 2020, France said it would oppose the proposed Comprehensive Agreement on Investment between the European Union and China over the use of forced labour of Uyghurs. In December 2018, leaders of the Muslim organization Muhammadiyah issued an open letter citing reports of violence against the "weak and innocent" community of Uyghurs and asking Beijing to explain. Soon after, Beijing responded by inviting more than a dozen top Indonesian religious leaders to visit Xinjiang and the camps, and criticism greatly diminished. Since then, Indonesia's largest Muslim organizations have purportedly treated reports of widespread human rights violations in Xinjiang with skepticism, dismissing them as U.S. propaganda. In October 2022, the Indonesian delegation to the UN Human Rights Council voted against a debate in the chamber on the treatment of Uyghurs in Xinjiang, on the grounds that it "will not yield meaningful progress", but Ambassador Febrian Ruddyard also stated, "As the world's largest Muslim country and a vibrant democracy, we cannot close our eyes to the plight of our Muslim brothers and sisters." In a December 2016 report, the research unit of the Iranian state-owned television's external services said that China is not opposed to Muslims, but instead to pro-Saudi radical ideology. In August 2020, Ali Motahari, a former member of the Iranian Parliament, tweeted that the Iranian government had kept silent about the situation of Muslims in China because the government of Iran needs China's economic support. He said that this silence has been humiliating for the Islamic Republic. Critics of Motahari responded that China was opposed to Wahhabism, and had no problem with Islam or Chinese Muslims. Iran signed an October 2019 letter that publicly expressed support for China's treatment of Uyghurs. On 26 November 2019, Japanese Foreign Minister Toshimitsu Motegi said Japan was "monitoring the human rights situation in the Xinjiang Uygur Autonomous Region with concern" and that he had brought up Japan's position with State Councilor Wang Yi in their meeting on 25 November. In November 2017, Kazakhstan's Ambassador to China Shahrat Nuryshev met with Chinese Vice Minister of Foreign Affairs Li Huilai regarding Kazakh diaspora issues. On 15 February 2018, Kazakh Foreign Minister Kairat Abdrakhmanov sent a diplomatic note to the Chinese Foreign Ministry, the same day that Samarkand, a Kazakhstani citizen, was released from a re-education camp. From 17 to 19 April 2018, Kazakh First Deputy Foreign Minister Mukhtar Tleuberdi visited Xinjiang to meet with local officials. On 20 May 2021, the Seimas, the parliament of Lithuania, passed a non-binding resolution condemning China's treatment of Uyghurs.
In September 2020, Malaysia's Muhyiddin government confirmed that it would not extradite ethnic Uyghurs to China if Beijing requested it, continuing the policy set by the Mahathir administration. Although it is the government of Malaysia's stance not to get involved in Chinese internal affairs, it stated that the oppression of Uyghurs in China could not be denied. Mohd Redzuan Md Yusof, a minister in the Prime Minister's Department, also stated that his government would grant free passage to those refugees who wished to settle in a third country. On 25 February 2021, the States General of the Netherlands declared China's treatment of the Uyghur ethnic minority a genocide, becoming the third country to do so. On 6 May 2021, the New Zealand Parliament passed a motion condemning China's treatment of the Uyghurs in Xinjiang, but fell short of calling it genocide, owing to opposition from the governing Labour Party, which would not pass the motion unless the term 'genocide' was removed. New Zealand Prime Minister Jacinda Ardern has raised the issue of the Uyghurs on numerous occasions, including in her 2019 meeting with CCP General Secretary Xi Jinping. She did not detail exactly what was said. In July 2019, New Zealand Foreign Minister Winston Peters, asked why New Zealand had signed the letter to the president of the United Nations Human Rights Council criticizing Beijing for its treatment of ethnic Uyghurs in the Xinjiang region, stated, "Because we believe in human rights, we believe in freedom and we believe in the liberty of personal beliefs and the right to hold them." In 2017, National MP Todd McClay represented his party in Beijing before a dialogue organised by the International Liaison Department of the Chinese Communist Party. McClay also referred to the Xinjiang internment camps as "vocational training centers", in line with CCP talking points. Pakistan signed both statements at the UN (in July and October 2019) that supported China's Xinjiang policies. On 19 January 2020, Pakistani Prime Minister Imran Khan was asked why he was not more outspoken about the situation of Uyghurs in China. He said that he had not been as outspoken primarily because the human rights situation in Kashmir and India's Citizenship Amendment Act were problems much larger in scale. He said that the second reason was that China has been a great friend of Pakistan and had helped Pakistan through its toughest time with the economic crisis, so that "the way we deal with China is that when we talk about things, we talk about privately. We do not talk about things with China in public right now because they are very sensitive. That's how they deal with issues." In July 2020, Xi Jinping met with Palestinian President Mahmoud Abbas to express Beijing's "full support" for the two-state solution to the Israeli–Palestinian conflict, saying that "China and Palestine are good brothers, good friends and good partners". Abbas then voiced support for China's "legitimate position on Hong Kong, Xinjiang and other matters concerning China's core interests." After the Palestinian ambassador to China, Fariz Mehdawi, visited Xinjiang in March 2021, he remarked on Chinese state media that he was impressed by the region's infrastructure and upkeep of mosques, saying "if you have to calculate it all, it's something like 2,000 inhabitants for one mosque. This ratio, we don't have it in our country. It's not available anywhere."
RFA journalist Shohret Hoshur wrote in response that Mehdawi was neglecting the harsh reality of the locals whom he had met, who had no ability to speak the truth under the watch of officials, adding that his true motivation seemed to be a shared anti-US agenda with China. On 4 February 2019, Russian Foreign Minister Sergey Lavrov said he was not aware of reports about political re-education camps in China's Xinjiang Uygur Autonomous Region, though he had seen the US actively raising the issue. In July 2019, Russia signed the letter supporting China at the UN Human Rights Council. On 9 October 2019, Lavrov said that "China has repeatedly given explanations concerning the accusations that you have mentioned probably citing our Western colleagues. We have no reason to take any steps other than the procedures that exist at the UN that I mentioned, such as at the Human Rights Council and its Universal Periodic Reviews." In February 2019, Saudi Arabia's Crown Prince Mohammad bin Salman defended China's use of the camps, saying "China has the right to carry out anti-terrorism and de-extremisation work for its national security." Saudi Arabia was among the 24 countries (excluding China) that backed China's position at the UN Human Rights Council in July 2019, and again at the UN General Assembly in October 2020. On 6 November 2018, during the UN Human Rights Council's Universal Periodic Review of China, Switzerland called on China to close down its detention camps in Xinjiang, to grant the UN High Commissioner for Human Rights unrestricted access to Xinjiang, and to allow an independent UN investigation of the detention camps. On 26 November 2019, the Federal Department of Foreign Affairs called on the Chinese government to address the concerns raised by many states and to allow the UN unhindered access to the region. In December 2019, the Syrian Ministry of Foreign Affairs and Expatriates defended China's actions in Xinjiang days after the US condemnation, describing that condemnation as "blatant interference by the US in the internal affairs of the People's Republic of China." The statement concluded that "Syria emphasizes the right of China to preserve its sovereignty, people, territorial integrity, and security and protect the security and property of the state and individuals." On 2 October 2018, Taiwan's Minister of Foreign Affairs, Joseph Wu, used the MOFA's official Twitter account to share a Radio Free Asia article titled "Xinjiang Authorities Secretly Transferring Uyghur Detainees to Jails Throughout China" and stated that the "relocation of Uyghurs to re-education camps around China warrants the world's attention." On 5 July 2019, Wu, again on Twitter, shared a BBC News article titled "China Muslims: Xinjiang schools used to separate children from families" and called on China to "Close the camps! Send the children home!" On 18 November 2019, the MOFA's official Twitter account shared a New York Times article titled "'Absolutely No Mercy': Leaked Files Expose How China Organized Mass Detentions of Muslims", saying, "This chilling NYTimes expose on the mass detention of Muslims by China is a must-read! Leaked internal documents tell the truth about the crackdown on ethnic minorities in Xinjiang, as well as the 'ruthless & extraordinary campaign' run by senior Communist Party officials."
In February 2019, after Turkish media had picked up rumors that the Uyghur musician Abdurehim Heyit had died in detention, the Spokesperson for the Turkish Foreign Ministry denounced China for "violating the fundamental human rights of Uyghur Turks and other Muslim communities in the Xinjiang Uygur Autonomous Region." In July 2019, Turkish journalists from Milliyet and Aydınlık interviewed Heyit in Ürümqi; he denied that Uyghurs had problems in China. In July 2019, Chinese state media reported that when Turkish President Erdoğan visited China, he said, "It is a fact that the people of all ethnicities in Xinjiang are leading a happy life amid China's development and prosperity." Turkish officials then claimed the remark had been mistranslated by the Turkish side, saying it should rather have read "hopes the peoples of China's Xinjiang live happily in peace and prosperity". Erdoğan also said that some people were seeking to "abuse" the Xinjiang crisis to jeopardize the "Turkish–Chinese relationship". Some Uyghurs in Turkey have expressed concerns that they may face deportation back to China. On 3 July 2018, the Rights Practice helped to organize a U.K. Parliamentary round-table on increased repression and forced assimilation in Xinjiang. Rahima Mahmut, an Uyghur singer and human rights activist, gave a personal testimony about the violations suffered by the Uyghur community. Dr. Adrian Zenz of the European School of Culture and Theology (Germany) outlined the evidence of a large-scale and sophisticated political re-education network, designed to detain people for long periods, whose existence the Chinese government officially denies. On 16 December 2020, the U.K. said there was credible, growing, and troubling evidence of forced labor among Uyghur Muslims in Xinjiang. Nigel Adams, Minister of State for Asia, told Parliament, "Evidence of forced Uyghur labor within Xinjiang, and in other parts of China, is credible, it is growing and deeply troubling to the UK government." Adams said firms had a duty to ensure their supply chains were free of forced labor. On 12 January 2021, the Foreign Secretary, Dominic Raab, announced that British businesses which fail to ensure their supply chains are free of slave labour could face fines. Raab appeared to be targeting China's mistreatment of internees in Xinjiang, saying it was Britain's "moral duty" to respond to the "far-reaching" evidence of human rights abuses being perpetrated in Xinjiang. On 23 April 2021, a group of MPs led by Sir Iain Duncan Smith passed a motion declaring the mass detention of Uyghur Muslims in Xinjiang province a genocide, making the United Kingdom the fourth country in the world to take such action. In response, the Chinese Embassy in London said "The unwarranted accusation by a handful of British MPs that there is 'genocide' in Xinjiang is the most preposterous lie of the century..." On 3 April 2018, U.S. Senator Marco Rubio and Representative Chris Smith sent a letter urging Ambassador to China Terry Branstad to launch an investigation into the reported mass detention of Uyghurs in political re-education camps in Xinjiang. On 26 July 2018, Vice President of the United States Mike Pence raised the re-education camps issue at the Ministerial to Advance Religious Freedom.
He said that "Sadly, as we speak as well, Beijing is holding hundreds of thousands, and possibly millions, of Uyghur Muslims in so-called 're-education camps', where they're forced to endure around-the-clock political indoctrination and to denounce their religious beliefs and their cultural identity as the goal." On 26 July 2018, the Congressional-Executive Commission on China, an independent agency of the U.S. government which monitors human rights and rule of law developments in the People's Republic of China, released a report that said as many as a million people are or have been detained in what are being called "political re-education" centers, the largest mass incarceration of an ethnic minority population in the world today. On 27 July 2018, The U.S. Embassy & Consulate in China released Ministerial to Advance Religious Freedom Statement on China, which mentioned the detention of hundreds of thousands, and possibly millions, of Uyghurs and members of other Muslim minority groups in "political re-education camps", and called the Chinese government to release immediately all those arbitrarily detained. On 28 August 2018, U.S. senator Marco Rubio and 16 other members of Congress urged the United States to impose sanctions under the Global Magnitsky Human Rights Accountability Act against Chinese officials who are responsible for human rights abuses in Xinjiang. In a letter to Secretary of State Mike Pompeo and Treasury Secretary Steven Mnuchin, they called for the sanctions on Chen Quanguo who is the current Communist Party Secretary of the Xinjiang (the highest post in an administrative unit of China) and six other Chinese officials and two businesses that make surveillance equipment in Xinjiang. U.S. Secretary of State Mike Pompeo criticized Iran's Supreme Leader Ayatollah Ali Khamenei for his refusal to condemn the Chinese government's repressions against the Uyghurs. On 3 May 2019, U.S. Assistant Secretary of Defense for Indo-Pacific Security Affairs Randall Schriver condemns the detention of Uyghurs as concentration camps. On 11 September 2019, the U.S. Senate unanimously passed the Uyghur Human Rights Policy Act. On 3 December 2019, the U.S. House of Representatives passed a stronger version of the Uyghur Human Rights Policy Act by a vote of 407 to 1. The bill was signed into law on 17 June 2020. On 8 January 2020, the Congressional-Executive Commission on China released its annual report, which stated that Chinese government actions in Xinjiang may constitute crimes against humanity. In April 2020, United States lawmakers from the Congressional-Executive Commission on China, led by Jim McGovern and Marco Rubio, introduced the Uyghur Forced Labor Prevention Act that aims to prevent the importation of Chinese products tied to evidence of unfree labor. In June 2020, Trump's former national security adviser John Bolton claimed that President Donald Trump told Chinese leader Xi Jinping that China's decision to detain Uyghurs in re-education camps was "exactly the right thing to do". US Congress passed the Uyghur Human Rights Policy Act which was signed into law by President Trump on 17 June 2020. On 9 July 2020, the Trump administration imposed sanctions and visa restrictions against senior Chinese officials, including Chen Quanguo. The same month, sanctions under the Global Magnitsky Act were levied against the Xinjiang Production and Construction Corps and related officials including Sun Jinlong and Peng Jiarui. On 14 September 2020, the U.S. 
Department of Homeland Security blocked imports to the United States of products from four entities in Xinjiang: all products made with labor from the Lop County No. 4 Vocational Skills Education and Training Center; hair products made in the Lop County Hair Product Industrial Park; apparel produced by Yili Zhuowan Garment Manufacturing Co., Ltd. and Baoding LYSZD Trade and Business Co., Ltd; and cotton produced and processed by Xinjiang Junggar Cotton and Linen Co., Ltd. On 22 September 2020, the United States House of Representatives passed the Uyghur Forced Labor Prevention Act. On 19 January 2021, US Secretary of State Mike Pompeo designated China's treatment of the Uyghurs a genocide, making the United States the first country in the world to make such a designation. China responded a day later by sanctioning US officials in the outgoing Trump administration, including Pompeo, for their criticisms of China's treatment of the Uyghurs. On 9 July 2021, the US Department of Commerce's Bureau of Industry and Security (BIS) added to the Entity List 14 entities based in the People's Republic of China (PRC) that have enabled Beijing's campaign of repression, mass detention, and high-technology surveillance against Uyghurs, Kazakhs, and members of other Muslim minority groups in the Xinjiang Uyghur Autonomous Region (XUAR), where, in the Department's assessment, the PRC continues to commit genocide and crimes against humanity. The Entity List is a tool utilized by BIS to restrict the export, reexport, and transfer (in-country) of items subject to the Export Administration Regulations. Response from dissidents On 10 August 2018, about 47 Chinese intellectuals and others issued an appeal against what they describe as "shocking human rights atrocities perpetrated in Xinjiang". In December 2019, during the anti-government protests in Hong Kong, a mixed crowd of young and elderly people, numbering around 1,000 and dressed in black and wearing masks to shield their identities, held up signs reading "Free Uyghur, Free Hong Kong" and "Fake 'autonomy' in China results in genocide". They rallied calmly, waving Uyghur flags and posters. The local riot police pepper-sprayed demonstrators to disperse the crowd. International Criminal Court complaint In July 2020, the East Turkistan National Awakening Movement and the East Turkistan Government in Exile filed a complaint with the International Criminal Court calling for it to investigate PRC officials for crimes committed against Uyghurs, including allegations of genocide. In December 2020, the International Criminal Court declined to take investigative action against China, on the basis that it lacked jurisdiction over China for most of the alleged crimes.
See also Notes References External links Xinjiang Documentation Project at the University of British Columbia Xinjiang Data Project at the Australian Strategic Policy Institute Regulation for the Removal of Extremism in the Xinjiang Uygur Autonomous Region (Wikisource, in Chinese) 2017 establishments in China 21st-century human rights abuses 2010s in China 2020s in China Cultural assimilation Human rights of ethnic minorities in China Internment camps in China Linguistic discrimination Anti-Islam sentiment in China Islamophobia in China Political repression in China Language policy in Xinjiang Racism in China Separatism in China Total institutions Violence against Muslims Xi Jinping Religious persecution by communists Xinjiang conflict Counterterrorism in China Ethnic cleansing in Asia Uyghurs Kazakhs in China Islam-related controversies in Asia Human rights abuses in China Communist repression Collective punishment Genocides in Asia
Xinjiang internment camps
Biology
18,020
893,337
https://en.wikipedia.org/wiki/Local%20hidden-variable%20theory
In the interpretation of quantum mechanics, a local hidden-variable theory is a hidden-variable theory that satisfies the principle of locality. These models attempt to account for the probabilistic features of quantum mechanics via the mechanism of underlying, but inaccessible, variables, with the additional requirement that distant events be statistically independent. The mathematical implications of a local hidden-variable theory with regard to quantum entanglement were explored by physicist John Stewart Bell, who in 1964 proved that broad classes of local hidden-variable theories cannot reproduce the correlations between measurement outcomes that quantum mechanics predicts, a result since confirmed by a range of detailed Bell test experiments. Models Single qubit A collection of related theorems, beginning with Bell's proof in 1964, shows that quantum mechanics is incompatible with local hidden variables. However, as Bell pointed out, restricted sets of quantum phenomena can be imitated using local hidden-variable models. Bell provided a local hidden-variable model for quantum measurements upon a spin-1/2 particle, or, in the terminology of quantum information theory, a single qubit. Bell's model was later simplified by N. David Mermin, and a closely related model was presented by Simon B. Kochen and Ernst Specker. The existence of these models is related to the fact that Gleason's theorem does not apply to the case of a single qubit. Bipartite quantum states Bell also pointed out that, up until then, discussions of quantum entanglement had focused on cases where the results of measurements upon two particles were either perfectly correlated or perfectly anti-correlated. These special cases can also be explained using local hidden variables. For separable states of two particles, there is a simple hidden-variable model for any measurements on the two parties. Surprisingly, there are also entangled states for which all von Neumann measurements can be described by a hidden-variable model. Such states are entangled, but do not violate any Bell inequality. The so-called Werner states are a single-parameter family of states that are invariant under any transformation of the type ρ → (U ⊗ U) ρ (U ⊗ U)†, where U is a unitary matrix. For two qubits, they are noisy singlets given as ρ(p) = p|ψ⁻⟩⟨ψ⁻| + (1 − p) I/4, where the singlet is defined as |ψ⁻⟩ = (|01⟩ − |10⟩)/√2. Reinhard F. Werner showed that such states allow for a hidden-variable model for p ≤ 1/2, while they are entangled if p > 1/3. The bound for hidden-variable models could later be improved up to p ≈ 0.66 (a numerical check of these thresholds is sketched below). Hidden-variable models have been constructed for Werner states even if positive operator-valued measurements (POVM) are allowed, not only von Neumann measurements. Hidden-variable models were also constructed for noisy maximally entangled states, and even extended to arbitrary pure states mixed with white noise. Beside bipartite systems, there are also results for the multipartite case. A hidden-variable model for any von Neumann measurements at the parties has been presented for a three-qubit quantum state. Time-dependent variables Some new hypotheses have been conjectured concerning the role of time in constructing hidden-variables theory. One approach, suggested by K. Hess and W. Philipp, relies upon possible consequences of time dependencies of hidden variables; this hypothesis has been criticized by Richard D. Gill, Gregor Weihs, Anton Zeilinger and Marek Żukowski, as well as by D. M. Appleby. See also EPR paradox Bohr–Einstein debates References Quantum measurement Hidden variable theory
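As a numerical check on the two-qubit Werner-state thresholds quoted above, the following Python sketch (an illustration added for this text, not drawn from the cited literature; it assumes only NumPy, and the helper names werner, is_entangled and chsh are ad hoc) constructs ρ(p), tests entanglement with the Peres–Horodecki partial-transpose criterion, and evaluates the CHSH expression at measurement settings that are optimal for the singlet. Any CHSH value above 2 is incompatible with a local hidden-variable account of those measurements:

```python
import numpy as np

# Two-qubit Werner state: rho(p) = p |psi-><psi-| + (1 - p) I/4
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # the singlet
singlet = np.outer(psi_minus, psi_minus)

def werner(p):
    return p * singlet + (1.0 - p) * np.eye(4) / 4.0

def is_entangled(rho):
    # Peres-Horodecki criterion: a two-qubit state is entangled exactly
    # when its partial transpose has a negative eigenvalue.
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() < -1e-12

# CHSH measurement settings that are optimal for the singlet,
# for which <sigma_a (x) sigma_b> = -a.b
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
A0, A1 = Z, X
B0, B1 = -(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)

def corr(rho, A, B):
    return np.trace(rho @ np.kron(A, B)).real

def chsh(rho):
    return (corr(rho, A0, B0) + corr(rho, A0, B1)
            + corr(rho, A1, B0) - corr(rho, A1, B1))

for p in (0.30, 0.50, 0.66, 0.75):
    rho = werner(p)
    print(f"p = {p:.2f}: entangled = {is_entangled(rho)}, "
          f"CHSH = {chsh(rho):.3f} (local bound 2)")
```

For the Werner state these settings give a CHSH value of 2√2·p, so the inequality is violated only for p > 1/√2 ≈ 0.707; states with 1/3 < p ≤ 0.66 are therefore entangled yet still admit the hidden-variable models described above.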
Local hidden-variable theory
Physics
680
62,624,572
https://en.wikipedia.org/wiki/Epichlo%C3%AB%20sinica
Epichloë sinica is a hybrid asexual species in the fungal genus Epichloë. A systemic and seed-transmissible grass symbiont first described in 2009, Epichloë sinica is a natural allopolyploid of Epichloë bromicola and a strain in the Epichloë typhina complex. Epichloë sinica is found in Asia, where it has been identified in species of the grass genus Roegneria. References sinica Fungi described in 2009 Fungi of Asia Fungus species
Epichloë sinica
Biology
109
20,165,219
https://en.wikipedia.org/wiki/Brian%20Spalding
Dudley Brian Spalding (9 January 1923 – 27 November 2016) was Professor of Heat Transfer and Head of the Computational Fluid Dynamics Unit at Imperial College, London. He was one of the founders of computational fluid dynamics (CFD) and an internationally recognized contributor to the fields of heat transfer, fluid mechanics and combustion. He created the practice of CFD – its application to problems of interest to engineers. Most of today's commercially available CFD software tools trace their origin to the work done by Spalding's group in the decade spanning the mid-1960s and mid-1970s. Spalding became a Fellow of the Royal Society and a Fellow of the Royal Academy of Engineering. Life Spalding was born at New Malden, Surrey, England, and educated at King's College School, Wimbledon. He received his BA degree in Engineering Science from Oxford University in 1944 and his PhD from Cambridge University in 1952. He joined the Department of Mechanical Engineering at Imperial College in 1954 as a Reader in Heat Transfer. On his promotion to Professor of Heat Transfer in 1958 he gave his inaugural lecture, entitled Heat Transfer in Rocket Motors. He was the founder of the company Concentration Heat And Momentum Limited (CHAM), specialising in computational fluid dynamics and heat transfer processes. CHAM's major product is the widely used PHOENICS CFD code. Spalding himself was the main creator of, and contributor to, PHOENICS. Together with his student Suhas Patankar, he developed the SIMPLE algorithm, a widely used numerical procedure to solve the Navier–Stokes equations (a schematic sketch of the iteration is given below). In the late 1970s and early 1980s, Brian Spalding was the Reilly Professor of Combustion Engineering at Purdue University. Though in his 90s, Spalding continued to be active in his field, and was taken ill while at an international conference in Russia. He died on his return to the UK. CHAM Spalding formed Combustion Heat and Momentum Ltd on 14 November 1969; it was renamed Concentration Heat and Momentum Ltd (CHAM) in 1974. The company was set up as a means of financing and conducting research and development in the fields of fluid mechanics, heat transfer and combustion, with special emphasis on the development of computer programs for the design of engineering equipment, and for the analysis and prediction of the motion of matter and heat in the environment. From the outset commercial CFD services were provided to industrial and governmental clients based on the technology that had emerged from his research group at Imperial College in the late 1960s. Later these services were based on PHOENICS, the first commercially available computational fluid dynamics software, which he created in 1978 and released commercially in 1981. The first contracts undertaken by CHAM were conducted by members of the academic staff of the Heat Transfer Section of the Mechanical Engineering Department of Imperial College, London. The contracts resulted in the development for industrial organisations of particular versions of certain computer programs, general versions of which had been constructed by Spalding and his colleagues and published in the open scientific literature. The computer programs concerned the analysis of two-dimensional steady fluid-flow phenomena.
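To give a flavour of the SIMPLE procedure mentioned above, the following Python sketch runs the guess-and-correct loop on a deliberately simplified toy problem: a linear one-dimensional "momentum" relation on a staggered grid with assumed coefficients (the values of a, S, the grid size and the relaxation factors are illustrative choices, and the model equation stands in for, rather than reproduces, Patankar and Spalding's actual discretisation). It shows only the structure of the method: momentum predictor, pressure correction derived from continuity, and under-relaxed update:

```python
import numpy as np

# Structural sketch of the SIMPLE loop on a 1-D staggered grid:
# N pressure nodes, N - 1 velocity faces.  A linear model equation
# a*u = (p_W - p_E) + S stands in for the discretised momentum balance;
# incompressible continuity requires all face velocities to be equal.
N = 6                        # pressure nodes
a, S = 2.0, 1.0              # momentum coefficient and source (assumed)
d = 1.0 / a                  # pressure-velocity coupling coefficient
alpha_p, alpha_u = 0.3, 0.7  # under-relaxation factors

p = np.zeros(N)
p[0], p[-1] = 1.0, 0.0       # fixed boundary pressures
u = np.zeros(N - 1)          # face velocities

for it in range(500):
    # 1. Momentum predictor: u* from the current (guessed) pressure field.
    u_star = alpha_u * (p[:-1] - p[1:] + S) / a + (1.0 - alpha_u) * u
    # 2. Pressure correction p': the mass imbalance of u* at each interior
    #    node drives a Poisson-like equation for p' (p' = 0 at the
    #    fixed-pressure boundaries).
    A = np.zeros((N - 2, N - 2))
    b = np.zeros(N - 2)
    for i in range(N - 2):
        A[i, i] = 2.0 * d
        if i > 0:
            A[i, i - 1] = -d
        if i < N - 3:
            A[i, i + 1] = -d
        b[i] = u_star[i] - u_star[i + 1]   # net mass imbalance at node i+1
    p_prime = np.zeros(N)
    p_prime[1:-1] = np.linalg.solve(A, b)
    # 3. Correct: under-relax the pressure update and adjust the velocities
    #    so that continuity is satisfied, then return to step 1.
    p[1:-1] += alpha_p * p_prime[1:-1]
    u = u_star + d * (p_prime[:-1] - p_prime[1:])
    if np.abs(b).max() < 1e-10:            # momentum and continuity agree
        break

print(f"converged after {it} iterations; face velocities = {u.round(6)}")
```

On this linear problem the loop converges to a uniform face velocity, the value at which the model momentum balance and continuity hold simultaneously; in a real CFD code the same outer loop is wrapped around the nonlinear, multi-dimensional discretised equations.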
Subsequently, in the early 1970s, CHAM funds generated from contract work for industrial and governmental clients were used for the development of new families of computer programs, for three-dimensional phenomena as well as two-dimensional ones, and for time-dependent as well as steady flows. At that time CHAM placed a general research contract with Imperial College to develop a complete array of computer programs for predicting all the major types of convective, heat-transfer and chemically-reactive processes likely to be encountered in engineering and the natural environment. These programs were equipped with mathematical models for turbulence, radiation, chemical reaction and some two-phase effects, such as particle-size change. During this period, CHAM programs were used by arrangement for calculations required in research sponsored, for example, by the Science Research Council and the Department of Trade and Industry. In addition, CHAM-generated techniques were incorporated into post-graduate and under-graduate teaching curricula at Imperial College. Between 1969 and 1980, CHAM developed numerous application-specific CFD computer codes. In 1978, Spalding conceived the idea of a single CFD code capable of handling all fluid-flow processes. Consequently, CHAM abandoned the policy of developing individual application-specific CFD codes, and during late 1978 the company began creating the world's first general-purpose CFD code, PHOENICS, an acronym for Parabolic, Hyperbolic Or Elliptic Numerical Integration Code Series. The initial creation of PHOENICS was largely the work of Spalding and Harvey Rosten. The code was launched commercially in 1981; for the first time, a single CFD code could be used for all thermo-fluids problems. Biographical material Brian Spalding tribute lecture, CHT-08, Marrakech 2008 A tribute to D.B. Spalding and his contributions in science and engineering, Int. J. Heat Mass Transfer, Vol. 52, 3884–3905, 2009 Selected books B. E. Launder and D. B. Spalding, Mathematical Models of Turbulence, Academic Press (1972). D. B. Spalding and E. H. Cole, Engineering Thermodynamics, 3rd ed., Hodder Arnold (1973). D. B. Spalding, Combustion and Mass Transfer, Elsevier (1978). D. B. Spalding, Convective Mass Transfer – An Introduction, McGraw Hill (1963). Honours and awards Max Jakob Memorial Award, 1978 Fellowship of the Royal Society, 1983 Fellowship of the Royal Academy of Engineering, 1989 Global Energy Prize, 2009 Benjamin Franklin Medal in Mechanical Engineering of The Franklin Institute, 2010 References Fluid dynamicists Computational fluid dynamicists Alumni of the Queen's College, Oxford People educated at King's College School, London 1923 births 2016 deaths Academics of Imperial College London Fellows of the Royal Society Fellows of the Royal Academy of Engineering Foreign members of the Russian Academy of Sciences Purdue University faculty People from New Malden Benjamin Franklin Medal (Franklin Institute) laureates
Brian Spalding
Chemistry
1,231
65,770,702
https://en.wikipedia.org/wiki/Scythian%20metallurgy
From the 7th to the 3rd century BC, the Scythian people of the Pontic–Caspian steppe engaged in the widespread practice of metallurgy. Though Scythian society was heavily based around a nomadic, mobile lifestyle, the culture was capable of practicing metallurgy and of producing metal objects. Many works of Scythian metalworking have subsequently been found throughout the range of the people. Description The Scythians emerged as a people prior to the 7th century BC, when they were first mentioned in historical records. The Scythian civilization consisted of a number of distinct tribal groups scattered across the Pontic steppes, the Caucasus, and Central Asia. Though primarily a nomadic people, the Scythians established a number of settlements across their territory; these settlements in turn allowed for the development of a sedentary society and the accompanying development of trade skills, including metalworking. Scythian knowledge of metalworking likely originated with the peoples of Iran and China, with this knowledge spreading along trade routes and arriving in the steppes from the 2nd to the 1st millennium BC. Early Scythian metallurgy was centered around bronzeworking, as these skills had already been widely adopted by the Scythians' neighbors. The Minusinsk Basin of Siberia has been suggested as the source of the raw materials used in Bronze Age Scythian metallurgy, and Scythian access to this region fueled the people's expansion over the following centuries. During the 8th century BC, Scythians were often employed as soldiers by nations in the Near East, and these returning soldiers may have brought knowledge of iron-working back to their homeland; by the start of the 6th century BC the practice was widespread in the Pontic steppes. In addition to bronze and iron working, gold and copper working were also present in Scythian society; in his commentary on the Scythian people, the Greek historian Herodotus remarked on their fondness for making things from gold and copper. Metallurgy held a major place in Scythian society, as metalworkers were needed to produce material goods to support the Scythian way of life. As a nomadic society with broad borders, the Scythians often raided neighboring peoples and as such required metal weaponry, particularly iron swords and bronze arrowheads. It has been speculated that the Scythians' use of stylized metal adornments may have been copied from their opponents during these conflicts. In addition, jewelry and other adornment was in demand among all classes of society, as can be seen from the discovery of metal adornments in burial tombs attributed to the Scythians. One notable aspect of Scythian clothing was the widespread use of metal belts. Other signs of Scythian metalworking can be found throughout sites attributed to the people. Several notable Scythian archeological sites contain the remnants of metalworking operations; at one settlement along the Dnieper, remnants of blast furnaces and slag have been found, implying the existence of a large metallurgical center. Studies of other Scythian sites have also turned up the remains of metal workshops and tools, further supporting the theory that the Scythians were organized craftspeople. Scythian metalworkers were particularly renowned for the high quality of their copper crafting. During war, portable molds were brought along to cast arrowheads for the Scythian cavalry. Scythian metallurgy also influenced the metallurgy of the Koban people of the North Caucasus.
References See also Scythian art Scythian clothing History of metallurgy Scythia
Scythian metallurgy
Chemistry,Materials_science
734
26,603,341
https://en.wikipedia.org/wiki/Seymour%20Cray%20Computer%20Engineering%20Award
The Seymour Cray Computer Engineering Award, also known as the Seymour Cray Award, is an award given by the IEEE Computer Society to recognize significant and innovative contributions in the field of high-performance computing. The award honors scientists who exhibit the creativity demonstrated by Seymour Cray, founder of Cray Research, Inc., and an early pioneer of supercomputing. Cray was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades, and founded Cray Research, which built many of these machines. Called "the father of supercomputing," Cray has been credited with creating the supercomputer industry. He played a key role in the invention and design of the UNIVAC 1103, a landmark high-speed computer and one of the first computers available for commercial use. In 1972 the IEEE presented Cray with the Harry H. Goode Memorial Award for his contributions to large-scale computer design and the development of multiprocessing systems. One year after Cray's death in 1996, the IEEE created the Seymour Cray Computer Engineering Award in honor of his creative spirit. The award is one of the 12 technical awards sponsored by the IEEE Computer Society to recognize pioneers in the field of computer science and engineering. The winner receives a crystal memento, a certificate, and a US$10,000 honorarium. The first recipient, in 1999, was John Cocke. Nomination and Ceremony The following criteria are considered when selecting a recipient: Leadership in field Breadth of work Achievement in other fields Inventive value (patents) Individual vs. group contribution Publications (articles, etc.) Originality of contribution Quality of nomination IEEE Society activities and honors Quality of endorsements The annual nomination deadline is July 1. Anyone may nominate a candidate, although self-nomination is not allowed. A candidate must receive at least three nominations to be considered by the award committee. Nominations should be prepared and submitted through the official IEEE website. The Seymour Cray Computer Engineering Award presentation and reception are held at the SC conference, the international conference for high-performance computing, networking, storage, and analysis. The conference is sponsored by the ACM (Association for Computing Machinery) and the IEEE Computer Society. It is held annually in mid-November. Several other awards are presented at the same event, including the ACM Gordon Bell Prize, the ACM/IEEE-CS Ken Kennedy Award, the ACM/IEEE-CS George Michael Memorial HPC Fellowship, the ACM SIGHPC / Intel Computational & Data Science Fellowships, and the IEEE-CS Sidney Fernbach Memorial Award. Recipients See also List of computer-related awards List of computer science awards List of prizes named after people IEEE John von Neumann Medal Gordon Bell Prize References External links Cray Computer science awards Supercomputers IEEE society and council awards
Seymour Cray Computer Engineering Award
Technology
592
75,714,162
https://en.wikipedia.org/wiki/Patrick%20H.%20Scully
Patrick H. Scully was a Catholic priest and astronomer who served as a missionary in Cape Town and built the first Catholic parish church in South Africa. Biography After the Anglo-Dutch Treaty of 1814, the Colonial Office gave permission for a Catholic priest to be stationed in Cape Town. Irish priest Patrick H. Scully arrived in Cape Town on 1 January 1820, in the company of the bishop, Edward Bede Slater. On 13 February 1820, Scully opened a church in a repurposed store on Buitekant street, donated by a local Catholic, Philip Albertus. There he said Mass on Sundays and holy days at 11 AM. He initially ministered mainly to local Irish soldiers. Rufane Shaw Donkin, the acting governor of the Cape Colony, approved a salary of £75 for Scully on 17 January 1821. In April 1821, Scully petitioned the burgher senate for land to build a proper church. The senate agreed, and Scully announced the planned construction in September. In November, the Cape Gazette announced that the plans for the church were available to view. In 1821, the churchwardens of Cape Town wrote to Slater with a number of complaints about Scully. Scully, they said, only offered Mass on Sundays, gave infrequent and inaudible sermons, failed to follow up on home visits to parishioners, and was irregular in recording baptisms. They also claimed that Scully was breaking the law by baptizing slaves. In response, Slater told the churchwardens not to interfere in Scully's pastoral decisions. Low donations from parishioners were a recurring problem. In 1821, the churchwardens attempted to raise funding for the parish by charging a fee for access to the sacraments. Scully continued to perform sacraments without their permission, and fired the sacristan when he attempted to interfere. Lord Charles Somerset, the governor of the Cape Colony, returned from leave in December 1821 and stopped Scully's salary. Scully therefore looked for work elsewhere, and that same year Fearon Fallows, head of the Royal Observatory, Cape of Good Hope, wrote to John Barrow asking for approval to hire Scully as an assistant. Scully began work on 18 January 1822, and on 4 April 1822 the Board of Longitude sent its official approval for the decision to hire Scully at a salary of £100. Fallows wrote warmly of Scully's abilities in a letter to Barrow, and also praised him in a letter to John Herschel. Construction began on Scully's church on 28 October 1822. Due to continuing funding issues, Scully took out a number of loans in 1823 to fund the ongoing construction of the church. He never paid interest on these loans, and they were the subject of extensive litigation after his departure. In March 1824, he began to say Mass in the unfinished chapel. In July 1824, Fallows found Scully in bed with Fallows's 17-year-old maid. Due to the "improprieties" committed and the "violence of [Scully's] manner" when discovered, Fallows promptly dismissed Scully, who was also defrocked over the incident. Scully left the colony for London on 11 July 1824, aboard the Venus. Upon his departure, he entrusted the church he had built to two curators. He was succeeded as chaplain by Theodore Wagner. Fallows asked the Admiralty to continue Scully's salary for six months after his dismissal, but the request was declined, and Scully was formally dismissed on 5 October 1824. References Cape Colony people 19th-century Irish Roman Catholic priests Laicized Roman Catholic priests Astronomers Date of birth missing Place of birth missing
Patrick H. Scully
Astronomy
724
24,576,054
https://en.wikipedia.org/wiki/Leucopholiota%20decorosa
Leucopholiota decorosa is a species of fungus in the mushroom family Squamanitaceae. Commonly known as the decorated pholiota, it is distinguished by its fruit body, which is covered with pointed brown, curved scales on the cap and stem, and by its white gills. Found in the eastern United States, France, and Pakistan, it is saprobic, growing on the decaying wood of hardwood trees. L. decorosa was first described by American mycologist Charles Horton Peck as Agaricus decorosus in 1873, and the species has been transferred to several genera in its history, including Tricholoma, Tricholomopsis, Armillaria, and Floccularia. Three American mycologists considered the species unique enough to warrant its own genus, and transferred it into the new genus Leucopholiota in a 1996 publication. Lookalike species with similar colors and scaly fruit bodies include Pholiota squarrosoides, Phaeomarasmius erinaceellus, and Leucopholiota lignicola. L. decorosa is considered an edible mushroom. Taxonomy and naming The species now known as Leucopholiota decorosa was first described by Charles Peck in 1873, based on a specimen he found in New York State; he placed it in Tricholoma, then considered a subgenus of Agaricus. In 1947, Alexander Smith and Walters transferred the species into the genus Armillaria, based on its apparent close relationship to Armillaria luteovirens: the presence of clamp connections in the hyphae, the amyloid spores, and the structure of the veil and its remnants. The genus Armillaria, as it was understood at the time, would later be referred to as a "taxonomic refugium for about 270 white-spored species with attached gills and an annulus." Smith later transferred the species to the genus Tricholomopsis; however, he neglected the amyloid spores, the recurved scales of the cap cuticle, and the lack of cells known as pleurocystidia, features which should have ruled out a taxonomic transfer into that genus. In 1987, the species was transferred yet again, this time to the genus Floccularia. The appearance of a specimen at a 1994 mushroom foray in North Carolina resulted in a collaboration between mycologists Tom Volk, Orson K. Miller, Jr. and Alan Bessette, who renamed the species Leucopholiota decorosa in a 1996 Mycologia publication. Leucopholiota was originally a subgenus of Armillaria, but the authors raised it to generic level to accommodate L. decorosa, which would become the type species. In 2008, Henning Knudsen considered L. decorosa to be the same species as what was then known as Amylolepiota lignicola, and considered the two names to be synonymous. However, Finnish mycologist Harri Harmaja rejected this interpretation. Originally, Harmaja believed Lepiota lignicola to be sufficiently distinct from other similar taxa to deserve its own genus, Amylolepiota, which he described in a 2002 publication. He changed his mind in 2010, writing "the differences between the type species of both genera are small and are thus best considered as differences at the species level"; with this he transferred the taxon to Leucopholiota, and it is now known as Leucopholiota lignicola, the second species in the genus Leucopholiota. The genus name Leucopholiota means "white Pholiota" (from λευκός, leukós), referring to the gills and the spores; it was proposed in 1980 by Henri Romagnesi, who originally described it as a subgenus of Armillaria. The specific epithet decorosa, though intended to mean "elegant" or "handsome", actually means "decent", "respectable", "modest", or "decorous". L.
decorosa is commonly known as the "decorated Pholiota". Phylogenetics Phylogenetic analysis based on evidence from ITS and large subunit ribosomal RNA sequence data has not confirmed that Leucopholiota decorosa belongs in the family Tricholomataceae. However, the analysis does show it to be phylogenetically related to Phaeolepiota aurea, a species of unclear status in the Agaricales, and it confirms that L. decorosa does not belong in the family Agaricaceae. According to the species authors, L. decorosa would fit best in the tribe Biannularieae of the Tricholomataceae as described by Rolf Singer in his comprehensive monograph on the Agaricales. This tribe also contains the genera Catathelasma and Armillaria. Description The caps of L. decorosa, initially conic or hemispherical in shape, later expand to become convex or flattened in maturity. The caps are typically between in diameter, with surfaces covered with many small curved brown scales. The edge of the cap is typically curved inwards and may have coarse brown fibers attached. The cap is cinnamon brown, darker in the center. The gills are spaced together closely; they have a narrow (adnexed) attachment to the stem, and their edges are "finely scalloped". The stem is tall by thick, and like the cap, is covered with scales from the bottom to the level of the annular zone; above this point the stipe is smooth. The partial veil is made up of brown fibers "that flare upward as an annulus." It is roughly the same thickness throughout the length of the stem, or may be slightly thinner near the top. The flesh is white and thick, and has a firm texture; its odor is indistinct, and the taste either mild or bitter. The spore deposit is white. The spores are hyaline (translucent), roughly elliptical in shape, have thin walls, and are amyloid, meaning they absorb iodine stain in Melzer's reagent. Additionally, in acetocarmine stain, they appear binucleate (having two nuclei). They have dimensions of 5.5–6 (more rarely 7) by 3.5–4.0 μm. The spore-bearing cells, the basidia, are club-shaped, translucent, and four-spored. The cheilocystidia (cystidia on the gill edge) are club-shaped and 19–24 by 3–5 μm. The cap cuticle is a trichodermium—a type of tissue composed of erect, long, threadlike hyphae of the same or different lengths, originating from an interwoven layer of hyphae, that ascends gradually until the terminal cells are somewhat parallel to each other. The trichodermal hyphae are thin-walled, measuring 7.6–22.0 μm, and stain yellowish in Melzer's reagent. The hyphae comprising the cap tissue are thin-walled and 5–10 μm in diameter, while those of the gill tissue are also thin-walled, 3.5–7.0 μm, and interspersed with oleiferous cells (characterized by strongly refractive, homogeneous contents). Clamp connections are present in the hyphae of all tissues. Edibility Leucopholiota decorosa was recorded as edible in 1900 by McIlvaine and MacAdam, who wrote that "it is of good consistency and flavor, having a decided mushroom taste." Later sources report the edibility as unknown. Similar species The species Pholiota squarrosoides has a similar outward appearance, but it may be distinguished by its brown spores and sticky cap surface underneath the scales. In the hedgehog pholiota (Phaeomarasmius erinaceellus), the overall size is smaller—cap diameter —and the spores are cinnamon-brown.
Some species in the genus Cystoderma also appear similar, but can be distinguished by microscopic features, such as the presence of spherical (rather than club-shaped) cells in the cuticle of the cap, and by their habitat—Cystoderma usually grows on soil, rather than wood. The only other species of Leucopholiota, L. lignicola, may be distinguished from L. decorosa by the following characteristics: free gills in L. lignicola compared with adnexed gills in L. decorosa; L. lignicola tends to grow on the wood of birch, and preferably in old-growth forests; L. lignicola is restricted to boreal forest, compared to L. decorosa, which grows in temperate regions; and L. lignicola has a wide distribution throughout northern coniferous forests in Eurasia. Habitat and distribution Leucopholiota decorosa is a saprobic species, deriving nutrients from decaying organic matter, particularly the rotting branches and stumps of deciduous trees. One field guide notes a preference for sugar maple. It grows singly or in bunches, clustered together at the base of the stem. In Ohio, it typically fruits from late September to mid-November. In addition to its known distribution in mostly eastern North America, Leucopholiota decorosa has also been collected from France. In 2007, it was reported from the Astore District of Pakistan, at an altitude of about . See also List of Tricholomataceae genera References External links Fungi described in 1873 Fungi of Asia Fungi of Europe Fungi of North America Tricholomataceae Taxa named by Charles Horton Peck Fungus species
Leucopholiota decorosa
Biology
2,001
44,283,744
https://en.wikipedia.org/wiki/Hyper-real%20religion
Hyper-real religion is a sociological term describing a new consumer trend in acquiring and enacting religion. The term was introduced in the book Religion and Popular Culture: A Hyper-Real Testament by Adam Possamai, and is used to explore the intersection between postmodernity and religion. The idea has been expanded and critiqued by a number of academics since its creation. Origins and usage According to theories of postmodernization, the last half of the 20th century (often termed the "postmodern era") saw consumerism, individualisation and choice come to the forefront of Western societies via capitalism. Thus religion, as a part of this culture, became increasingly commercial, individualised and democratized. People now have more choices in religion; they can often practice it in privacy and as they wish, outside of traditional institutional boundaries. Due to this change, the sociology of religion has become increasingly interested in the potential for typologization of the modes of non-institutional religion and in the foundation of non-institutional religion in human nature. It has become increasingly clear that the people leaving the structures and ceremonies of traditional religions are not instantly becoming non-religious in an atheistic sense. For example, some continue believing without belonging to a church, others turn to alternative spiritualities, and others, as discussed by Possamai, turn to consumer-based religions partly based on popular culture, what he calls "hyper-real religions." With hyper-real religion, elements from religions and popular culture are highly intertwined. They are post-modern expressions of religion, likely to be consumed and individualised, and thus have more relevance to the self than to a community and/or congregation. Thus in postmodern times the relation between people and religion is very fluid; if modernity brought the disenchantment of the world, as Max Weber puts it, postmodernity is re-enchanting the world through a proliferation of 'subjective myths' (myths that are relevant to the self) and through the expansion of consumerism and the internet. Possamai explains that the concept of hyper-real religions is derived from the work of Jean Baudrillard. Baudrillard put forward that we are living in an age of hyper-reality, in which we are fascinated by simulations that lack a real-world referent, or simulacra. Possamai sees these simulations as part of the popular cultural milieu, in which "signs get their meanings from their relations with each other, rather than by reference to some independent reality or standard". With no way to "distinguish the real from the unreal", hyper-reality – the situation in which reality collapses – emerges. For example, we may refer to a person as being like Superman or Homer Simpson, rather than a real-life example of a hero or dunce, and theme parks represent movies or Disney creations rather than real life. The fictional character and world become more real for us than the real person or real world. Possamai, as Mark Geoffroy puts it, re-adapted Baudrillard's theory by applying it to religions that are engaged with these same simulated realities. In the most obvious examples, the Church of All Worlds draws its inspiration largely from Robert Heinlein's Stranger in a Strange Land, Jediism draws on George Lucas' Star Wars mythology, and Matrixism on The Matrix film franchise.
Following these ideas, Possamai defined hyper-real religions as: ...a simulacrum of a religion partly created out of popular culture which provides inspiration for believers/consumers at a metaphorical level. Following critiques in the Handbook of Hyper-real Religions, Possamai modifies his original definition of hyper-real religions to: A hyper-real religion is a simulacrum of a religion created out of, or in symbiosis with, commodified popular culture which provides inspiration at a metaphorical level and/or is a source of beliefs in everyday life.
Expansions and critiques of hyper-real religion as a concept
Eileen Barker suggests that the concept of hyper-real religions is ambiguous; however, she goes on, it is this ambiguity, this liminality, that gives it its greatest strength. Through this positioning it allows us to examine novel religious developments and the effects of those developments on the older religions of the world. This allows the sharpening and refining of the tools of sociology of religion to take on contemporary developments in the religious field. Through Possamai's concepts, the differentiating effects of individualism, consumerism and democratisation of religion become salient. Markus Davidsen argues that Possamai has identified a real class of religions but that the concept he uses to refer to them needs to be replaced. He argues that for Baudrillard, all religions are hyper-real in the sense that they ascribe reality to the socially constructed. Barker also suggests that if we were to take the methodologically agnostic stance of the social constructionists with regard to hyper-real religions, then we would insist that all religions should be evaluated as hyper-real. They all draw on realities without referent (see Cusack). Similarly, Geoffroy argues that Baudrillard would not have been happy with Possamai's "re-adaptation". For Baudrillard, religion had been out of the hyper-reality picture for a long time. Religion was an illusion of modernity that could not exist in hyper-reality, as our value systems exclude predestination of evil. Baudrillard was very cynical about the ability of popular culture to provide any sort of meaning: It is a form of alienation that cannot be a source of inspiration. Due to this, Geoffroy argues that it is unclear how hyper-real religions can be a re-adaptation of Baudrillard, since he does not acknowledge religion or the liberating effects of popular culture in his work. He suggests that what Possamai has done is to reinterpret Baudrillard. Geoffroy puts forward James A. Beckford's theories on "religion as a cultural resource", Forgues' "symbolic activity" and Luckmann's "Invisible Religion" as alternative theories that could have been chosen to express Possamai's idea of hyper-real religions. However, he concludes that it works as a useful and enlightening 're-interpretation' of hyper-reality. Davidsen concludes similarly, that this is a distinct class of religions but that we cannot meaningfully refer to them as hyper-real. He offers "fiction-based religion" as a more accurate term. Fiction-based religions draw their main inspiration from fictional narratives (e.g., Star Wars and The Lord of the Rings) which do not claim to refer to the actual world, but create a fictional world of their own. He draws a distinction between fiction (such as Star Wars), which does not claim to refer to the actual world, and history, including religious narratives, which does make such a claim.
Third, he criticizes scholars like Cusack who argue that fandom, for instance, Star Trek fandom, is a form of religion. He draws an analytical distinction between religion and play, which he suggests makes it possible to distinguish between religious use of fiction (fiction-based religion) and playful engagement with fiction (fandom). Barker also questions whether there is a need to include these social manifestations as part of the concept of religion when perhaps they are examples of secular fiction, rather than religion, thus hyper-real religions blur the line between religion and non-religion, bringing more ideas and objects into the fold of religion. Authors criticize the supposedly subjective nature of such consumer religions and Possamai’s use of the term "subjective myth". Roeland uses Possamai’s discussion of individualistic consumer religions to compare the subjectivity of such religious consumers with the realist construction of the evangelical Christian God. While the meaning of God and religion are subjective to Possamai’s subjective myth driven consumers, Dutch Christian evangelicals believe that their religion and God exist independently of our subjective desires and constructions. Thus, while subjectivism is present in evangelical circles, by the experiential orientation that they employ, it is employed to explore the real presence of God. The reality of God is experienced subjectively and attests to His reality. Thus, subjective perceptions of religious ideas do not necessarily negate their reality for practitioners and the authenticity of the experience can remain. Likewise, Johan Roeland, Stef Aupers, Dick Houtman, Martijn de Koning, and Ineke Noomen criticize Possamai's idea of the New Age being constructed by the subjective myths of practitioners. They suggest that these portrayals show the New Age as spiritually and religiously incoherent. To the authors, this is an unfair portrayal which misses the fact that "self-spirituality" is a shared myth amongst New Age practitioners, one which goes beyond a personal story or subjective myth. They suggest, against Possamai, that the New Age is not a postmodern flight to the surface but a quest for solid foundations in a world ruined by complacent and shiftless religion. Anneke van Otterloo, Stef Aupers and Dick Houtman also argue that the New Age milieu is not as individualistic and rhizomatic as accounts such as Possamai's make it seem. They argue that this is due to New Age diffusion into Western culture by cultural and popular sources. There are criticisms of Possamai's use of consumption. Paul Heelas critiques Possamai's view that the New Age is a consumer religion par excellence with a specific focus on individualistic preferences. He suggests that the practices of the New Age require a relational element that connects the Me to the We and thus are far less consumeristic and individualistic than Possamai argues. He argues that although Possamai denotes New Age as the consumer religion par excellence, he does little to discuss the actual processes that he thinks lead to this label. According to Heelas ‘Consumption’ and the ‘Consumer’ as words and processes remain undefined in Possamai's work and the work of many others in the field. Helen Berger and Douglas Ezzy suggest that the witchcraft movements that they discuss have durability and integration that goes beyond the consumer religions discussed by Possamai. 
While Possamai sees witchcraft as consumerist in the same way as Matrixism, Berger and Ezzy argue that the stronger historical roots and variety of cultural resources available to the witchcraft community allow a higher level of durability and integration. Lastly, Geoffroy is critical of the market-based nature of Possamai's idea, as he does not think that all metaphors can be sold as commodities. Several authors critique Possamai's thinking about the authenticity of such religions. They argue that Possamai's concept suggests that these religions are less than 'real' in comparison to other religious forms. Tremlett describes this as the jargon of authenticity and cites Possamai, Cusack and Chidester as examples of this kind of framing of consumer religion. He suggests that Possamai's work is notable for rendering material relationships within capitalism in symbols and signs. Tremlett suggests that Possamai completely fails to acknowledge or apparently understand the importance of the term spectacle, which comes from Debord's The Society of the Spectacle, or the more general consequences that follow such a strategy of analysis. For example, according to Debord, the spectacle is not a collection of images, but a social relation among people, mediated by images. In other words, the apparent weightlessness of the sign and the image in postmodernity – of, in short, the spectacle – is precisely that: it is an appearance. To Tremlett, such a mode of analysis will miss the real forces that lie behind these signs and images, regardless of what certain communities may have to say about them. In resonance, but in a different critical direction, while applying the concept of hyper-real religion to Hinduism, Scheifinger makes the argument that hyper-real religion is a Western construction. Given the generally hyper-real nature of the Hindu gods, his analysis raises the question of the universality of the concept, suggesting that it may only fit within a post-Christian environment where popular culture is fully commodified.
References
Sociology of religion
Religious studies
Postmodern religion
Hyperreality
Hyper-real religion
Technology
2,600
26,413,989
https://en.wikipedia.org/wiki/List%20of%20fen%20plants
The following is a list of plant species to be found in a north European fen habitat, with some attempt to distinguish between reed bed relicts and the carr pioneers. However, nature does not come in neat compartments, so that, for example, the odd stalk of common reed will be found in carr.
In pools
Beaked sedge; Carex rostrata
Whorl grass; Catabrosa aquatica
Needle spike-rush; Eleocharis acicularis
Northern spike-rush; Eleocharis austriaca
Sweet grasses; Glyceria species
Common reed; Phragmites australis
Swamp meadow grass; Poa palustris
In typical fen
Flat sedge; Blysmus compressus
Great fen sedge; Cladium mariscus
Lesser tufted sedge; Carex acuta
Lesser pond sedge; Carex acutiformis
Davall's sedge; Carex davalliana
Dioecious sedge; Carex dioica
Brown sedge; Carex disticha
Tufted sedge; Carex elata
Slender sedge; Carex lasiocarpa
Flea sedge; Carex pulicaris
Greater pond sedge; Carex riparia
Common spike-rush; Eleocharis palustris
Few-flowered spike-rush; Eleocharis quinqueflora
Slender spike-rush; Eleocharis uniglumis
Broad-leaved cotton sedge; Eriophorum latifolium
Reed sweet-grass; Glyceria maxima
Yellow flag iris; Iris pseudacorus
Brown bog rush; Schoenus ferrugineus
In fen carr
Narrow small-reed; Calamagrostis stricta
Purple small-reed; Calamagrostis canescens
Tussock sedge; Carex paniculata
Cyperus sedge; Carex pseudocyperus
Wood club rush; Scirpus sylvaticus
References
Rose, F. Grasses, Sedges, Rushes and Ferns of the British Isles and north-western Europe (1989)
Fen
List of fen plants
Biology,Environmental_science
431
67,662,457
https://en.wikipedia.org/wiki/Inosperma%20adaequatum
Inosperma adaequatum, until 2019 known as Inocybe adaequata, is a species of fungus of the family Inocybaceae found in North America and Europe. References adaequatum Fungi described in 1879 Fungi of North America Fungi of Europe Fungus species
Inosperma adaequatum
Biology
62
1,063,353
https://en.wikipedia.org/wiki/Aspartate%20transaminase
Aspartate transaminase (AST) or aspartate aminotransferase, also known as AspAT/ASAT/AAT or (serum) glutamic oxaloacetic transaminase (GOT, SGOT), is a pyridoxal phosphate (PLP)-dependent transaminase enzyme (EC 2.6.1.1) that was first described by Arthur Karmen and colleagues in 1954. AST catalyzes the reversible transfer of an α-amino group between aspartate and glutamate and, as such, is an important enzyme in amino acid metabolism. AST is found in the liver, heart, skeletal muscle, kidneys, brain, red blood cells and gall bladder. Serum AST level, serum ALT (alanine transaminase) level, and their ratio (AST/ALT ratio) are commonly measured clinically as biomarkers for liver health. The tests are part of blood panels. The half-life of total AST in the circulation approximates 17 hours and, on average, 87 hours for mitochondrial AST. Aminotransferase is cleared by sinusoidal cells in the liver.
Function
Aspartate transaminase catalyzes the interconversion of aspartate and α-ketoglutarate to oxaloacetate and glutamate.
L-Aspartate (Asp) + α-ketoglutarate ↔ oxaloacetate + L-glutamate (Glu)
As a prototypical transaminase, AST relies on PLP (vitamin B6) as a cofactor to transfer the amino group from aspartate or glutamate to the corresponding ketoacid. In the process, the cofactor shuttles between PLP and the pyridoxamine phosphate (PMP) form. The amino group transfer catalyzed by this enzyme is crucial in both amino acid degradation and biosynthesis. In amino acid degradation, following the conversion of α-ketoglutarate to glutamate, glutamate subsequently undergoes oxidative deamination to form ammonium ions, which are excreted as urea. In the reverse reaction, aspartate may be synthesized from oxaloacetate, which is a key intermediate in the citric acid cycle.
Isoenzymes
Two isoenzymes are present in a wide variety of eukaryotes. In humans:
GOT1/cAST, the cytosolic isoenzyme, derives mainly from red blood cells and heart.
GOT2/mAST, the mitochondrial isoenzyme, is present predominantly in liver.
These isoenzymes are thought to have evolved from a common ancestral AST via gene duplication, and they share a sequence homology of approximately 45%. AST has also been found in a number of microorganisms, including E. coli, H. mediterranei, and T. thermophilus. In E. coli, the enzyme is encoded by the aspC gene and has also been shown to exhibit the activity of an aromatic-amino-acid transaminase (EC 2.6.1.57).
Structure
X-ray crystallography studies have been performed to determine the structure of aspartate transaminase from various sources, including chicken mitochondria, pig heart cytosol, and E. coli. Overall, the three-dimensional polypeptide structure for all species is quite similar. AST is dimeric, consisting of two identical subunits, each with approximately 400 amino acid residues and a molecular weight of approximately 45 kDa. Each subunit is composed of a large and a small domain, as well as a third domain consisting of the N-terminal residues 3-14; these few residues form a strand, which links and stabilizes the two subunits of the dimer. The large domain, which includes residues 48-325, binds the PLP cofactor via an aldimine linkage to the ε-amino group of Lys258. Other residues in this domain – Asp222 and Tyr225 – also interact with PLP via hydrogen bonding. The small domain consists of residues 15-47 and 326-410 and represents a flexible region that shifts the enzyme from an "open" to a "closed" conformation upon substrate binding.
The two independent active sites are positioned near the interface between the two domains. Within each active site, a pair of arginine residues are responsible for the enzyme's specificity for dicarboxylic acid substrates: Arg386 interacts with the substrate's proximal (α-)carboxylate group, while Arg292 complexes with the distal (side-chain) carboxylate. In terms of secondary structure, AST contains both α and β elements. Each domain has a central sheet of β-strands with α-helices packed on either side.
Mechanism
Aspartate transaminase, as with all transaminases, operates via dual substrate recognition; that is, it is able to recognize and selectively bind two amino acids (Asp and Glu) with different side-chains. In either case, the transaminase reaction consists of two similar half-reactions that constitute what is referred to as a ping-pong mechanism. In the first half-reaction, amino acid 1 (e.g., L-Asp) reacts with the enzyme-PLP complex to generate ketoacid 1 (oxaloacetate) and the modified enzyme-PMP. In the second half-reaction, ketoacid 2 (α-ketoglutarate) reacts with enzyme-PMP to produce amino acid 2 (L-Glu), regenerating the original enzyme-PLP in the process. Formation of a racemic product (D-Glu) is very rare.
The specific steps for the half-reaction of Enzyme-PLP + aspartate ⇌ Enzyme-PMP + oxaloacetate are as follows; the other half-reaction (not shown) proceeds in the reverse manner, with α-ketoglutarate as the substrate.
Internal aldimine formation: First, the ε-amino group of Lys258 forms a Schiff base linkage with the aldehyde carbon to generate an internal aldimine.
Transaldimination: The internal aldimine then becomes an external aldimine when the ε-amino group of Lys258 is displaced by the amino group of aspartate. This transaldimination reaction occurs via a nucleophilic attack by the deprotonated amino group of Asp and proceeds through a tetrahedral intermediate. At this point, the carboxylate groups of Asp are stabilized by the guanidinium groups of the enzyme's Arg386 and Arg292 residues.
Quinonoid formation: The hydrogen attached to the α-carbon of Asp is then abstracted (Lys258 is thought to be the proton acceptor) to form a quinonoid intermediate.
Ketimine formation: The quinonoid is reprotonated, but now at the aldehyde carbon, to form the ketimine intermediate.
Ketimine hydrolysis: Finally, the ketimine is hydrolyzed to form PMP and oxaloacetate.
This mechanism is thought to have multiple partially rate-determining steps. However, it has been shown that the substrate binding step (transaldimination) drives the catalytic reaction forward.
Clinical significance
AST is similar to alanine transaminase (ALT) in that both enzymes are associated with liver parenchymal cells. The difference is that ALT is found predominantly in the liver, with clinically negligible quantities found in the kidneys, heart, and skeletal muscle, while AST is found in the liver, heart (cardiac muscle), skeletal muscle, kidneys, brain, and red blood cells. As a result, ALT is a more specific indicator of liver inflammation than AST, as AST may also be elevated in diseases affecting other organs, such as myocardial infarction, acute pancreatitis, acute hemolytic anemia, severe burns, acute renal disease, musculoskeletal diseases, and trauma. AST was defined as a biochemical marker for the diagnosis of acute myocardial infarction in 1954. However, the use of AST for such a diagnosis is now redundant and has been superseded by the cardiac troponins.
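Because AST and ALT are so often reported together, the ratio mentioned above is a simple derived quantity. The following sketch computes the AST/ALT (De Ritis) ratio from two serum activities; the function name and the example values are illustrative assumptions made for this article, and any interpretive cut-offs must come from the reference range of the laboratory that performed the tests.

```python
def de_ritis_ratio(ast_u_per_l: float, alt_u_per_l: float) -> float:
    """Return the AST/ALT (De Ritis) ratio from serum activities in U/L."""
    if alt_u_per_l <= 0:
        raise ValueError("ALT activity must be positive")
    return ast_u_per_l / alt_u_per_l

# Hypothetical example values, not patient data: AST 80 U/L, ALT 40 U/L.
print(round(de_ritis_ratio(80.0, 40.0), 2))  # 2.0
```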
Laboratory tests should always be interpreted using the reference range from the laboratory that performed the test.
See also
Alanine transaminase (ALT/ALAT/SGPT)
Transaminases
References
Further reading
External links
AST - Lab Tests Online
AST: MedlinePlus Medical Encyclopedia
Liver function tests
EC 2.6.1
Glutamate (neurotransmitter)
Aspartate transaminase
Chemistry
1,856
10,442,568
https://en.wikipedia.org/wiki/Internal%20drive%20propulsion
Internal drive propulsion or water-jet propulsion is a form of marine propulsion used in recreational boating. Like other forms of motorized boating, internal drive propulsion employs a motor that turns a propeller to move the boat forward. The primary difference between internal drive boats and external drive boats is that the propeller is enclosed inside the hull of an internal drive boat, whereas the propeller is exposed outside the hull of a stern drive, V-drive or straight shaft drive boat.
A conventional screw propeller accelerates a large volume of water by a small amount, similar to the way an airplane propeller accelerates a large volume of air by a small amount. An aircraft's jet engine, by contrast, accelerates a smaller volume of air by a large amount. Both methods yield thrust due to Newton's third law — every force gives rise to an equal and opposite force. In an internal drive boat, pumping a small volume of water and accelerating it by a large amount delivers the thrust. The acceleration of the water is achieved by using multiple impeller stages. Steering is accomplished by nozzles or small vanes that direct the water jet. Efficiency of the drive is related to the difference in speed of the vessel and the accelerated water producing the thrust. Jet drives are inefficient in low-speed vessels, but may have other advantages that make them suitable for a given application.
Internal drive propulsion was originally designed by Sir William Hamilton (who invented the waterjet in 1954) for operation in the fast-flowing and shallow rivers of New Zealand, specifically to overcome the problem of propellers striking rocks in such waters.
Primary benefits:
Water skiers, wakeboarders, swimmers, divers, etc. are not exposed to external propellers.
Less potential for damage to internal drive boats from floating debris.
Less potential for major drive damage from running aground than with exposed propellers.
Better maneuverability and acceleration compared to stern-drive counterparts.
Disadvantages are usually low efficiency and high cost.
External links
Yamaha Boats
Marine propulsion
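The momentum argument above (a small volume of water accelerated by a large amount) and the note on low-speed inefficiency can be illustrated with a short one-dimensional calculation. This is a minimal sketch of ideal momentum theory with freely chosen example numbers; it ignores intake, pump and hull losses and does not model any particular waterjet product.

```python
def waterjet_thrust(mass_flow_kg_s: float, v_jet: float, v_boat: float) -> float:
    """Ideal thrust from momentum change: F = m_dot * (v_jet - v_boat)."""
    return mass_flow_kg_s * (v_jet - v_boat)

def froude_efficiency(v_jet: float, v_boat: float) -> float:
    """Ideal (Froude) propulsive efficiency: eta = 2 / (1 + v_jet / v_boat)."""
    return 2.0 / (1.0 + v_jet / v_boat)

# Example: 120 kg/s of water accelerated to 25 m/s on a boat moving at 12 m/s.
thrust = waterjet_thrust(120.0, 25.0, 12.0)   # 1560 N
eta = froude_efficiency(25.0, 12.0)           # ~0.65

# At low boat speed the same jet gives poor efficiency, matching the note
# above that jet drives are inefficient on slow vessels.
eta_slow = froude_efficiency(25.0, 3.0)       # ~0.21
print(f"{thrust:.0f} N, eta={eta:.2f}, eta_slow={eta_slow:.2f}")
```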
Internal drive propulsion
Engineering
400
39,266,470
https://en.wikipedia.org/wiki/Burst%20error-correcting%20code
In coding theory, burst error-correcting codes employ methods of correcting burst errors, which are errors that occur in many consecutive bits rather than occurring in bits independently of each other. Many codes have been designed to correct random errors. Sometimes, however, channels may introduce errors which are localized in a short interval. Such errors occur in a burst (called burst errors) because they occur in many consecutive bits. Examples of burst errors can be found extensively in storage media. These errors may be due to physical damage such as a scratch on a disc, or a stroke of lightning in the case of wireless channels. They are not independent; they tend to be spatially concentrated. If one bit has an error, it is likely that the adjacent bits could also be corrupted. The methods used to correct random errors are inefficient at correcting burst errors.
Definitions
A burst of length ℓ
Say a codeword C is transmitted, and it is received as Y = C + E. Then, the error vector E is called a burst of length ℓ if the nonzero components of E are confined to ℓ consecutive components. For example, E = (0 1 1 0 1 0 0 0 0) is a burst of length 4. Although this definition is sufficient to describe what a burst error is, the majority of the tools developed for burst error correction rely on cyclic codes. This motivates our next definition.
A cyclic burst of length ℓ
An error vector E is called a cyclic burst error of length ℓ if its nonzero components are confined to ℓ cyclically consecutive components. For example, the previously considered error vector E = (0 1 1 0 1 0 0 0 0) is a cyclic burst of length 4, since we consider the error starting at position 2 and ending at position 5. Notice the indices are 1-based, that is, the first element is at position 1. For the remainder of this article, we will use the term burst to refer to a cyclic burst, unless noted otherwise.
Burst description
It is often useful to have a compact definition of a burst error, that encompasses not only its length, but also the pattern and location of such an error. We define a burst description to be a tuple (P, L), where P is the pattern of the error (that is, the string of symbols beginning with the first nonzero entry in the error pattern and ending with the last nonzero symbol), and L is the location, on the codeword, where the burst can be found. For example, the burst description of the error pattern E = (0 1 1 0 1 0 0 0 0) is D = (1101, 2). Notice that such a description is not unique, because D′ = (101000001, 3) describes the same burst error, reading the pattern cyclically from position 3. In general, if the number of nonzero components in E is w, then E will have w different burst descriptions, each starting at a different nonzero entry of E. The theorem below remedies the issues that arise from the ambiguity of burst descriptions; before stating it, however, we need a definition first.
Definition. The number of symbols in a given error pattern P is denoted by length(P).
Theorem (Uniqueness of burst descriptions). Suppose E is an error vector of length n with two burst descriptions (P1, L1) and (P2, L2). If length(P1) + length(P2) ⩽ n + 1, then the two descriptions are identical.
A corollary of the above theorem is that we cannot have two distinct burst descriptions for bursts of length at most (n + 1)/2.
Cyclic codes for burst error correction
Cyclic codes are defined as follows: think of the symbols as elements of a finite field GF(q). Now, we can think of words as polynomials over GF(q), where the individual symbols of a word correspond to the different coefficients of the polynomial. To define a cyclic code, we pick a fixed polynomial, called the generator polynomial. The codewords of this cyclic code are all the polynomials that are divisible by this generator polynomial. Codewords are polynomials of degree at most n − 1. Suppose that the generator polynomial g(x) has degree r. Polynomials of degree at most n − 1 that are divisible by g(x) result from multiplying g(x) by polynomials of degree at most n − 1 − r. We have q^(n−r) such polynomials, and each one of them corresponds to a codeword. Therefore, k = n − r for cyclic codes.
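To make the burst definitions above concrete, the following sketch computes the length of the shortest cyclic burst containing a given error vector, using the wrap-around convention just described. It is a small illustration written for this article, not part of any standard library; note that Python lists are 0-based while the article's positions are 1-based.

```python
def cyclic_burst_length(error: list[int]) -> int:
    """Length of the shortest window of cyclically consecutive positions
    that contains every nonzero component of `error`."""
    n = len(error)
    nonzero = [i for i, e in enumerate(error) if e != 0]
    if not nonzero:
        return 0
    best = n
    # Try a window starting at each nonzero position (burst descriptions
    # always start at a nonzero entry) and extending cyclically to the
    # farthest nonzero component reached from that start.
    for start in nonzero:
        span = max((j - start) % n for j in nonzero) + 1
        best = min(best, span)
    return best

# E = (0 1 1 0 1 0 0 0 0) from the example above: a cyclic burst of length 4.
print(cyclic_burst_length([0, 1, 1, 0, 1, 0, 0, 0, 0]))  # 4
```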
Cyclic codes can detect all bursts of length up to r = n − k. We will see later that the burst error detection ability of any code is bounded from above by ℓ ⩽ n − k. Cyclic codes are considered optimal for burst error detection since they meet this upper bound.
This detection property suggests a simple algorithm for burst error detection/correction in cyclic codes: given a received word (i.e. a polynomial of degree at most n − 1), compute the remainder of this word when divided by g(x). If the remainder is zero (i.e. if the word is divisible by g(x)), then it is a valid codeword. Otherwise, report an error. To correct this error, subtract this remainder from the received word. The result of the subtraction is going to be divisible by g(x) (i.e. it is going to be a valid codeword).
By the upper bound on burst error detection (ℓ ⩽ n − k), we know that a cyclic code cannot detect all bursts of length ℓ > n − k. However, cyclic codes can indeed detect most bursts of length ℓ > n − k. The reason is that detection fails only when the burst is divisible by g(x). Over binary alphabets, there exist 2^(ℓ−2) bursts of length ℓ. Out of those, only 2^(ℓ−2−r) are divisible by g(x). Therefore, the detection failure probability is very small (2^(−r)), assuming a uniform distribution over all bursts of length ℓ.
A fundamental property of cyclic codes that will aid in designing efficient burst-error correcting codes is that bursts can be categorized into different cosets; this categorization is used in the Fire code construction below.
Burst error correction bounds
Upper bounds on burst error detection and correction
By upper bound, we mean a limit on our error detection ability that we can never go beyond. Suppose that we want to design an (n, k) code that can detect all burst errors of length at most ℓ. A natural question to ask is: given n and k, what is the maximum ℓ that we can never go beyond? In other words, what is the upper bound on the length of bursts that we can detect using any (n, k) code? The answer is that the burst error detection ability of any (n, k) code satisfies ℓ ⩽ n − k.
Now, we repeat the same question but for error correction: given n and k, what is the upper bound on the length of bursts that we can correct using any (n, k) code? A preliminary answer is that the burst error correction ability of any (n, k) code also satisfies ℓ ⩽ n − k. A stronger result is given by the Rieger bound: if an (n, k) linear block code corrects all bursts of length at most ℓ, then n − k ⩾ 2ℓ.
Definition. A linear burst-error-correcting code achieving the above Rieger bound is called an optimal burst-error-correcting code.
Further bounds on burst error correction
There is more than one upper bound on the achievable code rate of linear block codes for multiple phased-burst correction (MPBC). One such bound is constrained to a maximum correctable cyclic burst length within every subblock, or equivalently a constraint on the minimum error-free length or gap within every phased-burst. This bound, when reduced to the special case of a bound for single burst correction, is the Abramson bound (a corollary of the Hamming bound for burst-error correction) when the cyclic burst length is less than half the block length.
Remark. r = n − k is called the redundancy of the code; in an alternative formulation, the Abramson bound states that a binary ℓ-burst-error-correcting code must satisfy n ⩽ 2^(r−ℓ+1) − 1.
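The remainder-based detection algorithm described earlier for cyclic codes can be sketched in a few lines. Binary polynomials are encoded as integers (bit i is the coefficient of x^i); the generator below is an arbitrary toy choice of degree r = 4, so the corresponding code detects all bursts of length up to 4. This is an illustration written for this article, not a production decoder.

```python
def gf2_mod(word: int, gen: int) -> int:
    """Remainder of `word` divided by `gen`, both GF(2) polynomials
    encoded as integers (bit i = coefficient of x^i)."""
    glen = gen.bit_length()
    while word.bit_length() >= glen:
        word ^= gen << (word.bit_length() - glen)
    return word

def burst_detected(received: int, gen: int) -> bool:
    """True if the received word is NOT a codeword (nonzero syndrome)."""
    return gf2_mod(received, gen) != 0

g = 0b10011            # assumed toy generator g(x) = x^4 + x + 1, degree r = 4
codeword = g << 2      # x^2 * g(x): a multiple of g(x), hence a valid codeword
burst = 0b1011 << 3    # a burst of length 4 occupying bit positions 3..6

print(burst_detected(codeword, g))          # False: clean codeword
print(burst_detected(codeword ^ burst, g))  # True: burst caught by the syndrome
```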
Fire codes
While cyclic codes in general are powerful tools for detecting burst errors, we now consider a family of binary cyclic codes named Fire codes, which possess good single burst error correction capabilities. By single burst, say of length ℓ, we mean that all errors that a received codeword possesses lie within a fixed span of ℓ digits.
Let p(x) be an irreducible polynomial of degree m over GF(2), and let ρ be the period of p(x). The period of p(x), and indeed of any polynomial, is defined to be the least positive integer ρ such that p(x) divides x^ρ − 1. Let ℓ be a positive integer satisfying ℓ ⩽ m such that 2ℓ − 1 is not divisible by ρ, where m and ρ are the degree and period of p(x), respectively. Define the Fire code G by the following generator polynomial:
g(x) = (x^(2ℓ−1) + 1) p(x).
We will show that G is an ℓ-burst-error correcting code. If we can show that all bursts of length ℓ or less occur in different cosets, we can use them as coset leaders that form correctable error patterns. The reason is simple: we know that each coset has a unique syndrome associated with it, and if all bursts of length at most ℓ occur in different cosets, then all of them have unique syndromes, facilitating error correction.
Proof of Theorem
Let x^i a(x) and x^j b(x) be polynomials representing two bursts of length ℓ1 and ℓ2 respectively, with ℓ1, ℓ2 ⩽ ℓ; the burst patterns a(x) and b(x) have degrees ℓ1 − 1 and ℓ2 − 1 and nonzero constant terms. The integers i and j represent the starting positions of the bursts, and are less than the block length of the code. For contradiction's sake, assume that x^i a(x) and x^j b(x) are in the same coset. Then, v(x) = x^i a(x) + x^j b(x) is a valid codeword (since both terms are in the same coset). Without loss of generality, pick i ⩽ j. By the division theorem we can write:
j − i = q(2ℓ − 1) + s, for integers q and s with 0 ⩽ s < 2ℓ − 1.
We rewrite the polynomial v(x) as follows:
v(x) = x^i a(x) + x^(i + q(2ℓ−1) + s) b(x) = x^i (a(x) + x^s b(x)) + x^(i+s) b(x) (x^(q(2ℓ−1)) + 1).
Notice that at the second manipulation, we introduced the term x^(i+s) b(x) twice. We are allowed to do so, since Fire codes operate on GF(2). By our assumption, v(x) is a valid codeword, and thus must be a multiple of g(x) = (x^(2ℓ−1) + 1) p(x). As mentioned earlier, since the factors of g(x) are relatively prime, v(x) has to be divisible by x^(2ℓ−1) + 1. Looking closely at the last expression derived for v(x), we notice that x^(q(2ℓ−1)) + 1 is divisible by x^(2ℓ−1) + 1. Therefore, x^i (a(x) + x^s b(x)) is either divisible by x^(2ℓ−1) + 1 or is 0; since x is relatively prime to x^(2ℓ−1) + 1, the same holds for a(x) + x^s b(x). Applying the division theorem again, we see that there exists a polynomial d(x), say of degree δ, such that:
a(x) + x^s b(x) = d(x) (x^(2ℓ−1) + 1).
Suppose d(x) is nonzero. Equating the degree of both sides gives us s + ℓ2 − 1 = δ + 2ℓ − 1 (the leading term must come from x^s b(x), because deg a(x) = ℓ1 − 1 < 2ℓ − 1). Since ℓ2 ⩽ ℓ, we can conclude s ⩾ δ + ℓ, which implies δ < s and s ⩾ ℓ. Notice that in the expansion
d(x) (x^(2ℓ−1) + 1) = d(x) + d(x) x^(2ℓ−1),
every term has degree at most δ or at least 2ℓ − 1. The term x^s appears on the left-hand side (because b(x) has a nonzero constant term), but since δ < s < 2ℓ − 1, the expansion cannot contain x^s; therefore d(x) = 0, and subsequently a(x) + x^s b(x) = 0. This requires that s = 0 and a(x) = b(x), because both patterns have nonzero constant terms. We can further revise our division of j − i by 2ℓ − 1 to reflect that s is 0, that is, j − i = q(2ℓ − 1). Substituting back into v(x) gives us
v(x) = x^i b(x) (x^(q(2ℓ−1)) + 1).
Since deg b(x) = ℓ2 − 1 < m = deg p(x), we have that p(x) does not divide b(x). But p(x) is irreducible, therefore b(x) and p(x) must be relatively prime. Since v(x) is a codeword, x^(q(2ℓ−1)) + 1 must be divisible by p(x), as the remaining factors cannot be. Therefore, q(2ℓ − 1) must be a multiple of ρ. But it must also be a multiple of 2ℓ − 1, which implies it must be a multiple of n = lcm(2ℓ − 1, ρ); but that is precisely the block length of the code. Therefore, j − i cannot be a nonzero multiple of n, since both i and j are less than n; and j − i = 0 would mean the two bursts coincide. Thus, our assumption of v(x) being a codeword is incorrect, and therefore x^i a(x) and x^j b(x) are in different cosets, with unique syndromes, and therefore correctable.
Example: 5-burst error correcting Fire code
With the theory presented in the above section, consider the construction of a 5-burst error correcting Fire code. Remember that to construct a Fire code, we need an irreducible polynomial p(x), an integer ℓ representing the burst error correction capability of our code, and we need to satisfy the property that 2ℓ − 1 is not divisible by the period of p(x). With these requirements in mind, consider the irreducible polynomial p(x) = 1 + x^2 + x^5, and let ℓ = 5. Since p(x) is a primitive polynomial, its period is ρ = 2^5 − 1 = 31. We confirm that 2ℓ − 1 = 9 is not divisible by 31. Thus, g(x) = (x^9 + 1)(1 + x^2 + x^5) is a Fire code generator. We can calculate the block length of the code by evaluating the least common multiple of 9 and 31. In other words, n = lcm(9, 31) = 279. Thus, the Fire code above is a cyclic code capable of correcting any burst of length 5 or less.
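The numbers in this example can be verified mechanically. The sketch below multiplies GF(2) polynomials (same integer encoding as in the earlier sketch) to form the Fire code generator g(x) = (x^9 + 1)(1 + x^2 + x^5) and recomputes the block length n = lcm(9, 31); it is written for this article as an illustration.

```python
from math import lcm

def gf2_mul(a: int, b: int) -> int:
    """Carry-less product of two GF(2) polynomials encoded as integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

p = 0b100101                         # p(x) = x^5 + x^2 + 1, period 31
ell = 5                              # target burst-correction capability
x_term = (1 << (2 * ell - 1)) | 1    # x^(2l-1) + 1 = x^9 + 1

g = gf2_mul(x_term, p)               # Fire code generator g(x)
n = lcm(2 * ell - 1, 31)             # block length = lcm(9, 31)

print(bin(g))  # 0b100101000100101 -> g(x) = x^14 + x^11 + x^9 + x^5 + x^2 + 1
print(n)       # 279
```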
Binary Reed–Solomon codes
Certain families of codes, such as Reed–Solomon, operate on alphabet sizes larger than binary. This property awards such codes powerful burst error correction capabilities. Consider a code operating on GF(2^m). Each symbol of the alphabet can be represented by m bits. If C is an (n, k) Reed–Solomon code over GF(2^m), we can think of C as an (mn, mk) code over GF(2). The reason such codes are powerful for burst error correction is that each symbol is represented by m bits, and in general, it is irrelevant how many of those bits are erroneous; whether a single bit, or all of the bits, contain errors, from a decoding perspective it is still a single symbol error. In other words, since burst errors tend to occur in clusters, there is a strong possibility of several binary errors contributing to a single symbol error. Notice that a burst of length m + 1 can affect at most 2 symbols, and a burst of length 2m + 1 can affect at most 3 symbols. Then, a burst of length tm + 1 can affect at most t + 1 symbols; this implies that a t-symbol-error correcting code can correct a burst of length at most (t − 1)m + 1. In general, a t-error correcting Reed–Solomon code over GF(2^m) can correct any combination of ⌊t/(1 + ⌊(ℓ + m − 2)/m⌋)⌋ or fewer bursts of length ℓ, on top of being able to correct t random worst-case errors.
An example of a binary RS code
Let G1 be a (255, 223) RS code over GF(2^8). This code was employed by NASA in their Cassini–Huygens spacecraft. It is capable of correcting t = 16 symbol errors. We now construct a binary RS code G2 from G1. Each symbol will be written using m = 8 bits. Therefore, the binary RS code will have (2040, 1784) as its parameters. It is capable of correcting any single burst of length (t − 1)m + 1 = 121.
Interleaved codes
Interleaving is used to convert convolutional codes from random error correctors to burst error correctors. The basic idea behind the use of interleaved codes is to jumble symbols at the transmitter. This leads to randomization of bursts of received errors which are closely located, and we can then apply the analysis for a random channel. Thus, the main function performed by the interleaver at the transmitter is to alter the input symbol sequence. At the receiver, the deinterleaver will alter the received sequence to get back the original sequence as it was at the transmitter.
Burst error correcting capacity of interleaver
Block interleaver
A typical arrangement is a 4 by 3 interleaver; this is called a block interleaver. Here, the input symbols are written sequentially in the rows and the output symbols are obtained by reading the columns sequentially. Thus, this is in the form of an M × N array, where M is the number of rows and N the number of columns. Generally, N is the length of the codeword.
Capacity of block interleaver: For an M × N block interleaver and a burst of length ℓ, the upper limit on the number of errors per codeword is ⌈ℓ/M⌉. This is obvious from the fact that we are reading the output column-wise and the number of rows is M. By the theorem above, for error correction capacity up to t, the maximum burst length allowed is Mt. For a burst length of Mt + 1, the decoder may fail.
Efficiency of block interleaver (γ): It is found by taking the ratio of the burst length where the decoder may fail to the interleaver memory. Thus, we can formulate γ as γ = (Mt + 1)/(MN).
Drawbacks of block interleaver: Since the columns are read sequentially, the receiver can interpret a single row only after it receives the complete message, and not before that. Also, the receiver requires a considerable amount of memory in order to store the received symbols, and has to store the complete message. Thus, these factors give rise to two drawbacks: one is the latency and the other is the storage (a fairly large amount of memory). These drawbacks can be avoided by using the convolutional interleaver described below; a short code sketch of the block interleaver just described comes first.
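A minimal sketch of the M × N block interleaver just described: symbols are written row by row and read column by column, so a channel burst of length ℓ leaves at most ⌈ℓ/M⌉ errors in any one codeword after deinterleaving. The function names and the toy parameters are chosen for this illustration only.

```python
def block_interleave(symbols: list, m: int, n: int) -> list:
    """Write an m*n block row-wise, read it column-wise."""
    assert len(symbols) == m * n
    rows = [symbols[i * n:(i + 1) * n] for i in range(m)]
    return [rows[r][c] for c in range(n) for r in range(m)]

def block_deinterleave(symbols: list, m: int, n: int) -> list:
    """Inverse operation: write column-wise, read row-wise."""
    return block_interleave(symbols, n, m)

# 4 codewords of length 3 (M = 4, N = 3), labelled by codeword letter.
data = ["a1", "a2", "a3", "b1", "b2", "b3",
        "c1", "c2", "c3", "d1", "d2", "d3"]
sent = block_interleave(data, 4, 3)

# A channel burst of length 4 hits consecutive transmitted symbols...
for i in range(2, 6):
    sent[i] = "XX"

# ...but after deinterleaving, each codeword sees at most ceil(4/4) = 1 error.
received = block_deinterleave(sent, 4, 3)
print([received[i * 3:(i + 1) * 3] for i in range(4)])
```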
Convolutional interleaver
A cross interleaver is a kind of multiplexer–demultiplexer system. In this system, delay lines of progressively increasing length are used. A delay line is basically an electronic circuit used to delay the signal by a certain time duration. Let D be the number of delay lines and d be the number of symbols of additional delay introduced by each successive delay line, so that line i delays its symbols by i·d symbol periods. Thus, the separation between consecutive symbols of a codeword at the output is D·d symbols. Let the length of the codeword be at most D. Thus, each symbol in the input codeword will be on a distinct delay line. Let a burst error of length ℓ occur. Since the separation between consecutive symbols of a codeword is D·d, the number of errors that the deinterleaved output may contain in any one codeword is ⌈ℓ/(D·d + 1)⌉. By the theorem above, for error correction capacity up to t, the maximum burst length allowed is t(D·d + 1). For a burst length of t(D·d + 1) + 1, the decoder may fail.
Efficiency of cross interleaver (γ): It is found by taking the ratio of the burst length where the decoder may fail to the interleaver memory. In this case, the memory of the interleaver can be calculated as the total delay across all lines, 0 + d + 2d + ... + (D − 1)d = D(D − 1)d/2. Thus, we can formulate γ as follows:
γ = (t(D·d + 1) + 1) / (D(D − 1)d/2).
Performance of cross interleaver: The output is nothing but the diagonal symbols generated at the end of each delay line. In this case, when the input multiplexer switch completes around half of its switching cycle, we can read the first row at the receiver. Thus, we need to store a maximum of around half the message at the receiver in order to read the first row. This drastically brings down the storage requirement by half. Since just half the message is now required to read the first row, the latency is also reduced by half, which is a good improvement over the block interleaver. Thus, the total interleaver memory is split between transmitter and receiver.
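The delay-line arrangement described above can be modelled with simple FIFO queues: line i delays its symbols by i·d symbol periods, and the deinterleaver applies the complementary delays so the original order re-emerges after a fixed total latency. This is a toy symbol-at-a-time model written for this article, with parameter names following the D and d used above; real systems multiplex continuously in hardware.

```python
from collections import deque

class ConvInterleaver:
    """Toy convolutional (cross) interleaver: line i delays by i*d symbols."""
    def __init__(self, num_lines: int, d: int, reverse: bool = False):
        delays = [(num_lines - 1 - i if reverse else i) * d
                  for i in range(num_lines)]
        # Pre-fill each FIFO with its delay's worth of filler symbols.
        self.lines = [deque(["-"] * delay) for delay in delays]
        self.next_line = 0

    def push(self, symbol):
        line = self.lines[self.next_line]
        self.next_line = (self.next_line + 1) % len(self.lines)
        line.append(symbol)
        return line.popleft()

D, d = 3, 1
tx = ConvInterleaver(D, d)                # interleaver at the transmitter
rx = ConvInterleaver(D, d, reverse=True)  # complementary deinterleaver

stream = [f"s{i}" for i in range(12)]
channel = [tx.push(s) for s in stream]
out = [rx.push(s) for s in channel]
print(channel)  # consecutive input symbols are spread apart
print(out)      # after a fixed total delay, the original order re-emerges
```

Running the example shows consecutive symbols s0, s1, s2 emerging D·d + 1 = 4 positions apart on the channel, which is exactly the separation that spreads a channel burst across different codewords.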
Applications
Compact disc
Without error correcting codes, digital audio would not be technically feasible. Reed–Solomon codes can correct a corrupted symbol with a single bit error just as easily as they can correct a symbol with all bits wrong. This makes RS codes particularly suitable for correcting burst errors. By far, the most common application of RS codes is in compact discs. In addition to the basic error correction provided by RS codes, protection against burst errors due to scratches on the disc is provided by a cross interleaver.
The current compact disc digital audio system was developed by N.V. Philips of the Netherlands and Sony Corporation of Japan (agreement signed in 1979). A compact disc comprises a 120 mm aluminized disc coated with a clear plastic coating, with a spiral track, approximately 5 km in length, which is optically scanned by a laser of wavelength ~0.8 μm, at a constant speed of ~1.25 m/s. To achieve this constant speed, the rotation of the disc is varied from ~8 rev/s while scanning at the inner portion of the track to ~3.5 rev/s at the outer portion. Pits and lands are the depressions (0.12 μm deep) and flat segments constituting the binary data along the track (0.6 μm width). The CD process can be abstracted as a sequence of the following sub-processes:
Channel encoding of source signals
Mechanical sub-processes of preparing a master disc, producing user discs and sensing the signals embedded on user discs while playing – the channel
Decoding the signals sensed from user discs
The process is subject to both burst errors and random errors. Burst errors include those due to disc material (defects of aluminum reflecting film, poor reflective index of transparent disc material), disc production (faults during disc forming and disc cutting etc.), disc handling (scratches – generally thin, radial and orthogonal to direction of recording) and variations in the play-back mechanism. Random errors include those due to jitter of the reconstructed signal wave and interference in the signal. CIRC (Cross-Interleaved Reed–Solomon code) is the basis for error detection and correction in the CD process. It corrects error bursts up to 3,500 bits in sequence (2.4 mm in length as seen on the CD surface) and compensates for error bursts up to 12,000 bits (8.5 mm) that may be caused by minor scratches.
Encoding: Sound waves are sampled and converted to digital form by an A/D converter. The sound wave is sampled for amplitude (at 44.1 kHz, i.e. 44,100 sample pairs per second, one each for the left and right channels of the stereo sound). The amplitude at an instant is assigned a binary string of length 16. Thus, each sample produces two binary vectors from GF(2)^16, or 4 bytes of data. Every second of sound recorded results in 44,100 × 32 = 1,411,200 bits (176,400 bytes) of data. The 1.41 Mbit/s sampled data stream passes through the error correction system, eventually getting converted to a stream of 1.88 Mbit/s.
Input for the encoder consists of input frames, each of 24 8-bit symbols (12 16-bit samples from the A/D converter, 6 each from the left and right data (sound) sources). A frame can be represented by L1 R1 L2 R2 L3 R3 L4 R4 L5 R5 L6 R6, where Li and Ri are the two-byte samples from the left and right channels belonging to the i-th sampling instant of the frame. Initially, the bytes are permuted to form new frames represented by L1 L3 L5 R1 R3 R5 L2* L4* L6* R2* R4* R6*, where Li* and Ri* represent the i-th left and right samples taken from the frame after 2 intervening frames. Next, these 24 message symbols are encoded using a C2 (28, 24, 5) Reed–Solomon code, which is a shortened RS code over GF(2^8). This is two-error-correcting, being of minimum distance 5. This adds 4 bytes of redundancy, forming a new 28-symbol frame. The resulting 28-symbol codeword is passed through a (28, 4) cross interleaver, leading to 28 interleaved symbols. These are then passed through a C1 (32, 28, 5) RS code, resulting in codewords of 32 coded output symbols. Further regrouping of odd-numbered symbols of a codeword with even-numbered symbols of the next codeword is done to break up any short bursts that may still be present after the above 4-frame delay interleaving. Thus, for every 24 input symbols there will be 32 output symbols, giving a rate R = 24/32 = 3/4. Finally, one byte of control and display information is added. Each of the 33 bytes is then converted to 17 bits through EFM (eight-to-fourteen modulation) and the addition of 3 merge bits. Therefore, the frame of six samples results in 33 bytes × 17 bits (561 bits), to which are added 24 synchronization bits and 3 merging bits, yielding a total of 588 bits.
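The frame bookkeeping above reduces to simple arithmetic, which the following sketch recomputes from the numbers given in the text (bytes per frame, code rate, EFM expansion, stream rates). It is plain arithmetic for checking the figures, not an implementation of CIRC itself.

```python
SAMPLES_PER_FRAME = 6                     # stereo sample pairs per frame
AUDIO_BYTES = SAMPLES_PER_FRAME * 2 * 2   # 24 bytes into the C2 encoder

c2_out = AUDIO_BYTES + 4    # C2 (28, 24): +4 parity bytes
c1_out = c2_out + 4         # C1 (32, 28): +4 more parity bytes
rate = AUDIO_BYTES / c1_out # 24/32 = 3/4

frame_bytes = c1_out + 1                  # +1 control/display byte = 33 bytes
frame_bits = frame_bytes * 17 + 24 + 3    # EFM (14+3 bits/byte) + sync + merge
print(rate, frame_bits)                   # 0.75 588

# Stream rates: 44,100 sample pairs per second in frames of 6 pairs.
frames_per_s = 44_100 // SAMPLES_PER_FRAME  # 7,350 frames per second
audio_rate = 44_100 * 32                    # 1,411,200 bit/s of audio
coded_rate = audio_rate / rate              # 1,881,600 bit/s after C1/C2
channel_rate = frames_per_s * frame_bits    # 4,321,800 bit/s on the disc
print(audio_rate, int(coded_rate), channel_rate)
```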
Decoding: The CD player (CIRC decoder) receives the 32-symbol data stream. This stream passes through the decoder D1 first. It is up to the individual designers of CD systems to decide on decoding methods and to optimize their product performance. Being of minimum distance 5, the D1 and D2 decoders can each correct a combination of e errors and f erasures such that 2e + f < 5. In most decoding solutions, D1 is designed to correct single errors. In case of more than one error, this decoder outputs 28 erasures. The deinterleaver at the succeeding stage distributes these erasures across 28 D2 codewords. Again, in most solutions, D2 is set to deal with erasures only (a simpler and less expensive solution). If more than 4 erasures were to be encountered, 24 erasures are output by D2. Thereafter, an error concealment system attempts to interpolate (from neighboring symbols) in case of uncorrectable symbols, failing which sounds corresponding to such erroneous symbols get muted.
Performance of CIRC: CIRC conceals long burst errors by simple linear interpolation. 2.5 mm of track length (4,000 bits) is the maximum completely correctable burst length. 7.7 mm of track length (12,300 bits) is the maximum burst length that can be interpolated. The sample interpolation rate is about one every 10 hours at a bit error rate (BER) of 10^−4, and 1,000 samples per minute at a BER of 10^−3. Undetectable error samples (clicks): less than one every 750 hours at a BER of 10^−3, and negligible at a BER of 10^−4.
See also
Error detection and correction
Error-correcting codes with feedback
Code rate
Reed–Solomon error correction
References
Coding theory
Error detection and correction
Computer errors
Burst error-correcting code
Mathematics,Technology,Engineering
4,619
971,278
https://en.wikipedia.org/wiki/Aspic
Aspic () or meat jelly is a savory gelatin made with a meat stock or broth, set in a mold to encase other ingredients. These often include pieces of meat, seafood, vegetable, or eggs. Aspic is also sometimes referred to as aspic gelée or aspic jelly. In its simplest form, aspic is essentially a gelatinous version of conventional soup. History According to one poetic reference by Ibrahim ibn al-Mahdi, who described a version of a dish prepared with Iraqi carp, it was "like ruby on the platter, set in a pearl ... steeped in saffron thus, like garnet it looks, vibrantly red, shimmering on silver". Historically, meat aspics were made even before fruit- and vegetable-flavoured aspics. By the Middle Ages, cooks had discovered that a thickened meat broth could be made into a jelly. A detailed recipe for aspic is found in Le Viandier, written in or around 1375. In the early 19th century, the French chef Marie-Antoine Carême created chaudfroid. The term chaudfroid means "hot cold" in French, referring to foods that were prepared hot and served cold. Aspic was used as a chaudfroid sauce in many cold fish and poultry meals, where it added moisture and flavour to the food. Carême also invented various types of aspic and ways of preparing it. Aspic came into prominence in America in the early 20th century. By the 1950s, meat aspic was a popular dinner staple, as were other gelatin-based dishes such as tomato aspic. Cooks showed off their aesthetic skills by creating inventive aspics. Uses Aspic jelly may be colorless (white aspic) or contain various shades of amber. Aspic can be used to protect food from the air, to give food more flavor, or as a decoration. It can also be used to encase meats, preventing them from becoming spoiled. The gelatin keeps out air and bacteria, keeping the cooked meat or other ingredients fresh for longer. There are three types of aspic: delicate, sliceable, and inedible. The delicate aspic is soft. The sliceable aspic must be made in a terrine or in an aspic mold. It is firmer than the delicate aspic. The inedible aspic is never for consumption and is usually for decoration. Aspic is often used to glaze food pieces in food competitions to make the food glisten and make it more appealing to the eye. Foods dipped in aspic have a lacquered finish for a fancy presentation. Aspic can be cut into various shapes and be used as a garnish for deli meats or pâtés. Preparation The preparation of pork jelly includes placing lean pork meat, trotters, rind, ears, and snout in a pot of cold water and letting it cook over a slow fire for three hours. The broth is allowed to cool, while also removing any undesirable fats. Subsequently, white vinegar and the juice of half an orange or lemon can be added to the meat so that it is covered. The entire mixture is then allowed to cool and gel. Bay leaves or chili can be added to the broth for added taste (the Romanian variety is based on garlic and includes no vinegar, orange, lemon, chili, bay leaves, etc.). However, there are many alternate ways of preparing pork jelly, such as the usage of celery, beef and even pig bones. Poultry jellies are made the same way as making pork jelly, but less water is added to compensate for lower natural gelatin content. Almost any type of food can be set into aspics, and almost any type of meat (poultry or fish included) can be used to make gelatin, although in some cases, additional gelatin may be needed for the aspic to set properly. 
Stock can be clarified with egg whites and then filled and flavored just before the aspic sets. The most common are pieces of meat, seafood, eggs, fruits, or vegetables. Veal stock (in particular, stock from a boiled calf's foot) provides a great deal of gelatin, so other types of meat are often included when making stock. Fish consommés usually have too little natural gelatin, so fish stock may be double-cooked or supplemented. Since fish gelatin melts at a lower temperature than the gelatins of other meats, fish aspic is more delicate and melts more readily in the mouth. Most fish stocks usually do not maintain a molded shape with their natural gelatin alone, so additional gelatin is added. Vegetables have no natural gelatin. However, pectin serves a similar purpose in culinary applications such as jams and jellies. Global variations of aspic Pork jelly Pork jelly is an aspic made from low-grade cuts of pig meat, such as trotters, that contain a significant proportion of connective tissue. Pork jelly is a popular appetizer and, nowadays, is sometimes prepared in a more modern version using lean meat, with or without pig leftovers (which are substituted with store-bought gelatin). It is very popular in Croatia, Serbia, Poland, Czech Republic, Romania, Moldova, Estonia, Latvia, Lithuania, Slovakia (called ), Hungary, Greece, and Ukraine. In Russia, Belarus, Georgia and Ukraine, it is known as , during Christmas or Easter. In Russia, is a traditional winter and especially Christmas and New Year's dish, which is eaten with (horseradish paste) or mustard. It is also eaten in Vietnam () during Lunar New Year. The meat in pork pies is preserved using pork jelly. (), (), () is an aspic-like dish, generally made from lamb, chicken or pork meat, such as the head, shank, or hock, made into a semi-consistent gelatinous cake-like form. In some varieties, chicken is used instead of pork. Some recipes also include smoked meat and are well spiced. is commonly just one component of the traditional meal (or an appetizer), although it can be served as a main dish. It is usually accompanied by cold mastika or rakija (grape brandy) and turšija (pickled tomatoes, peppers, olives, cauliflower, cucumber). The recipe calls for the meat to be cleaned, washed, and then boiled for a short time, no longer than 10 minutes. Then the water is changed, and vegetables and spices are added. This is cooked until the meat begins to separate from the bones, then the bones are removed, the meat stock is filtered, and the meat and stock are poured into shallow bowls. Garlic is added as well as thin slices of tomatoes or green peppers (or something similar for decoration). It is left to sit in a cold spot, such as a fridge or outside if the weather is cold enough. It congeals into jelly and can be cut into cubes (it is often said that good are "cut like glass"). These cubes can be sprinkled with various spices or herbs as desired before serving. is usually cut and served in equal sized cubes. are frequently used in slavas and other celebratory occasions with Serbs. Romanian and Moldovan Romanian and Moldovan is also called (plural ), derived from the Romanian , meaning cold. has a different method of preparation. It is usually made with pig's trotter (but turkey or chicken meat can also be used), carrots and other vegetables, boiled to make a soup with high gelatin content. The broth containing gelatin is poured over the boiled meat and mashed garlic in bowls, the mixture being then cooled to become a jelly. 
is traditionally served for Epiphany. Korea () is a dish prepared by boiling beef and pork cuts with high collagen content such as the head, skin, tail, cow's trotters, or other cuts in water for a long time. The resulting stewing liquid sets to form a jelly-like substance when cooled. Nepal Among the Newars of Kathmandu Valley in Nepal, buffalo meat jelly, known as , is a major component of the winter festivity gourmet. It is eaten in combination with fish aspic (), which is made from dried fish and buffalo meat stock, soured, and containing a heavy mix of spices and condiments. Poland In Central, Eastern, and Northern Europe, aspic often takes the form of pork jelly and is popular around the Christmas and Easter holidays. In Poland, certain meats, fish and vegetables are set in aspic, creating a dish called . Eastern Europe In Belarusian, Russian, and Ukrainian cuisine, a meat aspic dish is called ( ; ; ; also written as holodetz outside these countries) derived from the word meaning "cold". In some areas it is called () or (), derived from a different root with a similar meaning. The dish is part of winter holiday celebrations such as the traditional Russian New Year (novy god) or Christmas meal. However, modern refrigeration allows for its year-round production, and it is not uncommon to see on a Russian table in summer. is usually made by boiling the bones and meat rich in collagen for about 5–8 hours to produce a thick and fatty broth, with the collagen hydrolizing into the natural gelatin, mixed with salt, pepper, and other spices. The meat is then separated from the bones, minced, recombined with the broth, dressed with the slices of boiled egg and herbs like parsley and cooled until it solidifies into a jelly. is usually eaten with or mustard. Croatia The Croatian version of this dish is called ( meaning cold). Variants range from one served in a dish with rather delicate gelatin, to more resembling the German sulze, a kind of head cheese. Slovenia In Slovenia, aspic is known as (derived from the German , meaning head cheese) or in Slovene. It is traditionally served at Easter. Denmark In Denmark, aspic is called and is made from meat juices, gelatin, and sometimes mushrooms. Sky is almost solely eaten as a topping for cold cuts or on Danish open faced sandwiches called . It is a key ingredient in , a dish combining , sliced salt beef and onions. Sky, with or without mushrooms, is an easy-to-find product in most supermarkets. Georgia or () is a traditional Georgian dish of cold jellied pork. Its ingredients include pork meat, tails, ears, feet, carrots, vinegar, garlic, herbs, onions, roots, bay leaves, allspice, and cinnamon. In some recipes, the dish is cooked in two separate processes, slightly pickled with wine vinegar and spiced with tarragon and basil. One part contains pork feet, tails and ears; the other contains the lean meat of piglets. They are combined into one dish, chilled and served with green onions and spicy herbs. Belgium Rog in 't zuur or rog in zure gelei is a Flemish traditional recipe to preserve ray wings which are otherwise notoriously quick to spoil. Ray wings are poached in a fish stock with vinegar, spices and onions, then preserved by adding gelatin to the stock and covering the fish with the gelatin stock. In this manner the fish would keep 2–4 days without refrigeration. The dish is served cold with bread for breakfast or as a snack, or can be served as an appetizer. 
China
In Northern China, () is a traditional dish served in winter, especially during the Chinese New Year. This Chinese dish of aspic is usually made by boiling pork rind in water. The dishes cooled without pork rind are called () while those containing pork rind in the aspic are called (). In Zhenjiang, aspic using pig trotters is called Salted Pork in Jelly (). The dish has two layers of meat. The upper layer, about half an inch thick, is 'pigskin aspic', while the lower layer is half red and half white, made from boiling pig's trotter and pigskin until gelled, forming 'meat aspic'. The traditional method of preparing the dish involves boiling the trotter with saltpeter, resulting in a crimson hue. However, because the use of saltpeter in food has been banned, the modern approach is to use German pork knuckles.
Vietnam
Giò thủ, giò tai, also known by another popular name, giò xào, is one of the traditional Vietnamese sausage dishes, with the main ingredient being stir-fried meat combined with some other ingredients, then wrapped and compressed. Originating in Northern Vietnam and now popular throughout the country, more or less similar forms of preparation also exist in many other cuisines around the world. The preparation process is relatively easy, the ingredients are easy to find, and the finished product is delicious and chewy, making giò thủ a familiar dish for people all over the region. Giò thủ is often made by families during the traditional Lunar New Year, and is sold at sausage shops and in most markets nationwide in Vietnam. A closer Vietnamese counterpart of aspic is Thịt đông, or Vietnamese pork aspic.
Health benefits
Aspic is a source of various nutrients like iron, vitamin A, vitamin K, fatty acids, selenium, zinc, magnesium and phosphorus. An amino acid called glutamine in aspic may enhance the integrity of the intestinal barrier, which may be beneficial for inflammatory bowel disease and other digestive problems. Glycine from aspic can improve sleep and reduce fatigue during the day.
See also
Chaudfroid sauce
Cretons
Garde manger
Galantine
Head cheese
Jell-O
Kalvsylta
Khash
Meat-jelly Festival
Pâté
P'tcha
Pig's trotters
Terrine
Larks' Tongues in Aspic (King Crimson album)
References
Notes
Bibliography
Allen, Gary; Ken Albala. The Business of Food: Encyclopedia of the Food and Drink Industries. Westport, Connecticut: Greenwood Publishing Group, October 2007.
Gisslen, Wayne. Professional Cooking, 6th edition. Hoboken, New Jersey: John Wiley and Sons, March 2006.
Nenes, Michael. American Regional Cuisine, 2nd edition. Hoboken, New Jersey: Art Institute, March 2006.
Ruhlman, Michael; Anthony Bourdain. The Elements of Cooking: Translating the Chef's Craft for Every Kitchen. New York, New York: Simon and Schuster, November 2007.
Smith, Andrew. The Oxford Companion to American Food and Drink. New York, New York: Oxford University Press, March 2007.
External links
Latvian pork aspic
Russian Meat Aspic
Cuisine of the Southern United States
Russian cuisine
Ukrainian cuisine
Polish cuisine
Lithuanian cuisine
Romanian cuisine
Brazilian cuisine
Colombian cuisine
Food ingredients
Meat
Garde manger
Culinary terminology
Gelatin dishes
Romani cuisine
Aspic
Technology
3,122
35,035,097
https://en.wikipedia.org/wiki/C17H13N5O2
The molecular formula C17H13N5O2 (molar mass: 319.317 g/mol, exact mass: 319.1069 u) may refer to:
Nitrazolam
SB-334867
Molecular formulas
C17H13N5O2
Physics,Chemistry
66
6,592,812
https://en.wikipedia.org/wiki/Thermal%20contact%20conductance
In physics, thermal contact conductance is the study of heat conduction between solid or liquid bodies in thermal contact. The thermal contact conductance coefficient, h_c, is a property indicating the thermal conductivity, or ability to conduct heat, between two bodies in contact. The inverse of this property is termed thermal contact resistance. Definition When two solid bodies come in contact, such as A and B in Figure 1, heat flows from the hotter body to the colder body. From experience, the temperature profile along the two bodies varies, approximately, as shown in the figure. A temperature drop is observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Thermal contact resistance is defined as the ratio between this temperature drop and the average heat flow across the interface. According to Fourier's law, the heat flow between the bodies is found by the relation: q = −kA(dT/dx) (1), where q is the heat flow, k is the thermal conductivity, A is the cross-sectional area and dT/dx is the temperature gradient in the direction of flow. From considerations of conservation of energy, the heat flow between the two bodies in contact, bodies A and B, is found as: q = (T1 − T3) / [Δx_A/(k_A·A) + 1/(h_c·A) + Δx_B/(k_B·A)] (2), where T1 and T3 are the bulk temperatures of bodies A and B and Δx_A and Δx_B are their thicknesses. One may observe that the heat flow is directly related to the thermal conductivities of the bodies in contact, k_A and k_B, the contact area A, and the thermal contact resistance, 1/h_c, which, as previously noted, is the inverse of the thermal conductance coefficient, h_c. Importance Most experimentally determined values of the thermal contact resistance fall between 0.000005 and 0.0005 m2 K/W (the corresponding range of thermal contact conductance is 200,000 to 2,000 W/m2 K). To know whether the thermal contact resistance is significant or not, magnitudes of the thermal resistances of the layers are compared with typical values of thermal contact resistance. Thermal contact resistance is significant and may dominate for good heat conductors such as metals but can be neglected for poor heat conductors such as insulators. Thermal contact conductance is an important factor in a variety of applications, largely because many physical systems contain a mechanical combination of two materials. Some of the fields where contact conductance is of importance are: Electronics Electronic packaging Heat sinks Brackets Industry Nuclear reactor cooling Gas turbine cooling Internal combustion engines Heat exchangers Thermal insulation Press hardening of automotive steels Flight Hypersonic flight vehicles Thermal supervision for space vehicles Residential/building science Performance of building envelopes Factors influencing contact conductance Thermal contact conductance is a complicated phenomenon, influenced by many factors. Experience shows that the most important ones are as follows: Contact pressure For thermal transport between two contacting bodies, such as particles in a granular medium, the contact pressure, and the area of true contact that arises from this, is the factor of most influence on overall contact conductance. Governed by an interface's normal contact stiffness, as contact pressure grows, true contact area increases and contact conductance grows (contact resistance becomes smaller). Since the contact pressure is the most important factor, most studies, correlations and mathematical models for measurement of contact conductance are done as a function of this factor.
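A rough numerical illustration of relations (1) and (2) above is sketched below in Python. This is not part of the original article: the function name, material values and geometry are invented for illustration. The point is simply that, for good conductors such as metals, the contact term 1/(h_c·A) can dominate the total resistance, as noted above.

```python
# Hypothetical illustration of relation (2): steady-state heat flow across
# two slabs A and B in contact. All values below are invented examples.

def heat_flow(T1, T3, dxA, kA, dxB, kB, h_c, area):
    """Heat flow q [W] through slabs A and B joined by an imperfect contact.

    T1, T3 : bulk temperatures of A and B [K]
    dxA/dxB: slab thicknesses [m]
    kA/kB  : thermal conductivities [W/(m K)]
    h_c    : thermal contact conductance coefficient [W/(m^2 K)]
    area   : apparent contact area [m^2]
    """
    # Series thermal resistances: slab A + contact interface + slab B
    R_total = dxA / (kA * area) + 1.0 / (h_c * area) + dxB / (kB * area)
    return (T1 - T3) / R_total

# Two 1 cm aluminium slabs (k ~ 205 W/(m K)), 10 cm^2 contact, h_c = 10,000 W/(m^2 K)
q = heat_flow(T1=350.0, T3=300.0, dxA=0.01, kA=205.0,
              dxB=0.01, kB=205.0, h_c=1.0e4, area=1.0e-3)
print(f"q = {q:.0f} W")  # ~253 W; the contact contributes ~0.1 K/W of ~0.198 K/W total
```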
The thermal contact resistance of certain sandwich-type materials that are manufactured by rolling under high temperatures may sometimes be ignored, because the decrease in thermal conductivity between them is negligible. Interstitial materials No truly smooth surfaces really exist, and surface imperfections are visible under a microscope. As a result, when two bodies are pressed together, contact is only made at a finite number of points, separated by relatively large gaps, as shown in Fig. 2. Since the actual contact area is reduced, another resistance to heat flow exists. The gases/fluids filling these gaps may largely influence the total heat flow across the interface. The thermal conductivity of the interstitial material and its pressure, examined through reference to the Knudsen number, are the two properties governing its influence on contact conductance, and on thermal transport in heterogeneous materials in general. In the absence of interstitial materials, as in a vacuum, the contact resistance will be much larger, since flow through the intimate contact points is dominant. Surface roughness, waviness and flatness One can characterise a surface that has undergone certain finishing operations by three main properties: roughness, waviness, and fractal dimension. Among these, roughness and fractality are of most importance, with roughness often indicated in terms of an rms value, and surface fractality denoted generally by Df. The effect of surface structures on thermal conductivity at interfaces is analogous to the concept of electrical contact resistance, also known as ECR, involving contact-patch-restricted transport of phonons rather than electrons. Surface deformations When the two bodies come in contact, surface deformation may occur on both bodies. This deformation may be either plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase. Surface cleanliness The presence of dust particles, acids, etc., can also influence the contact conductance. Measurement of thermal contact conductance Going back to relation (2), calculation of the thermal contact conductance may prove difficult, even impossible, due to the difficulty in measuring the contact area (a product of surface characteristics, as explained earlier). Because of this, contact conductance/resistance is usually found experimentally, by using a standard apparatus. The results of such experiments are usually published in engineering literature, in journals such as Journal of Heat Transfer, International Journal of Heat and Mass Transfer, etc. Unfortunately, a centralized database of contact conductance coefficients does not exist, a situation which sometimes causes companies to use outdated, irrelevant data, or to disregard contact conductance entirely. CoCoE (Contact Conductance Estimator), a project founded to solve this problem and create a centralized database of contact conductance data and a computer program that uses it, was started in 2006. Thermal boundary conductance While a finite thermal contact conductance is due to voids at the interface, surface waviness, surface roughness, etc., a finite conductance exists even at near-ideal interfaces. This conductance, known as thermal boundary conductance, is due to the differences in electronic and vibrational properties between the contacting materials.
This conductance is generally much higher than thermal contact conductance, but becomes important in nanoscale material systems. See also Heat transfer References External links Project CoCoE - Free software to estimate TCC Heat conduction Thermodynamics Physical quantities Heat transfer
Thermal contact conductance
Physics,Chemistry,Mathematics
1,347
1,618,271
https://en.wikipedia.org/wiki/Title%20page
The title page of a book, thesis or other written work is the page at or near the front which displays its title, subtitle, author, publisher, and edition, often artistically decorated. (A half title, by contrast, displays only the title of a work.) The title page is one of the most important parts of the "front matter" or "preliminaries" of a book, as the data on it and its verso (together known as the "title leaf") are used to establish the "title proper and usually, though not necessarily, the statement of responsibility and the data relating to publication". This determines the way the book is cited in library catalogs and academic references. The title page often shows the title of the work, the person or body responsible for its intellectual content, and the imprint, which contains the name and address of the book's publisher and its date of publication. Particularly in paperback editions it may contain a shorter title than the cover or lack a descriptive subtitle. Further information about the publication of the book, including its copyright information, is frequently printed on the verso of the title page. Also often included there are the ISBN and a "printer's key", also known as the "number line", which indicates the print run to which the volume belongs. The first printed books, or incunabula, did not have title pages: the text simply began on the first page, and the book was often identified by the initial words—the incipit—of the text proper. Other older books may have bibliographic information on the colophon at the end of the book. The Bulla Cruciatae contra Turcos (1463) is the earliest use of a title on the first page. Margaret M. Smith's The Title-Page, Its Early Development, 1460-1510 traces the genesis and development of the title page. Contamination of historic books In the 19th century, Paris green and similar arsenic pigments were often used on front and back covers, top, fore and bottom edges, title pages, book decorations, and in printed or manual colorations of illustrations of books. Since February 2024, several German libraries have blocked public access to their stock of 19th-century books to check for the degree of poisoning. See also Colophon Book design Half title Printer's key References Publications Bertram, Gitta, Nils Büttner, and Claus Zittel, eds. 2021. Gateways to the Book: Frontispieces and Title Pages in Early Modern Europe. Leiden: Brill. Fowler, Alastair. 2017. The Mind of the Book: Pictorial Title Pages. First edition. Oxford, United Kingdom: Oxford University Press. Gilmont, J.-F, Vanautgaerden, A., Deraedt, F. (2008). La page de titre à la Renaissance : treize études suivies de cinquante-quatre pages de titre commentées et d'un lexique des termes relatifs à la page de titre. Brepols. Morison, Stanley, Brooke Crutchley, and Kenneth Day. 1963. The Typographic Book, 1450-1935: A Study of Fine Typography through Five Centuries, Exhibited in Upwards of Three Hundred and Fifty Title and Text Pages Drawn from Presses Working in the European Tradition. Chicago: University of Chicago Press. Smith, Margaret M. (2000). The title-page : its early development, 1460-1510. Oak Knoll. External links Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains material on title pages Glasgow University Library, Special Collections Department, Book of the Month Book design Typography
Title page
Engineering
795
19,234,105
https://en.wikipedia.org/wiki/Lawrence%E2%80%93Krammer%20representation
In mathematics the Lawrence–Krammer representation is a representation of the braid groups. It fits into a family of representations called the Lawrence representations. The first Lawrence representation is the Burau representation and the second is the Lawrence–Krammer representation. The Lawrence–Krammer representation is named after Ruth Lawrence and Daan Krammer. Definition Consider the braid group B_n to be the mapping class group of a disc with n marked points, P_n. The Lawrence–Krammer representation is defined as the action of B_n on the homology of a certain covering space of the configuration space C of two unordered points in P_n. Specifically, the first integral homology group of C is isomorphic to Z^{n+1}, and the subgroup of H_1(C) invariant under the action of B_n is primitive, free abelian, and of rank 2. Generators for this invariant subgroup are denoted by q and t. The covering space of C corresponding to the kernel of the projection map π_1(C) → Z^2 = ⟨q, t⟩ is called the Lawrence–Krammer cover and is denoted C̄. Diffeomorphisms of P_n act on P_n, thus also on C; moreover, they lift uniquely to diffeomorphisms of C̄ which restrict to the identity on the co-dimension two boundary stratum (where both points are on the boundary circle). The action of B_n on H_2(C̄), thought of as a Z[q^{±1}, t^{±1}]-module, is the Lawrence–Krammer representation. The group H_2(C̄) is known to be a free Z[q^{±1}, t^{±1}]-module, of rank n(n−1)/2. Matrices Using Bigelow's conventions for the Lawrence–Krammer representation, generators for the group H_2(C̄) are denoted v_{j,k} for 1 ≤ j < k ≤ n. Letting σ_i denote the standard Artin generators of the braid group, we obtain the expression (in one common normalization): σ_i · v_{j,k} equals v_{j,k} if i ∉ {j−1, j, k−1, k}; q v_{i,k} + (q² − q) v_{i,j} + (1 − q) v_{j,k} if i = j − 1; v_{j+1,k} if i = j ≠ k − 1; q v_{j,i} + (1 − q) v_{j,k} − (q² − q) t v_{i,k} if i = k − 1 ≠ j; v_{j,k+1} if i = k; and −t q² v_{j,k} if i = j = k − 1. Faithfulness Stephen Bigelow and Daan Krammer have given independent proofs that the Lawrence–Krammer representation is faithful. Geometry The Lawrence–Krammer representation preserves a non-degenerate sesquilinear form which is known to be negative-definite Hermitian provided q and t are specialized to suitable unit complex numbers (q near 1 and t near i). Thus the braid group is a subgroup of the unitary group of square matrices of size n(n−1)/2. Recently it has been shown that the image of the Lawrence–Krammer representation is a dense subgroup of the unitary group in this case. The sesquilinear form has an explicit description in terms of the generators v_{j,k}. References Further reading Braid groups Representation theory
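As a small, concrete illustration of the Lawrence family discussed above, the following Python/SymPy sketch builds the unreduced Burau matrices, the first Lawrence representation named in the article. It is not from the article itself: the function name is invented, and Burau conventions (signs, transposes) vary between authors, so this block shows one common normalization rather than the definitive one.

```python
# Sketch: unreduced Burau matrices for the Artin generators sigma_i of B_n.
# One common convention places the 2x2 block [[1-t, t], [1, 0]] at rows/cols i, i+1.

import sympy as sp

t = sp.symbols('t')

def unreduced_burau(i, n):
    """Unreduced Burau matrix of sigma_i in B_n (1-based i, 1 <= i <= n-1)."""
    M = sp.eye(n)
    M[i - 1, i - 1] = 1 - t
    M[i - 1, i] = t
    M[i, i - 1] = 1
    M[i, i] = 0
    return M

# Sanity check: the braid relation s1 s2 s1 = s2 s1 s2 holds in B_3.
s1, s2 = unreduced_burau(1, 3), unreduced_burau(2, 3)
assert sp.simplify(s1 * s2 * s1 - s2 * s1 * s2) == sp.zeros(3, 3)
```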
Lawrence–Krammer representation
Mathematics
457
60,938,901
https://en.wikipedia.org/wiki/NGC%203705
NGC 3705 is a barred spiral galaxy in the constellation Leo. It was discovered by William Herschel on January 18, 1784. It is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster. One supernova has been observed in NGC 3705: SN 2022xxf (type Ic, mag. 15.5). See also List of NGC objects (3001–4000) References External links Barred spiral galaxies Leo (constellation) 3705 Astronomical objects discovered in 1784 Discoveries by William Herschel 035440
NGC 3705
Astronomy
126
7,174,276
https://en.wikipedia.org/wiki/Half%20sphere%20exposure
Half-sphere exposure (HSE) is a protein solvent exposure measure that was first introduced by Hamelryck (2005). Like all solvent exposure measures it measures how buried amino acid residues are in a protein. It is found by counting the number of amino acid neighbors within two half spheres of chosen radius around the amino acid. The calculation of HSE is done by dividing a contact number (CN) sphere in two halves by the plane perpendicular to the Cβ-Cα vector. This simple division of the CN sphere results in two strikingly different measures, HSE-up and HSE-down. HSE-up is defined as the number of Cα atoms in the upper half (containing the pseudo-Cβ atom) and analogously HSE-down is defined as the number of Cα atoms in the opposite sphere. If only Cα atoms are available (as is the case for many simplified representations of protein structure), a related measure, called HSEα, can be used. HSEα uses a pseudo-Cβ instead of the real Cβ atom for its calculation. The position of this pseudo-Cβ atom (pCβ) is derived from the positions of the preceding Cα−1 and the following Cα+1. The Cα-pCβ vector is calculated by adding the Cα−1-Cα0 and Cα+1-Cα0 vectors. HSE is used in predicting discontinuous B-cell epitopes. Song et al. have developed an online webserver termed HSEpred to predict half-sphere exposure from protein primary sequences. The HSEpred server can achieve correlation coefficients of 0.72 and 0.68 between the predicted and observed HSE-up and HSE-down measures, respectively, when evaluated on a well-prepared non-homologous protein structure dataset. Moreover, residue contact number (CN) can also be accurately predicted by the HSEpred webserver using the summation of the predicted HSE-up and HSE-down values, which has further broadened the application of this new solvent exposure measure. Recently, Heffernan et al. have developed the most accurate predictor to date for both HSEα and HSEβ, trained on a large dataset by multiple-step iterative deep neural-network learning. The predicted HSEα shows a higher correlation coefficient with the stability change caused by residue mutations than predicted HSEβ and ASA. These results, together with its easy Cα-atom-based calculation, highlight the potential usefulness of predicted HSEα for protein structure prediction and refinement as well as function prediction. References Amino acids Protein structure Nitrogen cycle
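A minimal numerical sketch of the Cα-only HSEα variant described above is given below. It is an illustration, not reference code: the 13 Å sphere radius, the function name, and the reading of the pseudo-Cβ construction (vectors running from the two backbone neighbours into Cα_i) are assumptions layered on top of the text.

```python
# Toy HSEα computation from an (N, 3) array of Cα coordinates.
import numpy as np

def hse_alpha(ca, i, radius=13.0):
    """Return (HSE-up, HSE-down) for interior residue i (0 < i < len(ca) - 1)."""
    # Pseudo-Cβ direction: add the vectors from Cα_(i-1) and Cα_(i+1) into Cα_i,
    # which points away from the backbone, towards the side-chain side.
    up = (ca[i] - ca[i - 1]) + (ca[i] - ca[i + 1])
    up = up / np.linalg.norm(up)
    hse_up = hse_down = 0
    for j, other in enumerate(ca):
        if j == i:
            continue
        d = other - ca[i]
        if np.linalg.norm(d) <= radius:   # neighbour inside the contact-number sphere
            if np.dot(d, up) > 0:         # upper half sphere, on the pseudo-Cβ side
                hse_up += 1
            else:                         # lower half sphere
                hse_down += 1
    return hse_up, hse_down
```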
Half sphere exposure
Chemistry
537
66,071,147
https://en.wikipedia.org/wiki/Myrtle%20L.%20Richmond
Myrtle Leila Richmond (September 30, 1882 – January 2, 1973) was an American astronomical researcher, a computer who worked at the Mount Wilson Observatory from 1913 to 1947. Early life and education Richmond was born in Vinland, Kansas, the daughter of Frank L. Richmond and Leila Delight Richmond. Her father was a construction superintendent in the railroad industry. She graduated from Smith College in 1907, and earned a master's degree in 1908 at the University of Denver. She was active in Smith College alumnae activities in Los Angeles. Career Richmond taught mathematics at the University of Denver, and worked at Chamberlin Observatory in Colorado in 1909. She was a fellow in mathematics and astronomy at Goodsell Observatory in 1912, where she worked on variable stars and a comet's orbit. Richmond joined the Mount Wilson Observatory computing department in 1913, and retired in 1947, after she "ably assisted in a large number of stellar and solar investigations." She was listed as a member of the observatory's "investigatory staff" in 1917. Her work also helped to establish the location of the planet Pluto and of the moons of Jupiter. She contributed to several observatory publications, including A photometric study of the Pleiades (1931, with Harlow Shapley), Mean distribution of stars according to apparent magnitude and galactic latitude (1925), The mean color-index of stars of different apparent magnitudes. Some relations between magnitude scales (1925), and Mount Wilson catalogue of photographic magnitudes in selected areas 1–139 (1930). She co-authored articles with American astronomer Seth Barnes Nicholson and Danish astronomer Julie Vinter Hansen. Personal life Richmond enjoyed hiking. She died in 1973, aged 90 years, in Pasadena. Her gravesite is in Woodstock, Vermont, her father's hometown. References 1882 births 1973 deaths People from Kansas Human computers Smith College alumni University of Denver alumni American women scientists
Myrtle L. Richmond
Technology
385
4,794,513
https://en.wikipedia.org/wiki/Western%20gorilla
The western gorilla (Gorilla gorilla) is a great ape found in Africa, one of two species of the hominid genus Gorilla. Large and robust with males weighing around , the species is found in a region of midwest Africa, geographically isolated from the eastern gorilla (Gorilla beringei). The hair of the western species is significantly lighter in color. The western gorilla is the second largest living primate after the eastern gorilla. Two subspecies are recognised: the western lowland gorilla (Gorilla gorilla gorilla) is found in most of West Africa; while the Cross River gorilla (Gorilla gorilla diehli) is limited to a smaller range in the north at the border of Cameroon and Nigeria. Both subspecies are listed as Critically Endangered. Taxonomy A formal description of the species was provided by Thomas Savage in 1847, allying the new species to an earlier description of the chimpanzee as Troglodytes gorilla in a group of eastern simians he referred to as "orangs". The author selected the specific epithet from the name given by Hanno to "wild men" he had noted on the west coast of Africa, presumed by Savage to be a species of orang. The population is recognised as two subspecies: Nearly all of the individuals of this taxon belong to the western lowland gorilla subspecies, whose population is approximately 276,000 individuals. Only 250 to 300 individuals of the other western gorilla subspecies, the Cross River gorilla, are thought to remain. Description Western gorillas are generally lighter colored than eastern gorillas. Western gorillas have black, dark grey or dark brown-grey hair with a brownish forehead. Males average although reaching a height up to , with males having an average weight of and females weighing . Captive western gorillas average in males and in females. Another source describes the weight of wild male western lowland gorillas as . The Cross River gorilla differs from the western lowland gorilla in both skull and tooth dimensions. Behavior and ecology Western gorillas live in groups that vary in size from two to twenty individuals. Such groups are composed of at least one male, several females and their offspring. A dominant male silverback heads the group, with younger males usually leaving the group when they reach maturity. Females transfer to another group before breeding, which begins at eight to nine years old; they care for their young infants for the first three to four years of their lives. The interval between births, therefore, is long, which partly explains the slow population growth rates that make the western gorilla so vulnerable to poaching. Due to the long gestation time, long period of parental care, and infant mortality, a female gorilla will only give birth to an offspring that survives to maturity every six to eight years. Western gorillas are long-lived and may survive for as long as 40 years in the wild. A group's home range may be as large as , but is not actively defended. Wild western gorillas are known to use tools. Western gorillas' diets are high in fiber, including leaves, stems, fruit, piths, flowers, bark, invertebrates, and soil. The frequency with which each of these is consumed depends on the particular western gorilla group and the season. Furthermore, different groups of western gorillas eat differing numbers and species of plants and invertebrates, suggesting they have a food culture. Fruit comprises most of the western lowland gorillas' diets when it is abundant, directly influencing their foraging and ranging patterns.
There is a correlation between the amount of time a western gorilla travels and the season in which fruits are available. Western gorillas spend more time traveling and feeding during the seasons when fruits are abundant than when less fruit is available. Fruits of the genera Tetrapleura, Chrysophyllum, Dialium, and Landolphia are favored by the western gorillas. Low-quality herbs, such as leaves and woody vegetation, are only eaten when fruit is scarce. In the dry season from January to March, when fleshy fruits are few and far between, more fibrous vegetation such as the leaves and bark of the low-quality herbs Palisota and Aframomum is consumed. Of the invertebrates consumed by the western gorillas, termites and ants make up the majority. Caterpillars, grubs, and larvae are also consumed, though rarely. Some ethnographic and pharmacological studies have suggested a possible medicinal value in particular foods consumed by the western gorilla. The fruit and seeds of multiple Cola species are consumed. Given the low protein content, the main reason for their consumption may be the stimulating effect of the caffeine in them. Western gorillas inhabiting Gabon have been observed consuming the fruit, stems, and roots of Tabernanthe iboga, which, due to the compound ibogaine in it, acts on the central nervous system, producing hallucinogenic effects. It also has effects comparable to caffeine. There is also evidence of medicinal value in the seed pods of Aframomum melegueta in western gorillas' diets, which seem to have some sort of cardiovascular health benefit for western lowland gorillas, and are a known part of the natural diets for many wild populations. A study published in 2007 announced the discovery of this species fighting back against possible threats from humans. They "found several instances of gorillas throwing sticks and clumps of grass". This is unusual, because western gorillas usually flee and rarely charge when they encounter humans. One mirror test in Gabon showed that western gorilla silverbacks react aggressively when faced with a mirror, while refusing to look directly at their reflection. Conservation status The World Conservation Union lists the western gorilla as critically endangered, the most severe category short of global extinction, on its 2007 Red List of Threatened Species. The Ebola virus might be depleting western gorilla populations to a point where their recovery might become impossible, and the virus reduced populations in protected areas by 33% from 1992 to 2007, which may be equal to a decline of 45% for the 20-year period spanning 1992 to 2011. Poaching, commercial logging and civil wars in the countries that compose the western gorillas' habitat are also threats. Furthermore, reproductive rates are very low, with a maximum intrinsic rate of increase of about 3%, and high levels of decline from hunting and disease-induced mortality have caused population declines of more than 60% over the last 20 to 25 years. Even under optimistic scenarios, population recovery would require almost 75 years. Yet within the next thirty years, habitat loss and degradation from agriculture, timber extraction, mining and climate change will become increasingly large threats. Thus, a population reduction of more than 80% over three generations (i.e., 66 years from 1980 to 2046) seems likely. In the 1980s, a census of the gorilla populations in equatorial Africa put the total at an estimated 100,000.
Researchers adjusted the figure in 2008 after years of poaching and deforestation had reduced the population to approximately 50,000. Surveys conducted by the Wildlife Conservation Society in 2006 and 2007 found that around 125,000 previously unreported gorillas were living in the swamp forests of Lake Tele Community Reserve and in neighbouring Marantaceae (dryland) forests in the Republic of the Congo. This discovery could more than double the known population of the animals, though the effect that the discovery will have on the gorillas' conservation status is currently unknown. With the new discovery, the current population of western lowland gorillas could be around 150,000–200,000. However, the gorilla remains vulnerable to Ebola, deforestation, and poaching. Estimates of the number of Cross River gorillas remaining in the wild are 250–300, concentrated in approximately 9–11 locations. Recent genetic research and field surveys suggest that there is occasional migration of individual gorillas between locations. The nearest population of western lowland gorilla is some away. Both loss of habitat and intense hunting for bushmeat have contributed to the decline of this subspecies. In 2007, a conservation plan for the Cross River gorilla was published, outlining the most important actions necessary to preserve this subspecies. The government of Cameroon has created the Takamanda National Park on the border with Nigeria in an attempt to protect these gorillas. The park now forms part of an important trans-boundary protected area with Nigeria's Cross River National Park, safeguarding an estimated 115 gorillas (a third of the Cross River gorilla population) along with other rare species. The hope is that these gorillas will be able to move between the Takamanda reserve in Cameroon over the border to Nigeria's Cross River National Park. Individuals The names of individuals of the species include: Jambo Koko Harambe Willie B. Snowflake Colo Timmy Pattycake Bokito References External links Western gorilla – silverbackgorillatours.com western gorilla EDGE species Critically endangered fauna of Africa western gorilla Mammals of Gabon Mammals of Cameroon Mammals of the Democratic Republic of the Congo Mammals of the Republic of the Congo Fauna of Nigeria Fauna of Central Africa Taxa named by Thomas S. Savage
Western gorilla
Biology
1,847
3,288,573
https://en.wikipedia.org/wiki/Biotic%20potential
Biotic potential describes the maximum growth a population can achieve under unrestricted conditions. Biotic potential is the highest possible vital index of a species, attained when the species has its highest birthrate and lowest mortality rate. Quantitative Expression The biotic potential is the quantitative expression of the ability of a species to face natural selection in any environment. The main equilibrium of a particular population is described by the equation: Number of individuals = Biotic potential / Resistance of the environment (biotic and abiotic). Chapman also defined a "vital index", a ratio giving the rate of surviving members of a species, where: Vital index = (number of births / number of deaths) × 100. Components According to the ecologist R.N. Chapman (1928), the biotic potential can be divided into a reproductive and a survival potential. The survival potential can in turn be divided into nutritive and protective potentials. Reproductive potential (potential natality) is the upper limit to biotic potential in the absence of mortality. Survival potential is the reciprocal of mortality. Because reproductive potential does not account for the number of gametes surviving, survival potential is a necessary component of biotic potential. In the absence of mortality, biotic potential = reproductive potential. Chapman also identified the two additional components of nutritive and protective potentials as divisions of the survival potential. Nutritive potential is the ability to acquire and use food for growth and energy. Protective potential is the ability of the organism to protect itself against the dynamic forces of the environment in order to ensure successful reproduction and offspring. Full expression of the biotic potential of an organism is restricted by environmental resistance, any condition that inhibits the increase in number of the population. It is generally only reached when environmental conditions are very favorable. A species reaching its biotic potential would exhibit exponential population growth and be said to have a high fertility, that is, many offspring produced per mother. References Reproduction Reproductive ecology
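The two expressions above lend themselves to a toy calculation. The following Python sketch is illustrative only, with invented numbers, and adds the exponential curve that a population growing at its full biotic potential would follow.

```python
import math

def vital_index(births, deaths):
    """Chapman's vital index: (number of births / number of deaths) * 100."""
    return births / deaths * 100

def number_of_individuals(biotic_potential, environmental_resistance):
    """Chapman's equilibrium: number of individuals = biotic potential / resistance."""
    return biotic_potential / environmental_resistance

def unrestricted_growth(n0, r, t):
    """Exponential growth N(t) = N0 * exp(r * t) expected at full biotic potential."""
    return n0 * math.exp(r * t)

print(vital_index(births=150, deaths=50))        # 300.0 -> a growing population
print(unrestricted_growth(n0=100, r=0.5, t=10))  # ~14841 individuals after 10 time units
```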
Biotic potential
Biology
406
18,161,859
https://en.wikipedia.org/wiki/Armed%20Forces%20Institute%20of%20Pathology%20%28Pakistan%29
The Armed Forces Institute of Pathology (Reporting name: AFIP) is the main Pakistani institution for defensive research into countermeasures against biological warfare. It is located in the vicinity of CMH Rawalpindi alongside the Armed Forces Institute of Cardiology in Rawalpindi Cantt, Punjab, Pakistan. Established in 1957, the AFIP, supported by civilian and military pathologists, has been engaged in the task of combating virus outbreaks in Pakistan. History Pre-independence era Station hospital facilities were only gradually developed for the Indian troops, who initially depended entirely on their regimental hospitals for medical treatment. In October 1918, Station Hospitals were sanctioned for the Indian troops. The Indian Hospital Corps (IHC) was initially divided into 10 Division Companies, which corresponded to the 10 existing Military Divisions in India and Burma. They were located at Peshawar, Rawalpindi, Lahore, Quetta, Mhow, Poona, Meerut, Lucknow, Secunderabad, and Rangoon. The whole corps was reorganized on a command basis, and five companies of the IHC were created in 1932. The companies were located as follows: No. 1 Company was at Rawalpindi, No. 2 Company at Lucknow, No. 3 Company at Poona, No. 4 Company at Quetta, and No. 5 Company at Rangoon. World War II was responsible for rapid developments. The idea of having a homogeneous corps by amalgamating the IMS and IMD gradually took shape, and the Indian Army Medical Corps (IAMC) came into being on 3 April 1943. The medical institutions of the IAMC were concentrated in the areas that were subsequently to become Pakistan. The British Raj in India left a significant legacy in the territory which later became Pakistan. The bulk of the troops of the British Indian Army were recruited from the areas that now constitute Pakistan. The threats posed by the Russian empire and a fear of Afghan and Central Asian forces overrunning the Indian territory made the British rulers particularly cautious about the northwestern borders across the Hindu Kush mountains. The army was deployed on a large scale. Rawalpindi served as the pivotal military base, from which they controlled the command, logistics, and services provided to those troops. It was the Headquarters of Northern Command, India. Therefore, the city housed many offices and institutions. Armed Forces Medical Services were one of the most organized and highly developed support services in the British Indian Army. The members of the medical profession served in the IMS (Indian Medical Services) with pride and dignity. The senior jobs in civil medical services were also reserved for the medical professionals of the army. The troops of the British Indian Army were deployed over an extensive area and exposed to tropical climates, serving from the heights of Chitral in the north-west to Burma in the east. The bulk of their health-related problems were tropical infections and parasitic infestations. The medical services were committed to the prevention and treatment of tropical diseases. The diagnosis of these diseases was based upon rudimentary laboratory services. The need for pathology services was, therefore, recognized even in those days, and a well-organized and capable pathology organization was established. That was the time when experienced pathologists, especially those dealing with tropical pathology, served in the IMS. As mentioned earlier, Rawalpindi was the headquarters of the Northern Command of India.
Combined Military Hospital (CMH) and Military Hospital (MH) were the largest and best-equipped hospitals in this area at that time. Alongside these major hospitals and field medical services, a pathology laboratory, the Command Laboratory, Northern Command, was established. This was indeed the best Armed Forces laboratory, which the newly independent Pakistan inherited from British India in 1947. After 1947 Lt Col Sarup Narayan was the first Commanding Officer of this laboratory at the time of independence. He originally belonged to the Rawalpindi area and had happily opted to stay and serve in Pakistan. However, he later found it difficult to continue living in Pakistan and migrated to India for personal reasons. He was a very brilliant, hardworking and well-mannered pathologist. He ultimately rose to the rank of Lt General in the Indian Army Medical Corps and was appointed as the Director of Medical Services (Research). After independence, the Pakistan Army Medical Corps (AMC) was established from the fragmented resources (monetary as well as personnel) then available. Two senior IMS officers, Lt Gen S.M.A. Faruki and Lt Gen Wajid Ali Khan Burki, played a key role in the early formative years of the AMC. Shortly afterwards the Command Lab at Rawalpindi was raised to the status of the Central Military Pathological Laboratory (CMPL). This cumbersome name was a legacy of the mother unit of the same name at Poona (India). The CMPL had started its existence in small premises as a Central Diagnostic and Reference Laboratory for the Army, the Navy and the Air Force. Apart from being a diagnostic laboratory, it was also given the task of producing some reagents and bio-products. This was a modest beginning, and the services gradually evolved into a comprehensive institute, now called the Armed Forces Institute of Pathology (AFIP). It had very basic equipment and limited infrastructure, and the tests used were conventional and very basic. The CMPL, as an army unit, was commanded by Lt Col Muhammad Said for a short duration. He was once again replaced by the then Lt Col Manzoor Ahmed Chaudhry, who already had a background of heading the unit. Once such a tradition is set, smooth functioning can be ensured by good command and control. CMPL to AFIP-1957 It was on the advice received in the form of a letter from Major Syed Azhar Ahmad that the CMPL was designated the Armed Forces Institute of Pathology (AFIP) on 20 August 1957. He was then being trained at AFIP Washington. As the CMPL was to be transformed into an institute of excellence, the infrastructure needed to be developed and the facilities enhanced. The existing working and administrative space had by then become too crowded, and quality work needed more room. Therefore, a double-storey building was constructed, which was later considered unsuitable for the work of the AFIP and was handed over to the newly created Armed Forces Medical College. It must be put on record that the late Lt Gen W.A. Burki was very keen to improve the scientific basis of the AMC and was a moving force behind many development projects. A new building was desperately needed for the institute, which required finances, planning and supervision. It must not be forgotten at this stage that the finances were arranged through a very kind gesture of the late Chaudhri Mohammad Ali, then finance minister of the Federal Government. Chaudhri Mohammad Ali had to visit the AFIP quite frequently.
During his visits, he too felt the need for an advanced diagnostic laboratory service and provided the necessary financial support for the construction of the new building. He later became the Prime Minister of Pakistan and continued helping AFIP for a long period of time. A large purpose-built double-storey building was planned by Maj (later Col) SMH Bokahari, who took a personal interest in its design and construction. A new purpose-built building for AFIP was completed in 1958. Lt Col Noor Ahmed was appointed the Commanding Officer of AFIP in September 1957. He was a very learned and experienced pathologist of that time. He had long served in the Indian Medical Department (IMD) of the Army and in the Indian Army Medical Corps. He was an excellent bench worker and a keen teacher, who himself had worked very hard and expected the same from others. He was highly disciplined, a tough taskmaster, and took a keen interest in the lab work, improvement of the premises, postgraduate teaching and maintenance of high-quality work. During the tenure of Lt Col Noor Ahmed, a very well organised and systematic programme of academic activities and research was set up in this institute, on a sound scientific basis. He was also responsible for raising the standards of the (until then neglected) histopathology services in the Army. It was the fruit of his concerted efforts that the AFIP in Pakistan established close cordial links with AFIP Washington, D.C. and various American and British universities. In 1959, during his command, the Armed Forces Tumour Registry was started. This was the beginning of the organised study of malignant tumours in Pakistan. The suggestion was once again made by Major (now Lt Gen retired) Syed Azhar Ahmed. When the locally available meager training resources for pathologists were exhausted, the matter of training pathologists abroad was taken up seriously during his tenure. Consequently, Major (now Maj Gen retd) M.I. Burney completed a one-year training course in Virology at the Walter Reed Army Institute in the USA. On his return, the department of Virology was established on modern lines. The department was engaged in the study of the effect of Asian flu on the local population. He maintained his interest in parasitology and entomology. It was he who discovered a new focus of Kala-Azar in Gilgit and Baltistan. A new species of sand fly flourishing in this cold climate was also discovered and named after Lt Col M.I. Burney as Phlebotomus Brunei. The newly constructed building of AFIP had to be furnished. Until then, all laboratory furniture, including laboratory benches, had been imported from abroad. This was most probably the first time it was manufactured locally instead of being imported. Messrs Abdullah and Sons from Gujranwala accepted the task and made all the furniture indigenously at a remarkably low price. Many such benches are still in use at AFIP. Services such as high-voltage electricity, water and gas were supplied smoothly and regularly to these benches. The federal capital was shifted from Karachi to Rawalpindi in 1958. The AFIP was the mainstay of the diagnostic laboratory services in that city. Therefore, the responsibilities of AFIP increased further. It was then supposed to look after the ministers and senior civil servants of the Federal Government of Pakistan and their family members. Mr Mohammad Shoaib was then the Finance Minister. He used to attend AFIP for his regular check-up. He found the AFIP premises uncomfortable during the hot seasons.
The building was enclosed, and the newly inducted heavy equipment generated a lot of heat in the closed space. The working atmosphere was uncomfortable and it was a really tough task to work there. The patient reception area was extremely unpleasant. Mr Mohammad Shoaib shared this discomfort with the other patients, and he generously sanctioned a handsome amount to bring about changes. It was spent on the installation of a modern central air-conditioning plant for the building. This had the added advantage of providing a dust-free environment. A constant temperature was needed for the fine functioning of many pieces of equipment and for the conditions required to perform sophisticated tests. During those days, the pressing demands of quality pathology service had compelled the AFIP to assume the role of a Central Reference Laboratory. It provided services for the Armed Forces as well as the civil sector. The general population of the civil sector in this region, including many medical institutions in and around Rawalpindi, benefited from its expertise. This gave it new prestige and status. 1962 Colonel Noor Ahmad retired from his Army service and Lt Col Manzoor Ahmad Chaudri was once again called to take over the command of this institute, in August 1962. Steady progress was maintained and some first-generation automated laboratory equipment was introduced at that time. Major (Major General in the Bangladesh Army) Mahmud ur Rahman Chaudhary worked in microbiology before he was sent to NIH, Islamabad. By then the department of virology was fully established. It was soon designated the National Centre for Viral Diseases. During this period, a proper animal house was also constructed and a Department of Experimental Pathology was established. It was used for animal experimentation and the production of reagents. The reagents produced there were provided to the pathology laboratories of the peripheral hospitals. It was always run without hired veterinary personnel or high-tech facilities, being conventionally looked after by the officers of the Virology department. Some sweepers were trained to feed and look after the animals. Gradually, dependence on the use of animals declined, as more emphasis was laid upon the use of more sophisticated technology. 1970 In July 1970, Col (now late Maj Gen) M.I. Burney took over as Commanding Officer of AFIP. 1973 In August 1973 Col (now late Maj Gen) M.I. Burney was seconded to the civil sector as executive director of the National Institute of Health, Pakistan, and Lt Col (now retd Lt Gen) Syed Azhar Ahmed was appointed as the Commandant of the institute. His tenure was a period of continuing improvement and progress in every field. A major training programme was started and young officers were selected for higher training in Pakistan and abroad. The forum of the College of Physicians and Surgeons of Pakistan was utilized for this purpose and within a few years a large number of candidates qualified for FCPS in pathology. Pathologists who found the FCPS difficult to attempt at that time were helped by an M.Phil. training programme, which was started in 1982 for the first time. Since then, a large number of Armed Forces pathologists as well as civilian doctors have availed themselves of AFIP facilities for their M.Phil. thesis research. A large number of very brilliant young officers were also sent for training abroad, especially to the UK and USA.
By this action, he laid sound foundations for the future of Virology, Immunology, Histopathology, Microbiology, Chemical Pathology and Nuclear Medicine. Major (now Lt Gen) Karamat Ahmad Karamat, HI(M), joined in July 1973 as a Microbiologist with Dip Bact (London) and MRCPath (London), and remained in the team of Lt Gen Syed Azhar Ahmed till May 1981. A very important decision at this time was to establish a Nuclear Medical Centre as a part of AFIP. A new building was constructed for this purpose in a record period of eight and a half months. Simultaneously the required equipment was purchased, the officers and staff were selected and trained, and within a year the centre was made functional. Further training of these officers was arranged in the UK, and two of them obtained their master's degrees in nuclear medicine from the University of London. The premises of the institute were also expanded and special emphasis was laid upon the improvement of facilities for visiting patients and the staff. In 1983 a completely new block was designed and constructed for patient reception. It contained different waiting rooms for officers, other ranks, families and civilian patients. A canteen block was completed in 1984. An entirely new block in the name of the Advanced Diagnostic and Research Centre (ADRC) was also planned in 1983. It was approved in early 1984 and, in a phased programme, the whole project was completed in 1988. This provided additional laboratory space in a modern building with adequate training and research facilities. Subsequently, the departments of Haematology and Chemical Pathology were housed in that building. Some space there was later utilized for Immunology as well as Molecular Biology. 1988 In 1988, Lt Gen Syed Azhar Ahmed left AFIP to become the executive director of the National Institute of Health. The new Commandant of AFIP was Brigadier (now Maj Gen) Manzoor Ahmad. Maj Gen Manzoor Ahmed had been a very active participant in the progress and development of AFIP for over thirty years. He had a sound background of extensive training in Histopathology from a leading centre in the USA. He had previously been posted to PNS Shifa, Karachi, where he was involved in postgraduate-level teaching at the newly established Jinnah Postgraduate Medical Centre. He had been a pioneer of research in the medical sciences in collaboration with Dr Sarwar Jehan Zuberi and Prof Naeem Aon Jafarey. His team included professional stalwarts such as the great Haematologist Brig Mohammad Saleem and the renowned Microbiologist Brig Abdul Hannan. The trained pathologists returned from abroad: Captain (now Major General) Farooq Ahmad Khan after obtaining a PhD from London, Major (now Brigadier) Waheed Uz Zaman Tariq after obtaining his Dip Bact (Manchester) and MRCPath (London) in Virology, and Maj (now Brigadier) Sajid Mushtaq, who returned with an MRCPath in Histopathology from London. His other team members consisted of Col (now Brigadier retired) Amir Hussain, Lt Col (now Major General retired) Masood Anwar, Wing Commander Iftikhar ul Hassan Abidi and Major (now Major General) Mohammad Amin Wiqar. During his days, emphasis was laid upon structured training, evening classes, seminars and conferences. The Lymphoma project, which had already started, gained new vigour and activity as funds were diverted to hire the services of many more doctors and technical and secretarial staff.
The institute was asked by other societies, like the Pakistan Society of Gastroenterology and GI Endoscopy, to organize international conferences. A faculty of overseas experts was established. Internationally renowned experts like Prof Maurice Longson (of Manchester University), Prof JE Banatavala of St Thomas' Hospital, London, and Prof Morag C Timbury, Director of the Central Public Health Laboratory, Colindale, London, visited and delivered lectures at the institute. The AFIP revived its ties with the Royal College of Pathologists, London. The officers were encouraged to attend international conferences and present their papers. Much emphasis was also directed towards paper writing and the compilation of professional books and booklets. 1992 Maj Gen Manzoor Ahmad was the first pathologist to be appointed Surgeon General of the Pakistan Army, and was promoted to the rank of Lieutenant General in 1992. Maj General Iftikhar Ahmad Malik was appointed the Commandant of AFIP. He had long worked as a pathologist. He had been in Saudi Arabia and then Professor of Pathology at Army Medical College Rawalpindi. He was also Chairman of the Pakistan Medical Research Council and actively involved in the activities of the College of Physicians and Surgeons, Pakistan. He brought a new dimension to Pathology training and practice. He was disciplined and to the point, and expected a high standard of work and tough training. The institute was open for long hours. Research papers were written in great number and work was expanded into different fields. His tenure brought meticulousness and diligence to the unit. The major change was brought about in the Virology department: the newly constructed structure of the building was fully remodelled and converted into a purpose-built laboratory. Office space was created, and a serum bank area, viral serodiagnosis lab, Molecular Biology laboratory and laboratory sterilization areas were added. The department took on a new shape. The expansion of the space allowed mobility and freedom of work and increased efficiency. New gadgets, computers, telephone connections and working areas were added in the department. During his tenure, freedom of thought, acceptance of new ideas and improvisation remained the priorities. Quick decisions were made and a sudden expansion in quality work was absorbed. The working of the LIMS was reviewed and innovations were introduced into it. The plan for the Forensic Lab was followed up and work started on it. After a long time, the officers were again encouraged to receive training abroad, attend conferences and make presentations. During his tenure, the earthquake struck the north of Pakistan and Azad Kashmir, where all laboratories and pathology services had been wiped out. He and his team played a vital role in the revival of the service by offering equipment, manpower and resources from the institute. In very little time, the services were running efficiently and smoothly. Organization On the diagnostics side, its departments include Hematology, Chemical Pathology, Microbiology, Immunology, Virology, Endocrinology and Nuclear Medicine. It is usually commanded by a Major General; it is currently commanded by Major General Muhammad Tahir Khadim, HI(M), the senior-most pathologist of the Pakistan Armed Forces and a consultant histopathology specialist.
As a Major General, he has held the highest professional medical appointments in the Pakistan Army, such as Advisor in Pathology (Armed Forces) and Professor of Pathology at Army Medical College. Presently he is serving as Commandant of the Armed Forces Institute of Pathology, Rawalpindi. List of commandants Lt Col Sarup Narayan Col Mohammad Akram Lt Col Mohammad Said Lt Col Noor Ahmad Col Manzoor Ahmad Chaudhri Maj Gen M I Burney Lt Gen Syed Azhar Ahmed Lt Gen Manzoor Ahmad Maj Gen Iftikhar Ahmad Malik (Late) Lt Gen Mohammad Saleem (Late) Maj Gen Muhammad Muzaffar Lt Gen Karamat Ahmad Karamat Brig Zahur-ur Rahman Maj Gen Masood Anwar Maj Gen Mohammad Amin Wiqar Maj Gen Farooq Ahmad Khan Maj Gen Muhammad Ayyub Maj Gen Pervez Ahmed HI(M) Maj Gen Muhammad Tahir Khadim Maj Gen Raza Jaffar Maj Gen Hafeez uddin (Current) References External links ghdx Website Rawalpindi District Military medical facilities in Pakistan Rawalpindi Cantonment Hospitals in Rawalpindi Medical research institutes in Pakistan Biological warfare facilities
Armed Forces Institute of Pathology (Pakistan)
Biology
4,417
6,464,750
https://en.wikipedia.org/wiki/Thomsen%E2%80%93Berthelot%20principle
In thermochemistry, the Thomsen–Berthelot principle is a hypothesis in the history of chemistry which argued that all chemical changes are accompanied by the production of heat and that the processes which occur are those in which the most heat is produced. This principle was formulated in slightly different versions by the Danish chemist Julius Thomsen in 1854 and by the French chemist Marcellin Berthelot in 1864. This early postulate of classical thermochemistry became the controversial foundation of a research programme that lasted three decades. The principle came to be associated with what was called the thermal theory of affinity, which postulated that the heat evolved in a chemical reaction was the true measure of its affinity. Limitations The experimental objections to the Thomsen–Berthelot principle include incomplete dissociation, reversibility, and spontaneous endothermic processes. Such cases were dismissed by orthodox thermochemists as outliers not covered by the principle, or the experiments were reinterpreted to fit it with somewhat contrived justifications that were later disproved. In 1873, Thomsen acknowledged that his theory might not have universal or definitive credibility. Later, within the newly created framework of chemical thermodynamics, the principle was shown to be valid only as an idealization under extreme conditions (i.e., at absolute zero). Thomsen openly admitted that his initial understanding was merely a close approximation of reality, emphasizing that while chemical reactions typically release heat, this heat is not always a trustworthy indicator of the strength of the bonds formed. Berthelot, on the other hand, was more resistant and continued to assert the validity of the principle until 1894. In 1882 the German scientist Hermann von Helmholtz proved that affinity was not given by the heat evolved in a chemical reaction but rather by the maximum work, or free energy, produced when the reaction was carried out reversibly. References See also Principle of maximum work. Thermochemistry Obsolete scientific theories
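A standard worked example of the "spontaneous endothermic process" objection mentioned above is the dissolution of ammonium nitrate in water; the values below are approximate textbook figures, not data from this article.

```latex
% Dissolution of NH4NO3 at 298 K: endothermic (Delta H > 0) yet spontaneous
% (Delta G < 0), because the entropy term outweighs the enthalpy term.
\[
\Delta G = \Delta H - T\,\Delta S
\approx 25.7\ \mathrm{kJ\,mol^{-1}}
- (298\ \mathrm{K})\,(0.109\ \mathrm{kJ\,mol^{-1}K^{-1}})
\approx -6.8\ \mathrm{kJ\,mol^{-1}} < 0.
\]
```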
Thomsen–Berthelot principle
Chemistry
402
78,523,008
https://en.wikipedia.org/wiki/Candidatus%20Liberimonas%20magnetica
Candidatus Liberimonas magnetica is a species of bacteria from the phylum Elusimicrobiota, described by Uzun et al. in 2023. It is considered part of the "rare biosphere", meaning it is found at very low concentrations in the environments it inhabits. Liberimonas magnetica has not yet been cultured, so its name is currently provisional. References Bacteria Bacteria described in 2023
Candidatus Liberimonas magnetica
Biology
167